The article glosses over async Rust and is mostly a rant about how closures are difficult in Rust.
Most of the difficulty comes from Rust not having a GC while still wishing to keep track of object lifetimes precisely: a GC'ed language needs no distinction between an ordinary function pointer and a closure that captures the environment. But Rust, being a low-level systems language, chose not to have a GC.
Another popular language also doesn't have GC, makes distinctions between ordinary function pointers and closures, and its closures are unnameable types; that language is C++. But instead of using templates for closures everywhere (like the STL), you can optionally use std::function<R(Arg)>, a type-erased closure type, which would in principle be similar to Box<dyn Fn(Arg) -> R> in Rust. But such a type doesn't really mention the lifetimes of whatever it captures, so in practice it doesn't work well in Rust (though std::function works well in C++ because C++ doesn't track lifetimes).
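To make the comparison concrete, here is a minimal sketch of the distinction in Rust: a plain function pointer versus a type-erased, heap-allocated closure (the rough analogue of std::function). The function names are illustrative, not from the article.

```rust
// A plain function: coerces to a bare function pointer, no environment.
fn double(x: i32) -> i32 {
    x * 2
}

// A type-erased capturing closure, roughly what std::function<int(int)>
// gives you in C++.
fn make_adder(offset: i32) -> Box<dyn Fn(i32) -> i32> {
    // `move` transfers ownership of `offset` into the closure's environment;
    // because the capture is owned, no lifetime annotation is needed here.
    Box::new(move |x| x + offset)
}

fn main() {
    let ptr: fn(i32) -> i32 = double; // function pointer, no captures
    let add10 = make_adder(10); // closure carrying its environment
    assert_eq!(ptr(21), 42);
    assert_eq!(add10(32), 42);
}
```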
With that in mind, I see this article as not understanding the goals and tradeoffs of Rust. The author would be happier writing in a higher-level language than Rust.
I think it's not entirely fair to paint problems with Rust's async features as an aversion to low-level-ness.
If anything, I think the issue with Rust's async is that it tries to be too high level.
In my experience with async Rust, most of the difficulty comes from "spooky errors at a distance". I.e., you're writing some code which feels completely normal, and then suddenly you are hit with a large, obtuse error message about types and traits you didn't even know you were working with. The reason being: something went wrong with all the inference involved in implicitly wrapping everything in futures. And often it's not even related to the line of code you just wrote.
So in these cases, I think the issue is that Rust tries very hard to wrap async in a very smooth, sugared syntax. It works fine in a lot of cases, but due to the precise nature of Rust, there are cases where it doesn't work. And because the system relies on a lot of magic inference, the programmer has to hold a lot of assumptions and rules in their mind to work effectively with async.
I am not an expert, but from my experience it feels like async Rust was a bit rushed. It should have been introduced as a more explicit syntax first, with sugar added on top once the guard rails had been better figured out through experimentation.
That is under the assumption that future-based async in Rust is a good idea at all. Rust's ownership model becomes intuitive very quickly in the single-threaded case, but at times it feels like a square peg in a round hole where async is concerned.
The article is also conflating synchronous single-threaded, synchronous multi-threaded and asynchronous programming.
Each has its own use, and no, a multi-threaded program is not the same as an asynchronous one. For example, using threads and channels instead of async/await is not a design flaw if your workload is mostly about large, blocking computations on read-only shared state with no I/O. In that situation, lifetimes and closures will not pose any issues and won't get in the way, because you're not mutating anything and you communicate with owned messages.
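A minimal sketch of that threads-plus-channels shape (function and variable names are illustrative): workers read shared state through a plain Arc (no Mutex, since nothing mutates) and send owned results back over a channel, so no lifetimes or mutable borrows cross thread boundaries.

```rust
use std::sync::{mpsc, Arc};
use std::thread;

// Fan a blocking computation out over threads. Results come back as owned
// messages; nothing borrowed ever leaves the worker.
// (Assumes state.len() is divisible by `workers`, for brevity.)
fn parallel_sum(state: Arc<Vec<u64>>, workers: usize) -> u64 {
    let (tx, rx) = mpsc::channel();
    let chunk = state.len() / workers;
    for i in 0..workers {
        let state = Arc::clone(&state); // refcount bump, not a deep copy
        let tx = tx.clone();
        thread::spawn(move || {
            let sum: u64 = state.iter().skip(i * chunk).take(chunk).sum();
            tx.send(sum).unwrap(); // send an owned value back
        });
    }
    drop(tx); // drop the original sender so `rx.iter()` ends when workers do
    rx.iter().sum()
}

fn main() {
    let state: Arc<Vec<u64>> = Arc::new((1..=1_000).collect());
    assert_eq!(parallel_sum(state, 4), 500_500); // 1 + 2 + ... + 1000
}
```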
The async ecosystem is evolving and will be better tomorrow than it is today. Saying that Rust programming as a whole is a mess because async/await is harder to use than necessary is short-sighted. It's like saying you don't like apples when you've only chewed on a branch of the tree.
From my very limited experience with Rust I noticed that it becomes a far easier and more laid-back language when you skip using references and lifetimes nearly completely and just wrap everything in Rc<>. Then you get the experience of a fairly high-level language with a lot of very cool constructs and features, like exhaustive pattern matching and value types with a lot of auto-derived functionality.
Does Rc<> help this much with async as well?
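A small sketch of that style (the types here are made up for illustration): everything shared lives behind Rc, no signature mentions a lifetime, and you still get exhaustive matching and derives.

```rust
use std::rc::Rc;

// An illustrative value type with auto-derived functionality.
#[derive(Debug, Clone, PartialEq)]
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

// No lifetimes in any signature: sharing happens through Rc, and cloning an
// Rc is just a reference-count bump.
fn area(shape: &Rc<Shape>) -> f64 {
    // Exhaustive pattern matching: the compiler checks every variant.
    match **shape {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}

fn main() {
    let rect = Rc::new(Shape::Rect { w: 3.0, h: 4.0 });
    let shared = Rc::clone(&rect); // share freely, no borrow-checker fights
    assert_eq!(area(&rect), 12.0);
    assert_eq!(area(&shared), 12.0);
    assert_eq!(*rect, *shared); // derived PartialEq
}
```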
C++ lambdas are much better than Rust ones, because you can explicitly decide what to copy inside of the closure.
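Rust has no per-variable capture list, but you can approximate C++'s capture-by-copy by cloning before a `move` closure takes ownership. A sketch, with illustrative names:

```rust
// Roughly C++'s `[name, count]` capture-by-copy: clone what you want copied,
// then `move` the clones into the closure.
fn make_describe(name: &str, count: u32) -> impl Fn() -> String {
    let name = name.to_owned(); // explicit decision: copy into the closure
    move || format!("{name}-{count}")
}

fn main() {
    let label = String::from("report");
    let describe = make_describe(&label, 3);
    assert_eq!(describe(), "report-3");
    // `label` is still usable: only the copy went into the closure.
    assert_eq!(label, "report");
}
```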
As for the author being happier writing in a higher-level language, it kind of proves the point that Rust's main target is the domain where any sort of automatic memory management isn't a viable option.
Pushing Rust outside of this domain is only trying to fit a square peg into a round hole.
Some of the pain inflicted by async Rust is incidental, not due to this GC choice. Pin is the biggest culprit.
Pin exists so that references captured in Futures may be implemented via raw pointers. This implies that a Future contains pointers to itself, hence Pin.
The cost of Pin is that it forces you to write unsafe code as a matter of course, the so-called pin-projections. [1] Look at the requirements for structural pinning: they are quite complicated.
References captured in Futures probably could have been implemented differently: base + offset, or relocations, or limiting what can be captured, or always Boxing the state for a Future that captures a reference. Those would have avoided a GC and been even safer, since they wouldn't require unsafe code on the part of struct implementors.
1: https://doc.rust-lang.org/std/pin/#projections-and-structura...
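To make the Pin machinery concrete, here is a minimal, std-only sketch of a hand-written Future and a toy executor. This is nowhere near how production executors work (all names are invented), but it shows where Pin enters the picture: `poll` receives the future pinned, which is the promise that lets self-referential state machines hold pointers into themselves.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A hand-written Future: returns Pending twice, then Ready.
struct CountDown(u32);

impl Future for CountDown {
    type Output = &'static str;
    // `self` arrives pinned: the executor promises the state will not move
    // again, which is what makes self-referential futures sound.
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        if self.0 == 0 {
            Poll::Ready("done")
        } else {
            self.0 -= 1;
            cx.waker().wake_by_ref(); // ask to be polled again
            Poll::Pending
        }
    }
}

// A toy executor with a no-op waker, just enough to drive one future.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    fn raw() -> RawWaker {
        fn no_op(_: *const ()) {}
        fn clone(_: *const ()) -> RawWaker { raw() }
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    let waker = unsafe { Waker::from_raw(raw()) };
    let mut cx = Context::from_waker(&waker);
    // Safety: `fut` is a local we never move after pinning it.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    assert_eq!(block_on(CountDown(2)), "done");
}
```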
I don't use Rust but was curious if this was a real or imagined problem--it seems to be the latter. Two points:
1. the example could have made a pure callback function that takes a `&mut db` param and passed that param to the 'do'er function. Why is this not required anyway? I.e., how does Rust know/decide who owns the `&mut db` if both the closure and the function that creates the closure can reference it?
2. as mentioned, the complaints about async seem to apply to closures in general, even when used synchronously, AFAICT.
Best to think of this post as quirks when working with closures in Rust. It would have been far better to list each with workarounds.
I agree that this one isn't about async Rust at all. The main comparison is drawn in
> And then fast forward a few years and you have an entire language ecosystem built on top of the idea of making these Future objects that actually have a load of closures inside
And this sentence is wrong. There are next to no `Future` implementations built on top of callbacks. And the reason it's that way is exactly the one the author mentions: callbacks don't work very well with Rust's ownership model.
To be fair withoutboats didn't really address any of the points that people make about the flaws of async Rust (e.g. the cancellation problem).
It may be the case that there's no good way to solve them, but that doesn't mean that they aren't problems and it doesn't really help to say that people who highlight them are totally wrong and don't know what they're talking about.
That said I think your second link does a much better job of explaining the issues than this post does.
Maybe it’s only my take, but from what I understand, the author wants to easily create closures with mutable references and call them from anywhere, asynchronously?
And the complaint is that Rust semantics makes this hard. Well yes, Rust makes it uncomfortable to shoot your own foot, that’s kind of the point.
> Maybe it’s only my take, but from what I understand, the author wants to easily create closures with mutable references and call them from anywhere, asynchronously?
I think you are misunderstanding the author's point. I think that they'd probably agree that the (lack of) ergonomics of closures are a necessary result of the safeguards that Rust provides.
The point is more that, given that closures have somewhat frustrating ergonomics, it was probably a bad choice to base Rust's primary concurrency system on closures.
Disclaimer: this is not my opinion, just paraphrasing what I believe the author's point is. I found the post a bit strange, it seems to end rather abruptly. It doesn't seem to address the main question it seems to raise: what does this mean for async Rust and why is this bad?
That was my impression too. Rust “makes it hard” for a reason: it’s not safe. If you work with the type system to prove that it is, you can go about and do your thing but Rust is about guarantees. That’s one great thing about it. You can’t ask it to just not guarantee something.
Why would the first example with the database be "shooting your own foot"? I'm not a Rust dev, but in other languages that code makes perfect sense to me.
I write async Rust for my day job, and while it is more complicated to write than synchronous Rust, the complications generally make sense once you understand how the compiler is managing the remarkably tricky task of async with no GC. Rust remains significantly more pleasurable to write than e.g. TypeScript for me.
Some patterns around error handling are still a bit awkward, but the FutureExt and TryFutureExt traits help a lot there.
My only real “I have no clue what’s going on” moment so far has been with an error about traits not being general enough, but someone on the Rust forums helped me out: https://users.rust-lang.org/t/trait-is-not-general-enough-fo...
Setting up clippy to disallow non-Send/Sync futures throughout the codebase has prevented that particular thing from recurring.
The author spends the majority of the post saying how synchronous Rust doesn't work. The main problem? Closures have to be passed as traits. And they either don't know about the `f: impl Fn(i32)` syntax or refuse to use it for some reason.
Then the last paragraph just says "Oh and asynchronous Rust is even worse."
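For reference, the `impl Fn` argument-position syntax the parent comment mentions, in a minimal sketch (names are illustrative): no explicit generic parameter and no Box needed to accept a closure.

```rust
// Accept any closure (or plain function) matching the signature, statically
// dispatched, without naming a generic parameter.
fn apply_twice(f: impl Fn(i32) -> i32, x: i32) -> i32 {
    f(f(x))
}

fn double(x: i32) -> i32 {
    x * 2
}

fn main() {
    assert_eq!(apply_twice(|x| x + 3, 10), 16); // closure
    assert_eq!(apply_twice(double, 5), 20); // plain function works too
}
```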
But why is Rust so much harder than any other newish programming language?
Dart is like all of my dreams come true at once, Rust still gives me nightmares. I seriously tried to learn it multiple times and failed repeatedly.
I've created several Dart/Flutter projects for myself and friends. Multiple C#/Unity projects. Python and JavaScript have paid my rent for the better part of a decade.
But Rust, I can't grasp basic concepts.
> As someone who used to really love Rust, this makes me quite sad.
The current async story is still an MVP and I too dislike it. In the months before async, the ecosystem seemed on halt, waiting for async to land on stable Rust. Since then nothing has changed. The ecosystem "degraded" noticeably and has not recovered since.
Maybe in the future async will be great, but right now I try to avoid it.
... I still love Rust
Sorry to spoil your axe-grinding with hard data, but the Rust async ecosystem has exploded since 2019. It is now about 5-6 times larger than it was before async/await landed:
https://lib.rs/crates/tokio/rev
And Rust as a whole keeps growing exponentially:
https://lib.rs/stats
I think your comment hits the bullseye better than many others I've seen around this. It very much looks like the language stopped progressing with async.
Seems like they just tried to be everything at once, and reached some sort of a critical mass. It probably didn't help how Mozilla dropped lots of their Rust projects and staff at roughly the same time.
It's still a very good C++ replacement. I think its role as a higher level application language is more of an open question.
I run a team at work building a desktop application in Rust which handles live streaming market data and displays it in an arbitrary layout defined by the user. From our experience (and we have a very good Rust team), async Rust works fantastically well.
https://cryptowat.ch/desktop
Honestly, people want to write high level code in low level languages too often. Both Rust and C++ ought to be relegated to high performance cores of larger, "squishier" programs written in higher level languages. The Emacs model is perfect for most programs running on conventional systems.
The key insight here is that garbage collection gives you access to a variety of patterns that are otherwise fiendishly difficult to implement safely, including non-leaking closures, RCU-like resource sharing, and fast (but safe!) bump-pointer allocation of short-lived objects. Honestly, something like typed Python or Typescript is going to be fast enough for most use-cases, especially if any compute-heavy parts thunk out to optimized C++ or Rust. And if those aren't fast enough, you can write against the JVM or the CLR in a variety of beautiful, expressive languages and still gain the benefit of a GCed, managed environment while retaining the ability to accelerate core kernels in native code.
Writing entire big systems in C++, Rust, etc. just doesn't make much sense to me. Yeah, I understand the benefits of technical uniformity, of having to train people to use only one language, and of ease of debugging when there's only one level of the stack --- but still, I think people use low level systems languages for too much, and these articles about asynchronous work being hard to write in Rust are symptomatic of this fundamental mismatch.
> Both Rust and C++ ought to be relegated to high performance cores of larger, "squishier" programs
So, people from Google have explained about this before. The idea you have assumes that if you speed profile the code, 99% of samples land in this function "A()" and so you just re-write that part in C++ and now it's faster, or if you measure allocations you find 99% of RAM was allocated by "B()" and so you just re-write that part in C++ and now it uses less RAM.
Google already did all that low-hanging fruit. When they run the profile it comes back flat. You should rewrite A, B, D, E, F, J, K, L, M, N ... in other words the way to make the system faster is to just write it in C++
Part of this is also scale. If a system I run once a week is a little slow, maybe in some sense that costs 40¢ but I don't account for it. At a mid-size non-IT firm maybe a similar performance cost is $1000 per year. Just about worth somebody enquiring if it can be sped up, but not worth arguing about it if the answer is "No". Maybe a mid-size IT firm where you work spends $1k per month on this problem. You could speed it up by rewriting the whole system, this would take some time - how many days work before it's cheaper to leave the problem than pay you to fix it? However, at Google's scale maybe that problem costs them $1M per week. They can justify assigning a whole team to fix that because of scale.
1) the expressive type system, notably the use of optional type instead of null and the use of lifetimes to make reasoning about references tractable, and
2) the community, with a focus on correctness, documentation, and being welcoming.
"Maybe we could just have kept Rust as it was circa 2016, and let the crazy non-blocking folks write hand-crafted epoll() loops like they do in C++. I honestly don’t know, and think it’s a difficult problem to solve."
I think this is the most underappreciated part of this article. Since when did everything have to be async? There are other ways to represent concurrency that more accurately reflect what the computer is actually doing.
For example, an alternative to async is to represent a workstream as a state machine, where state transitions happen between I/O. Then, your state machine can be a struct, and each state transition can be an impl function on that struct that takes one or more completed I/O requests as input and emits one or more I/O requests as output. This saves you from having to implement everything as a closure, which this article rants about. Your top-level epoll loop merely services I/O requests from state machine instances, and invokes your global application logic to start and stop state machines to carry out business logic tasks.
I realize that many complicated workstreams could have many states due to all the I/O they might do, but the task of converting a high-level workstream into a state machine could be automated by the tooling.
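A minimal sketch of the shape described above (all names invented for illustration): the workstream is a struct, each completed I/O drives one transition, and the surrounding event loop only shuttles requests and completions back and forth.

```rust
// What the state machine asks the event loop to do next.
#[derive(Debug, PartialEq)]
enum IoRequest { Connect, Read, None }

// A completed I/O, delivered by the event loop.
enum IoEvent { Connected, ReadCompleted(String) }

enum State { Connecting, Reading, Done(String) }

struct Workstream { state: State }

impl Workstream {
    // Starting a workstream emits its first I/O request.
    fn new() -> (Self, IoRequest) {
        (Workstream { state: State::Connecting }, IoRequest::Connect)
    }

    // One transition per completed I/O: consume the event, emit the next
    // request. A top-level epoll loop would call this on each completion.
    fn on_io(&mut self, event: IoEvent) -> IoRequest {
        match event {
            IoEvent::Connected if matches!(self.state, State::Connecting) => {
                self.state = State::Reading;
                IoRequest::Read
            }
            IoEvent::ReadCompleted(data) if matches!(self.state, State::Reading) => {
                self.state = State::Done(data);
                IoRequest::None
            }
            _ => panic!("I/O completion does not match current state"),
        }
    }
}

// Drive the machine by hand, standing in for an epoll loop.
fn run_demo() -> String {
    let (mut ws, first) = Workstream::new();
    assert_eq!(first, IoRequest::Connect);
    assert_eq!(ws.on_io(IoEvent::Connected), IoRequest::Read);
    assert_eq!(ws.on_io(IoEvent::ReadCompleted("payload".into())), IoRequest::None);
    match ws.state {
        State::Done(data) => data,
        _ => panic!("workstream did not finish"),
    }
}

fn main() {
    assert_eq!(run_demo(), "payload");
}
```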
> let the crazy non-blocking folks write hand-crafted epoll() loops like they do in C++.
> I think this is the most underappreciated part of this article.
I think it's incredibly silly actually. Abandon all async for a difficult and error prone epoll model?
> Since when did everything have to be async?
It doesn't! No one is forcing anyone to use async. I'm not sure why the author implies that.
But if you do want to use async, Rust is attempting to solve the async problem with the same guarantees it has for blocking code. Turns out that is hard.
I think callbacks do not scale with large projects as they make understanding the flow difficult (at least from my experience of multi million LOC code bases based on callbacks that called callbacks).
Using callbacks in Rust is not idiomatic, at least I haven't seen such code over the last two years writing Rust, and Rust syntax and semantics with lifetimes are not tailored toward that approach.
So the author forces an async style on Rust that it was not built for and then complains.
It’s a pain with GC as well, coming from a C# background. It’s incredibly easy to write something that intermittently doesn’t work in weird and impossible to debug ways.
Speaking of the four-versions-of-tokio-in-your-build problem: this is definitely an issue with Rust dependencies. Async just magnifies it because there are a lot of async utility crates.
Personally I hate dependencies in general and try to minimize them. Instead of reaching for a utility crate I think “how could I architect this so I don’t need this hack?” There is almost always a way. In the end falling back on Arc<> is still cleaner and probably comes with less overhead than dependency spaghetti.
I don't have any experience with async Rust (but I struggled a lot with Rust's closures when I dabbled with Rust a while back so I can at least feel the pain the article tries to convey), but one important reason to not build async-await on top of fibers or threads but instead on code transformation (aka 'compiler magic') is 'weird architectures' like WASM, which doesn't have easy access to threading (locked behind COOP/COEP headers), and where the call stack is inaccessible to code running inside the VM.
I didn't quite get the gist of the article though, isn't the whole point of async-await to get rid of passing callback function pointers around? What do closures have to do with async-await in Rust?
I think the article is just... wrong about async and closures. You're right that async Rust simply doesn't deal with them, because (contrary to the article) they're not used in the async compiler transformation.
I think you can implement fibers and continuations in WASM by converting the whole program into a giant switch statement and heap allocating frames. There are a few scheme-to-C compilers that do that I think.
Mind, it is not going to be fast as it is going to be hard to generate efficient code ...
> What do closures have to do with async-await in Rust?
Well, `spawn_blocking` runs blocking code in an async context. It takes a closure as its argument, and if you're on a mixed async/sync platform you will hit this function a lot. To deal with it you end up needing to use move. And to use move you need to clone your pointers to the data and manage all that guff.
The alternatives are Arc, deep copies, or use C++ where you will invariably get it wrong and end up with corrupted data and a very bad week (or you don't notice and reply to this comment saying that you do this in C++ and never had a problem)
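A sketch of that clone-then-move dance, using std::thread::spawn as a stand-in: it imposes the same `move` + `'static` requirements on its closure that tokio's `spawn_blocking` does, and all the names here are illustrative.

```rust
use std::sync::Arc;
use std::thread;

// Illustrative shared state you would also want inside a blocking task.
struct Config {
    threshold: u64,
}

fn count_over_threshold(config: Arc<Config>, data: Vec<u64>) -> usize {
    // The "guff": clone the Arc so the `move` closure owns its own handle
    // while the caller keeps the original.
    let config_for_task = Arc::clone(&config);
    let handle = thread::spawn(move || {
        // The closure owns `data` and `config_for_task`, satisfying 'static.
        data.iter().filter(|&&x| x > config_for_task.threshold).count()
    });
    handle.join().unwrap()
}

fn main() {
    let config = Arc::new(Config { threshold: 10 });
    assert_eq!(count_over_threshold(config, vec![5, 11, 20, 3]), 2);
}
```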
This was why I chose F# instead of Rust for the Darklang rewrite [1] - I just couldn't get async to work well, and the hoops and complexity I had already had to get it even close to working were way too much.
The author spills a lot of ink showing that they really didn't take any time at all to understand the problems they complain of.
The title is clickbait; of course async Rust works, and the article doesn't talk about it anyway. The only mention of async is an observation that it's CPS under the hood, followed by a wandering rant about CPS that fails to account for the design constraints.
In the end, the author suggests forcing every AIO user to manually write out their polling loops, which is simply a silly idea. At least if their recommendation were a completion-based API without language support, they would seem serious.
The author ends this with a now apparently trendy Common Lisp yearning, but ironically, Common Lisp's async story is pretty weak, too. On the other hand, at least it didn't infect the whole ecosystem, just the projects that use certain libraries.
jmull | 4 years ago
That's a pointless conclusion. The author's criticisms of Rust's tradeoffs are invalid because those are the tradeoffs Rust made. A perfect circle!
Rusky | 4 years ago
It can: Box<dyn Fn(Arg) -> R + 'a>
This is just the general syntax for any trait object with a lifetime.
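A minimal sketch of that annotation in use: the lifetime of the captured borrow is named right on the boxed trait object (the function name is illustrative).

```rust
// The trait object's type names the lifetime of the captured borrow, so the
// compiler can check that the closure never outlives what it borrowed.
fn adder_borrowing<'a>(x: &'a i32) -> Box<dyn Fn(i32) -> i32 + 'a> {
    Box::new(move |y| *x + y)
}

fn main() {
    let base = 40;
    let add = adder_borrowing(&base);
    assert_eq!(add(2), 42);
}
```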
Aissen | 4 years ago
And this blog post from someone who did spend a lot of time working with async Rust: https://tomaka.medium.com/a-look-back-at-asynchronous-rust-d...
[+] [-] __s|4 years ago|reply
That tooling is literally async/await. async/await transforms functions into state machines
[+] [-] eximius|4 years ago|reply
> I think this is the most underappreciated part of this article.
I think it's incredibly silly actually. Abandon all async for a difficult and error-prone epoll model?
> Since when did everything have to be async?
It doesn't! No one is forcing anyone to use async. I'm not sure why the author implies that.
But if you do want to use async, Rust is attempting to solve the async problem with the same guarantees it has for blocking code. Turns out that is hard.
[+] [-] KingOfCoders|4 years ago|reply
Using callbacks in Rust is not idiomatic (at least I haven't seen such code over the last two years of writing Rust), and Rust's syntax and lifetime semantics are not tailored toward that approach.
So the author forces an async style on Rust that it was not built for, and then complains.
[+] [-] vbg|4 years ago|reply
I guess I’ll have to become an expert to find my disappointment.
[+] [-] api|4 years ago|reply
Personally I hate dependencies in general and try to minimize them. Instead of reaching for a utility crate I think “how could I architect this so I don’t need this hack?” There is almost always a way. In the end falling back on Arc<> is still cleaner and probably comes with less overhead than dependency spaghetti.
[+] [-] jsiepkes|4 years ago|reply
But yeah, the way Erlang embedded the actor pattern in the VM (BEAM) and the language itself is great.
Though personally I would have liked it if it were more statically typed. I still need to take a look at Gleam...
[+] [-] flohofwoe|4 years ago|reply
I didn't quite get the gist of the article though, isn't the whole point of async-await to get rid of passing callback function pointers around? What do closures have to do with async-await in Rust?
[+] [-] gpderetta|4 years ago|reply
Mind, it is not going to be fast, as it is going to be hard to generate efficient code ...
[+] [-] fnord123|4 years ago|reply
Well, spawn_blocking runs blocking code in an async context. It takes a closure as its argument, and if you're in a mixed async/sync codebase you will hit this function a lot. To deal with it you end up needing to use move. And to use move you need to clone your pointers to the data and manage all that guff.
The alternatives are Arc, deep copies, or using C++, where you will invariably get it wrong and end up with corrupted data and a very bad week (or you won't notice and will reply to this comment saying that you do this in C++ and have never had a problem).
[+] [-] pbiggar|4 years ago|reply
[1] https://blog.darklang.com/why-dark-didnt-choose-rust/
[+] [-] couchand|4 years ago|reply
The title is clickbait, of course async Rust works, and the article doesn't talk about it anyway. The only mention of async is an observation that it's CPS under the hood, followed by a wandering rant about CPS that fails to account for the design constraints.
In the end, the author suggests forcing every AIO user to manually write out their polling loops, which is simply a silly idea. At least if their recommendation were a completion-based API without language support, they would seem serious.
[+] [-] darthrupert|4 years ago|reply