This is good info, and will be very useful for people porting code from other languages like JavaScript. But I'm still in mourning that async took over the world.
I grew up with cooperative multitasking on Mac OS and used Apple's OpenTransport heavily in the mid-90s before Mac OS X provided sockets. Then I spent several years working on various nonblocking networking approaches like coroutines for games before the web figured out async. I went about as far down the nonblocking IO rabbit hole as anyone would dare.
But there's no there there. After I learned Unix sockets (everything is a stream, even files) it took me to a different level of abstraction where now I literally don't even think about async. I put it in the same mental bin as mutexes, locking IO, busy waiting, polling, even mutability. That's because no matter how it's structured, async code can never get away from the fact that it's a monad. The thing it's returning changes value at some point in the future, which can quickly lead to nondeterministic behavior without constant vigilance. Now maybe my terminology here is not quite right, but this concept is critical to grasp, or else determinism will be difficult to achieve.
I think a far better programming pattern is the Actor model, which is basically the Unix model and piping immutable data around. This is more similar to how Go and Erlang work, although I'm disappointed in pretty much all languages for not enforcing process separation strongly enough.
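For what it's worth, the actor style described above can be sketched in plain Rust with nothing but std threads and channels — the `Msg` type and `spawn_counter` function here are invented for illustration, not from any actor library:

```rust
use std::sync::mpsc;
use std::thread;

// Messages are moved into the actor; the counter state never escapes
// and is never shared.
enum Msg {
    Add(u64),
    Get(mpsc::Sender<u64>),
}

// Spawn an actor that exclusively owns its state; the channel is the
// only doorway in, much like a Unix pipe into a process.
fn spawn_counter() -> mpsc::Sender<Msg> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let mut total: u64 = 0;
        for msg in rx {
            match msg {
                Msg::Add(n) => total += n,
                Msg::Get(reply) => {
                    let _ = reply.send(total);
                }
            }
        }
    });
    tx
}
```

Because the actor processes its mailbox in arrival order and owns its data outright, there is no locking and no shared mutability to reason about — which is exactly the determinism argument being made here.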
Until someone really understands everything I just said, I would be very wary of using async and would only use it for porting purposes, never for new development. I feel rather strongly that async is something that we'll be dealing with and cleaning up after for the next couple of decades, at least.
> But I'm still in mourning that async took over the world.
I agree, it does seem like a step backwards in general. However, for Rust it makes sense. There is no runtime, so there is nothing to preempt the green threads/lightweight processes etc. But yeah, with higher level languages like Python, I was disappointed to see how async was emphasized in 3.x over green threads which were already used by a number of projects.
Rust is hard at the start, but easy after. But I feel async keeps it hard. The sad part is that async is SO infectious that you are forced to move everything onto it to align with the rest of the ecosystem.
I also believe the way all of this is presented is not the right abstraction. Actors + CSP is probably the best way. Plus, even if concurrency <> parallelism, I think the parallelism idioms make more sense (pin to the "thread", do fork-joins, use ring buffers for channels, etc).
However, I suppose the whole issue is that async as-is is easier for the compiler machinery to support, and allows squeezing out the performance/resource usage that is important for Rust.
But maybe keep it hidden and surface another kind of API?
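As a rough illustration of the fork-join idiom mentioned above, here is a sketch using only std scoped threads (Rust 1.63+); `parallel_sum` is a made-up example function, not a library API:

```rust
use std::thread;

// Fork: split the slice across a fixed set of worker threads.
// Join: collect each worker's partial sum in a deterministic order.
fn parallel_sum(data: &[u64], workers: usize) -> u64 {
    let workers = workers.max(1);
    let chunk = ((data.len() + workers - 1) / workers).max(1);
    thread::scope(|s| {
        let handles: Vec<_> = data
            .chunks(chunk)
            .map(|part| s.spawn(move || part.iter().sum::<u64>()))
            .collect();
        // Joining in spawn order makes the result deterministic even
        // though the workers run in parallel.
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    })
}
```

A bounded channel (`std::sync::mpsc::sync_channel`) can play the ring-buffer role between stages in the same style.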
OCaml is currently going through something similar, with some solutions in sight. The "motivation" part of the eio (https://github.com/ocaml-multicore/eio) documentation is a great introduction:
"The Unix library provided with OCaml uses blocking IO operations, and is not well suited to concurrent programs such as network services or interactive applications. For many years, the solution to this has been libraries such as Lwt and Async, which provide a monadic interface. These libraries allow writing code as if there were multiple threads of execution, each with their own stack, but the stacks are simulated using the heap.
The multicore version of OCaml adds support for "effects", removing the need for monadic code here. Using effects brings several advantages:
1. It's faster, because no heap allocations are needed to simulate a stack.
2. Concurrent code can be written in the same style as plain non-concurrent code.
3. Because a real stack is used, backtraces from exceptions work as expected.
4. Other features of the language (such as try ... with ...) can be used in concurrent code.
Additionally, modern operating systems provide high-performance alternatives to the old Unix select call. For example, Linux's io-uring system has applications write the operations they want to perform to a ring buffer, which Linux handles asynchronously."
How does programming with "effects" actually work? I've read the linked page, and I understand the advantages they're claiming, but I don't see any explanation of what effects actually are.
"Wenn man nicht mehr weiter weiß, gründet man einen Arbeitskreis." - Ancient German proverb. (Translation: "If you don't know what to do next, you set up a working group")
Well, Rust is always about dozens of teams, myriad committees, sub-committees, groups, sub-groups, work groups, governance boards, foundations and so on. Although I don’t know whether they have the huge budget needed to run a Fortune 500 or government-style bureaucracy. Or is it that the same people appear in ten different places?
But I haven't seen any public discussions on the future of Rust governance, how to make the core team accountable, or other consequences since.
I take the opposite tack. Who, precisely, is clamoring for this? Why not "let 100 flowers grow" (present condition) and allow the various solutions to mature to the point that a de facto standard emerges? The claim is made: "choosing a runtime locks you into a subset of the ecosystem," to which I answer, "So, what?" If I want to log my server events or take advantage of a protocol encoding method or compress my data -- all of these and every other "big" choice I make locks me into a similar library ecosystem niche. I despise this "everything's amazing and nobody's happy" vibe. The async library authors have plenty on their plates without some sub-committee crashing in and dictating their features and release schedules.
By the way, using Go as an example is a joke since -- from the early Go bootcamp I attended in 2014, the best practice has been to use a 3rd-party http router (these days: gorilla? httprouter? chi? etc) instead of the one provided in the standard library. Instead of being _told_ what to use, let's get back to being interested enough that we read the docs, take in the reviews & benchmarks, and decide for ourselves.
The issue is that the async/await keywords are part of the standard language, but then you are forced to pick a non-standard runtime.
Your Go example is not quite comparable:
Go: (Lang, libs)
- You can mix and match any libraries.
Rust: (Lang, runtime, libs)
- Now you can only choose the libraries for your runtime. This dilutes the time investment of crate developers and the utility of Cargo crates, as you want a general async thing but it is tied to a specific runtime.
I think the Rust team should have included a solid zero-config runtime, but allowed it to be replaced.
The portability is needed to let many runtimes grow without pains of fragmentation. Currently tokio dominates, and you either use tokio, or you lose access to a large portion of the ecosystem.
This doesn't have to be a blessed runtime in std, but could be just a set of common interfaces (basics like AsyncRead, sleep, and spawn), so that async crates don't have to directly depend on a specific runtime.
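A hypothetical sketch of what such a facade could look like — the trait names (`Spawn`, `Timer`) and the function below are invented here for illustration, not a real std or RFC API. Libraries would code against the traits; each runtime would provide the impls:

```rust
use std::future::Future;
use std::pin::Pin;
use std::time::Duration;

type BoxFuture = Pin<Box<dyn Future<Output = ()> + Send>>;

// Runtimes implement these; async crates depend only on the traits.
pub trait Spawn {
    fn spawn(&self, fut: BoxFuture);
}

pub trait Timer {
    fn sleep(&self, dur: Duration) -> BoxFuture;
}

// A runtime-generic library function: it compiles against any Timer,
// so it is not tied to tokio, async-std, or anything else.
pub async fn wait_then<R: Timer>(rt: &R, dur: Duration, msg: &str) -> String {
    rt.sleep(dur).await;
    format!("waited, then: {msg}")
}
```

The point is that `wait_then` never names a concrete runtime; swapping executors becomes an implementation detail rather than an ecosystem split.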
> By the way, using Go as an example is a joke since -- from the early Go bootcamp I attended in 2014, the best practice has been to use a 3rd-party http router (these days: gorilla? httprouter? chi? etc) instead of the one provided in the standard library.
You're mistaken. Using a third party router doesn't lock you into a particular subset of the ecosystem. E.g., I tend to use the Gorilla router by default, but I can use it with any middleware that implements the standard http.Handler interface.
A 'toy' executor is worthless for those who need to program in async and worthless for those who don't. Toy functionality has always been, and should continue to be, supplied by libraries.
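For readers wondering what a "toy" executor even amounts to: a minimal, std-only `block_on` fits in about 25 lines. This is a sketch for illustration (no IO reactor, one future at a time), not something to ship:

```rust
use std::future::Future;
use std::pin::pin;
use std::sync::{Arc, Condvar, Mutex};
use std::task::{Context, Poll, Wake, Waker};

// A wake flag plus a condvar: wake() sets the flag and notifies the
// blocked thread, which then polls the future again.
struct Parker {
    woken: Mutex<bool>,
    cv: Condvar,
}

impl Wake for Parker {
    fn wake(self: Arc<Self>) {
        *self.woken.lock().unwrap() = true;
        self.cv.notify_one();
    }
}

// Minimal single-future executor: poll, park until woken, repeat.
pub fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    let parker = Arc::new(Parker { woken: Mutex::new(false), cv: Condvar::new() });
    let waker = Waker::from(parker.clone());
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
        let mut woken = parker.woken.lock().unwrap();
        while !*woken {
            woken = parker.cv.wait(woken).unwrap();
        }
        *woken = false;
    }
}
```

Everything a real runtime adds on top of this — task queues, timers, epoll/io_uring integration, work stealing — is exactly the part that libraries supply.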
Why has Rust struggled so much with this, where Go has succeeded from the start with its language-level “goroutine” concept and runtime? Maybe it just wasn’t a focal area for the original Rust designers?
>Why has Rust struggled so much with this, where Go has succeeded from the start with its language-level “goroutine” concept and runtime?
Rust made the deliberate decision to avoid the heavier Go goroutines runtime model after early alpha/beta experiments showed it conflicted with Rust's low-level design. I found 3 links to some history of that rationale in a previous comment:
https://news.ycombinator.com/item?id=28660089
And some more links:
https://stackoverflow.com/questions/29428318/why-did-rust-re...
https://github.com/rust-lang/rfcs/blob/master/text/0230-remo...
And lots of debate in this previous thread: https://news.ycombinator.com/item?id=10225903
Early on in Rust's history, it had something similar to Go's goroutines with n:m green thread scheduling and libuv for async everything. Some other languages (e.g. Haskell/GHC) also have this kind of system.
But this practically requires some kind of garbage collection and a fat runtime.
I think it was a good decision on the Rust team to abandon this and go for a low level systems programming language. Otherwise it would've been just another Go-like language that isn't really usable in low level systems programming.
Implementing portable async language features without a fat runtime or garbage collection is novel work so it's no wonder that it's taking its own sweet time to reach maturity.
The saddest part of learning Rust was discovering that there are no goroutines and that async works like Python and everything needs to be written twice to support both async and blocking styles. Like it was 20 years ago all over again and I'm still trying to mix Twisted and stdlib Python. I got all excited thinking of how the borrow checker would work so well with coroutines, only to discover it got nobbled because Rust's use cases include embedded systems and no runtime (like in a golang 9MB hello_world.exe). I have no idea if Rust could evolve its concurrent programming support to something better than Go's, even if it did drop some of its shackles.
It's extremely challenging to make async that is usable without introducing a garbage collector or a whole lot of runtime overhead.
A better comparison would be between the Rust and C++ paths to async - C++ also spent years designing their async system, and the end result is divisive at best.
Go is a very different language than rust. Go has automatic memory management & garbage collection. This automatically disqualifies it from being used in many scenarios that rust is designed to support, like embedded systems.
Go’s runtime model just makes stuff like this vastly simpler. Rust can’t impose the same kind of runtime model that go has.
Sibling comments have explained the detail. The pithier explanation perhaps is that rust is intended as a 'systems language', and it interprets that as meaning there should be no runtime. Or, more simply, it ought to be possible to call a rust function from C without providing an additional argument that encapsulates rust 'environment'. (C effectively defining this area)
Go and Java (with Loom) have these lovely facilities, but it is hard to interface with them if your language lacks these features. I find it odd that C#, JavaScript, and Python don't provide the smoother async Go experience despite having runtimes/VMs.
Funny you should ask that... about 6 years ago, Mozilla released an event-handling library written in Go named Heka. It made use of Go's built-in goroutines and channels, and what made it really cool was that it had an embedded Lua interpreter, so people could update Lua scripts in their event processing systems to alter the behavior (such as reformatting dates, etc.) without needing to re-compile the solution. It got pretty popular and you can still find YouTube tutorials on using it to this day.
Unfortunately, according to one of the lead developers, the system couldn't keep up with Mozilla's throughput and reliability requirements due to limitations of Go's built-in features.[0] They announced they would rewrite it as a new solution in C ("Hindsight"), and they basically left an entire community of users high and dry due to not being able to salvage the Go-based project, since it relied so heavily on the built-in features.
[0] https://heka.mozilla.narkive.com/9heQ11hz/state-and-future-o...
> Why has Rust struggled so much with this, where Go has succeeded from the start with its language-level “goroutine” concept and runtime?
It’s not that Rust has struggled, it was never Rust’s priority to have a runtime or high level async code. It had very different goals to Go.
It’s like asking “why has C struggled to implement Promises like JavaScript”? The languages serve different purposes.
You’re right, it wasn’t their initial focal point, but later on Rust wanted to offer the chance of having a Go-like runtime without destabilising the low-level performance at the core, i.e. only those who use it pay for it, and those who don’t use it aren’t affected.
Offering “zero cost” futures etc. is very difficult to do.
See https://aturon.github.io/blog/2016/08/11/futures/ for more info; old but still relevant (including the chart).
Go has a single runtime, Rust supports multiple async runtimes. Rust needs to support multiple runtimes because in low level space there is no one size fits all solution.
For example, Go runtime imposes unavoidable overhead in memory usage, because each goroutine must have its own allocated stack (Rust futures, on the other hand, are stackless). Rust runs on low memory platforms where Go isn't really suitable.
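A way to see the "stackless" point concretely: an async fn compiles to a fixed-size state machine holding only the locals that live across an `.await`, versus a goroutine's kilobytes of growable stack. The functions below are toy examples, and the exact sizes are compiler-dependent:

```rust
use std::future::ready;

// Each async fn compiles to an anonymous state-machine type; its size
// is roughly the largest set of locals alive across any .await point,
// decided at compile time -- no growable stack needed.
async fn tiny() -> u8 {
    1
}

async fn holds_buffer() -> u8 {
    let buf = [0u8; 256]; // lives across the .await, so stored in the future
    ready(()).await;
    buf[0]
}
```

`std::mem::size_of_val(&tiny())` is a handful of bytes, while `holds_buffer()`'s future must be at least 256 bytes — the memory cost is exactly what the code keeps alive, nothing more.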
Rust doesn't want to impose its own one-size-fits-all runtime on all users of the language, because Rust wants to work in places where Go isn't a good fit, e.g. microcontrollers, kernels, or seamlessly on top of other languages' runtimes.
Architecture of an efficient async runtime is going to be different for 128-core server vs single-threaded chip with barely any RAM. In Rust you can write your own runtime to your needs, rather than fight overhead of a big runtime on a small device, or struggle to scale a dumb runtime to complex workloads.
I think part of it is because computer science hasn't really nailed the right abstraction for concurrent code execution.
For instance, C is a great abstraction. You take assembly language, abstract away manual management of registers with variables and pointers, add structured types to describe memory layout, standardize flow of control operations, and add functions to enable code reusability, and you have something which is very easy to work with and also to understand. It's not 100% on par with assembly in terms of performance, but it's pretty darned close, and with a little bit of practice it's very easy to look at a block of C code and basically understand what equivalent assembly it compiles to. It's a great abstraction, and it's no wonder that a vast majority of the languages which have come after it have borrowed most of its major features.
I would argue we haven't really had a "great abstraction" to the same level since then*. There have been efforts to abstract away memory management the way register management has been abstracted away, and many of them have been successful for a lot of use-cases, but not to the point that everyone can forget about memory management the way the vast majority of us can forget about register management. Garbage collectors can be too slow or too wasteful for a lot of use-cases, and you need essentially another program you didn't write to pull it off. In a GC'd language it's not so trivial to look at a block of high-level code and predict what your CPU will do. There are other approaches: like the structured approaches of Rust and Swift which are quite interesting, but they're far from proven at this point.
Similarly I think we're not quite there yet with concurrent programming. As far as the transparency topic, a lot of async implementations are more in the direction of garbage collectors, where the compiler rips apart your code and builds a state machine in its place. It's not hard to believe that the result will be difficult to work with and reason about in some cases.
And maybe the problem is that most approaches to async are trying to cram concurrent execution into that C-like abstraction, which is an elegant abstraction precisely because it models single-threaded execution. Maybe concurrent programming needs to be re-thought from first principles, with different primitives involved.
*Aside: if there is another "great abstraction" on the horizon, I believe it to be ADTs (algebraic data types). That is a feature of programming which feels like a clear step forward with no clear downsides. It's a shame that they haven't been included in Zig.