
Why I wish C# never got async/await

53 points | nullymcnull | 11 years ago | mrange.wordpress.com

33 comments

[+] Strilanc|11 years ago|reply
Async/await makes a lot of asynchronous code easier to write, which is great, but it amplifies some of the mistakes people were already making. The two examples in the post are:

- Developers won't anticipate that accessing a task's Result property can introduce a deadlock.

- Developers won't anticipate race conditions due to a concurrent SynchronizationContext.

The problem from the first example actually does happen quite often. There are plenty of Stack Overflow questions and blog posts about it. IMO there should be compiler warnings whenever you use Task.Result or Task.Wait. Actually I would prefer compile errors instead of just warnings, and for the implementations of Result and Wait to just be `throw new WHAT_IS_WRONG_WITH_YOU_DO_NOT_BLOCK_ON_TASKS_AAARGH_EXCEPTION()`, but that might be a bit harsh.
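A minimal sketch of that first failure mode, with illustrative names (not code from the article). Under a GUI/ASP.NET SynchronizationContext, the continuation after the await must resume on the captured context, but .Result is blocking that very context, so neither side can proceed:

```csharp
using System.Threading.Tasks;

class DeadlockSketch
{
    public static async Task<int> ComputeAsync()
    {
        await Task.Delay(10);   // resumes on the captured context, if any
        return 42;
    }

    static void Handler()
    {
        // On a UI thread this deadlocks; in a plain console app there is
        // no SynchronizationContext to capture, so it happens to work.
        int result = ComputeAsync().Result;
    }
}
```

That it "happens to work" in a console app is exactly why the bug slips past local testing.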

The problem from the second example is not really an issue with async/await so much as a symptom of a deeper language issue. Any concurrency mechanism in C# will be vulnerable to the fact that you can define and access shared mutable variables (including the solution proposed in the post).

[+] final|11 years ago|reply
yield, foreach, and LINQ all have similar problems compared to a plain for loop, but nonetheless greatly increase productivity. Given a little time, devs will learn to use async properly.
[+] Meai|11 years ago|reply
Yes, I agree. All async/await seems to be is an abstraction around coroutines, which is a little baffling: coroutines are already a fairly standard way to solve this problem, so why introduce this weird async/await syntax and pretend it's magic that solves "concurrency" in general?

I'm sure I just don't understand it, but hearing about leaky abstractions like the ones the article describes just makes me that much more determined to never touch them. Too much magic, expertly hidden inside a closed .NET runtime.

[+] pjmlp|11 years ago|reply
> so why introduce this weird async/await syntax and pretend that it's magic to solve "concurrency" in general.

Putting cool technology in the hands of blue collar developers.

[+] blkhp19|11 years ago|reply
I really like Grand Central Dispatch on iOS and OS X. You specify a task to run on a background queue or a main queue by passing a block of code. This block of code gets executed while the rest of the program continues. You can have completion blocks so you can respond to the completed task. The syntax is a little weird at first, but it's pretty powerful and I find it to be very explicit. Async/Await always confused me a bit.
[+] tonysuper|11 years ago|reply
LibDispatch is actually FOSS, and it kind of works on Linux. Sadly, nobody seems to bother with supporting it in any way, shape, or form.
[+] Khao|11 years ago|reply
async/await is there to reduce the massive amount of boilerplate code you need to write when doing async. Of course there will be corner case bugs or weird behaviors sometimes, but that doesn't mean it's all bad. If async/await works for most cases, then it's something that's beneficial for a software developer like me.
[+] zwieback|11 years ago|reply
I agree, mostly. I really like async/await but I didn't dare touch it until I read the articles that explain what kind of state machine the compiler constructs and how the synchronization context works. I doubt the average developer would go through that trouble.

I think it's great if you follow standard design patterns but it could be a real source of problems for inexperienced programmers once a race condition emerges.

[+] skrebbel|11 years ago|reply
Having recently learned Elixir, I really wonder why the C# people decided to add async/await rather than coroutines (like Erlang processes or Goroutines).

It feels to me like async/await is the malloc of concurrent programming, and coroutines are the garbage collection of concurrent programming. You give up a little performance in exchange for a lot less complexity. Go has shown you can do this without restricting yourself to immutable data.

Am I missing something?

[+] Locke1689|11 years ago|reply
Mainly because the CLR and Windows do not support segmented stacks, and supposedly supporting them would be almost impossible while preserving interop with native code.

Without segmented stacks you have to conserve threads, which basically leads to Task and async/await.

Edit: Almost forgot an important part: your COM thread is hugely important to how Windows UI pumping works. Async guarantees which thread you get to run on, where coroutines often just say, "you get the thread that you get."

[+] final|11 years ago|reply
C# await is a low-level construct that you can use to efficiently implement coroutines; the opposite is not true. Anders & Co. designed a tool that has some rough edges (compared to, say, F# async computation expressions) but compiles to a small number of low-overhead Task operations.
[+] ghodss|11 years ago|reply
"By studying the output of the TraceThreadId method we see that in ASP.NET/GUI it’s the same thread that enters ReadTask and that exits ReadTask ie no problems. When we run it as a Console application we see that ReadTask is entered by one thread and exited by another ie readingFiles is accessed by two separate threads with no synchronization primitives which mean we have a race-condition."

This is not entirely true - the code as written does not have a race condition because the two accesses are run sequentially. Accessing the same variable by two separate threads with no synchronization primitives is actually okay if those two threads never run in parallel. Now, if you called many ReadTask()'s in a row and you had thread_pool > 1 (as in the GUI/ASP.NET application), then you would have a race condition. But if you're accessing a shared variable from a multithreaded context that should be somewhat obvious. It would depend on the programmer's understanding of the async/await paradigm, which I think is the author's point. ;)
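The point above can be sketched as follows (illustrative code, not the article's): the two accesses to the shared field are ordered by the await, so even if the continuation resumes on a different thread, they never overlap.

```csharp
using System.Threading.Tasks;

class SequentialAccess
{
    static int readingFiles;   // shared mutable state, as in the article

    public static async Task ReadTask()
    {
        readingFiles++;        // access 1
        await Task.Delay(10);  // in a console app, resumes on a pool thread
        readingFiles--;        // access 2: strictly after access 1
    }

    public static int Counter => readingFiles;
}
```

Awaiting ReadTask() calls one at a time is safe; firing several off concurrently (with more than one pool thread available) reintroduces the race.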

[+] jasallen|11 years ago|reply
I continue to find using explicit "Task<T>" types directly more intuitive. I've wedged in async/await here and there to see if I felt good about it, but I never did. I would always find declaring a Task and attaching to its ContinueWith (or whatever I needed) to be more to my liking.
[+] reubenbond|11 years ago|reply
ContinueWith is fine for simple flows, but things fall apart when you need to perform multiple async operations with the same error handling. You either repeat yourself or you put the error handling code into a lambda and make sure you always pass it along.

Assuming you just want to propagate exceptions, you have to mess with TaskCompletionSource (i.e., promises) and be damn certain you have wired it into all the right places.

Async/await makes error handling much more uniform and "obviously correct" than stringing callbacks together.
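A contrast sketch of the two styles, with illustrative names (cancellation handling omitted). In the callback version every continuation must check for faults and forward them to a TaskCompletionSource by hand; in the await version exceptions flow through an ordinary try/catch:

```csharp
using System;
using System.Threading.Tasks;

class Flows
{
    // Callback style: fault propagation is wired up manually.
    public static Task<int> ParseThenDoubleCps(string s)
    {
        var tcs = new TaskCompletionSource<int>();
        Task.Run(() => int.Parse(s)).ContinueWith(t =>
        {
            if (t.IsFaulted) { tcs.SetException(t.Exception.InnerExceptions); return; }
            Task.Run(() => t.Result * 2).ContinueWith(t2 =>
            {
                if (t2.IsFaulted) tcs.SetException(t2.Exception.InnerExceptions);
                else tcs.SetResult(t2.Result);
            });
        });
        return tcs.Task;
    }

    // Await style: exceptions from either step surface at the await,
    // where a plain try/catch can handle them.
    public static async Task<int> ParseThenDoubleAsync(string s)
    {
        int n = await Task.Run(() => int.Parse(s));
        return await Task.Run(() => n * 2);
    }
}
```

Both return the same result; the difference is how much plumbing you must get right before a FormatException from int.Parse reliably reaches the caller.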

[+] mythz|11 years ago|reply
Works for a while, but trying to do something like processing serial async responses in a loop gets painful without async/await, e.g.:

    while (await reader.ReadAsync(cts)) { ... }
[+] zebracanevra|11 years ago|reply
The solution to deadlocking is using async all the way down - i.e. don't use .Result. You can also tell C# not to resume on the same context after an async call by using .ConfigureAwait(false) on the tasks, so the continuation and the blocking caller no longer need the same context, which is what causes the deadlock.
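A minimal sketch of that second technique from a library's point of view (illustrative names):

```csharp
using System.IO;
using System.Threading.Tasks;

class Library
{
    public static async Task<string> ReadFirstLineAsync(string path)
    {
        using (var reader = new StreamReader(path))
        {
            // ConfigureAwait(false): resume on any thread-pool thread
            // instead of the captured context, so even a caller that
            // blocks with .Result cannot deadlock against us.
            return await reader.ReadLineAsync().ConfigureAwait(false);
        }
    }
}
```

Library code that never needs the caller's context is the usual place for ConfigureAwait(false); application code that touches the UI afterwards has to keep the default.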

The author states two threads accessing a single resource is a race condition, but that's only true when the threads compete for that resource concurrently - in the case outlined in the article, execution is sequential, and only one thread accesses the resource at a time. No race condition.

I think it's not too wise to think of async/await as a special coroutine that never uses multiple threads, but rather as an easy way to write synchronous-looking code and have it execute asynchronously. If you need fine control over which threads are used, maybe you're better off controlling it manually.

[+] blisse|11 years ago|reply
Is it really a problem that you have to be explicit in where you begin the top of the async/await operation? You can't just start the calls at a random point in code where you don't understand the threading situation and expect to get consistent results.
[+] youngthugger|11 years ago|reply
How does F# or C# async compare to GCD on Objective-C / Swift? Is it similar?
[+] Strilanc|11 years ago|reply
C#'s equivalent of GCD is, very roughly, Task. You can create tasks that compute results on the thread pool, give them continuations to run after, etc.

Async/await is an abstraction on top of tasks, where you write code with do-while loops and try-catch and it gets translated into GCD-esque continuation-using code. It lets you write the asynchronous code in an imperative style, so it looks more like the rest of the code you write in C#.
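Roughly, the analogy looks like this (illustrative sketch, not a literal mapping): dispatch_async onto a background queue corresponds to Task.Run, a completion block to ContinueWith, and async/await then layers imperative syntax over the same machinery.

```csharp
using System;
using System.Threading.Tasks;

class GcdAnalogy
{
    static void CallbackStyle()
    {
        Task.Run(() => 6 * 7)                          // "background queue"
            .ContinueWith(t =>
                Console.WriteLine($"got {t.Result}")); // "completion block"
    }

    static async Task AwaitStyle()
    {
        int result = await Task.Run(() => 6 * 7);      // same work...
        Console.WriteLine($"got {result}");            // ...in straight-line code
    }
}
```

The second version is what the compiler rewrites into continuation-passing code resembling the first.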

[+] the_mitsuhiko|11 years ago|reply
The analogy to COM apartments is completely on point. In fact, if you look at the PPL header files for C++/CX you will notice that the PPL system explicitly refers to COM apartments. Since IAsyncAction and friends can be used from C++/CX and .NET alike, they actually share many of the same principles.
[+] ziahamza|11 years ago|reply
C# already has basic support for computation expressions, which it implemented to support LINQ and later extended for the DLR, but I can't see why they couldn't extend it further for async tasks.