
Performance Improvements in .NET 10

192 points | benaadams | 5 months ago | devblogs.microsoft.com

92 comments

throwaway13337|5 months ago

C# is definitely fast.

There are some benchmarks in the Benchmarks Game that I relied on in the past as a quick check, and they made C# look underwhelming vs Rust/C++.

For example:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

We see that the fastest C# version is 6 times slower than the Rust/C++ implementations.

But that's super deceiving, because those versions use arena allocators. Doing the same in C# (written this morning, actually) yielded only a ~20% difference vs the fastest Rust implementation.

This was with dotnet 9.
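
For illustration, a minimal sketch of the arena idea in C# (a toy example, not the code I benchmarked): nodes live in one pre-allocated array and are addressed by index, so the GC sees a single long-lived allocation instead of millions of small ones.

  using System;

  // Toy arena: one growable array owns every node; "pointers" are indices.
  struct Node { public int Left, Right; }

  sealed class NodeArena
  {
      private Node[] _nodes;
      private int _count;

      public NodeArena(int capacity) => _nodes = new Node[Math.Max(1, capacity)];

      public int Alloc(int left, int right)
      {
          if (_count == _nodes.Length)
              Array.Resize(ref _nodes, _nodes.Length * 2);
          _nodes[_count] = new Node { Left = left, Right = right };
          return _count++;
      }

      public ref Node Get(int index) => ref _nodes[index];
  }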

I think the model of using GC by default and managing the memory where it matters is the sanest approach. Requiring everything to be manually managed seems like a waste of time. C# is perfect for this: you manage memory only when you need to.

I like rust syntactically. I think C# is too object-oriented. But with a very solid standard lib, practical design, good tools, and speed when you need it, C# remains super underrated.

anonymars|5 months ago

Even way back when: https://devblogs.microsoft.com/oldnewthing/20060731-15/?p=30...

> The fact that Rico Mariani was able to do a literal translation of the original C++ version into C# and blow the socks off it is a testament to the power and performance of managed code. It took me several days of painful optimization to catch up, including one optimization that introduced a bug, and then Rico simply had to do a little tweaking with one hand tied behind his back to regain the lead. Sure, I eventually won but look at the cost of that victory

arresin|5 months ago

You summarise it perfectly. Exactly my thoughts as well.

Imagine if we had something with Rust's syntax but C#'s support and memory-management trade-off, with an escape hatch.

PaulHoule|5 months ago

This kind of stuff brings languages like C# and Java closer to Rust in performance. Thinking like the "borrow checker", it understands the scope of some objects, puts them on the stack, and avoids garbage collection and allocation overhead.
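
For instance, a sketch of the kind of code this helps (a hypothetical example; whether the optimization actually fires depends on the JIT version and inlining):

  using System;

  sealed record Point(double X, double Y);

  static class Demo
  {
      // p never escapes this method, so an escape-analysis-capable JIT can
      // keep it on the stack instead of the heap (not guaranteed).
      public static double Length(double x, double y)
      {
          var p = new Point(x, y);
          return Math.Sqrt(p.X * p.X + p.Y * p.Y);
      }
  }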

It keeps the unsung benefit of garbage collection for "programming in the large" in which memory allocation is treated as a global concern independent of everything else instead of a global concern that has to be managed locally in every line of code.

Rust's strategy is problematic for code reuse just as C/C++'s strategy is problematic. Without garbage collection a library has to know how it fits into the memory allocation strategies of the application as a whole. In general a library doesn't know if the application still needs a buffer and the application doesn't know if the library needs it, but... the garbage collector does.

Sure you can "RC all the things" but then you might as well have a garbage collector.

In the Java world we are still waiting for

https://openjdk.org/projects/valhalla/

gpderetta|5 months ago

GC is problematic for cross-language foundational libraries though (unless they run on the same VM of course).

SkiFire13|5 months ago

> Thinking like the "borrow checker", it understands the scope of some objects, puts them on the stack, and avoids garbage collection and allocation overhead.

On the other side, however, if you don't write code that the borrow checker would accept, you likely won't get these optimizations. And even for code it would accept, there's a chance the required analysis is too deep or complex for the escape analysis to work. Ultimately this is a nice speedup in practice, but not something I would rely on.
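
For example (hypothetical), a single store to a field is enough to defeat it:

  sealed class Cache
  {
      public object? Last;

      public int Measure(int x)
      {
          object boxed = x;   // boxing allocation
          Last = boxed;       // escapes via the field: must stay on the heap
          return boxed.GetHashCode();
      }
  }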

rudedogg|5 months ago

This is wishful thinking. It's the same as other layers we have, like auto-vectorization, where you don't know whether it's working without performance analysis. The complexity compounds, and reasoning about performance gets harder because the interactions between abstractions like these get more complex.

Also, the more I work with this stuff the more I think trying to avoid memory management is foolish. You end up having to think about it, even at the highest of levels like a React app. It takes some experience, but I’d rather just manage the memory myself and confront the issue from the start. It’s slower at first, but leads to better designs. And it’s simpler, you just have to do more work upfront.

Edit:

> Rust's strategy is problematic for code reuse just as C/C++'s strategy is problematic. Without garbage collection a library has to know how it fits into the memory allocation strategies of the application as a whole. In general a library doesn't know if the application still needs a buffer and the application doesn't know if the library needs it, but... the garbage collector does.

I should have noted that Zig solves this by making it the convention to pass an allocator into any function that allocates. So the boundaries/responsibilities become very clear.
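
A rough C# analogue of that convention (an illustrative sketch, not Zig): the callee asks for memory in its signature and the caller decides where it comes from.

  using System;
  using System.Buffers;

  class Demo
  {
      // The callee never allocates; it writes into caller-provided space.
      static void Reverse(ReadOnlySpan<int> src, Span<int> dst)
      {
          for (int i = 0; i < src.Length; i++)
              dst[i] = src[src.Length - 1 - i];
      }

      static void Main()
      {
          // The caller picks the strategy: stackalloc, a pool, or the heap.
          int[] rented = ArrayPool<int>.Shared.Rent(4);
          try
          {
              Reverse(stackalloc int[] { 1, 2, 3, 4 }, rented);
          }
          finally
          {
              ArrayPool<int>.Shared.Return(rented);
          }
      }
  }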

nly|5 months ago

Efficient memory allocation is part of a well-designed API.

Languages like C++ give you a tonne of options here, from passing scratch buffers into libraries, to passing in reusable containers, to move semantics, to type-erased primitives like std::pmr::memory_resource and std::shared_ptr.

qingcharles|5 months ago

The big benefit of this for my work is being able to run sites on smaller and cheaper boxes every year. It's not just the speed, but they've made huge strides in the last couple of versions in massively reducing the memory use of different object types.

This is only being compared to last year's v9, but if you compare against v7 from a couple of years ago, the changes are huge.

And this only reflects the changes to the underlying framework compilation, and doesn't factor in changes to say the Kestrel web server and static asset delivery that have taken a ton of load away.

Intel are also regularly checking in changes before they release new CPUs now so that the framework is ahead of their releases and takes advantage of new features.
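
That support typically surfaces through hardware intrinsics like these (a hand-written sketch of the pattern, not code from the framework):

  using System.Runtime.Intrinsics;
  using System.Runtime.Intrinsics.X86;

  static class SimdInfo
  {
      // The JIT folds each IsSupported check to a constant and deletes the
      // dead branches, so new instruction sets light up at zero dispatch cost.
      public static string BestPath() =>
          Avx512F.IsSupported             ? "AVX-512" :
          Avx2.IsSupported                ? "AVX2" :
          Vector128.IsHardwareAccelerated ? "128-bit SIMD" :
                                            "scalar";
  }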

yread|5 months ago

Amazing progress; some LINQ constructions were made 300 times faster. But reasoning about which code does heap allocations is getting more and more complicated. I guess intuition has to give way even more to benchmarking.

kg|5 months ago

(Disclosure: I get paid to work on .NET)

The good thing is that the older techniques to minimize allocations still work, and tools like refs and ref structs make it easier to write zero-allocation code than it used to be.
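
As a sketch of what that looks like (an illustrative example of mine, not framework code): a ref struct can hold spans directly, and the compiler guarantees the whole thing stays off the heap.

  using System;

  // ref structs live on the stack by rule, so this reader can wrap a
  // ReadOnlySpan<byte> and parse it without a single allocation.
  ref struct ByteReader
  {
      private ReadOnlySpan<byte> _remaining;

      public ByteReader(ReadOnlySpan<byte> data) => _remaining = data;

      public bool TryRead(out byte value)
      {
          if (_remaining.IsEmpty) { value = 0; return false; }
          value = _remaining[0];
          _remaining = _remaining.Slice(1);
          return true;
      }
  }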

But it's definitely harder to reason about whether optimizations will 'light up' for code that uses heap-allocated classes, closures, etc. than it was in the past, even if it's harder for a nice reason.

BenchmarkDotNet is a fantastic piece of tech at least, I have found it easier to benchmark in C# than in most other ecosystems I work with.
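
For anyone who hasn't tried it, a minimal benchmark looks roughly like this (sketch):

  using BenchmarkDotNet.Attributes;
  using BenchmarkDotNet.Running;

  [MemoryDiagnoser]   // also reports bytes allocated per operation
  public class StringBench
  {
      private readonly int[] _data = { 3, 1, 4, 1, 5, 9 };

      [Benchmark]
      public string Join() => string.Join(",", _data);
  }

  public class Program
  {
      public static void Main() => BenchmarkRunner.Run<StringBench>();
  }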

jsmith45|5 months ago

My view, which I suspect even Toub would agree with, is that if being allocation-free, or even just extremely low-allocation, is critical to you, then go ahead and use structs, stackalloc, etc., which guarantee no allocations.

That is far more guaranteed to work in all circumstances than these JIT optimizations, which could have some edge cases where they won't function as expected. If stopwatch allocations were a major concern (as opposed to just feeling like a possible perf bottleneck), then a modern ValueStopwatch struct that consists of two longs (accumulatedDuration, and startTimestamp, which if non-zero means the watch is running), plus calls into the Stopwatch static methods, is still simple and unambiguous.
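
Spelled out, that struct looks roughly like this (a sketch of the idea described above, not code from any shipped library):

  using System;
  using System.Diagnostics;

  public struct ValueStopwatch
  {
      private long _accumulatedTicks;  // from completed start/stop cycles
      private long _startTimestamp;    // non-zero means currently running

      public void Start() => _startTimestamp = Stopwatch.GetTimestamp();

      public void Stop()
      {
          if (_startTimestamp != 0)
          {
              _accumulatedTicks += Stopwatch.GetTimestamp() - _startTimestamp;
              _startTimestamp = 0;
          }
      }

      public TimeSpan Elapsed
      {
          get
          {
              long ticks = _accumulatedTicks;
              if (_startTimestamp != 0)
                  ticks += Stopwatch.GetTimestamp() - _startTimestamp;
              return TimeSpan.FromSeconds((double)ticks / Stopwatch.Frequency);
          }
      }
  }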

But in cases where being low/no-allocation is less critical, yet you are still concerned about the impact of the allocations, these sorts of optimizations certainly do help. Plus, they even help when you don't really care about allocations, just raw perf, since the optimizations improve raw performance too.

andix|5 months ago

Whenever I read about those yearly performance improvements, I wonder if there are some real-world benchmarks: applications benchmarked on every .NET version, with Framework 4.7 as a baseline and all the .NET versions since as a comparison.

verdie-g|5 months ago

In my company running maybe 20K servers on .NET, we get a 10-20% CPU decrease every time we upgrade to the next major.

jcmontx|5 months ago

Native AOT brings C# closer to Go; pretty nice feature. BTW, upgrading .NET apps for the last few years has been such a breeze. It won't take more than a few minutes, plus adjusting a couple of build pipelines.

cogman10|5 months ago

This sort of post really makes me appreciate what's gone into the JVM. A lot of these optimizations are things that the JVM has long implemented. That's not a knock on C# either. Besides the JVM, the only other place you'll see these sorts of optimizations is the likes of V8.

It makes me happy to see MS investing in C# like this. I love the notion of having competing VMed languages.

markjgx|5 months ago

I was wondering how the JVM and V8 stack up. Do you have a source for that claim? Genuinely curious.

Coming from the game dev world, I’ve grown more and more convinced that managed languages are the right move for most code. My reasoning is simple: most game developers don’t have the time or patience to deeply understand allocation strategy, span usage, and memory access patterns, even though those are some of the most performance-critical and time-consuming parts of programming to get right.

Managed languages hide a lot of that complexity. Instead of explaining to someone, “you were supposed to use this specialized allocator for your array and make sure your functions were array-view compatible”—something that’s notoriously tedious to guarantee in game engines given how few developers even think about array views—you just let developers write code and most of those problems go away.

I’m not saying everything should be managed. Core engine code should still live in the predictable, statically compiled world. But history shows it can work: projects like Jak and Daxter were written primarily in a custom LISPy scripting language, and even Ryujinx (RIP), the excellent Nintendo Switch emulator, is written entirely in C#.

Another strong technical reason is that managed JIT languages can profile at runtime and keep optimizing call sites based on actual usage patterns. Normally, developers would have to do this by hand or rely on PGO, which works but is painful to set up.

Industry standards make this harder to adopt since platforms like Sony still block JIT, but I think this is the direction we should be moving.

jitbit|5 months ago

We run a (pretty) big multi-tenant SaaS app on dotnet and I was literally able to downgrade our production servers from 4-core-16GB vms to 2-core-8GB on AWS when going from .NET 6 to .NET 8 (we only use LTS releases b/c compliance, don't ask). Super excited to try .NET 10. Also, almost zero breaking changes when bumping versions, which is very refreshing compared to the front-end world.

That said, while C# (and the dotnet runtime) are awesome, MS is doing it a disservice lately (poor tooling, Cursor/VSCode controversy etc. etc.) C# could've been so much bigger...

whalesalad|5 months ago

This reads like one of those recipe blogs where you first need to hear about great grandpappy's migration during the potato famine before you can get to the details on how to make cupcakes. First 5 paragraphs are just noise.

hvb2|5 months ago

Try to imagine the hours going into a post like this.

These posts are among the very best, digging into details explaining why things work and why changes were made.

Every time they get released I'm happy because no one killed it...

yread|5 months ago

To be fair, there are like 500 paragraphs of content after it.

jiggawatts|5 months ago

The first five paragraphs tell a very relevant story to drive a key point home: performance is often about many small things shaved down, not one giant silver bullet.

I’ve lost count of the number of times I’ve seen customers immediately “double down” on the size of their servers as a quick fix… and achieve nothing other than increasing their cloud provider’s revenue.

Performance comes from a long series of individually small fixes.

wiseowise|5 months ago

This whole post alone deserves a big kudos. Very detailed, true engineering culture.

kristianp|5 months ago

Interesting that they've introduced LeftJoin and RightJoin into LINQ with this version, 18 years after the first version of LINQ to SQL in .NET 3.5. Left and right joins aren't a particularly obscure thing.
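
For context, the long-standing workaround has been GroupJoin + SelectMany + DefaultIfEmpty; the new operator collapses it into one call. A sketch (the .NET 10 signature is assumed from the release notes, with the inner element defaulting when unmatched):

  using System.Linq;

  var people = new[] { (Id: 1, Name: "Ann"), (Id: 2, Name: "Bob") };
  var pets   = new[] { (OwnerId: 1, Pet: "Rex") };

  // Pre-.NET 10: emulate a left join by hand.
  var oldWay = people
      .GroupJoin(pets, p => p.Id, q => q.OwnerId, (p, qs) => (p, qs))
      .SelectMany(x => x.qs.DefaultIfEmpty(), (x, q) => (x.p.Name, q.Pet));

  // .NET 10: one operator; q is default for Bob, who has no pet.
  var newWay = people
      .LeftJoin(pets, p => p.Id, q => q.OwnerId, (p, q) => (p.Name, q.Pet));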

sidkshatriya|5 months ago

Now if only they could bring their focus on performance to windows 11 as a whole.

It’s just shocking how much faster vanilla Linux is compared to vanilla windows 11.

Edit: by vanilla Linux I mean an out-of-the-box installation of your typical distribution, e.g. Ubuntu, without any explicit optimisation or tuning for performance.

tracker1|5 months ago

What is "vanilla Linux"? Ubuntu+Gnome, Mint+Cinnamon, Fedora+KDE, Arch+COSMIC ..?

Each distro, platform and desktop manager and related apps are relatively different, though all work pretty well on modern hardware. I'm currently running PopOS COSMIC alpha with the 6.16-4 kernel via mainline. It's been pretty good, though there have been many rough edges regarding keyboard navigation/support in the new apps.

Varelion|5 months ago

If only they could fix the ecosystem's stability; I feel like anything written with C#'s staple packages becomes outdated considerably faster than with any of the other options.

homebrewer|5 months ago

TBH, ASP.NET Core has been the most stable web framework I've worked with in my life. If you have good test coverage, upgrading projects between major versions often takes minutes, because nothing, or almost nothing, gets broken. Some things might get deprecated, but the old way keeps working for years, and you can chip away at it slowly over the next year or two.

You still need to make sure that everything works, but that's what tests are for, and this has to be checked regardless of your tech stack.

Of course, they had a massive backwards-compat break when moving from the regular ASP.NET to ASP.NET Core; here's hoping nothing like that happens in the next 10-15 years.

PaulHoule|5 months ago

I haven't written C# professionally since the early 2010s, but back then the language had a big problem: the old non-generic container classes were not compatible with the Container<X> classes that were added when generics came to C#. This created an ugly split in the ecosystem, because if you were using Container<X> you could not pass it to an old API that expected a Container.

Java, on the other hand, had an implementation of generics that made a Container<X> just a Container, so you could mix your old containers with generic containers.

Now, Java's approach used type erasure and had some limitations, but the C# incompatibility made me suffer every day. That's the cultural difference between Java and a lot of other languages.

It's funny, because when I am coding Java and thinking just about Java, I really enjoy the type system and rarely feel limited by type erasure. And when I do, I can unerase types easily by:

- statically subclassing GenericType<X> to GenericType<ConcreteClass>

- dynamically by adding a type argument to the constructor

- mangling names (say you're writing out stubs to generate code to call a library, you can't use polymorphism to differentiate between

  Expression<Result> someMethod(Expression<Integer> x)
and

  Expression<Result> someMethod(Expression<Double> x)
since after erasure the signature is the same so you just gotta grit your teeth and mangle the method names)

but whenever I spend some time coding hard in a language that doesn't erase generic parameters I come back and I am not in my comfortable Java groove and it hurts.

CyanLite2|5 months ago

lol, whut?

Microsoft created ".NET Standard" for this. Literally anything that targets .NET Standard 1.0 should work from .NET Framework 4.5 (circa 2012) through modern-day 2025. You still get the (perf) benefits of the runtime upgrade, which is what the blog post is about.

giancarlostoro|5 months ago

Can you provide more examples? I've taken a Win32 application from .NET 3.5 and converted it to a .NET console application (it was a server with a GUI) running on .NET 8 with minimal friction; a lot of it wound up being me just importing the .NET Framework packages anew from NuGet.

What are you looking for out of .NET? The staple packages don't go away as often as in ecosystems like NodeJS.

9cb14c1ec0|5 months ago

If you are not on the bleeding edge of whatever new framework Microsoft is promoting today, the ecosystem is incredibly stable.

porridgeraisin|5 months ago

Yep. A breaking change that makes my code a gajillion times faster is still always just a dirty breaking change that I'll hate.