item 45358527

Preparing for the .NET 10 GC

102 points| benaadams | 5 months ago |maoni0.medium.com

60 comments


orphea|5 months ago

For those who, like me, were left wondering what DATAS is, here is the link:

https://learn.microsoft.com/en-us/dotnet/standard/garbage-co...

gwbas1c|5 months ago

Yeah, I kept scrolling to the top to see if I overlooked something.

Then I realized, "oh, it's hosted on Medium." (I generally find Medium posts to be very low quality.) In this case, the author implies that they are on the .Net team, so I'm continuing to read.

(At least I hope the author actually is on the .Net team and isn't blowing hot air, because it's a Medium post and not something from an official MS blog.)

bob1029|5 months ago

> Maximum throughput (measured in RPS) shows a 2-3% reduction, but with a working set improvement of over 80%.

I have a hard time finding this approach compelling. The amount of additional GC required in their example seems extreme to me.

gwbas1c|5 months ago

This post would carry a lot more authority if it were on an official MS or .Net blog instead of Medium. (I typically associate Medium with low-quality blog entries and don't read them.)

pjmlp|5 months ago

The author is one of the main GC architects on .NET, so those of us in the know are aware of who she is.

Here is an interview with her,

https://www.youtube.com/watch?v=ujkSnko0JNQ

Having said this, I agree with you; the Aspire/MAUI architects do the same, and I really don't get why we have to search for this kind of blog post on other platforms instead of DevBlogs.

omnicognate|5 months ago

Microsoft have a terrible track record for moving and deleting technical content, to the extent I think I'd rather their developers host their articles almost anywhere else.

Maoni Stephens is the lead developer on the .NET garbage collector. An "About" entry would probably help, but she has a lot of name recognition in the .NET community and in the article it's clear from the first sentence that she's talking from the perspective of owning the GC.

nu11ptr|5 months ago

I don't generally find them low quality, but I do wish people wouldn't use it since I don't subscribe to it.

justin66|5 months ago

Or if the author used their real name.

giancarlostoro|5 months ago

More so if the author's profile picture weren't what looks like a meme cat. I can't exactly share this around without feeling like people will judge it based on that alone.

kg|5 months ago

Some translations for acronyms and terms from this post (sourced from the glossary in dotnet/runtime along with source code grepping):

GC: Garbage Collector

DATAS: Dynamic adaptation to application sizes

UOH: User Old Heap. I can't find an explanation for what this is.

LOH: Large Object Heap. This is where allocations over a size threshold go in .NET.

POH: Pinned Object Heap. Pinning is used to stop an object in the GC's memory from being moved around by the GC (for compaction).

ASP.net: Active Server Pages for .NET. This is a framework for building web applications using .NET, a successor to the classic ASP which was built on COM and scripting languages like JScript/VBScript.

Workstation / Server GC: .NET has two major GC modes which have different configurations for things like having per-cpu-core segregated heaps, doing background or foreground GCs, etc. This is designed to optimize for different workloads, like running a webserver vs a graphical application.

Ephemeral GC / Ephemeral generation: To quote the docs:

> For small objects the heap is divided into 3 generations: gen0, gen1 and gen2. For large objects there's one generation – gen3. Gen0 and gen1 are referred to as ephemeral (objects lasting for a short time) generations.

Essentially, generation 0 or gen0 is where brand new objects live. If the GC sees that gen0 objects have survived when it does a collection, it promotes them to gen1, and then they will eventually get promoted to gen2. Most temporary objects live and die in gen0.

Pause time: Most garbage collectors will need to pause the whole application in order to run, though they may not need the application to stay paused the whole time they are working. So pause time and % pause time track how much time the application spends paused for the GC to do its job; ideally these values are low.

BCD: Quoting the post:

> 1) introduced a concept of “Budget Computed via DATAS (BCD)” which is calculated based on the application size and gives us an upper bound of the gen0 budget for that size, which can approximate the generation size for gen0

Essentially, this is an estimate of how much space the ephemeral generation (temporary objects plus some extras) is using.

TCP: Quoting the post again:

> 2) within this upper bound, we can further reduce memory if we can still maintain reasonable performance. And we define this “reasonable performance” with a target Throughput Cost Percentage (TCP). This takes into consideration both GC pauses and how much allocating threads have to wait.
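The generation mechanics described above can be observed directly with the standard `System.GC` APIs. A minimal sketch (the exact generations printed depend on GC mode and timing, so treat the comments as typical behavior rather than guarantees):

```csharp
using System;

class GenerationDemo
{
    static void Main()
    {
        var obj = new object();
        // Freshly allocated small objects start in gen0.
        Console.WriteLine(GC.GetGeneration(obj));

        // Surviving a collection promotes the object.
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(obj)); // typically 1

        GC.Collect();
        Console.WriteLine(GC.GetGeneration(obj)); // typically 2
    }
}
```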

moomin|5 months ago

Good guide. I _think_ UOH is “unpinned object heap”, which is a variant of the large object heap that allows compaction. So the only things going into the LOH these days are both large and pinned. But I’m not 100% on this.
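Pinning, as defined in the glossary above, looks like this in practice. A minimal sketch using the standard `GCHandle` API (the buffer size is arbitrary):

```csharp
using System;
using System.Runtime.InteropServices;

class PinningDemo
{
    static void Main()
    {
        var buffer = new byte[256];

        // Pinning tells the GC not to move this array during compaction,
        // e.g. while native code holds a raw pointer into it.
        GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        try
        {
            IntPtr ptr = handle.AddrOfPinnedObject();
            Console.WriteLine(ptr);
        }
        finally
        {
            // Always unpin, or the GC can never compact around the object.
            handle.Free();
        }
    }
}
```

The POH exists so that long-lived pinned buffers, allocated via `GC.AllocateArray<byte>(length, pinned: true)` in .NET 5+, don't sit in the normal heaps blocking compaction.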

gwbas1c|5 months ago

One anecdote from working with .Net for over 20 years: I've had a few situations where someone (who isn't a programmer and/or doesn't work with .Net) insists that the application has a memory leak.

First, I explain that garbage collected applications don't release memory immediately. Then I get sucked into a wild goose chase looking for a memory leak that doesn't exist. Finally, I point out that the behavior they see is normal, usually to some grumbling.

From what I can tell, DATAS basically gives a .Net application a normal memory footprint. Otherwise, .Net is quite a pig when it comes to memory. https://github.com/GWBasic/soft_matrix, implemented in Rust, generally has very low memory consumption. An earlier version that I wrote in C# would consume gigabytes of memory (and often ran out of memory when run on Mono with the Boehm garbage collector).

---

> If startup perf is critical, DATAS is not for you

This is one of my big frustrations with .Net (although I tend to look at how dependency injection is implemented as a bigger culprit).

It does make me wonder: How practical is it to just use traditional reference counting and then periodically do a mark-and-sweep? I know it's a very different approach than .net was designed for. (Because they deliberately decided that dereferencing an object should have no computational cost.) It's more of a rhetorical question.

bob1029|5 months ago

To be fair, there is an entire class of GC/memory problems that aren't technically a leak but manifest in effectively the same way.

The most common one I see is LOH (Large Object Heap) fragmentation. When objects are allocated on the LOH, the runtime doesn't bother with moving them around anymore. There is a way to explicitly compact the LOH, but it can be a non-starter for a lot of applications.

https://learn.microsoft.com/en-us/dotnet/api/system.runtime....

I once exposed this as a button that a customer's IT department could click whenever they received an alert on memory utilization. The actual solution would have been to refactor the entire product to not pass gigantic blobs around all the time, but that wasn't in the cards for us.
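For reference, the one-shot LOH compaction the truncated link above points to is the `GCSettings.LargeObjectHeapCompactionMode` property. A minimal sketch:

```csharp
using System;
using System.Runtime;

class LohCompaction
{
    static void Main()
    {
        // Request that the next blocking gen2 collection also compact the
        // Large Object Heap. The setting resets to Default afterwards,
        // so this is a one-shot operation.
        GCSettings.LargeObjectHeapCompactionMode =
            GCLargeObjectHeapCompactionMode.CompactOnce;
        GC.Collect();
    }
}
```

This is expensive (a full blocking collection plus moving large objects), which is why it can be a non-starter for latency-sensitive applications.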

kg|5 months ago

One of the main problems with refcounting is that unless your compiler/JIT are able to safely, aggressively optimize out reference increment/decrements, you can spend a ton of CPU time pointlessly bumping a counter up and down every time you enter a new function/method. This has been a problem for ObjC and Swift applications in the past AFAIK, though both of those compilers do a great job of optimizing that stuff out where possible.

There are some other things that would probably be improvements coming along with refcounting though - you might be able to get rid of GC write barriers.

nu11ptr|5 months ago

> It does make me wonder: How practical is it to just use traditional reference counting and then periodically do a mark-and-sweep? I know it's a very different approach than .net was designed for. (Because they deliberately decided that dereferencing an object should have no computational cost.) It's more of a rhetorical question.

This is what CPython does. The trade-off is solidly worse allocator performance, however. You also have the reference-counting overhead, which is not trivial unless it is deferred.

There is always a connection between the allocator and the collector. If you use a compacting collector (which I assume .NET does), you get bump-pointer allocation, which is very fast. However, if you use a non-compacting collector (mark-and-sweep is non-compacting), you fall back to a normal free-list allocator (aka "malloc"), which has solidly higher overhead. You can see the impact of this (and of reference counting) in any benchmark that builds a tree (and is therefore highly contended on allocation). This is also why languages that use free-list allocation often have some sort of "arena" library, so they can have high-speed bump-pointer allocation in hot spots (and then free all that memory at once later on).

BTW, reference-counting and malloc/free performance also impact Rust, but given Rust's heavy reliance on the stack it often doesn't matter much (i.e., Rust simply does fewer heap allocations). For allocation-heavy code, many of us use MiMalloc, one of the better malloc/free implementations.
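The bump-pointer idea above can be sketched in a few lines. This is an illustrative toy arena, not any real allocator's implementation; a compacting GC's fast path is essentially the same pointer bump:

```csharp
using System;

// Toy arena: allocation is one bounds check plus one addition,
// which is why bump-pointer allocation beats a free-list search.
class BumpArena
{
    private readonly byte[] _buffer;
    private int _offset;

    public BumpArena(int capacity) => _buffer = new byte[capacity];

    public Span<byte> Allocate(int size)
    {
        if (_offset + size > _buffer.Length)
            throw new OutOfMemoryException("arena exhausted");
        var span = _buffer.AsSpan(_offset, size);
        _offset += size;
        return span;
    }

    // Freeing everything at once is what makes arenas attractive
    // for hot spots with many short-lived allocations.
    public void Reset() => _offset = 0;
}
```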

SideburnsOfDoom|5 months ago

> First, I explain that garbage collected applications don't release memory immediately. ... I point out that the behavior they see is normal

yes, this is an easily overlooked point: using memory that would otherwise go free is by design. It is often better to use up cheap, unused memory than to spend expensive CPU doing a GC. When memory is plentiful, as it often is, it is faster to just not run a GC yet.

You're not in trouble unless you run short of memory and a necessary GC does not free up enough. Only then can you call it an issue.

WorldMaker|5 months ago

> From what I can tell, DATAS basically makes a .Net application have a normal memory footprint.

In Server environments. DATAS is an upgrade to garbage collection in "Server mode". Server GC assumed it could be the only thing running on a machine and could use as much memory as it wanted and so would just easily over-allocate memory much more than what it immediately needed. (As the article points out, it would start at a large fixed amount of memory times the number of CPU cores.)

(As opposed to "Workstation GC" which has always tried to minimize memory consumption because it assumes it is running as only one of many apps on an end user system.)
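For reference, both knobs live in standard runtime configuration. A minimal runtimeconfig.json sketch (property names as of .NET 8/9; `System.GC.DynamicAdaptationMode` is the DATAS switch, and equivalent MSBuild properties and `DOTNET_*` environment variables also exist — check the docs for your version):

```json
{
  "runtimeOptions": {
    "configProperties": {
      "System.GC.Server": true,
      "System.GC.DynamicAdaptationMode": 1
    }
  }
}
```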

> (and often run out of memory when run on Mono with the Bohem garbage collector.)

Not exactly a fair comparison between .NET's actual GC and Mono's old simpler GC before the merger. (Today's .NET shares the same GC on Windows and Linux [and macOS].)

> This is one of my big frustrations with .net, (although I tend to look at how dependency injection is implemented as a bigger culprit.)

Startup times have gotten a lot better in recent versions of .NET, and AOT compiling has much improved (especially compared to the ancient ngen, for anyone old enough to remember needing that for startup optimization). While I agree .NET has seen a lot of terrible DI implementations, the out-of-the-box one in Microsoft.Extensions does a lot of things right now, including avoiding a lot of Reflection in standard usage, which was the big thing slowing down older DI systems. (I've seen people add Reflection-based "helpers" back on top of the Microsoft.Extensions DI, but at that point it's a user problem more than a DI problem.)

> It does make me wonder: How practical is it to just use traditional reference counting and then periodically do a mark-and-sweep?

Technically the "mark" of "mark-and-sweep" can be implemented as traditional reference counting (and some of the earliest "mark-and-sweep" implementations did just that). It still only solves half the problem, though. Also, the optimizations made by modern "mark" systems come from the fact that you don't need detailed counts; you just need tools equivalent to Bloom filters (what's the probability this object is referenced at least once?), and those can be much faster and more efficient to compute while using a lot less memory than reference counters.

If your concern is total memory consumption, traditional reference counting uses more space (if only just to store counts), and by itself doesn't solve fragmentation (the "sweep" part of "mark-and-sweep"). From a practical standpoint, combining "traditional reference counting" and a "mark-and-sweep" sounds to me like asking for a less efficient "mark-twice-and-sweep" algorithm.

daxfohl|5 months ago

Maybe I missed it, but is there a shadow mode to estimate the memory and perf impact without actually enabling the feature? Or better yet, a way to analyze existing dotnet 8 GC logs to understand the approx impact?

bilekas|5 months ago

It's incredibly frustrating that the author doesn't actually say "Garbage Collector (GC)". I'm aware of what it stands for, but something niggling in the back of my head had me second-guessing.

nu11ptr|5 months ago

Even worse: they don't explain what the DATAS acronym means. Seems like the author makes too many assumptions about the knowledge base of their reader IMO.

jcmontx|5 months ago

But don't you take a hit in performance by running the GC more often?

stonemetal12|5 months ago

Maybe, maybe not. If GC is O(n^2), then running it twice at n=5 (2 × 25 = 50 units of work) is much cheaper than running it once at n=10 (100 units).

NetMageSCW|5 months ago

Not necessarily if you have more (so smaller) heaps so each GC takes less time.

graycat|5 months ago

For the author, some definitions:

GC? -- Maybe "Garbage Collection", i.e., have some memory (mainly computer main memory) allocated, no longer need it (just now or forever), and want to release it, i.e., no longer have it allocated for its original purpose. By releasing it, we can make it available for other purposes, software threads, programs, virtual machines, etc.

DATAS? -- Not a spelling error, and not about any usual meaning of data; instead, as in

https://learn.microsoft.com/en-us/dotnet/standard/garbage-co...

for "Dynamic adaptation to application sizes"

So, we're trying to take actions over time in response to some inputs that are in some respects unpredictable.

Okay, what is the objective, i.e., the reason, what we hope to gain, or why bother?

And for the part that is somewhat unpredictable over time, that's one or more stochastic processes (or one multidimensional stochastic process?).

So, in broad terms, we are interested in stochastic optimal control. "Dynamic adaptation", is close and also close to one method, dynamic programming -- in an earlier thread at Hacker News, gave a list of references. Confession, wrote my applied math Ph.D. dissertation in that subject.

Hmm, how to proceed??? Maybe, (A) Know more about the context, e.g., what the computer is doing, what's to be minimized or maximized. (B) Collect some data on the way to knowing more about the stochastic processes involved.

For me, how to get paid? If tried to make a living from applied stochastic optimal control, would have died from starvation. Got the Ph.D. JUST to be better prepared as an employee for such problems and had to learn that NO one, not even one in the galaxy, cares as much as one photon of ~1 Hz light.

So, am starting a business heavily in computing and applied math. The code from Microsoft tools is all in .NET, ASP.NET, ADO.NET, etc. Code runs fine. The .NET software, via the VB.NET syntactic sugar, is GREAT for writing the code.

So, MUST keep up on Microsoft tools, and here just did that. Since .NET 10 is changing some versions of Windows, my reaction is (i) add a lot of main memory until GC is nearly irrelevant, (ii) in general, wait a few years to give Microsoft time to fix problems, i.e., usually be a few years behind the latest versions, i.e., to "Prepare for .NET 10", first wait a few years.

Experience: At one time, saw some server farms big on reliability. One site had two of everything, one for the real work and another to test the latest for bugs before being used for real work. Another had their own electrical power, Diesel generators ~30 feet high, a second site duplicating everything, ~400 miles away, with every site with lots of redundancy. In such contexts, working hard and taking risks trying to save money on main memory seem unwise.