Rust: Investigating an Out of Memory Error

111 points | erebe__ | 1 year ago | qovery.com

29 comments

CodesInChaos|1 year ago

The analysis looks rather half-finished. They did not analyze why so much memory was consumed: whether it is a cache that persists after the first call, temporary working memory, or an accumulating memory leak, nor why it uses so much memory at all.

I couldn't find any other complaints about Rust backtrace printing consuming a lot of memory, which I would have expected if this were normal behaviour. So I wonder if there is anything special about their environment or use case?

I would assume that the same OOM problem would arise when printing a panic backtrace. Either their instance has enough memory to print backtraces, or it doesn't. So I don't understand why they only disable lib backtraces.
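(For reference on the lib/panic distinction: `RUST_BACKTRACE` governs panic backtraces, while `RUST_LIB_BACKTRACE`, when set, overrides it for library captures via `std::backtrace::Backtrace::capture`, the mechanism anyhow uses. That split is what makes "disable lib backtraces but keep panic backtraces" possible. A minimal sketch of that behavior:

```rust
use std::backtrace::{Backtrace, BacktraceStatus};

fn lib_backtraces_enabled() -> bool {
    !matches!(
        Backtrace::capture().status(),
        BacktraceStatus::Disabled | BacktraceStatus::Unsupported
    )
}

fn main() {
    // RUST_BACKTRACE governs panic backtraces. RUST_LIB_BACKTRACE, when set,
    // overrides it for library captures only (Backtrace::capture, which anyhow
    // uses), so panics can keep their traces while anyhow stops capturing.
    // Both variables are read once and cached, so set them before the first
    // capture. set_var is unsafe in edition 2024; fine here, no other threads.
    unsafe {
        std::env::set_var("RUST_BACKTRACE", "1");
        std::env::set_var("RUST_LIB_BACKTRACE", "0");
    }
    println!("lib backtraces enabled: {}", lib_backtraces_enabled());
}
```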

erebe__|1 year ago

Hello,

You can see my other comment https://news.ycombinator.com/item?id=42708904#42756072 for more details.

But yes, the cache does persist after the first call: the resolved symbols stay in the cache to speed up resolution on subsequent calls.

Regarding the why, it is mainly because

1. this app is a gRPC server and contains a lot of generated code (you can investigate binary bloat in Rust with https://github.com/RazrFalcon/cargo-bloat)

2. and that we ship our binary with debug symbols, using these options:

```
ENV RUSTFLAGS="-C link-arg=-Wl,--compress-debug-sections=zlib -C force-frame-pointers=yes"
```

For the panic, indeed, I had the same question on Reddit. For this particular service, we don't expect panics at all; it is just that by default we ship all our Rust binaries with backtraces enabled. And we have added an extra API endpoint that triggers a caught panic on purpose, so that for other apps we can be sure our sizing is correct.
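The deliberate, survivable panic described above reduces to very little code. A sketch of just the core (the function name is hypothetical and the HTTP wiring is omitted), using `std::panic::catch_unwind` so the process stays alive while the panic path runs:

```rust
// Core of a deliberate, survivable panic for sizing tests (hypothetical
// name; the real service exposes this behind an API endpoint).
fn trigger_test_panic() -> bool {
    let result = std::panic::catch_unwind(|| {
        // With RUST_BACKTRACE=1 the default hook symbolizes and prints a
        // trace here, exercising the same memory spike a real panic would.
        panic!("deliberate sizing-check panic");
    });
    // true if the panic occurred and was caught.
    result.is_err()
}

fn main() {
    println!("panic caught: {}", trigger_test_panic());
}
```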

xgb84j|1 year ago

In the article they talk about how printing an error from the anyhow crate in debug format creates a full backtrace, which leads to an OOM error. This happens even with 4 GB of memory.

Why does creating a backtrace need such a large amount of memory? Is there a memory leak involved as well?

CodesInChaos|1 year ago

I don't think the 4 GiB instance actually ran into an OOM error. They merely observed a 400 MiB memory spike. The crashing instances were limited to 256 and later 512 MiB.

(Assuming that the article incorrectly used Mib when it meant MiB. Used correctly, b = bit and B = byte.)

prerok|1 year ago

Well, based on the article, if there were a memory leak they should see a steady increase in memory consumption, which was not the case.

The only explanation I can see (if their conclusion is accurate) is that the end result of the symbolization is more than 400 MiB of additional memory consumption (which is a lot in my opinion), while the process of symbolization itself requires more than 2 GiB of additional memory (which is an incredible amount).

erebe__|1 year ago

Sorry if the article is misleading.

The first increase of the memory limit was not to 4 GiB, but to something roughly around 300-400 MiB, and the OOM did happen again with this setting.

Thus leading to a 2nd increase, to 4 GiB, to be sure the app would not get OOM-killed when the behavior gets triggered. We needed the app to stay alive/running for us to do the memory profiling.

Regarding the increase of 400 MiB: yeah, it is a lot, and it was a surprise to us too. We were not expecting such an increase. There are, I think, two reasons behind this.

1. This service is a gRPC server, which has a lot of generated code, so lots of symbols.

2. We compile the binary with debug symbols and a flag to compress the debug-symbol sections to avoid having a huge binary, which may be part of this issue.

malkia|1 year ago

Can't they print something that llvm-symbolizer would pick up offline?

dgrunwald|1 year ago

Yes, that is typically the way to go.

Collecting a call stack only requires unwinding information (which is usually already present for C++ exceptions / Rust panics), not full debug symbols. This gives you a list of instruction pointers. (on Linux, the glibc `backtrace` function can help with this)

Print those instruction pointers in a relative form (e.g. "my_binary+0x1234") so that the output is independent of ASLR.

The above is all that needs to happen on the production/customer machines, so you don't need to ship debug symbols -- you can ship `strip`ped binaries.

On your own infrastructure, keep the original un-stripped binaries around. We use a script involving elfutils' eu-addr2line with those original binaries to turn the module+relative_address stack trace into a readable symbolized stack trace. I wasn't aware of llvm-symbolizer yet; it seems it can do the same job as eu-addr2line. (There's also binutils' addr2line, but in my experience that didn't work as well as eu-addr2line.)
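The "relative form" step above can be computed from /proc/self/maps on Linux. A rough std-only sketch (the module name and the single-segment base lookup are simplifications; real code would match the executable's path across all of its segments):

```rust
use std::fs;

// Convert an absolute instruction pointer into "module+offset" form so it
// can be symbolized offline (e.g. with eu-addr2line or llvm-symbolizer)
// against the unstripped binary. Linux-only: reads /proc/self/maps.
fn relative_addr(ip: usize) -> Option<String> {
    let maps = fs::read_to_string("/proc/self/maps").ok()?;
    // Simplification: assume the executable's lowest-addressed segment is
    // the first line of the maps file (usually true for a PIE binary).
    let base_hex = maps.lines().next()?.split('-').next()?;
    let base = usize::from_str_radix(base_hex, 16).ok()?;
    // Subtracting the load base undoes ASLR: the offset is stable per build.
    Some(format!("my_binary+{:#x}", ip.wrapping_sub(base)))
}

fn main() {
    // Use main's own address as a stand-in for a captured frame pointer.
    if let Some(addr) = relative_addr(main as fn() as usize) {
        // Symbolize later with, e.g.: eu-addr2line -e my_binary <offset>
        println!("{}", addr);
    }
}
```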

BimJeam|1 year ago

I once had a faulty Python-based AI image generator running on my machine that used all 64 gigs of RAM and OOMed, with a memory dump written to the filesystem. That's no fun when it happens. But mostly these kinds of bugs are misconfigurations or bad code: never-ending while loops, whatever.

submeta|1 year ago

Ahh, I did this in Python before I learned about Cursor and Sourcegraph's Cody. I'd use a template where I provide a tree of my project structure, then put the code file contents into my template file, and end up with the full repo in one giant markdown file. This only worked for smaller projects, but it worked damn well for providing the full context to an LLM and then asking questions about my code :)
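The workflow described (project tree plus file contents concatenated into one markdown document) is only a few lines of code. A minimal Rust sketch, with all paths and names hypothetical:

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Recursively append each file under `dir` to `out` as a markdown section:
// a "## path" heading followed by the file's contents.
fn bundle(dir: &Path, out: &mut impl Write) -> std::io::Result<()> {
    let mut entries: Vec<_> = fs::read_dir(dir)?.collect::<Result<_, _>>()?;
    entries.sort_by_key(|e| e.path()); // deterministic ordering
    for entry in entries {
        let path = entry.path();
        if path.is_dir() {
            bundle(&path, out)?;
        } else {
            writeln!(out, "## {}\n", path.display())?;
            writeln!(out, "{}\n", fs::read_to_string(&path)?)?;
        }
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    let root = Path::new("src"); // hypothetical project directory
    let mut md = Vec::new();
    if root.is_dir() {
        bundle(root, &mut md)?;
    }
    print!("{}", String::from_utf8_lossy(&md));
    Ok(())
}
```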

loeg|1 year ago

It's kind of shocking that debug-formatting a backtrace allocates enough memory to OOM the process, though. What's going on there?