top | item 42755118


xgb84j | 1 year ago

In the article they talk about how printing an error from the anyhow crate in debug format creates a full backtrace, which leads to an OOM error. This happens even with 4 GB of memory.

Why does creating a backtrace need such a large amount of memory? Is there a memory leak involved as well?
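A minimal sketch of the mechanism in question (assumptions: this mirrors what anyhow does internally, using `std::backtrace` directly): capturing a backtrace only records raw addresses and is cheap, but *formatting* it forces symbolization, which is when the binary's debug sections get loaded — and, if they are compressed, decompressed — into memory.

```rust
use std::backtrace::Backtrace;

fn main() {
    // anyhow captures a Backtrace like this when an error is created
    // (std::backtrace, stable since Rust 1.65). Capture records only
    // raw frame addresses, so it is cheap.
    let bt = Backtrace::force_capture();

    // Formatting (e.g. printing an anyhow::Error with `{:?}`) resolves
    // addresses to symbol names. This is the step that reads the debug
    // info, and where the memory spike described in the article occurs.
    let rendered = format!("{bt}");
    assert!(!rendered.is_empty());
}
```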


CodesInChaos | 1 year ago

I don't think the 4 GiB instance actually ran into an OOM error. They merely observed a 400 MiB memory spike. The crashing instances were limited to 256 and later 512 MiB.

(Assuming the article wrote "Mib" when it meant "MiB". Used correctly, b = bit and B = byte.)

prerok | 1 year ago

Well, based on the article, if there were a memory leak they should have seen a steady increase in memory consumption, which was not the case.

The only explanation I can see (if their conclusion is accurate) is that the end result of symbolization is more than 400 MB of additional memory consumption (which is a lot, in my opinion), while the symbolization process itself requires more than 2 GB of additional memory (which is an incredible amount).

prerok | 1 year ago

The author replied with additional explanations, so it seems the additional 400 MB was needed because the debug symbols were compressed.

erebe__ | 1 year ago

Sorry if the article is misleading.

The first increase of the memory limit was not to 4 GiB but to roughly 300-400 MiB, and the OOM did happen again with that setting.

That led to a second increase, to 4 GiB, to be sure the app would not get OOM-killed when the behavior was triggered. We needed the app to stay alive/running so we could do memory profiling.

Regarding the 400 MiB increase: yeah, it is a lot, and it was a surprise to us too; we were not expecting such an increase. There are, I think, two reasons behind this.

1. This service is a gRPC server, which has a lot of generated code, so lots of symbols.

2. We compile the binary with debug symbols and a flag to compress the debug sections to avoid a huge binary, which may be part of this issue.
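For readers who want the same setup, a build sketch along these lines produces a release binary with compressed debug sections (assumptions: the exact flags are illustrative, not taken from the article, and `--compress-debug-sections` is a GNU ld/gold/lld linker option):

```shell
# Keep debug info in release builds by setting, in Cargo.toml:
#   [profile.release]
#   debug = true

# Then ask the linker to compress the debug sections:
RUSTFLAGS="-C link-arg=-Wl,--compress-debug-sections=zlib" \
    cargo build --release
```

The trade-off the thread describes: the binary on disk stays small, but symbolizing a backtrace must first decompress those sections into memory.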

delusional | 1 year ago

> Sorry if the article is misleading.

I don't think the article is misleading, but I do think it's a shame that all the interesting info is saved for this Hacker News comment. It would make for a more exciting article if you included more of the analysis along with the facts. Remember, as readers we don't know anything about your constraints/system.

CodesInChaos | 1 year ago

> We compile the binary with debug symbols and a flag to compress the debug sections to avoid a huge binary.

How big are the uncompressed debug symbols? I'd expect processing of uncompressed debug symbols to happen via a memory-mapped file, while compressed debug symbols probably need to be decompressed into anonymous memory.
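One way to answer that question is to inspect the binary's sections directly (sketch; the binary path is illustrative, and the `C` flag marking SHF_COMPRESSED sections is how GNU readelf reports compression):

```shell
# List sections and sizes; compressed debug sections carry a 'C' flag.
readelf -S --wide ./target/release/server | grep debug

# Rough total of debug-info on disk (compressed size if compressed):
size -A ./target/release/server | grep debug
```

Comparing the on-disk (compressed) size with the decompressed size would indicate how much anonymous memory symbolization needs.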

https://github.com/llvm/llvm-project/issues/63290

xgb84j | 1 year ago

Thank you for this in-depth reply! Your answer makes a lot of sense. Also thank you for writing the article!