All systems crash "under memory pressure", but no details are provided showing what the actual failures are. You can write software that is very robust under memory pressure in a low-level language, for instance by forking into multiple worker processes. If one process dies from OOM, none of the other processes are affected, and the kernel does all the necessary cleanup nearly instantly.

I also don't understand why there would be excessive memory pressure under "extreme load" in the first place. When a server can't keep up with incoming requests, it doesn't need to keep spawning new workers/goroutines. You don't need to .accept() when you don't have the resources to process the incoming request.
Very strange article.
jiggawatts|9 months ago
It's a worthless article anyway, for the simple reason that there are no graphs, numbers, or reproducible experiments. The code snippets aren't whole programs, and the test harness setup isn't spelled out. The programs don't even appear to be doing equivalent work! It's hard to tell, because, for example, some snippets show the outermost loop, while for C++ only the per-request "workload" is shown, and not all of it.
Even just how the testing is done can make a huge difference, in my experience, especially when running synthetic workloads against garbage-collected languages. Most of them will never crash under normal workloads, but if you go out of your way to generate stupid amounts of memory allocation, practically none will be able to keep up.
The whole article is just nonsense, end-to-end, starting from the first content paragraph:
"Our test environment consisted of a cluster of 16 high-performance servers..."
Why a "cluster"? None of the workloads appear to be distributed or clustered applications! They're not testing Akka or Microsoft Orleans here, so why bother having more than one box?
What operating system was used?
What were the client systems? How many?
Were some of the languages "doing better" simply because they were slower at handling the test loads and hence failing slower?
Were the test clients correctly sending requests as fast as possible, or waiting sequentially for previous requests to complete before sending the next request?
Etc...
burnt-resistor|9 months ago
It seems like an ideological shitpost.