top | item 37616317

tantamounta | 2 years ago

Thanks for the video. I feel like there's a bit of conflation between the terms "performance (latency)" and "throughput", but I see the point. I'd be interested to see that latency graph (time marker 15:38) between platform and virtual threads in the case where the server doesn't manufacture a 100ms delay (say, in the case of a caching reverse proxy).

Also, millions of Java programmers thank you for not going with async/await. What an evil source-code virus it is (among other things).

I tried to watch it at 1.25x speed as I normally do, but you already talk at 1.25x speed, so no need!

pron | 2 years ago

To understand what happens when the server doesn't perform IO, apply Little's formula to the CPU only. Clearly, the maximum concurrency would be equal to the number of cores, which means that in that situation there would be no benefit to more threads than cores. In the graph, you would see the server fail once L reaches the number of cores. The average ratio between IO and CPU time as portions of the average request duration gives you an upper limit on how much more throughput you gain by having more threads. That's what I explain at 11:34.
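A back-of-envelope sketch of that reasoning, using assumed numbers (8 cores, 1ms of CPU time per request, and the video's 100ms of IO wait; none of these figures come from the thread itself):

```java
// Little's law: L = lambda * W, where L is concurrency, lambda is
// throughput, and W is the average time a request spends in the system.
public class LittlesLawSketch {
    public static void main(String[] args) {
        int cores = 8;          // assumed core count
        double cpuMs = 1.0;     // assumed CPU time per request
        double ioMs = 100.0;    // assumed IO wait per request

        // No IO: concurrency on the CPU is capped at the core count,
        // so the throughput ceiling is lambda = L / W = cores / cpuTime.
        double cpuOnlyReqPerSec = cores / (cpuMs / 1000.0);

        // With IO: many threads can wait concurrently, but the gain from
        // adding threads is bounded by the total-to-CPU time ratio.
        double maxGain = (cpuMs + ioMs) / cpuMs;

        System.out.printf("CPU-only throughput ceiling: %.0f req/s%n",
                cpuOnlyReqPerSec);
        System.out.printf("Upper bound on gain from extra threads: %.0fx%n",
                maxGain);
    }
}
```

With these numbers the CPU-only ceiling is 8000 req/s, and extra threads can buy at most a 101x throughput gain; a caching reverse proxy with near-zero IO sits at the other extreme, where extra threads buy essentially nothing.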

Also, both throughput and latency are performance metrics.