top | item 11057142

jbert | 10 years ago

Isn't the calculation (assuming, for simplicity, one CPU running at 100% solely on application load):

60 requests/sec => each request takes 1/60 of a CPU-second ≈ 16.6 ms of CPU time to process? (This is time-on-CPU, and doesn't include time-waiting-for-CPU. I think time-on-CPU is the number you want if you're looking at optimising your codebase.)
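That arithmetic can be sketched in a few lines (the 100%-busy single CPU is the simplifying assumption stated above, not a measured fact):

```python
# jbert's back-of-the-envelope: one CPU fully busy on application work,
# serving 60 requests/sec, so each request gets an equal slice of CPU.
requests_per_sec = 60
cpu_utilization = 1.0  # assumed: 100% busy, all of it on application load

cpu_time_per_request = cpu_utilization / requests_per_sec  # seconds on-CPU
print(f"{cpu_time_per_request * 1000:.1f} ms")  # ~16.7 ms per request
```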


deathanatos | 10 years ago

She does mention:

> each request was taking 6 / 60 = 0.1s = 100ms of time using-or-waiting-for-the-CPU.

(emphasis mine)

In my original read, I thought her core count was greater than her load, so that would also be her direct time-on-cpu. Now I'm not so sure.

And while time-waiting-for-CPU might not matter for optimizing the codebase, you probably still want to know that your serving processes are waiting for CPU; after all, that is the number your users' browsers are seeing (or at least, of the two, it is closer to that one). Such a result might indicate that a larger machine, or more machines, are required, for example.
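The quoted figure is just Little's law, W = L / λ: mean time in the system equals mean number in the system divided by arrival rate. A sketch with the numbers from the quote (load average 6, 60 requests/sec):

```python
# Little's law: W = L / lambda.
# L = 6 (load average, i.e. mean number of requests using-or-waiting
# for the CPU), lambda = 60 requests/sec, both from the quoted article.
load_average = 6
arrival_rate = 60  # requests per second

time_in_system = load_average / arrival_rate  # seconds, on-CPU + queued
print(f"{time_in_system * 1000:.0f} ms")  # 100 ms using-or-waiting
```

Note this W includes queueing time, which is why it can exceed the 16.6 ms on-CPU figure from the parent comment.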

cperciva | 10 years ago

Pretty much. If you want to get fancy you can make assumptions about the distribution of request arrival times and use the mean queue length of 6 to estimate the fraction of the time when the queue length drops to zero; you probably come out somewhere around 10% CPU idle time, so each request is taking 15 ms to process rather than 16.6 ms.

But cpu-time-per-request is definitely the number you want to pay attention to. If you cut that by a factor of 2, you won't decrease the load average from 6 to 3; you'll decrease it from 6 to less than 1.
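One concrete arrival-time assumption that lands in this ballpark is an M/M/1 queue, where the mean number in the system is L = ρ / (1 − ρ). A sketch (M/M/1 is my choice of distribution, not something the comment specifies; it gives ~14% idle and ~14.3 ms rather than the ~10% / 15 ms guessed above):

```python
# Under M/M/1, mean number in system L = rho / (1 - rho), so rho = L / (L + 1).
arrival_rate = 60  # requests/sec
load = 6           # mean number using-or-waiting for the CPU

rho = load / (load + 1)            # CPU utilisation, 6/7 ~= 0.857
idle_fraction = 1 - rho            # ~14% idle under M/M/1
service_time = rho / arrival_rate  # CPU-seconds per request, ~14.3 ms

# Halve the per-request CPU time: utilisation halves, and the load
# average drops far more than linearly because queueing collapses.
rho_half = rho / 2
load_half = rho_half / (1 - rho_half)  # ~0.75, i.e. below 1, not 3
print(idle_fraction, service_time * 1000, load_half)
```

This is the factor-of-2 point in miniature: utilisation goes from 6/7 to 3/7, and the load average falls from 6 to 0.75.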