It's mostly RAM allocated per client. Postgres, for example, is very much limited by this in supporting massive numbers of clients. Hence pgbouncer and other kinds of connection pooling, which allow a Postgres server to serve many more clients than it has RAM to accept directly.
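The pooling idea can be sketched in a few lines: cap the number of expensive backend slots and queue excess clients instead of allocating per-client state for all of them. This is a hypothetical `Pool` class for illustration, not pgbouncer's actual implementation or any real library's API:

```javascript
// Minimal sketch of the idea behind connection pooling: a hard cap on
// concurrent backend slots, with excess clients queued rather than each
// holding their own connection's worth of RAM. Hypothetical class.
class Pool {
  constructor(max) {
    this.max = max;      // cap on concurrent "backend connections"
    this.inUse = 0;
    this.waiting = [];   // clients queued for a free slot
  }
  acquire() {
    if (this.inUse < this.max) {
      this.inUse++;
      return Promise.resolve();
    }
    // at capacity: wait until someone releases a slot
    return new Promise(resolve => this.waiting.push(resolve));
  }
  release() {
    const next = this.waiting.shift();
    if (next) next();    // hand the slot straight to a queued client
    else this.inUse--;
  }
}
```

A thousand clients can then share, say, twenty backend slots, so the server's per-client memory cost is paid only for the clients actively being served.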
nine_k|2 months ago
If your Node app spends very little RAM per client, it can indeed service a great many of them.
A PHP script that does little more than checking credentials and invoking sendfile() could be adequate for the case of serving small files described in the article.
marcosdumay|2 months ago
Except that it wastes 2 or 3 orders of magnitude in performance and polls all the connections from a single OS thread, locking everything if it has to do extra work on any of them.
Picking the correct theoretical architecture can't save you if you bog down on every practical decision.
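The single-thread caveat is easy to demonstrate: any synchronous work on the event loop delays every other connection's callbacks. A minimal sketch:

```javascript
// Demonstrates the caveat: synchronous work on the single event-loop
// thread delays every other callback, i.e. every other connection.
const start = Date.now();
let timerDelay = -1;

setTimeout(() => {
  timerDelay = Date.now() - start; // scheduled for ~0 ms, fires much later
}, 0);

// simulate "extra work" on one connection: ~200 ms of blocking CPU
while (Date.now() - start < 200) { /* busy-wait */ }
// only now can the 0 ms timer (and any other client's I/O) be handled
```

The 0 ms timer cannot fire until the busy-wait finishes, just as every other client would sit idle behind the one doing extra work.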
mifreewil|2 months ago
I'm sure there is plenty of data/benchmarks out there, and I'll let that speak for itself, but I'll just point out that Node.js has two built-in core modules, worker_threads (threads) and cluster (processes), which are very easy to bolt onto an existing plain http app.
hinkley|2 months ago
Libuv now supports io_uring, but I'm fuzzy on how broadly Node.js is applying that fact. It seems to be a function-by-function migration with lots of rollbacks.