top | item 7712766

SocketCluster – WebSockets that scale to 100K messages per second on 8 cores

93 points | BukhariH | 12 years ago | github.com

24 comments

[+] alecsmart1 | 12 years ago
From their readme:

"The test was only set to reach up to 100 concurrent connections (each sending 1000 messages per second) - Total of 100K messages per second."

So they had only 100 concurrent connections.
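A quick sanity check of those numbers (nothing framework-specific, just the arithmetic from the README quote above):

```javascript
// Back-of-envelope check of the README's benchmark figures.
const connections = 100;        // concurrent connections in the test
const msgsPerConnPerSec = 1000; // each connection sends 1000 messages/s
const totalMsgsPerSec = connections * msgsPerConnPerSec;
const cores = 8;

console.log(totalMsgsPerSec);         // 100000 messages/s total
console.log(totalMsgsPerSec / cores); // 12500 messages/s per core
```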

[+] denizozger | 12 years ago
Number of connections, message size, and frequency of messages sent are the three main parameters for measuring performance, and for some frameworks like Engine.IO, the number of connections seems to have the biggest impact (https://medium.com/node-js-javascript/b63bfca0539). It would be good to see benchmarks with a much higher number of connections, since non-blocking IO is usually why people choose the Node.js platform.
[+] teacup50 | 12 years ago
Is 100K mps on 8 cores considered high for node/websockets microbenchmarking of the socket path?

That doesn't seem like much from past experience writing high-throughput messaging code, and all this is doing is spitting out length-framed messages to a socket.
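For reference, "length-framed messages" here means something like the following sketch: a generic 4-byte big-endian length prefix followed by the payload (an illustration of the technique, not the actual WebSocket frame format):

```javascript
// Generic length-prefixed framing: 4-byte big-endian length, then payload.
function frame(payload) {
  const body = Buffer.from(payload, 'utf8');
  const header = Buffer.alloc(4);
  header.writeUInt32BE(body.length, 0); // length of the payload in bytes
  return Buffer.concat([header, body]);
}

function deframe(buf) {
  const len = buf.readUInt32BE(0);      // read the 4-byte length prefix
  return buf.subarray(4, 4 + len).toString('utf8');
}

const wire = frame('hello');
console.log(wire.length, deframe(wire)); // 9 hello
```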

[+] bhauer | 12 years ago
We do not (yet) measure WebSocket performance in our project (the TechEmpower framework benchmarks), but our "Plaintext" test is a rough analogue of a WebSocket ping-pong test. Our Plaintext test uses HTTP pipelining on a keep-alive connection. However, in our case, each request sends a couple hundred bytes of HTTP request headers and receives about the same in response headers before the "Hello world" payload.

We see approximately 600,000 of these HTTP "messages" per second on an i7-2600K workstation with 8 HT cores [1] from top performers such as Netty and Undertow, and these top performers are network-limited by our gigabit Ethernet.

We are presently using Undertow for a WebSocket project, and its performance there has been very good.

[1] http://www.techempower.com/benchmarks/#section=data-r9&hw=i7...

[+] bilbo0s | 12 years ago
It's not high at all.

I have never worked with node though... so maybe it's ... sort of ... "fast for node". Not fast in absolute terms.

[+] Pacabel | 12 years ago
That's what I was thinking, too. That's only about 12,500 messages per second per core. Rates like that weren't all that impressive on low-end server hardware a decade ago, so I'd hope it's even less impressive today on more modern hardware (or even VMs).
[+] zoomerang | 12 years ago
I'd expect more in the realm of millions of messages per second, at least from a Java solution.
[+] limsup | 12 years ago
Can you give some numbers from your past experience?
[+] lacksconfidence | 12 years ago
This is interesting. How does it compare to putting nginx (or another proxy) in front of multiple socket.io instances on the same machine?
[+] jonpress | 12 years ago
I wrote SocketCluster. I haven't tried that yet; it would definitely be interesting to test.
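For what it's worth, a hypothetical nginx front-end for that comparison might look something like this (the ports, upstream name, and instance count are all made up for illustration; `ip_hash` keeps a client pinned to one socket.io instance, which its handshake and fallback transports need):

```nginx
# Hypothetical: four socket.io instances on one machine behind nginx.
upstream socketio_nodes {
    ip_hash;                 # sticky sessions: same client IP, same instance
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
}

server {
    listen 80;

    location / {
        proxy_pass http://socketio_nodes;
        proxy_http_version 1.1;                  # required for WebSocket
        proxy_set_header Upgrade $http_upgrade;  # pass the upgrade handshake
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```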
[+] denizozger | 12 years ago
Why does each worker need a separate store process? It seems that on an 8-core machine the max worker count can only be 3 (1 master, 3 workers, 3 stores). If workers had in-memory stores, or at least connected to a shared Redis server, performance should increase with 4 more workers.
[+] jonpress | 12 years ago
They don't. You can have fewer stores than workers. In the benchmark we could in fact do with very few stores, because they are not really used. I'm sure you could fiddle with the worker, load balancer, and store counts to get better performance (it depends on the system's requirements).
[+] knodi | 12 years ago
Node is single-threaded.
[+] trungonnews | 12 years ago
Do you think Golang can handle more connections than NodeJS?