EarthLaunch | 10 months ago
In my research I recall some potential tradeoffs with SSE [1], but even there I concluded they were minor enough to consider SSE vs WS a wash [2], even for my uses. Looking back at my bookmarks, I see that you were present in the threads I was reading, how cool. A couple WS advantages I am now recalling:
SSE is one-way, so for situations with lots of client-sent data, a second connection will have to be opened (with overhead). I think this came up for me because if a player is sending many events per second, you end up needing WS. I guess you're saying to use UDP, which makes sense, but it has its own downsides (firewalls; WebRTC and WebTransport aren't ready).
Compression in SSE would be negotiated during the initial connection, I have to assume, so it wouldn't be possible to switch modes or mix in pre-compressed binary data without reconnecting or base64-ing binary. (My game sends a mix of custom binary data, JSON, and gzipped data which the browser can decompress natively.)
Edit: Another thing I'm remembering now is order of events. Because WS is a single connection and data stream, it avoids network-related race conditions; data is sent and received in the programmatically defined sequence.
1: https://news.ycombinator.com/item?id=43657717
2: https://rxdb.info/articles/websockets-sse-polling-webrtc-web...
andersmurphy | 10 months ago
With HTTP/2 and HTTP/3 it's all multiplexed over the same connection, and as far as your server is concerned, that up request/connection is very short-lived.
Yeah, mixed formats are probably a real use case for compression (like you said, once you commit to compression with SSE there's no switching during the connection). But then you still need to configure compression yourself with websockets. The main compression advantage of SSE is that it's not per message, it's for the whole stream. The implementations of compression with websockets I've seen have mostly been per-message compression, which is much less of a win (I'd get around 6:1, maybe 10:1 with the game example, not 200:1, and pay a much higher server/client CPU cost).
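To illustrate the stream-vs-message distinction: a minimal sketch (assuming a server that writes raw SSE frames with `Content-Encoding: gzip`) where one compressor object is reused for the whole stream, so later events can back-reference earlier ones instead of starting from an empty compression window each time:

```python
import zlib

def make_sse_compressor():
    # One compressor for the entire SSE stream. Its shared history
    # window is why repeated JSON structure across events compresses
    # far better than per-message compression, which resets context
    # on every message.
    comp = zlib.compressobj(wbits=31)  # wbits=31 -> gzip framing

    def compress_event(data, event_id=None):
        frame = ""
        if event_id is not None:
            frame += f"id: {event_id}\n"
        frame += f"data: {data}\n\n"
        # Z_SYNC_FLUSH emits a decodable flush point so the client can
        # decompress this event immediately without closing the stream.
        return comp.compress(frame.encode()) + comp.flush(zlib.Z_SYNC_FLUSH)

    return compress_event

compress_event = make_sse_compressor()
first = compress_event('{"player": 1, "x": 10, "y": 20}')
# The second event differs by a couple of characters, so it compresses
# to almost nothing: mostly back-references into the stream's history.
later = compress_event('{"player": 1, "x": 11, "y": 20}')
```

With per-message compression, `later` would pay the full cost again; here it is a fraction of the size of `first`, which is where ratios like 200:1 on repetitive state updates come from.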
Websockets have similar issues with firewalls and TCP. So in my mind if I'm already dealing with that I might as well go UDP.
As for ordering, that's part of the problem that makes websockets messy (with reconnects etc.). I prefer to build resilience into the system, so in the case of that demo I shared, if you lose your connection you automatically get the latest view on reconnect (there's no playback of events that needs to happen). SSE will automatically send the last received event ID up on reconnect (so you can play back missed events if you want; not my thing personally). I mainly use the event ID as a hash of the content: if the hash is the same, don't send any data, the client already has the latest state.
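A rough sketch of that event-ID-as-content-hash idea (names and shapes are mine, not from the demo): the server hashes the rendered view, uses the hash as the SSE `id:` field, and skips the write entirely when the client already has that hash (whether from the previous event or from the browser's `Last-Event-ID` header on reconnect):

```python
import hashlib
import json

def render_event(state, last_sent_id):
    """Return (sse_frame, event_id) for the latest view, or (None, id)
    if the client already has this exact state.

    last_sent_id is either the id of the previous event on this
    connection, or the Last-Event-ID header the browser sends when it
    reconnects an EventSource."""
    body = json.dumps(state, sort_keys=True)  # deterministic encoding
    event_id = hashlib.sha256(body.encode()).hexdigest()[:16]
    if event_id == last_sent_id:
        return None, event_id  # unchanged: send nothing
    return f"id: {event_id}\ndata: {body}\n\n", event_id
```

Because the ID is derived from content rather than a sequence number, there is nothing to replay: a reconnecting client either matches the current hash (send nothing) or it doesn't (send the whole latest view).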
That's by design; it's the way I build things with CQRS: up events never have to be ordered with down events. Think about a game loop: my down events are basically a render loop. They just return the latest state of the view.
If you want to order up events (rarely necessary), you can batch on the client to preserve order, or use a client timestamp/hash of the last event (if you want to get fancy), and the server orders and batches those events in sync with the loop, i.e. everything it got in the last X time (like blockchains/trading systems). This only gives you per-client ordering, not ordering across clients; for that you get into Lamport clocks etc.
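The server side of that batching could look something like this (a hypothetical sketch, not the actual implementation): up events are buffered as they arrive, and once per loop tick everything received in the window is drained, ordered per client by the client's own sequence number:

```python
from collections import defaultdict

class TickBatcher:
    """Buffer up-events and process them in sync with the game loop.

    Ordering is only guaranteed within a single client (by the seq the
    client attached); there is no cross-client ordering, which would
    require Lamport clocks or similar."""

    def __init__(self):
        self.pending = defaultdict(list)  # client_id -> [(seq, event)]

    def receive(self, client_id, seq, event):
        # Called whenever an up request arrives, in whatever order the
        # network delivers them.
        self.pending[client_id].append((seq, event))

    def drain(self):
        # Called once per loop tick: everything received since the
        # last tick, sorted per client by client sequence number.
        batch = {
            client_id: [e for _, e in sorted(events, key=lambda p: p[0])]
            for client_id, events in self.pending.items()
        }
        self.pending.clear()
        return batch
```

Network reordering within the window is harmless here: events that arrive out of order are put back in client order before the tick processes them, and nothing depends on delivery order between ticks.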
I've been burnt too many times by thinking websockets will solve network/race conditions for me (and then failing spectacularly), so I'd rather build the system to handle disconnects than rely on ordering guarantees that sometimes break.
Again though, my experience has made me biased. This is just my take.