You don't need HTTP/2 to make SSE work well. Actually, the HTTP/2 TCP head-of-line blocking issue and all the workarounds for it probably make it harder to scale without technical debt.
Can you explain what you mean here? What was your peak active user count, what was peak per server instance, and why you think that beats anything else?
Your license makes some sense, but it seems to include a variable perpetual subscription cost via gumroad. Without an account (assuming I found the right site), I have no idea what you would be asking for. I recommend making it a little clearer on the landing page.
That said, it's very cool. Do you have a development blog for Meadow?
Couldn't find a license file in the root folder of that GitHub repo. I found a license in a cpp file buried in the sec folder. You should consider putting the licensing for this kind of project in a straightforward, easy-to-locate place.
It's a lot easier to scale than websockets, where you need a pub/sub solution and a controller to publish shared state changes. SSE is really simple in comparison.
SSEs are one of the standard push mechanisms in JMAP [1], and they're part of what make the Fastmail UI so fast. They're straightforward to implement, for both server and client, and the only thing I don't like about them is that Firefox dev tools make them totally impossible to debug.
It is, however, interesting to note that Fastmail’s webmail doesn’t use EventSource, but instead implements it atop fetch or XMLHttpRequest. An implementation atop XMLHttpRequest was required in the past because IE lacked EventSource, but had that been the only reason, it’d just have been done polyfill style; but it’s not. My foggy recollection from 4–5 years ago (in casual discussion while I worked for Fastmail) is that it had to do with getting (better?) control over timeout/disconnect/reconnect, probably handling Last-Event-ID, plus maybe skipping browser bugs in some older (now positively ancient and definitely unsupported) browsers. The source for that stuff is the three *EventSource.js files in https://github.com/fastmail/overture/tree/master/source/io.
The Fastmail UI is indeed snappy, except when it suddenly decides it has to reload the page, which seems to be multiple times a day these days (and always when I need to search for a specific email). Can you make it do what one of my other favorite apps does: when there's a new version available, make a small pop up with a reload button, but don't force a reload (until maybe weeks later)?
It's a beautifully simple, elegant, lightweight push-events option that works over standard HTTP. The main gotcha for maintaining long-lived connections is that servers and clients should implement their own heartbeat to detect failed connections and auto-reconnect, which was the only reliable way we've found to detect and resolve broken connections.
> the main gotcha for maintaining long-lived connections is that server/clients should implement their own heartbeat to be able to detect & auto reconnect failed connections
My experience with SSE is pretty bad. It's unreliable, doesn't support custom headers, and requires keep-alive hackery. In my experience WebSockets are so much better.
Also, ease of use doesn't really convince me. It's like 5 lines of code with socket.io to have working websockets, without all the downsides of SSE.
HTTP headers must be written before the body; so once you start writing the body, you can't switch back to writing headers.
Server-sent events appear to me to just be chunked transfer encoding [0], with the data structured in a particular way (at least from the perspective of the server) in this reference implementation; tl;dr, it's a stream.
Mind expanding on your experience and how websockets are more reliable than SSE? One of the main benefits of SSE is reliability from running on plain HTTP.
The biggest drawback with SSE, even when unidirectional communication is sufficient, is:
> SSE is subject to limitation with regards to the maximum number of open connections. This can be especially painful when opening various tabs as the limit is per browser and set to a very low number (6).
This isn’t a problem with HTTP/2. You can have as many SSE connections as you want across as many tabs as the user wants to use. Browsers multiplex the streams over a handful of shared HTTP/2 connections.
If you’re still using HTTP/1.1, then yes, this would be a problem.
It used to be 2 sockets per client, so now it's 6?
Well, it's a non-problem: if you need more bandwidth than one socket in each direction can provide, you have much bigger problems than the connection limit, which you can just ignore.
Another way to solve it could be using a BroadcastChannel to communicate between tabs, do some kind of leader election to figure out which one should start the EventSource, and then have the leader relay the events over the channel.
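A rough sketch of that leader-election relay, assuming modern browser APIs; the channel name, message shapes, and `pickLeader` rule (lowest id wins) are all arbitrary illustrative choices, not a standard protocol.

```javascript
// Each tab announces itself on a BroadcastChannel; the tab with the
// lowest id opens the single EventSource and relays events to the rest.
function pickLeader(ids) {
  return [...ids].sort()[0]; // smallest id wins the election
}

function startTab() {
  const myId = Math.random().toString(36).slice(2);
  const ids = new Set([myId]);
  const bus = new BroadcastChannel("sse-relay");

  bus.onmessage = ({ data }) => {
    if (data.type === "hello") {
      ids.add(data.id);
      bus.postMessage({ type: "roster", id: myId }); // tell the newcomer about us
    }
    if (data.type === "roster") ids.add(data.id);
    if (data.type === "event") handleEvent(data.payload); // relayed by the leader
  };
  bus.postMessage({ type: "hello", id: myId });

  // After a short settling period, the winner opens the single EventSource.
  setTimeout(() => {
    if (pickLeader(ids) === myId) {
      const es = new EventSource("/events");
      es.onmessage = (e) => {
        handleEvent(e.data); // deliver locally
        bus.postMessage({ type: "event", payload: e.data }); // and to other tabs
      };
    }
  }, 250);
}

function handleEvent(payload) {
  console.log("event:", payload);
}
```

A real implementation would also need to re-run the election when the leader tab closes (e.g. via a heartbeat on the channel), which is omitted here.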
We moved away from WebSockets to SSE, then realised it wasn't making things any better. In fact, it made things worse, so we switched back to WebSockets and worked on scaling them. SSE will work much better for other cases; it just didn't work out for ours.
The first reason was that broadcasting meant looping through an array of connections. We had around 2000 active connections and needed less than 1000ms latency. With WebSocket, even though we faced connection drops, clients received data on time; with SSE, it took many seconds to reach some clients. Since the data was time-critical, WebSocket seemed much easier to scale for our purposes. Another issue was that SSE is more of an idea you implement on top of HTTP APIs, so it doesn't have as much support around it as WS. Things like rooms, client IDs, etc. needed to be managed manually, which was quite a big task by itself. A few other minor reasons combined made us switch back to WS.
I think SSE will suit much better for connections where bulk broadcast is rare, like shared docs editing, or showing stuff like "1234 users are watching this product". And keep in mind that all this is coming from a mediocre full-stack developer with 3 YOE, so take it with a grain of salt.
Your write-up sounds like your issues with SSE stemmed from the framework/platform/server-stack you're using rather than of any problems inherent in SSE.
I haven't observed any latency or scaling issues with SSE - on the contrary: in my ASP.NET Core projects, running behind IIS (with QUIC enabled), I get better scaling and throughput with SSE compared to raw WebSockets (and still-better when compared to SignalR), though latency is already minimal so I don't think that can be improved upon.
That said, I do prefer using the existing pre-built SignalR libraries (both server-side and client-side: browser and native executables) because the library's design takes away all the drudgery.
As the other comment says, there's nothing inherent to SSE that would have made it slower than websockets. Ultimately they are both just bytes being sent across a long lived tcp connection.
Sounds like the implementation you were using was introducing the latency.
One example where I found it to be not the perfect solution was a web turn-based game.
SSE was perfect for pushing game state to all clients, but to get good latency from the player's point of view, anything the player did went through a normal ajax HTTP call.
Eventually I had to switch to uglier websockets and keep the connection open.
With HTTP/2, the browser holds a TCP connection open that has various streams multiplexed on top. One of those streams would be your SSE stream. When the client makes an AJAX call to the server, it would be sent through the already-open HTTP/2 connection, so the latency is very comparable to websocket — no new connection is needed, no costly handshakes.
With the downsides of HTTP/1.1 being used with SSE, websockets actually made a lot of sense, but in many ways they were a kludge that was only needed until HTTP/2 came along. As you said, communicating back to the server in response to SSE wasn’t great with HTTP/1.1. That’s before mentioning the limited number of TCP connections that a browser will allow for any site, so you couldn’t use SSE on too many tabs without running out of connections altogether, breaking things.
I think it comes down to whether your communication is more oriented towards sending than receiving. If the clients receive way more than they send, then SSE is probably fine, but if it's truly bidirectional then it might not work as well.
SSEs had a severe connection limit, something like 4 connections per domain per browser (IIRC), so if you had four tabs open then opening new ones would fail.
Browsers also limit the number of websocket connections. But, if you're using HTTP/2, as you should be, then the multiplexing means that you can have effectively unlimited SSE connections through a limited number of TCP connections, and those TCP connections will be shared across tabs.
(There's one person in this thread who is just ridiculously opposed to HTTP/2, but... HTTP/2 has serious benefits. It wasn't developed in a vacuum by people who had no idea what they were doing, and it wasn't developed aimlessly or without real world testing. It is used by pretty much all major websites, and they absolutely wouldn't use it if HTTP/1.1 was better... those major websites exist to serve their customers, not to conspiratorially push an agenda of broken technologies that make the customer experience worse.)
Did research on SSE a short while ago. Found out that the mimetype "text/event-stream" was blocked by a couple of anti-virus products. So that was a no-go for us.
It's not blocked. It's just that some very badly written proxies can try to buffer the "whole" response, and SSE is technically a never-ending file.
It's possible to detect that, and fall back to long polling. Send an event immediately after opening a new connection, and see if it arrives at the client within a short timeout. If it doesn't, make your server close the connection after every message sent (connection close will make AV let the response through). The client will reconnect automatically.
Or run:
while(true) alert("antivirus software is worse than malware")
These days I feel like the only way to win against poorly designed antiviruses and firewalls is to—ironically enough—behave like malware and obfuscate what's going on.
I’m a huge fan of SSE. In the first chapter of my book Fullstack Node.js I use it for the real-time chat example because it requires almost zero setup. I’ve also been using SSE on https://rambly.app to handle all the WebRTC signaling so that clients can find new peers. Works great.
This is really interesting! I wonder why it never really took off, whereas websockets via Socket.IO/Engine.io did.
At NodeBB, we ended up relying on websockets for almost everything, which was a mistake. We were using it for simple call-and-response actions, where a proper RESTful API would've been a better (more scalable, better supported, etc.) solution.
In the end, we migrated a large part of our existing socket.io implementation to use plain REST. SSE sounds like the second part of that solution, so we can ditch socket.io completely if we really wanted to.
I have used SSEs extensively, I think they are brilliant and massively underused.
The one thing I wish they supported was a binary event data type (mixed in with text events), so I could, in my case, send image data as an event. The only way to do it currently is as a Base64 string.
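In the meantime, the Base64 workaround looks roughly like this (a sketch using Node's `Buffer`; a browser client would decode with `atob` into a `Uint8Array` instead):

```javascript
// SSE event payloads are text, so binary data has to be Base64-encoded,
// at the cost of roughly 33% size overhead on the wire.
function encodeBinaryEvent(bytes) {
  return Buffer.from(bytes).toString("base64"); // goes into the "data:" field
}

function decodeBinaryEvent(b64) {
  return new Uint8Array(Buffer.from(b64, "base64"));
}
```

The client would then do the reverse in its `onmessage` handler, e.g. `decodeBinaryEvent(event.data)` for an event known to carry image bytes.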
Personally I use MQTT over websockets; paho[0] is a good JS library. It supports last will for disconnects, and the message-queue design makes it easy to reason about and debug. There are also a lot of MQ brokers that will scale well.
ESPHome (an easy to use firmware for ESP32 chips) uses SSE to send sensor data to subscribers.
I made use of that in Lunar (https://lunar.fyi/#sensor) to be able to adjust monitor brightness based on ambient light readings from an external wireless sensor.
At first it felt weird that I have to wait for responses instead of polling with requests myself, but the ESP is not a very powerful chip and making one HTTP request every second would have been too much.
SSE also allows the sensor to compare previous readings and only send data when something changed, which removes some of the complexity with debouncing in the app code.
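That send-on-change logic can be sketched as a small threshold filter (the names and threshold semantics are my own illustration, not ESPHome's actual code):

```javascript
// Returns a function that decides whether a new reading is worth emitting.
// Readings within `threshold` of the last emitted value are suppressed,
// so the subscriber never needs to debounce on its end.
function makeChangeDetector(threshold) {
  let last;
  return (reading) => {
    if (last !== undefined && Math.abs(reading - last) < threshold) {
      return false; // no meaningful change: stay silent
    }
    last = reading; // remember the value we actually emitted
    return true; // caller should push an SSE event with this reading
  };
}
```

The sensor side would call this before writing each SSE frame; the first reading always passes, and later ones pass only when they drift past the threshold.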
I had the pleasure of being forced to use SSE due to working with a proxy that didn't support websockets.
Personally I think it's a great solution for longer running tasks like "Export your data to CSV" when the client just needs to get an update that it's done and here's the url to download it.
I don't see any downsides of SSE presented here. My experience is that they're nice in theory but the devil's in the details, the biggest issue being that you basically need HTTP/2 to make them practical.
That's correct, just `reverse_proxy` alone is enough. The request matcher is only needed if you want to make the same request paths get proxied to your HTTP upstream if it doesn't have those websocket connection headers. But if you're always using a path like `/ws` for websockets then you don't need to match on headers.
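For reference, a hedged sketch of what this looks like in a Caddyfile (assuming Caddy v2; `example.com`, the backend addresses, and the `/ws` path are placeholders):

```
example.com {
	# Option 1: route by the websocket upgrade headers
	@websockets {
		header Connection *Upgrade*
		header Upgrade websocket
	}
	reverse_proxy @websockets localhost:9000

	# Option 2: if all websocket traffic lives under /ws,
	# a plain path matcher is enough and no header matching is needed:
	# reverse_proxy /ws/* localhost:9000

	# Everything else goes to the plain HTTP upstream
	reverse_proxy localhost:8080
}
```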
This is why I really really like Hotwire Turbo[0] which is a back-end agnostic way to do fast and partial HTML based page updates over HTTP and it optionally supports broadcasting events with WebSockets (or SSE[1]) only when it makes sense.
So many alternatives to Hotwire want to use WebSockets for everything, even for serving HTML from a page transition that's not broadcast to anyone. I share the same sentiment as the author in that WebSockets have real pitfalls and I'd go even further and say unless used tastefully and sparingly they break the whole ethos of the web.
HTTP is a rock solid protocol and super optimized / well known and easy to scale since it's stateless. I hate the idea of going to a site where after it loads, every little component of the page is updated live under my feet. The web is about giving users control. I think the idea of push based updates like showing notifications and other minor updates are great when used in moderation but SSE can do this. I don't like the direction of some frameworks around wanting to broadcast everything and use WebSockets to serve HTML to 1 client.
I hope in the future Hotwire Turbo alternatives seriously consider using HTTP and SSE as an official transport layer.
One problem I had with WebSockets is you can not set custom HTTP headers when opening the connection. I wanted to implement a JWT based authentication in my backend and had to pass the token either as a query parameter or in a cookie.
Anyone knows the rationale behind this limitation?
The workaround/hack is to send your token via the "Sec-WebSocket-Protocol" header, which is the one header you're allowed to set in browser when opening a connection. The catch is that your WebSocket server needs to echo this back on a successful connection.
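A hedged sketch of both halves of that workaround (the `access_token` marker protocol is a convention you choose, not a standard):

```javascript
// Browser side: the subprotocol list is the only "header" JS can influence.
//   const ws = new WebSocket("wss://example.com/ws", ["access_token", jwt]);

// Server side: the offered subprotocols arrive as a comma-separated
// Sec-WebSocket-Protocol header value; pick out the real protocol and
// the smuggled token. The server must echo "access_token" back on the
// 101 response or the browser will close the connection.
function parseAuthSubprotocol(headerValue) {
  const offered = headerValue.split(",").map((s) => s.trim());
  const i = offered.indexOf("access_token");
  if (i === -1 || i + 1 >= offered.length) return null; // no token offered
  return { protocol: "access_token", token: offered[i + 1] };
}
```

Whatever websocket server library you use will have its own hook for selecting the accepted subprotocol; the parsing shown here is the portable part.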
WebSockets support compression (ofc, the article goes on to detail this & point out flaws. I'd argue that compression is not generally useful in web sockets in the context of many small messages, so it makes sense to be default-off for servers as it's something which should be enabled explicitly when necessary, but the client should be default-on since the server is where the resource usage decision matters)
I don't see why WebSockets should benefit from HTTP. Besides the handshake to set up the bidirectional channel, they're a separate protocol. I'll agree that servers should think twice about using them: they necessitate a lack of statelessness, and HTTP has plenty of benefits for most web use cases.
Still, this is a good article. SSE looks interesting. I host an online card game openEtG, which is far enough from real time that SSE could potentially be a way to reduce having a connection to every user on the site
1) More complex and binary, so you cannot debug them as easily, especially in production and especially if you use HTTPS.
2) The implementations don't parallelize the processing; with Comet-Stream + SSE you just need to find an application server that has concurrency and you are set to scale across the machine's cores.
3) WebSockets still have more problems with firewalls.
Is it worth upgrading a long polling solution to SSE? Would I see much benefit?
What I mean by that is client sends request, server responds in up to 2 minutes with result or a try again flag. Either way client resends request and then uses response data if provided.
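That client loop can be sketched like so (`pollOnce`, `handle`, and the `retry` flag shape are assumptions matching the description above, not a real API):

```javascript
// Long-polling client: issue a request, wait up to ~2 minutes for the
// server to answer with either data or a try-again flag, then re-issue.
async function longPollLoop(pollOnce, handle, { stop } = {}) {
  while (!stop || !stop()) {
    const res = await pollOnce(); // server holds this open until it has news
    if (!res.retry) handle(res.data); // act only when real data arrived
    // either way, loop around and immediately open the next request
  }
}
```

Moving to SSE mostly collapses this loop into a single held-open response: instead of one request per event, the server pushes many events down one stream and the browser's `EventSource` handles reconnection.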
The most compatible technique is long polling (with a re-established connection after X seconds if no event). Works surprisingly well in many cases and is not blocked by any proxies.
Long-polling is blocked to almost exactly the same extent as comet-stream and SSE. The only thing you have to do is push more data in the response so that the proxy is forced to flush it!
Since IE7 is no longer used we can bury long-polling for good.
With WebTransport around the corner, I don't think it's worth investing time in learning what seems to me an obsolete technology. I can understand it for big existing projects built on SSE that don't want to pay the cost of upgrading/changing, but for anything new I can't be bothered, since websockets work well enough for my use cases.
What worries me though is the trend of dismissal of newer technologies as being useless or bad and the resistance to change.
Around the corner? There seems to be nothing about this in any browser. [0] That would put this what, five years out before it could be used in straightforward fashion? Please be practical.
WebTransport seems like it will be significantly lower level and more complex to use than SSE, both on the server and the client. To say that this "obsoletes" SSE seems like a serious stretch.
SSE runs over HTTP/3 just as well as any other HTTP feature, and WebTransport is built on HTTP/3 to give you much finer grained control of the HTTP/3 streams. If your application doesn't benefit significantly from that control, then you're just adding needless complexity.
I managed to get through almost all middlemen by using 2 tricks:
1) Push a large amount of data in the pull (the comet-stream/SSE never-ending request) response to trigger the middlebox to flush the data.
2) Using SSE instead of just Comet-Stream, since middleboxes will see the header and realize this is going to be real-time data.
We had a 99.6% success rate on the connection from 350.000 players all over the world (even satellite connections in the Pacific and modems in Siberia), which is a world record for any service.
This seems fairly cool, and I appreciate the write-up, but god, I hate it so much when people write code samples that try to be fancy and use non-code characters. Clarity is much more important than aesthetics when it comes to code examples; if I'm trying to understand something I've never seen before, having a bunch of extra nonexistent symbols does not help.
Are you referring to the `!==` and `=>` in their code being converted to what appears to be a single symbol?
Upon further inspection, it looks like the actual code on the page is `!==` and `=>` but the font ("Fira Code") seems to be somehow converting those sequences of characters into a single symbol, which is actually still the same number of characters but joined to appear as a single one. I had no idea fonts could do that.
Which characters, the funky '≠'? I've seen those pop up a few other times recently, which makes me wonder if there's some editor extension that just came out that maps != and !== to single glyphs.
Can someone give a brief summary of how this differs from long polling? It looks very similar, except it has a small layer of formalized event/data/id structure on top. Are there any differences in the lower connection layers, or any added support by browsers and proxies given some new headers?
The underlying mechanism is effectively the same: a long-running HTTP response stream. However, long-polling is commonly implemented as silence until an event comes in, followed by a new request to wait for the next event, whereas SSE sends you multiple events per request.
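To make that difference concrete, here's a toy (deliberately non-spec-complete) parser showing how a single SSE response body carries many events, separated by blank lines:

```javascript
// Parse a received chunk of an event-stream body into discrete events.
// Real EventSource does this incrementally; this sketch assumes the
// chunk ends on an event boundary.
function parseEventStream(text) {
  const events = [];
  for (const block of text.split("\n\n")) {
    const ev = { data: [] };
    for (const line of block.split("\n")) {
      if (line.startsWith("data:")) ev.data.push(line.slice(5).trimStart());
      else if (line.startsWith("event:")) ev.event = line.slice(6).trimStart();
      else if (line.startsWith("id:")) ev.id = line.slice(3).trimStart();
    }
    // Multiple data: lines in one block join into a single payload.
    if (ev.data.length) events.push({ ...ev, data: ev.data.join("\n") });
  }
  return events;
}
```

With long polling, each of those blocks would have cost a full request/response round trip; with SSE they all ride one response.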
> RFC 8441, released on September 2018, tries to fix this limitation by adding support for “Bootstrapping WebSockets with HTTP/2”. It has been implemented in Firefox and Chrome. However, as far as I know, no major reverse-proxy implements it.
I have always preferred SSE to WebSockets. You can do a _lot_ with a minuscule amount of code, and it is great for updating charts and status UIs on the fly without hacking extra ports, server daemons and whatnot.
I have investigated SSE for https://fiction.live a few years back but stayed with websockets. Maybe it's time for another look. I pay around $300 a month for the websocket server, it's probably not worth it yet to try to optimize that but if we keep growing at this rate it may soon be.
I usually use SSE for personal projects because it's much simpler than WebSockets (not that those aren't also simple), and most of the time my web apps just need to listen for something coming from the server, not bidirectional communication.
EventSource has been around for eons, and is what the precursor to webpack-dev-server used for HMR events. It had the advantage of supporting ancient browsers since the spec has been around a long time and even supported by oldIE.
I think SSE might make a lot of sense for Serverless workloads? You don't have to worry about running a websocket server, any serverless host with HTTP support will do. Long-polling might be costlier though?
This is what I have been telling people for years, but it's hard to get the word out there. Usually every dev just reflexively reaches for websockets when anything realtime or push-related comes up.
The TCP stack can give you that info if you are lucky in your topology, but generally you cannot rely on this working 100%.
The way I solve it is to send "noop" messages at regular intervals, so that the socket write will return -1; then I know something is off and reconnect.
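A minimal Node-flavoured sketch of that noop heartbeat on the server side (`res` is assumed to be a writable HTTP response; the names are illustrative):

```javascript
// Lines starting with ":" are comments in the SSE framing; clients ignore
// them, so they make a free heartbeat payload.
function heartbeatFrame() {
  return ": noop\n\n";
}

// Periodically write a heartbeat; a write error is how the server learns
// the socket is actually dead, since TCP alone won't reliably tell it.
function startHeartbeat(res, intervalMs, onDead) {
  const timer = setInterval(() => {
    res.write(heartbeatFrame(), (err) => {
      if (err) {
        clearInterval(timer);
        onDead(err); // drop connection state for this client
      }
    });
  }, intervalMs);
  return () => clearInterval(timer); // caller stops it on normal close
}
```

On the browser side, `EventSource` reconnects automatically, so the client half of the heartbeat is often just a timer that closes and reopens the stream if no message (real or noop) has arrived recently.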
My personal browser streaming TL;DR goes something like this:
* Start with SSE
* If you need to send binary data, use long polling or WebSockets
* If you need fast bidi streaming, use WebSockets
* If you need backpressure and multiplexing for WebSockets, use RSocket or omnistreams[1] (one of my projects).
* Make sure you account for SSE browser connection limits, preferably by minimizing the number of streams needed, or by using HTTP/2 (mind head-of-line blocking) or splitting your HTTP/1.1 backend across multiple domains and doing round-robin on the frontend.
I tried out server-sent events, but they are still quite troubling with the lack of headers and cookies. I remember I needed some polyfill version, which gave more issues.
That is wrong. Edit: actually it seems correct (a JavaScript limitation, not an SSE problem), but it's a non-problem if you use a query parameter for that data instead and read it on the server.
So do I understand correctly that when using SSE, the login cookie of the user is not automatically sent with the SSE request like it is with all normal HTTP requests? And I have to redo auth somehow?
https://store.steampowered.com/app/486310/Meadow/
We have had a total of 350.000 players over 6 years and the backend out-scales all other multiplayer servers that exist and it's open source:
https://github.com/tinspin/fuse
https://github.com/open-wa/wa-automate-nodejs
There should be some sort of support group for those of us trying to monetize (sans donations) our open source projects!
[1] https://jmap.io/spec-core.html#event-source
You can't say that and not say more about it, haha. Please expand on this?
Also, I'm a Fastmail customer and appreciate the nimble UI, thanks!
That sounds like a total nightmare!
https://gist.github.com/jareware/aae9748a1873ef8a91e5#file-s...
[0]: https://en.wikipedia.org/wiki/Chunked_transfer_encoding
You have to send "Content-Type: text/event-stream" just to make them work.
And you keep the connection alive by sending "Connection: keep-alive" as well.
I've never had any issues using SSEs.
You can also implement websockets in 5 lines (less, really; 1-3 for a basic implementation) without socket.io. Why are you still using it?
Read my comment below about that.
https://ably.com/blog/websockets-vs-sse
SharedWorker could be one way to solve this, but lack of Safari support is a blocker, as usual. https://developer.mozilla.org/en-US/docs/Web/API/SharedWorke...
also, for websockets, there are various libs that handle auto-reconnnects
https://github.com/github/stable-socket
https://github.com/joewalnes/reconnecting-websocket
https://dev.to/jeroendk/how-to-implement-a-random-exponentia...
Http-keep-alive was that reliable.
There are some hacks to work around it though.
Very cool!
Would you please elaborate on the challenges/disadvantages you've encountered in comparison to REST/HTTP?
$ ls -l PXL_20210926_231226615.*
-rw-rw-r-- 1 derek derek 8322217 Feb 12 09:20 PXL_20210926_231226615.base64
-rw-rw-r-- 1 derek derek 6296892 Feb 12 09:21 PXL_20210926_231226615.base64.gz
-rw-rw-r-- 1 derek derek 6160600 Oct 3 15:31 PXL_20210926_231226615.jpg
Essentially just new EventSource(), text/event-stream header, and keep conn open. Zero dependencies in browser and nodejs. Needs no separate auth.
oneweekwonder|4 years ago
[0]: https://www.eclipse.org/paho/index.php?page=clients/js/index...
alin23|4 years ago
I made use of that in Lunar (https://lunar.fyi/#sensor) to be able to adjust monitor brightness based on ambient light readings from an external wireless sensor.
At first it felt weird that I have to wait for responses instead of polling with requests myself, but the ESP is not a very powerful chip and making one HTTP request every second would have been too much.
SSE also allows the sensor to compare previous readings and only send data when something changed, which removes some of the complexity with debouncing in the app code.
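The sensor-side filtering described above can be sketched as a small stateful filter (the 2-unit threshold is an illustrative assumption):

```javascript
// Only emit a reading when it differs from the last *sent* value by more than
// a threshold — the change detection that replaces app-side debouncing.
function makeChangeFilter(threshold = 2) {
  let lastSent = null;
  return (reading) => {
    if (lastSent !== null && Math.abs(reading - lastSent) <= threshold) {
      return null;               // no meaningful change: send nothing
    }
    lastSent = reading;
    return reading;              // worth pushing over SSE
  };
}
```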
rough-sea|4 years ago
waylandsmithers|4 years ago
Personally I think it's a great solution for longer running tasks like "Export your data to CSV" when the client just needs to get an update that it's done and here's the url to download it.
sb8244|4 years ago
bullen|4 years ago
https://github.com/tinspin/rupy/wiki/Comet-Stream
Old page, search for "event-stream"... Comet-stream is a collection of techniques of which SSE is one.
My experience is that SSE goes through anti-viruses better!
anderspitman|4 years ago
U1F984|4 years ago
I also had no problems with HAProxy, it worked with websockets without any issues or extra handling.
francislavoie|4 years ago
nickjj|4 years ago
So many alternatives to Hotwire want to use WebSockets for everything, even for serving HTML from a page transition that's not broadcast to anyone. I share the same sentiment as the author in that WebSockets have real pitfalls and I'd go even further and say unless used tastefully and sparingly they break the whole ethos of the web.
HTTP is a rock solid protocol and super optimized / well known and easy to scale since it's stateless. I hate the idea of going to a site where after it loads, every little component of the page is updated live under my feet. The web is about giving users control. I think the idea of push based updates like showing notifications and other minor updates are great when used in moderation but SSE can do this. I don't like the direction of some frameworks around wanting to broadcast everything and use WebSockets to serve HTML to 1 client.
I hope in the future Hotwire Turbo alternatives seriously consider using HTTP and SSE as an official transport layer.
[0]: https://hotwired.dev/
[1]: https://twitter.com/dhh/status/1346095619597889536?lang=en
ponytech|4 years ago
Anyone knows the rationale behind this limitation?
charlietran|4 years ago
goodpoint|4 years ago
Is that true? The web never ceases to amaze.
__s|4 years ago
I don't see why WebSockets should benefit from HTTP. Besides the handshake to setup the bidirectional channel, they're a separate protocol. I'll agree that servers should think twice about using them: they necessitate a lack of statelessness & HTTP has plenty of benefits for most web usecases
Still, this is a good article. SSE looks interesting. I host an online card game openEtG, which is far enough from real time that SSE could potentially be a way to reduce having a connection to every user on the site
bullen|4 years ago
1) More complex and binary, so you cannot debug them as easily, especially in production and especially if you use HTTPS.
2) The implementations don't parallelize the processing; with Comet-Stream + SSE you just need an application server with concurrency and you can scale across all of a machine's cores.
3) WebSockets still have more problems with Firewalls.
quickthrower2|4 years ago
What I mean by that is the client sends a request, and the server responds in up to 2 minutes with a result or a try-again flag. Either way the client resends the request, and then uses the response data if provided.
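That resend loop can be sketched like this (`request` is any async function, and the `{ tryAgain, result }` response shape is an assumed encoding of the flag described above):

```javascript
// Long-poll client loop: keep re-sending the request until the server answers
// with a result instead of a try-again flag.
async function pollUntilResult(request) {
  for (;;) {
    const res = await request();        // server may hold this up to ~2 minutes
    if (!res.tryAgain) return res.result;
    // try-again flag: fall through and immediately re-send the request
  }
}
```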
bullen|4 years ago
Comet-stream and SSE will save you a lot of bandwidth and CPU!
havkom|4 years ago
bullen|4 years ago
Since IE7 is no longer used we can bury long-polling for good.
laerus|4 years ago
What worries me though is the trend of dismissal of newer technologies as being useless or bad and the resistance to change.
slimsag|4 years ago
jessaustin|4 years ago
[0] https://caniuse.com/?search=webtransport
coder543|4 years ago
SSE runs over HTTP/3 just as well as any other HTTP feature, and WebTransport is built on HTTP/3 to give you much finer grained control of the HTTP/3 streams. If your application doesn't benefit significantly from that control, then you're just adding needless complexity.
lima|4 years ago
They'll try to read the entire stream to completion and will hang forever.
bullen|4 years ago
1) Push a large amount of data on the pull response (the never-ending comet-stream SSE request) to trigger the middlebox to flush the data.
2) Using SSE instead of just Comet-Stream, since they will see the header and realize this is going to be real-time data.
We had a 99.6% success rate on connections from 350,000 players all over the world (even satellite connections in the Pacific and modems in Siberia), which is a world record for any service.
wedn3sday|4 years ago
DHowett|4 years ago
You can likely configure your user agent to ignore site-specified fonts.
loh|4 years ago
Upon further inspection, it looks like the actual code on the page is `!==` and `=>` but the font ("Fira Code") seems to be somehow converting those sequences of characters into a single symbol, which is actually still the same number of characters but joined to appear as a single one. I had no idea fonts could do that.
Rebelgecko|4 years ago
asiachick|4 years ago
Too|4 years ago
What are the benefits of SSE vs long polling?
TimWolla|4 years ago
The underlying mechanism is effectively the same: a long-running HTTP response stream. However, long-polling is commonly implemented as "silence" until an event comes in, followed by another request to wait for the next event, whereas SSE sends you multiple events per request.
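The SSE framing involved is simple enough to parse by hand — events are blank-line-delimited blocks of `field: value` lines. A minimal sketch of a parser for the core fields:

```javascript
// Minimal parser for the SSE wire format: blocks separated by a blank line,
// each block made of "data:", "event:" and "id:" lines. Lines starting with
// ":" are comments (often used as keep-alives) and are ignored.
function parseSSE(text) {
  const events = [];
  for (const block of text.split(/\n\n/)) {
    const ev = { event: "message", data: [], id: null };
    for (const line of block.split("\n")) {
      if (line.startsWith(":")) continue;          // comment / keep-alive
      const i = line.indexOf(":");
      if (i === -1) continue;
      const field = line.slice(0, i);
      const value = line.slice(i + 1).replace(/^ /, "");
      if (field === "data") ev.data.push(value);
      else if (field === "event") ev.event = value;
      else if (field === "id") ev.id = value;
    }
    if (ev.data.length) events.push({ ...ev, data: ev.data.join("\n") });
  }
  return events;
}
```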
anderspitman|4 years ago
TimWolla|4 years ago
HAProxy supports RFC 8441 automatically. It's possible to disable it, because support in clients tends to be buggy-ish: https://cbonte.github.io/haproxy-dconv/2.4/configuration.htm...
Generally I can second recommendation of using SSE / long running response streams over WebSockets for the same reasons as the article.
rcarmo|4 years ago
KaoruAoiShiho|4 years ago
tgv|4 years ago
jessaustin|4 years ago
llacb47|4 years ago
ravenstine|4 years ago
andrew_|4 years ago
sysid|4 years ago
mterron|4 years ago
Good performance, easy to use, easy to integrate.
captn3m0|4 years ago
jFriedensreich|4 years ago
gibsonf1|4 years ago
pbowyer|4 years ago
toomim|4 years ago
beebeepka|4 years ago
bullen|4 years ago
jshen|4 years ago
johnny22|4 years ago
njx|4 years ago
pictur|4 years ago
bullen|4 years ago
The way I solve it is to send "noop" messages at regular intervals so that the socket write will return -1 and then I know something is off and reconnect.
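A sketch of that heartbeat in Node terms (the original comment is presumably about a raw socket API where a failed write returns -1; in Node a write to a dead response throws or errors instead — the names and 15-second interval here are assumptions):

```javascript
// Periodically write an SSE comment line ("noop"); a failed write reveals a
// dead connection so it can be cleaned up and the client can reconnect.
function startHeartbeat(write, onDead, intervalMs = 15000) {
  const timer = setInterval(() => {
    try {
      write(": noop\n\n");        // SSE comment frame, ignored by EventSource
    } catch (e) {
      clearInterval(timer);       // write failed: the peer is gone
      onDead(e);
    }
  }, intervalMs);
  return () => clearInterval(timer);  // stop() for clean shutdown
}
```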
apitman|4 years ago
* Start with SSE
* If you need to send binary data, use long polling or WebSockets
* If you need fast bidi streaming, use WebSockets
* If you need backpressure and multiplexing for WebSockets, use RSocket or omnistreams[1] (one of my projects).
* Make sure you account for SSE browser connection limits, preferably by minimizing the number of streams needed, or by using HTTP/2 (mind head-of-line blocking) or splitting your HTTP/1.1 backend across multiple domains and doing round-robin on the frontend.
[0]: https://rsocket.io/
[1]: https://github.com/omnistreams/omnistreams-spec
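The last bullet's domain-splitting workaround can be sketched like this (the shard hostnames are invented for illustration; the six-connections-per-origin limit applies to HTTP/1.1 in most browsers):

```javascript
// Spread SSE streams round-robin across shard hostnames so no single origin
// hits the browser's per-domain HTTP/1.1 connection limit.
const SHARDS = ["sse1.example.com", "sse2.example.com", "sse3.example.com"];

let next = 0;
function shardUrl(path) {
  const host = SHARDS[next % SHARDS.length];
  next++;
  return `https://${host}${path}`;
}
// Browser usage: new EventSource(shardUrl("/events"))
```

Note the shards must all set CORS headers and share auth, which is part of why moving to a single HTTP/2 connection is usually the simpler fix.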
whazor|4 years ago
bullen|4 years ago
That is wrong. Edit: Actually it seems correct (a JavaScript problem, not an SSE problem), but it's a non-problem if you use a parameter for that data instead and read it on the server.
steve76|4 years ago
[deleted]
axiosgunnar|4 years ago
bastawhiz|4 years ago
The_rationalist|4 years ago