A while ago I created Mercure: an open pub-sub protocol built on top of SSE that is a replacement for WebSockets-based solutions such as Pusher. Mercure is now used by hundreds of apps in production.
At the core of Mercure is the hub. It is a standalone component that maintains persistent SSE (HTTP) connections to the clients, and it exposes a very simple HTTP API that server apps and clients can use to publish. POSTed updates are broadcast to all connected clients using SSE. This makes SSE usable even with technologies that cannot maintain persistent connections, such as PHP and many serverless providers.
Mercure also adds nice features to SSE such as a JWT-based authorization mechanism, the ability to subscribe to several topics using a single connection, event history, automatic state reconciliation in case of network issues…
I maintain an open-source hub written in Go (technically, a module for the Caddy web server) and a SaaS version is also available. Docs and code are available on https://mercure.rocks
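To give a feel for the protocol, here's roughly what the two sides look like (a sketch; the URLs and the JWT are placeholders):

```ts
// Subscribe (browser): one EventSource, one or more topics.
const hub = new URL("https://example.com/.well-known/mercure");
hub.searchParams.append("topic", "https://example.com/books/1");

const es = new EventSource(hub.toString());
es.onmessage = (e) => console.log("update:", e.data);

// Publish (server app): a plain form-encoded POST to the hub,
// authorized by a JWT carrying a mercure.publish claim.
declare const publisherJwt: string; // placeholder
await fetch("https://example.com/.well-known/mercure", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${publisherJwt}`,
    "Content-Type": "application/x-www-form-urlencoded",
  },
  body: new URLSearchParams({
    topic: "https://example.com/books/1",
    data: JSON.stringify({ status: "OutOfStock" }),
  }),
});
```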
I've used Mercure before at a startup of mine. Self hosted. Worked great! And still works to this day (I haven't actively worked on that startup for years myself.)
It comes down to all the extra bytes sent and processed (local and remote, and in flight) by long polling. SSE events are small while other methods might require multiple packets and all the needless headers throughout the stack, for example.
It doesn’t mention the big drawback of SSE as spelled out in the MDN docs:
“Warning: When not used over HTTP/2, SSE suffers from a limitation to the maximum number of open connections, which can be especially painful when opening multiple tabs, as the limit is per browser and is set to a very low number (6).”
One of my company's APIs uses SSE, and it's been a big support headache for us, because many people are behind corporate firewalls that don't do HTTP/2 or HTTP/3, and people often open many tabs at the same time. It's unfortunately not possible to detect client-side whether the limit has been reached.
Another drawback of SSE is lack of authorization header support. There are a few polyfills (like this one [1]) that simulate SSE over fetch/XHR, but it would be nice to not need to add the bloat.
I hate to suggest a solution before testing it myself, so apologies in advance, but I have a hunch that the Broadcast Channel API can help you detect browser tab opens on the client side. New tabs won't connect to the event source and will instead listen for the localStorage updates that the first loaded tab makes.
The problem in this case is how to handle the first tab closing and re-assign which tab then becomes the new "first" tab that connects to the event source, but it's probably a modest level of effort to solve.
Again, apologies for suggesting unproven solutions, but at the same time I'm interested in the feedback this gets, to see if it's near the right track.
This library (https://github.com/pubkey/broadcast-channel) from the fantastic RxDB JavaScript DB library uses Web Locks with a fallback to Broadcast Channel. But Web Locks are supported on 96% of browsers, so it's probably safe to just use them exclusively now.
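A sketch of what that looks like (untested; `handleEvent` stands in for whatever your app does with an update): the tab that wins the lock owns the single EventSource and rebroadcasts, and when it closes, the lock is released and another tab takes over automatically.

```ts
const channel = new BroadcastChannel("sse-fanout");

// Every tab runs this; exactly one holds the lock at a time. The others
// wait inside request() until the holder's tab closes or navigates away.
navigator.locks.request("sse-leader", () => {
  const es = new EventSource("/events");
  es.onmessage = (e) => {
    handleEvent(e.data);         // BroadcastChannel skips the sender,
    channel.postMessage(e.data); // so the leader also handles locally
  };
  return new Promise(() => {}); // hold the lock for the tab's lifetime
});

// Followers receive the leader's rebroadcasts.
channel.onmessage = (e) => handleEvent(e.data);

declare function handleEvent(data: string): void; // app-specific
```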
This sounds like a good use case for using a service worker. All tabs talk to the service worker and the worker is the single instance that talks to the backend and can use only one connection. Maybe there are some trade offs for using SSE in web workers, I'm not sure.
BroadcastChannel is a better solution for a couple of reasons. Service workers are best at intercepting network requests and returning items from a cache; doing work outside of that takes some additional effort. The other thing is they're a little more difficult to set up. A broadcast channel can be handled in a couple of lines of code, is easily debuggable as it runs on the main thread, and is more suited to the purpose.
I disagree. You can just postMessage to communicate with the service worker, so I imagine the code using a broadcast channel would actually be quite similar.
About debugging, service workers are easily debuggable, though not on the main thread as you already mentioned.
Supposedly websockets (the protocol) support authorization headers, but often there are no APIs for that in websocket libraries, so people just abuse the subprotocols header in the handshake.
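The abuse looks something like this (a sketch; how the server validates the token and which subprotocol it echoes back are up to you, and the token has to be made of header-token-safe characters):

```ts
// The browser WebSocket API has no way to set an Authorization header,
// so the token rides along in Sec-WebSocket-Protocol instead.
declare const authToken: string; // placeholder, header-token-safe

const ws = new WebSocket("wss://example.com/events", [
  "app-v1",
  `bearer.${authToken}`,
]);
// The server must pick and echo back exactly one of the offered
// subprotocols (e.g. "app-v1"), or browsers will fail the connection.
```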
Presumably you try SSE, and on failure fallback to something else like WebSockets?
Push seems to require supporting multiple communication protocols to avoid failure modes specific to one protocol - and libraries are complex because of that.
I've scaled websockets before, it isn't that hard.
You need to scale up before your servers become overloaded, so that new connections go to the newly brought-up server. It is a different mentality than scaling stateless services, but it isn't super duper hard.
Honestly, I just flipped the right bits in the AWS load balancer (maintain persistent connections, which is the first thing you're told to do when googling AWS load balancers and websockets) and set up the instance scaler to trigger based on "# open connections / num servers > threshold".
Ideally it is based on the rate of incoming connections, but so long as you leave enough headroom when doing the stupid simple scaling rule you should be fine. Just ensure new instances don't take too long to start up.
My understanding is the hard part about scaling WebSockets is that they are stateful, long-lived connections. That is also true of SSE. Is there some other aspect of WebSockets that makes them harder to scale than SSE?
I guess with WebSockets, if you choose to send messages from the client to the server, then you have some additional work that you wouldn't have with SSE.
Yes, I know. We both work at Sanity, actually! The reason I didn't mention it was that the newer library isn't a straight polyfill; it offers a completely different interface with async support and so on.
You can easily multiplex data over one connection/event stream. You can design your app so that it only uses one eventstream for all events it needs to receive.
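For example, with named events one stream carries everything and each part of the app subscribes to what it cares about (a sketch; the event names are made up):

```ts
const es = new EventSource("/events");

// Server frames: "event: chat\ndata: {...}\n\n", "event: presence\n..." etc.
es.addEventListener("chat", (e) =>
  renderChat(JSON.parse((e as MessageEvent).data)));
es.addEventListener("presence", (e) =>
  updatePresence(JSON.parse((e as MessageEvent).data)));

declare function renderChat(msg: unknown): void;     // app-specific
declare function updatePresence(msg: unknown): void; // app-specific
```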
The caniuse link in the OP, under Known Issues, notes that Firefox currently does not support EventSource in a service worker. https://caniuse.com/?search=EventSource
You can just open a stream in the service worker and push events via postMessage and friends.
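Something like this, roughly (untested, and per the parent comment Firefox doesn't expose EventSource in service workers, so you'd need a fetch-based fallback there):

```ts
/// <reference lib="webworker" />
declare const self: ServiceWorkerGlobalScope;

// Service worker: one shared stream, fanned out to every controlled tab.
const es = new EventSource("/events");

es.onmessage = async (e) => {
  const tabs = await self.clients.matchAll({ type: "window" });
  for (const tab of tabs) tab.postMessage(e.data); // tabs listen on
};                                                 // navigator.serviceWorker
```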
Another nice thing to do is to wire up a simple filesystem monitor for all your cached assets that pushes path & timestamp events to the service worker whenever they change, then the service worker can refresh affected clients too (with only a little work this is endgame livereload if you’re not constrained by your environment)
The number was set while Apache was dominant and common deployments would get completely tanked by a decent number of clients opening more connections than this. c10k was a thing once; these days c10m is relatively trivial.
Historical reasons. The HTTP/1.1 spec actually recommended limiting clients to 2 connections per domain. That said, I'm not sure why it's still so low. I would guess mostly to avoid unintended side effects of changing it.
> Previous revisions of HTTP gave a specific number of connections as a ceiling, but this was found to be impractical for many applications. As a result, this specification does not mandate a particular maximum number of connections but, instead, encourages clients to be conservative when opening multiple connections.
Because you're supposed to use a single connection with HTTP pipelining for all your resources [1]
When index.html loads 4 CSS and 5 JS files, that's 10 resources: in HTTP/1.0 they needed 10 connections, with 10 TLS negotiations (unless one resource loaded fast and you could reuse its released connection).
With HTTP/1.1 pipelining you open only one connection, with a single TLS negotiation, and request all 10 resources.
So why not only 1 per domain? IIRC it's because the first resource, index.html, may take a lot of time to complete, and race conditions suggest you use another connection than the 'main thread', more or less. So basically 2 are sufficient.
Because 30 years ago server processes often (enough) used inetd or served a request with a forked process. A browser hitting a server with a bunch of connections, especially over slow network links where the connection would be long lived, could swamp a server. Process launches were expensive and could use a lot of memory.
While server capacity in every dimension has increased, the low connection count for browsers has remained. But even today it's still a bit of a courtesy not to spam a server with a hundred simultaneous connections. If the server implicitly supports tons of connections with HTTP/2 support, that's one thing, but it's not polite to abuse HTTP/1.1 servers.
Yes, use a proper load balancer that can do that. And use HTTP/3, which is also supported by all relevant browsers at this point. There's no good reason to build new things on top of old things.
To do that they need to MITM and tamper with the inner protocol.
In my experience this is quite rare. Some MITM proxies analyze the traffic, restrict which ciphers can be used, block non-dns udp (and therefore HTTP/3), but they don't usually downgrade the protocol from HTTP/2 to HTTP/1.
That hasn't been my experience at large corporations. They usually have a corporate proxy which only speaks HTTP 1.1, intercepts all HTTPS, and doesn't support websockets (unless you ask for an exception) and other more modern HTTP features.
"tamper" sounds much more involved than what they (their implementation) probably do: the proxy decodes the http request, potentially modifies it, and uses the decoded form to send a new request using their client, which only speaks http/1
I utilized SSE when building automatic restart functionality[0] into Doppler's CLI. Our API server would send down an event whenever an application's secrets changed. The CLI would then fetch the latest secrets to inject into the application process. (I opted not to directly send the changed secrets via SSE, as that would necessitate rechecking the access token that was used to establish the connection, lest we send changed secrets to a recently deauthorized client.) I chose SSE over websockets because the latter required pulling additional dependencies into our Go application, and we truly only needed server->client communication.
One issue we ran into that hasn't been discussed is HTTP timeouts. Some load balancers close an HTTP connection after a certain timeout (e.g. 1 hour) to prevent connection exhaustion. You can usually extend this timeout, but it has to be explicitly configured. We also found that our server had to send intermittent "ping" events to prevent either Cloudflare or Google Cloud Load Balancing from closing the connection, though I don't remember how frequently these were sent. Otherwise, SSE worked great for our use case.
Generally you're going to want to send ping events pretty regularly (I'd default to every 15-30 seconds depending on application) whether you're using SSE, WebSockets, or something else. Otherwise if the server crashes the client might not know the connection is no longer live.
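Server-side, that's just a comment line on a timer. A minimal Node sketch (a line starting with ":" is an SSE comment, so clients ignore it but proxies and load balancers see traffic):

```ts
import { createServer } from "node:http";

createServer((_req, res) => {
  res.writeHead(200, {
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
  });

  // Keep the connection warm through proxies/LBs; tune the interval.
  const ping = setInterval(() => res.write(": ping\n\n"), 15_000);

  res.on("close", () => clearInterval(ping)); // client went away
}).listen(8080);
```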
The way I've implemented SSE is to make use of the fact that it can also act like HTTP long-polling when the GET request is initially opened. The SSE events can be given timestamps or UUIDs, and subsequent requests can include the last received ID or the time of the last received event, and ask the SSE endpoint to replay events up until the current time.
You could also add a ping with a client-requestable interval, e.g. 30 seconds (for foreground app) and 5 minutes or never (for backgrounded app), so the TCP connection is less frequently going to cause wake events when the device is idle. As client, you can close and reopen your connection when you choose, if you think the TCP connection is dead on the other side or you want to reopen it with a new ping interval.
Tradeoff of `?lastEventId=` - your SSE serving thing needs to keep a bit of state, like having a circular buffer of up to X hours worth of events. Depending on what you're doing, that may scale badly - like if your SSE endpoint is multiple processes behind a round-robin load balancer... But that's a problem outside of whether you're choosing to use SSE, websockets or something else.
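A sketch of that state in its simplest single-process form (the browser resends the last seen id in the `Last-Event-ID` header on reconnect; a `?lastEventId=` query param works the same way if you wire it up yourself):

```ts
type StoredEvent = { id: number; data: string };

const history: StoredEvent[] = [];
let nextId = 1;

function appendEvent(data: string) {
  history.push({ id: nextId++, data });
  if (history.length > 10_000) history.shift(); // crude circular buffer
}

// On (re)connect, replay everything the client hasn't seen yet.
function replaySince(lastEventId: number, write: (chunk: string) => void) {
  for (const ev of history) {
    if (ev.id > lastEventId) write(`id: ${ev.id}\ndata: ${ev.data}\n\n`);
  }
}
```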
To be honest, if you're worrying about mobile drain, the most battery efficient thing I think anyone can do is admit defeat and use one of the vendor locked-in things like firebase (GCM?) or apple's equivalent notification things: they are using protocols which are more lightweight than HTTP (last I checked they use XMPP same as whatsapp?), can punch through firewalls fairly reliably, batch notifications from many apps together so as to not wake devices too regularly, etc etc...
Having every app keep their own individual connections open to receive live events from their own APIs sucks battery in general, regardless of SSE or websockets being used.
I also used SSE 6 or so years ago, and had the same issue with our load balancer; a bit hacky, but what I did was to set a timer that would periodically send a single colon character (which is the comment delimiter, IIRC) to the client. Is that what you meant by "ping"?
> Perceived Limitations: The unidirectional nature might seem restrictive, though it's often sufficient for many use cases
For my use cases the main limitations of SSE are:
1. Text-only, so if you want to do binary you need to do something like base64
2. Browser connection limits for HTTP/1.1, ie you can only have ~6 connections per domain[0]
Connection limits aren't a problem as long as you use HTTP/2+.
Even so, I don't think I would reach for SSE these days. For less latency-sensitive and data-use sensitive applications, I would just use long polling.
For things that are more performance-sensitive, I would probably use fetch with ReadableStream body responses. On the server side I would prefix each message with a 32-bit integer (or maybe a variable-length int of some sort) that gives the size of the message. This is far more flexible (by allowing binary data) and has less overhead compared to SSE, which requires 7 bytes ("data:" + "\n\n") of overhead for each message.
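Reading those frames back on the client is short; a sketch (assumes the length prefix is big-endian):

```ts
async function* frames(url: string): AsyncGenerator<Uint8Array> {
  const res = await fetch(url);
  const reader = res.body!.getReader();
  let buf = new Uint8Array(0);

  for (;;) {
    const { done, value } = await reader.read();
    if (done) return;

    // Append the chunk to whatever partial frame we were holding.
    const next = new Uint8Array(buf.length + value.length);
    next.set(buf);
    next.set(value, buf.length);
    buf = next;

    // Emit every complete [u32 length][payload] frame in the buffer.
    while (buf.length >= 4) {
      const len = new DataView(buf.buffer, buf.byteOffset).getUint32(0);
      if (buf.length < 4 + len) break;
      yield buf.slice(4, 4 + len);
      buf = buf.slice(4 + len);
    }
  }
}
```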
ReadableStream appears to be SSE without any defined standards for chunk separation. In practice, how is it any different from using SSE? It appears to use the same concept.
> SSE works seamlessly with existing HTTP infrastructure:
I'd be careful with that assumption. I tried using SSE through a 3rd-party load balancer at my work and it didn't work that well. Because SSE is long-lived and doesn't normally close immediately, this load balancer would keep collecting bytes from the server and not forward them until the server closed the connection, effectively making SSE useless. I had to use WebSockets instead to get around this limitation of the load balancer.
I had a similar issue at one point, but if I remember correctly I just had to have my webserver send the header section without closing the connection.
Usually things would just get streamed through, but for some reason the proxy didn't forward anything and didn't acknowledge the connection until the full header was sent.
Not saying that's your issue, but it definitely was mine.
In my case with this load balancer, I think it's just badly written. I think it is set to hold ALL data until the server ends the connection. I have tried leaving my SSE open to send over a few megabytes worth of data and the load balancer never forwarded it at all until I commanded the server to close the connection.
The dev who wrote that code probably didn't think much about the memory efficiency of proxying HTTP connections, or about the case of streaming HTTP connections like SSE.
> SSE works seamlessly with existing HTTP infrastructure:
To stress how important it is to correct this error: even Mozilla's introductory page on server-sent events prominently displays, in a big red text box, that server-sent events are broken when not used over HTTP/2, due to the browser's hard limit on open connections.
One thing I dislike about SSE, which is not its fault but probably a side effect of the perceived simplicity: lots of developers do not actually use proper implementations and instead just parse the data chunks with a regex, or something of the sort. This is bad because SSE, for example, supports comments (": text") in streams, which most of those hand-rolled implementations don't handle.
For example, my friend used an LLM proxy that sends keepalive/queue data as SSE comments (just for debugging mainly), but it didn't work for Gemini, because someone at Google decided to parse SSE with a regex:
https://github.com/google-gemini/generative-ai-js/blob/main/... (and yes, if the regex doesn't match the complete line, the library will just throw an error)
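For reference, a line-based parser that survives comments is barely longer than the regex; a simplified sketch of the spec's algorithm (no `retry:` or CR handling):

```ts
type SSEEvent = { event: string; data: string; id?: string };

function makeSSELineParser(onEvent: (ev: SSEEvent) => void) {
  let data: string[] = [];
  let event = "message";
  let id: string | undefined;

  return (line: string) => {
    if (line === "") { // a blank line dispatches the buffered event
      if (data.length) onEvent({ event, data: data.join("\n"), id });
      data = [];
      event = "message";
      return;
    }
    if (line.startsWith(":")) return; // comment: ignore, don't crash

    const colon = line.indexOf(":");
    const field = colon === -1 ? line : line.slice(0, colon);
    let value = colon === -1 ? "" : line.slice(colon + 1);
    if (value.startsWith(" ")) value = value.slice(1);

    if (field === "data") data.push(value);
    else if (field === "event") event = value;
    else if (field === "id") id = value;
  };
}
```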
+1 for recommending data-star. The combination of idiomorph (thank you), SSE and signals is fantastic for making push-based and/or multiplayer hypermedia apps.
I tried implementing SSE in a web project of mine recently, and was very surprised when my website totally stopped working when I had more than 6 tabs open.
It turns out Firefox counts SSE connections against the 6-connections-per-host limit, and gives absolutely no useful feedback that it's blocking the subsequent requests because of it (I don't remember the precise error code and message anymore, but it left me very clueless for a while). It was only when I stared at the lack of corresponding server-side logs that it clicked.
I don't know if this same problem happens with websockets or not.
"Warning: When not used over HTTP/2, SSE suffers from a limitation to the maximum number of open connections, which can be especially painful when opening multiple tabs, as the limit is per browser and is set to a very low number (6). The issue has been marked as "Won't fix" in Chrome and Firefox. This limit is per browser + domain, which means that you can open 6 SSE connections across all of the tabs to www.example1.com and another 6 SSE connections to www.example2.com (per Stack Overflow). When using HTTP/2, the maximum number of simultaneous HTTP streams is negotiated between the server and the client (defaults to 100)."
Even if they don't change the default of 6 open connections, they could have at least made it per tab rather than per site. [1] [2] And I don't understand why this hasn't been done in the past 10 years.
AFAIK browsers require HTTPS with HTTP/2. This is a locally running server/app which will probably never have HTTPS. Maybe there is an exception for localhost, I'm not sure.
OpenAI's own documentation makes note of how difficult it is to work with SSE and says to just use their library instead. My team wrote our own parser for these streaming events from an OpenAI-compatible LLM server. The streaming format is awful. The double-newline block separator also shows up in a bunch of our text, making parsing a nightmare. The "data:" signifier is slightly better, but when working with scientific software it still occurs too often. Instead we've had to rely on the totally-not-reliable fact that the server returns each event as a separate packet, and the receiving end can be set up to return each packet in the stream.
The suggestions I've found online for how to deal with the newline issue are to fold together consecutive newlines, but this loses the formatting of some documents and otherwise means there is no way to transmit data verbatim. That might be fine for HTML or other text formats where newlines are pretty much optional, but it sucks for other data types.
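For what it's worth, the encoding the spec itself intends is one `data:` line per payload line, which the receiver joins back with `\n`. That round-trips embedded newlines and blank lines verbatim, though a single trailing newline is still stripped on dispatch:

```ts
// "line1\n\nline3" goes on the wire as:
//
//   data: line1
//   data:
//   data: line3
//
// and a conforming client reassembles exactly "line1\n\nline3".
function encodeData(payload: string): string {
  return payload.split("\n").map((l) => `data: ${l}`).join("\n") + "\n\n";
}
```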
I'm happy to have something like SSE but the protocol needs more time to cook.
Currently at work I'm having issues because
- Auth between an embedded app and JavaScript's EventSource is not working, so I have to resort to a Microsoft package which doesn't always work.
- Not every tunnel is fond of keep-alive (Cloudflare), so I had to switch to ngrok (until I found out they have a limit of 20k requests).
I know this isn't the protocol's fault, and I'm sure there's something I'm missing, but my god is it frustrating.
I wouldn't characterize this as "automatic", you have to do a lot of manual work to support reconnection in most cases. You need to have a meaningful "event id" that can be resumed on a new connection/host somewhere else with the Last-Event-Id header. The plumbing for this event id is the trivial part IMO. The hard part is the server-side data synchronization, which is left as an exercise for the reader.
Also, God help you if your SSE APIs have side effects. If the API call is involved in a sequence of side-effecting steps then you'll enter a world of pain by using SSE. Use regular HTTP calls or WebSockets. (Mostly b/c there's no cancellation ack, so retries are often racy.)
SSE is not underrated. In fact, it's being used by OpenAI for streaming completions. It's just not always needed, unlike the very obvious use cases for normal REST APIs and WebSockets.
It was a pain to figure out how to get it to work in a ReactJS codebase I was working on at the time, and from what I remember Axios didn't support it then, so I had to use native fetch to get it to work.
I seem to remember not having too many issues with useEffect and context on this.
Maybe the issue is that you wanted to implement it in a single React component when in reality you should treat it like any other state library, since it is something long-lived that should live outside of React and pass data into it.
Pretty much a year and a half back (I think it was March 2023).
We had a really complicated setup back then. Since I couldn't put our OpenAI client key on the client side, I wrote an endpoint to call OpenAI's GPT-3.5 API and then send that back to the frontend to get the "typewriter" effect they had. It was quite broken back then because sometimes random artifacts used to pop up in the response, and some chunks came along with one another, requiring me to write some really convoluted deserializing logic for it.
I've never understood the use of SSE over ndjson. Built-in browser support for SSE might be nice, but it seems fairly easy to handle ndjson? For non-browser consumers, ndjson is almost assuredly easier to handle. ndjson works over any transport, from HTTP/0.9 to HTTP/3 to raw TCP or unix sockets or any reliable transport protocol.
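(And consuming ndjson from fetch really is only a few lines; a sketch:)

```ts
async function* ndjson(url: string): AsyncGenerator<unknown> {
  const res = await fetch(url);
  const reader = res.body!.pipeThrough(new TextDecoderStream()).getReader();
  let buf = "";

  for (;;) {
    const { done, value } = await reader.read();
    if (done) return;
    buf += value;

    // Emit every complete line; keep the trailing partial line buffered.
    let nl;
    while ((nl = buf.indexOf("\n")) !== -1) {
      const line = buf.slice(0, nl).trim();
      buf = buf.slice(nl + 1);
      if (line) yield JSON.parse(line);
    }
  }
}
```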
Manually streaming an XHR and parsing the messages is significantly more work, and you lose the built-in browser API. But if you use a fetch ReadableStream with TLV messages, I'm sold.
Does anyone have a good trick for figuring out when the client side connection is closed? I just kill the connection on the server every N minutes and force the client to reconnect, but it's not exactly graceful.
Secondly, on iOS mobile, I've noticed that the EventSource seems to fall asleep at some point and not wake up when you switch back to the PWA. Does anyone know what's up with that?
I haven't seen a library that does that yet, including Python. Usually you just throw messages into the void. Do you know of a specific library that does that?
There's no ack on a raw SSE stream, unfortunately -- unless you mean send an event and expect the client to issue an HTTP request to the server like a keepalive?
There should be an ACK on the TCP packet (IIRC it's not a literal ACK but something like it), and the server should handle a timeout on that as the connection being "closed", which can be returned to the connection opener.
You might want to look into timeouts or error callbacks on your connection library/framework.
I remembered wrong. In most circumstances a TCP connection will be gracefully terminated by sending a FIN message. The timeout I talked about is on the ACK for a keepalive message. So after x time of not receiving a keepalive ACK, the connection is closed. This handles cases where a connection is ungracefully dropped.
All this is done at the kernel level, so at the application level you should be able to just verify if the connection is open by trying a read from the socket.
Thanks for clarifying, that would've sent me on a long wild goose chase. Most libraries only provide some sort of channel to send messages to. They generally do not indicate any FIN or ACK received at the TCP level.
If anyone knows any library or framework in any language that solves this problem, I'd love to hear about it.
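One pattern that helps from the client side (and with the iOS sleep issue above): treat the server's periodic pings as a heartbeat and force a reconnect when they stop. A sketch, assuming the server sends a real event (not an SSE comment, which never reaches JS) at least every 30 seconds:

```ts
function watchdogSource(url: string, onMessage: (data: string) => void) {
  let lastSeen = Date.now();
  let es: EventSource;

  const connect = () => {
    es = new EventSource(url);
    es.onmessage = (e) => {
      lastSeen = Date.now();
      onMessage(e.data);
    };
  };
  connect();

  // EventSource auto-retries on errors, but not on a silently dead
  // connection; allow two missed 30s heartbeats, then rebuild it.
  const timer = setInterval(() => {
    if (Date.now() - lastSeen > 65_000) {
      es.close();
      lastSeen = Date.now();
      connect();
    }
  }, 5_000);

  return () => {
    clearInterval(timer);
    es.close();
  };
}
```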
I had some trouble implementing this when the server has to wait for another endpoint (a webhook) to feed output to the browser. During request processing (thread 1), I had to store the SSE context in a native Java object to be retrieved later when the webhook (thread 2) is called.
But then with multiple service instances you wouldn't know which instance had stored it, so the webhook had to publish something that the others had to subscribe to.
I had no idea they existed until I began to use APIs serving LLM outputs. They work pretty well for this purpose, in my experience. An alternative to SSE for this purpose is websockets, I suppose.
I'm curious how everyone deals with HTTP/2 requirements between the backend servers and the load balancer. By default, HTTP/2 requires TLS, which means either no SSL termination at the load balancer, or an SSL cert generated per server with a different one for the front-end load balancer. This all seems very inefficient.
Not sure how widespread this is, but AWS load balancers don't validate the backend cert in any way. So I just generate some random self-signed cert and use it everywhere.
You don't need HTTP/2 on the actual backend. All the SSE/HTTP/1.1 limitations are browser-level.
Just downgrade to HTTP/1.1 from the LB to the backend, even without SSL. As long as the LB-to-browser leg is HTTP/2 you should be fine.
Finding use cases for SSE and reading about others doing the same brings me great joy. Very easy to set up -- you just set 2 or 3 response headers and off you go.
I have a hard time imagining the tech's limits outside of testing scenarios, so some of the examples brought up here are interesting.
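(For reference, the headers in question; the last one is an nginx convention that defensively disables proxy buffering:)

```ts
import { createServer } from "node:http";

createServer((_req, res) => {
  res.writeHead(200, {
    "Content-Type": "text/event-stream", // the header that makes it SSE
    "Cache-Control": "no-cache",         // keep intermediaries from caching
    "X-Accel-Buffering": "no",           // nginx-ism: stream, don't buffer
  });
  res.write("data: hello\n\n");
}).listen(8080);
```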
Great post. I discovered SSE when building a chatbot and found out it’s what OpenAI used rather than WebSockets. The batteries-included automatic reconnection is huge, and the format is surprisingly human readable.
It’s funny how HN has a mix of people who think AGI is just around the corner, people trying to build/sell stuff that uses LLMs, and others who can’t stand LLM-generated content. Makes me wonder how much overlap there is between these groups.
I don't have anything against LLMs, I use them daily myself, but publishing content that's largely AI-generated without a disclaimer just feels dishonest to me. Oh, and also when people don't spend at least some effort to make the style more natural and drop those bullet-point lists in the article that e.g. Claude loves so much.
No they're not.
They're limited to 6 connections per browser per domain on HTTP/1.1.
Which is important because nginx can't reverse proxy HTTP/2 or higher, so you end up with very weird functionality; essentially, you can't use nginx with SSE.
They are handy for implementing simple ad-hoc hot reloading systems as well. E.g. you can have whatever file watcher you are using call an API when a file of interest changes that sends an event to listening clients on the frontend. You can also trigger an event after restarting the backend if you make an API change by triggering the event at boot time. Then you can just add a dev-only snippet to your base template that reloads the page or whatever. Better than nothing if your stack doesn't support it out of the box and doesn't take very much code or require adding any additional project dependencies. Not as sophisticated as React environments that will only reload a component that changed and only do a full page refresh if needed, but it still gives a nice, responsive feeling when paired with tools that recompile your backend when it changes.
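The dev-only snippet really is tiny; a sketch, where `/dev/reload` is whatever endpoint your file watcher pokes:

```ts
// Dev only: full page reload whenever the watcher reports a change.
// EventSource reconnects by itself after the backend restarts.
new EventSource("/dev/reload").onmessage = () => location.reload();
```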
So it’s websockets, only instead of the Web server needing to handle the protocol upgrade, you just piggyback on HTTP with an in-band protocol.
I’m not sure this makes sense in 2024. Pretty much every web server supports websockets at this point, and so do all of the browsers. You can easily impose the constraint on your code that communication through a websocket is mono-directional. And the capability to broadcast a message to all subscribers is going to be deceptively complex, no matter how you broadcast it.
Yes, most servers support websockets. But unfortunately most proxies and firewalls do not, especially in big company networks. Suggesting my users use SSE for my database replication stream solved most of their problems. Also, setting up an SSE endpoint is like 5 lines of code. WebSockets instead require much more, and you also have to do things like pings to ensure the connection automatically reconnects. SSE with the JavaScript EventSource API has all you need built in.
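(A sketch of the entire client; `/replication-stream` and `applyChange` are placeholders:)

```ts
const es = new EventSource("/replication-stream");
es.onmessage = (e) => applyChange(JSON.parse(e.data));
// On errors, the browser reconnects on its own and resends the last
// event id in the Last-Event-ID header; no ping plumbing needed.

declare function applyChange(change: unknown): void; // app-specific
```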
But why add it to HTTP/3 at all? HTTP/1.1 hijacking is a pretty simple process. I suspect HTTP/3 would be significantly more complicated. I'm not sure that effort is worth it when WebTransport will make it obsolete.