
I've always had a bit of a soft spot for Server-Sent Events. They're just simple and easy to use and implement.



With ipv6 they can now be fully scaled easily. They're absolutely awesome, and much easier to scale than WebSockets, because you can give your client a simple list of SSE servers, and it's essentially stateless if done right.

WebSockets get really complex to scale past a certain level of use.


> With ipv6 they can now be fully scaled easily

Any day now: https://www.google.com/intl/en/ipv6/statistics.html


What does ipv6 give you that virtual hosts don't?


In theory direct connections between all devices on the internet. In practice, everything's still going to be behind a firewall. But it's still an improvement over NAT, and hopefully we'll eventually get universally adopted protocols for applications to open ports.

I've been using IPv6 more recently, and one nice thing as a developer is being able to use the same IP address for local connections and internet connections. Simplifies managing TLS certs for example, since the IP address used by Let's Encrypt is the same one I'm connecting to while developing.


You misunderstand. I'm asking why IPv6 would help this specific situation, not why IPv6 is nice in general. None of what you said applies to this context.


You're right, I was responding to your comment directly and not taking context into account.

I guess maybe what GP is getting at is that with vhosts on IPv4 you need some sort of load balancer in order to share the IP, but with IPv6 you can flatten this out and give every host its own IP?


If you have multiple machines with their own IPs, v4 or v6 makes no difference. If you have a single machine (vhosting), the number of IPs and their type makes no difference.


The difference is that IPv4 addresses are far more expensive.


I will quote the comment you are trying to justify:

> With ipv6 they can now be fully scaled easily but they are absolutely awesome, much easier to scale because you can give your client a simple list of sse services and its essentially stateless if done right.

If you don't understand it either, please stop saying random stuff about IPv6 that we already know and that has nothing to do with this thread.


I stand by my comment. I think the suggestion that scaling might be limited by IPv4 cost is a reasonable guess as to OP's concerns.


How is it limited? How is it less limited with IPv6?

A client gets an SSE endpoint (hostname). That endpoint maps to an IP. A server at that IP receives the connection. Which part is better with v6?

Are we talking about the few cents it would cost you to give each server an IPv4? Are we thinking about a distant future where that cost is not negligible compared to the cost of compute? Something else?


AWS is now charging about $3/mo per IPv4 address. If that's negligible for you, awesome. It's not negligible for everyone.


So that was the only argument. Wow, I'm glad I finally got to it, but I'm not amazed. All it took is making it up myself.

I don't use AWS so ok thanks. And if I did, I would use their ingress/gateway solutions, which totally circumvent this problem (while being quite expensive anyway).


I agree, AWS is way too expensive. In fact you have a good point which is that the cost of compute on AWS is significantly higher, such that the relative cost of IPs isn't as significant. You can get a solid VPS from Hetzner for the cost of an AWS IP address.

What's your preferred provider?


I don't feel it would be right to just give you an answer after chasing yours 11 messages deep and finally having to make it up myself. I don't have the kind of emotional energy it takes to communicate with you.


"why dropbox when rsync"


I know what you're referencing but I really don't see why. What does IPv6 have to do with this?


Similarities in timbre. There is plenty of room for improving the state of the web. I'm not 100% certain IPv6 is that, but it certainly offers more address space, and it would be foolish to avoid embracing that simply because the old tech is duct-tape-able enough.


I am not trying to say IPv6 is bad (in fact I'm a fan), I am asking what benefits it offers specifically in the context of SSE.

I am not the one making a claim; I expected some arguments to come with it. "IPv4 is not web scale" is not good enough: https://youtu.be/b2F-DItXtZs


Is anybody actually able to disable IPv4? Maybe if you only serve VPN or internal users?

This might be the best thing about Elixir/Phoenix LiveView. I haven't actually had to care in quite some time :-) (though to be fair, I keep things over the websocket pretty light)


Yeah, there are v6-to-v4 translation schemes. It's common in the U.S. for cell providers to give out IPv6 plus a private IPv4 behind NAT (although IPv4 could be skipped altogether).

On AWS you can use NAT Gateways for NAT64 and run v6-only subnets.


Also because they’re so simple you can use a CDN to scale them way more easily than you can WebSockets: https://www.fastly.com/blog/server-sent-events-fastly


> SSE connections keep a mobile device’s radio powered up all the time. You should avoid connecting a SSE stream on a device that has a low battery, or possibly avoid using SSE at all unless the device is plugged in.

Damn, that’s a huge downside


Same applies to websockets, but yeah.


But isn't your device’s radio powered up all the time anyway?


I think it’s on all the time, but the phone gives the radio different power levels, all to optimize battery usage.

So the more the phone needs to utilize the radio, the higher the power level is?

That’s just my theory though.


I don't think keeping sockets open waiting for incoming data has a big impact on battery usage, because there is no data transmission at that moment, so the radio shouldn't consume much energy in standby mode.

I use the K9-Mail app for email, working 24h a day with multiple accounts on different IMAP4 servers. You know, IMAP requires one keep-alive socket per subscribed folder, and I have no problems with battery usage.


I don’t think IMAP is a good example here. Your email client will try both subscribing and classic polling; subscribing is not a MUST. And from the point of view of the end user, the difference when polling in a classical way is simply that the user will be notified later, so it’s hard to tell what the client really did in the background.


> way is simply that the user will be notified later, so it’s hard to tell what the client really did in the background.

I receive emails instantly. There is a polling option in settings; I've disabled it.


I agree. Unfortunately you can only have 6 SSE streams per origin per browser instance, so you may be limited to 6 tabs without adding extra complexity on the client side.

https://crbug.com/275955


Is the above still an issue with HTTP/2/3?

edit: From the article: To workaround the limitation you have to use HTTP/2 or HTTP/3 with which the browser will only open a single connection per domain and then use multiplexing to run all data through a single connection.


No. If you can enable TLS and HTTP/2 or 3, you are technically using only a single browser connection, onto which multiple logical connections can be multiplexed.

I think the article calls this out. There is still a limit on the number of logical connections, but it's an order of magnitude larger.


Just use a service worker to share state; you'd be much better off doing this anyway. It saves a ton and is performant.


I think you need a SharedWorker for that rather than a service worker https://developer.mozilla.org/en-US/docs/Web/API/SharedWorke...


Shared workers (inexplicably) don't exist on Android Chrome

https://issues.chromium.org/issues/40290702


A service worker would work fine; the connection would be instantiated from the SW and each window/worker could communicate with it via navigator.serviceWorker.


That doesn't work because browsers have duration limits on ServiceWorkers:

https://github.com/w3c/ServiceWorker/issues/980#issuecomment...

Also unfortunately Chrome doesn't keep SharedWorker alive after a navigation (Firefox and Safari do):

https://issues.chromium.org/issues/40284712

Hopefully Chrome will fix this eventually, it really makes it hard to build performant MPAs.


In my experience, as long as a controlled window is communicating with the SW, the connection will remain alive.


HTTP/2 and HTTP/3 don't have this limitation.

For HTTP/1, simply shard the domain.


You can get around that limit using domain sharding, although it feels a bit hacky.


Just have one tab use SSE and the others use storage events.


Can the tabs share a background worker that would handle that?


You can use https://www.npmjs.com/package/broadcast-channel which creates a tab leader, no need for a background worker

Edit: of course you could use https://caniuse.com/sharedworkers, but Android does not support it. We migrated to the lib because Safari took its time… so mobile was/is not a thing for us


Here's the chrome android issue for Shared Workers. Add your voice if it is something you need

https://issues.chromium.org/issues/40290702


Is that true if you are using HTTP/2?


Yes, but the limit is different (usually much higher) and negotiated, up to a maximum of SETTINGS_MAX_CONCURRENT_STREAMS (which is fixed at 100 in Chrome, and apparently less in iOS/Safari).


Nope. That's only a problem with HTTP/1.1


The downside is that you have to base64 payloads or otherwise remove newlines.

I wonder why they didn't just use a multipart streamed response. It supports metadata and is a very commonly implemented format.


No need to base64 everything if you can just escape the newlines. Or you can use https://github.com/luciopaiva/binary-sse
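One common way to escape: JSON-encode the payload, so embedded newlines become "\n" escape sequences and each event fits on a single "data:" line. A sketch (helper names are made up):

```javascript
// Encode: JSON.stringify escapes any newlines in the payload, so the
// whole event is one "data:" line followed by the blank-line terminator.
function encodeEvent(payload) {
  return `data: ${JSON.stringify(payload)}\n\n`;
}

// Decode: "." does not match "\n", so the capture group grabs only the
// single data line; JSON.parse restores the original newlines.
function decodeEvent(frame) {
  const line = frame.match(/^data: (.*)\n\n$/)[1];
  return JSON.parse(line);
}
```

This round-trips arbitrary text without base64's size overhead, at the cost of JSON parsing on the client.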


Works with bog-standard Apache prefork and PHP.


don't forget the timeout reconnect!
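On that note: EventSource reconnects automatically, and the server can tune the delay with a "retry:" field and let a reconnecting client resume via the Last-Event-ID request header. A minimal sketch (the function name and in-memory event list are made up for illustration):

```javascript
// Replay events newer than the client's Last-Event-ID, and ask the
// client to wait 3 seconds before its next reconnect attempt.
function resumeStream(req, events) {
  const lastId = Number(req.headers["last-event-id"] || 0);
  let out = "retry: 3000\n\n";
  for (const ev of events) {
    if (ev.id > lastId) out += `id: ${ev.id}\ndata: ${ev.data}\n\n`;
  }
  return out;
}
```

The browser remembers the last "id:" it saw and sends it back on reconnect, so clients pick up where they left off without any custom client code.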


Absolutely underrated.


SSE is really a subset of Comet-Stream (an eternal HTTP response with Transfer-Encoding: chunked); it just uses a header (Accept: text/event-stream) and wraps the chunks with "data:" and "\n\n".

But yes, it's the superior (simplest, most robust, most performant and scalable) way to do real-time, for eternity.

The browser is dead, but SSE will keep on doing work for native apps.
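That wrapping is easy to see with a toy parser for a text/event-stream buffer (a sketch that ignores the "event:", "id:", and "retry:" fields; the function name is made up):

```javascript
// Toy parser: events are separated by blank lines; multiple "data:"
// lines within one event are joined back together with "\n".
function parseStream(text) {
  const events = [];
  for (const block of text.split("\n\n")) {
    const data = block
      .split("\n")
      .filter((l) => l.startsWith("data:"))
      .map((l) => l.replace(/^data: ?/, ""))
      .join("\n");
    if (data) events.push(data);
  }
  return events;
}
```

A native app can run essentially this loop over a chunked HTTP response and get SSE semantics without a browser.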



