
I kind of miss long polling. It was so stupidly simple compared to newer tech, and that's coming from someone who thinks WebRTC is the best thing since sliced bread.



SSE isn't really more complex than long polling. The only difference is that the server doesn't close the connection immediately after sending the response. Instead, it waits for more data and sends further responses over the same stream.
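
Roughly the difference, sketched with Node's built-in http module (the event source here is made up for illustration; real code would wait on a queue or pubsub channel):

    import { createServer } from "node:http";

    // Stand-in event source: resolves with a payload every few seconds.
    function waitForNextEvent(): Promise<string> {
      return new Promise((resolve) =>
        setTimeout(() => resolve(JSON.stringify({ now: Date.now() })), 3000)
      );
    }

    const server = createServer(async (req, res) => {
      if (req.url === "/longpoll") {
        // Long polling: hold the request until one event is ready, then close.
        const event = await waitForNextEvent();
        res.writeHead(200, { "Content-Type": "application/json" });
        res.end(event);
      } else if (req.url === "/sse") {
        // SSE: same idea, but the response stays open and events keep streaming,
        // framed as "data: ...\n\n" instead of a plain body.
        res.writeHead(200, {
          "Content-Type": "text/event-stream",
          "Cache-Control": "no-cache",
        });
        while (!res.destroyed) {
          const event = await waitForNextEvent();
          res.write(`data: ${event}\n\n`);
        }
      } else {
        res.writeHead(404).end();
      }
    });

    server.listen(8080);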


One limitation of SSE compared to long polling (and WebSockets etc) is you can't efficiently send binary data such as cbor, protobuf, etc. Though if your long polling is chatty enough eventually the HTTP overhead will kill your efficiency too.
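
SSE payloads have to be text, so the usual workaround is base64-encoding the binary frames into the data field, which costs roughly a third more bytes plus a decode step on the client. A rough sketch (the function and payload are made up for illustration):

    import { ServerResponse } from "node:http";

    // Push a cbor/protobuf-style binary payload over SSE by base64-encoding it.
    function sendBinaryEvent(res: ServerResponse, payload: Uint8Array): void {
      const encoded = Buffer.from(payload).toString("base64");
      res.write(`event: binary\ndata: ${encoded}\n\n`);
    }

    // Browser side: decode back to bytes before handing to the real decoder.
    // const bytes = Uint8Array.from(atob(msg.data), (c) => c.charCodeAt(0));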


You can use binary-sse (https://github.com/luciopaiva/binary-sse) with minimal overhead.


Long polling is more amenable to most HTTP tools out of the box (eg curl) than SSE is. The SSE message body is notably different from a plain HTTP response.

To the OP, you can still build APIs with long polling. They are uncommon because push patterns are difficult to design well, regardless of protocol (whether long-polling, SSE, websockets, etc).

Whiteboarding a push API is a good exercise. There is a lot of nuance that gets overlooked in discussions whenever these patterns come up.
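
As a sketch of what a long-polling contract often ends up looking like once that nuance is accounted for (names and parameters made up): an opaque cursor the client echoes back, a bounded wait, and responses that may legitimately be empty.

    // GET /events?cursor=<opaque>&waitSeconds=25
    interface PollResponse {
      events: { id: string; type: string; payload: unknown }[]; // may be empty on timeout
      nextCursor: string; // echoed back on the next request
    }

    async function pollOnce(baseUrl: string, cursor: string): Promise<PollResponse> {
      const res = await fetch(
        `${baseUrl}/events?cursor=${encodeURIComponent(cursor)}&waitSeconds=25`
      );
      return (await res.json()) as PollResponse;
    }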


Well, I know I can write applications that use it, but I don't often write code outside the context of a team that has other opinions anymore :)


Agreed - I see SSE as basically a standardized approach to modern long polling


Oh, if only it were that simple.

The networking that makes Second Life go uses long polling HTTPS for an "event channel", over which the server can send event messages to the clients. Most messages go over UDP, but a few that need encryption or are large go over the HTTPS/TCP event channel.

At the client end, C++ clients use "libcurl". Its default timeout settings are not compatible with long polling. Libcurl will break connections and make another request. This can result in lost or duplicated messages.

At the server end, Apache front-ends the actual simulation servers, to filter out irrelevant connection attempts (Random HTTP attacks that try any open port, probably). Apache has its own timeouts, and will abort connections, forcing the client to retry.

There's a message serial number to try to prevent this mechanism from losing messages. The Second Life servers ignore the serial number the client sends back as a check. Some supposedly compatible servers from Open Simulator skip sequential numbers.

The end result is an HTTPS based system which can both lose and duplicate what were supposed to be reliable messages. Some of those messages, if lost, will stall out the user's activity in the game. The people who designed this are long gone. The current staff was unaware of how bad the mess is. Outside users had to find the problem and document it. The company staff has been trying to fix this for months. It seems to be difficult enough to fix that the current action is to defer work on the problem.

So, no, long polling is not "stupidly simple".

The right way to do this is probably to send a keep-alive message frequently enough that the TCP and HTTPS levels never time out. This keeps Apache and libcurl on their "happy paths", which work.
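
Something like this keeps every hop under its timeout (the 20-second figure and the queue are assumptions for illustration, not what Second Life actually does):

    import { createServer } from "node:http";

    const pending: string[] = []; // stand-in event queue

    const KEEPALIVE_MS = 20_000; // comfortably below typical Apache/libcurl timeouts

    const server = createServer((req, res) => {
      const started = Date.now();
      const timer = setInterval(() => {
        if (pending.length > 0) {
          // Real events ready: send them and let the client re-poll.
          clearInterval(timer);
          res.writeHead(200, { "Content-Type": "application/json" });
          res.end(JSON.stringify({ events: pending.splice(0) }));
        } else if (Date.now() - started >= KEEPALIVE_MS) {
          // Nothing to say yet: answer with an empty keep-alive so no
          // intermediary ever sees an idle connection hit its timeout.
          clearInterval(timer);
          res.writeHead(200, { "Content-Type": "application/json" });
          res.end(JSON.stringify({ events: [], keepalive: true }));
        }
      }, 250);

      req.on("close", () => clearInterval(timer));
    });

    server.listen(8080);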


My solution to broken connections has actually been to have relatively short timeouts by default, eg 10 seconds. That guarantees we have a fresh connection every so often without any assumptions about liveness. You can even overlap the reconnects a bit (eg 10 second request timeouts, but reconnect every 8 seconds) as long as the application can reconcile duplicated messages - which it should be able to do anyway, for robustness reasons.
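
Leaving out the overlapping part, the client side of that loop is roughly this (endpoint, timeout, and message shape made up for illustration; the id-based dedup is the point):

    const seen = new Set<number>();

    async function pollLoop(url: string): Promise<void> {
      while (true) {
        try {
          const res = await fetch(url, { signal: AbortSignal.timeout(10_000) });
          const { messages } = (await res.json()) as {
            messages: { id: number; body: string }[];
          };
          for (const msg of messages) {
            if (seen.has(msg.id)) continue; // duplicate from a retried/overlapped request
            seen.add(msg.id);
            handle(msg.body);
          }
        } catch {
          // Timeout or network error: fall through and reconnect with a fresh request.
        }
      }
    }

    function handle(body: string): void {
      console.log("event:", body);
    }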

Really, anytime there is any form of push (whether SSE, long polling, etc) then you need another way to re-hydrate to the full state. In which case you are nearly at the point of doing plain old polling to sidestep the complexity of server-driven incremental updates and all the state coordination problems that entails.

Of course with polling, you lose responsiveness. For latency-sensitive applications (like an interactive MMORPG!), HTTP is probably not the correct protocol to use.

It does sound like Second Life has its own special blend of weirdness on top of all that. Condolences to the engineers maintaining their systems.


I've seen a bunch of timeout / heartbeat / keep-alive durations. I think it was WireGuard that uses 25 seconds, and that seems like a good number: usefully long, most things that break are more likely to do so at ~30 seconds, and if there's an activity push at 15 or 20 seconds with device wakeup, then the keep-alive / connection kill might not even happen.

Full refresh: yes please, in the protocol, with a user button, with local client state cached by the client code and reloaded on reconnect. Maybe even a configurable polling period; some services might offer a shorter poll interval as a reason to pay for a higher-tier account.


> with a user button

If the user ever has to push a "retry" button, the networking levels are very badly designed. Just because some crappy web sites work that way does not mean it's OK.


The user shouldn't _have_ to. However, a 'refresh state' (and validate state, more gracefully than a full kill and reload) button can be both helpful and psychologically reassuring.

It can also be very helpful for out of band issues, like ISP hiccups, random hardware failures, bitflips, etc.


> Of course with polling, you lose responsiveness.

No, that's the whole point of long polling. The server delays the reply until it has something to say. Then it sends it immediately.

The trouble here is middleware which does not comprehend what's going on and introduces extraneous retry logic.


Sorry, that part of the comment was probably not clear - I was comparing "plain old polling" (stateless request-reply with no delay) with "push", ie long polling


> The Second Life servers ignore the serial number the client sends back as a check. Some supposedly compatible servers from Open Simulator skip sequential numbers.

I mean, if you're not respecting long polling, of course long polling doesn't work. That's like complaining that HTTP doesn't work because your networking stack doesn't look at the port number and distributes packets randomly to any process.


>I kind of miss long polling. It was so stupidly simple compared to newer tech, and that's coming from someone who thinks WebRTC is the best thing since sliced bread.

I still use it all the time. There are plenty of applications where the request overhead is reasonable in exchange for keeping everything within the context of an existing HTTP API.


You can still use long polling with HTTP/2 nowadays; it isn't going anywhere.



