
Does anyone have more recent experience? I currently use socket.io 2.2 with node v10.16, no v8 tweaks, in a docker container. At ~1000 sockets, the server sometimes receives spikes of 8000 HTTP reqs/sec, which it has to fan out to the websockets at up to 100 msgs/sec, ~1kb/msg, per socket. These spikes make the server unstable, and socket.io switches most of the clients from websockets to xhr polling.



I found this article very useful for resolving connections falling back to xhr polling on our Node.js 10.16 server: https://medium.com/@k1d_bl4ck/a-quick-story-about-node-js-so...


I don't have any experiments to share, but you can go further if you stop using socket.io, though I guess you'd still need something to deal with long polling.
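
For what it's worth, a bare websocket server with the ws package looks roughly like this (the port and the echo handler are just placeholders for your own fan-out logic), though you lose the polling fallback entirely:

  const WebSocket = require('ws');

  // plain websocket server, no polling fallback (port is an example)
  const wss = new WebSocket.Server({ port: 8080 });

  wss.on('connection', (socket) => {
    socket.on('message', (msg) => {
      // echo back; stand-in for whatever message distribution you need
      socket.send(msg);
    });
  });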

You should consider tweaking --max_old_space_size; we got a lot of mileage out of giving node more memory.
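
Something like this is all it takes (the 1536 MB value is just an example, size it to your container's memory limit, and server.js stands in for your entry point):

  node --max_old_space_size=1536 server.js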


I make heavy use of socket.io chatrooms and have to support older browsers. I haven't found a better solution yet, but I'll try the tweak, thanks. I'm also looking into load balancing via haproxy or nginx, since I need to handle fewer than 10k concurrent clients.
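
Roughly what I have in mind for nginx (untested sketch, ports and upstream name made up); socket.io needs sticky sessions so a client's polling requests keep hitting the same node, plus the websocket upgrade headers:

  upstream socketio_nodes {
    ip_hash;  # sticky sessions: same client IP -> same upstream node
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
  }

  server {
    listen 80;

    location /socket.io/ {
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;   # allow the websocket upgrade
      proxy_set_header Connection "upgrade";
      proxy_set_header Host $host;
      proxy_pass http://socketio_nodes;
    }
  }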


Have you tried "sticky-cluster"?

https://github.com/uqee/sticky-cluster





