
I wonder how many WebSocket connections one dyno can take?

Also seems like this would make heroku's routing problems[1] even worse.

[1] http://news.rapgenius.com/Jesper-joergensen-routing-performa... et al.




> Also seems like this would make heroku's routing problems[1] even worse.

No, this will not make routing issues worse.

The routing issues described in that post specifically apply to single-threaded / non-concurrent applications (or those with very low concurrency, such as Unicorn w/ 2 workers).

WebSocket connections are like requests that last forever. If you're using WebSockets in your app, you'll need your app to be highly concurrent in order to maintain lots of open connections to your users. You don't want regular requests to block behind never-ending WebSocket requests.

Random routing should actually work pretty well on apps with high concurrency. Node.js, Play, Go, Erlang, and even Ruby apps with Faye should all work great.
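For a concrete picture, here's a minimal sketch of the kind of highly concurrent server this implies, written in Python and assuming a recent version of the third-party asyncio-based "websockets" package (the same shape applies to Node.js, Go, etc.): one event-loop process holds many open connections, and mostly-idle sockets cost almost nothing while other requests are served.

    import asyncio
    import websockets

    # Each connection is a lightweight coroutine, so thousands of mostly-idle
    # WebSocket connections can share a single process.
    async def echo(websocket):
        async for message in websocket:
            await websocket.send(message)

    async def main():
        async with websockets.serve(echo, "0.0.0.0", 8080):
            await asyncio.Future()  # run forever

    asyncio.run(main())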

If you're concerned about this for your app, the best way to find out is to test it!


That's a little disingenuous. You've selected a very specific set of frameworks, whereas most of the users here are probably thinking "great! I can run my Rails stack with no problems! Heroku engineer said so!"

Note to readers: the only thing that "fixes" this is request-handling code that is asynchronous in such a way that it doesn't tie up a process while connections sit idle. Most of the common web frameworks don't do this, because the coding required to make a fully asynchronous stack is nasty. Even apps written in nominally asynchronous frameworks (like node.js) could be in trouble if the request path is pathological (e.g. the websocket periodically makes long-running, blocking database queries).
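To make that pathological case concrete, here's a hedged sketch in the same Python/"websockets" setup as above; slow_query is a hypothetical stand-in for a blocking database driver call, not a real API. One synchronous call inside an otherwise async handler stalls the whole event loop, so every other connection on that process waits.

    import asyncio
    import time
    import websockets

    def slow_query():
        # Stand-in for a long-running, blocking database query.
        time.sleep(5)
        return "result"

    async def handler(websocket):
        async for _ in websocket:
            # BAD: this blocks the event loop; no other connection on this
            # process is served for the full five seconds.
            result = slow_query()
            # Better: keep the loop free by pushing blocking work elsewhere,
            # e.g. await asyncio.get_running_loop().run_in_executor(None, slow_query)
            await websocket.send(result)

    async def main():
        async with websockets.serve(handler, "0.0.0.0", 8080):
            await asyncio.Future()  # run forever

    asyncio.run(main())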

That said, most of you will never encounter this problem, because it's the sort of problem that's "nice to have" -- by the time concurrency issues become a limit, your app will be popular.


I don't think that's disingenuous.

It's safe to assume that anyone who hopes to leverage websockets will not be using a blocking application architecture.

We can also assume people are running this software on multiprocessing machines with connections to the internet.

Of course there are always people "doing it wrong," but caveating every potential misunderstanding is a slow way to communicate.

edit: Your post is still valuable though! Thanks for highlighting what makes these frameworks sensible for use with websockets.


"It's safe to assume that anyone who hopes to leverage websockets will not be using a blocking application architecture."

No, it isn't. I'll wager that right this very second, there's someone out there incorporating websockets into their Heroku-based Rails app and not thinking about (or understanding) the consequences.


Wouldn't such an app be hosed on almost any platform due to massive memory waste?


I don't think memory waste is the problem in this case; a websocket is a long-lived connection. If you mix it with regular requests and don't think about the concurrency consequences, you'll serve one request, then accept one websocket connection, and you're done: all other connections will be pending until the websocket is closed.


bgentry's comment didn't seem disingenuous to me at all, and I was surprised by sync's question. WebSocket connections are long lived, thus if your framework only supports one (or a few) concurrent connection you're gonna have a bad time.

Heroku's past routing problems with certain low concurrency frameworks/servers doesn't apply with WebSockets because you'd be crazy to use such a framework for WebSockets.


"if your framework only supports one (or a few) concurrent connection you're gonna have a bad time."

Rails only supports one concurrent connection per process (by default...for good reasons), and there are a great many people using it at scale, including on Heroku. Asynchronous stacks are becoming more common, but they're still exotic in terms of deployment -- and most of those probably aren't written very well.


I'm specifically talking about WebSockets. Do you really want to run one process for every client connected to your WebSocket server? The answer is no. Even one (OS) thread per connection can get unwieldy.
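(Rough, hedged arithmetic: with a default OS thread stack on the order of 1-8 MB, ten thousand mostly-idle connections means gigabytes of stack reservation alone, versus a few KB of per-connection state inside an event loop.)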

And I think lots of people would disagree that async stacks are still "exotic" or "not written very well".


"Do you really want to run one process for every client to connected to your WebSocket server? The answer is no. Even one (OS) thread per connection can get unwieldy."

Yes, no kidding. But people will still try to do this with frameworks that don't support anything else (like Rails), because that's the shortest path to a working product.

"And I think lots of people would disagree that async stacks are still "exotic" or "not written very well"."

Well, those "lots of people" can disagree all they want, but they're wrong. The problem isn't that the frameworks are badly written, necessarily -- it's all the stuff in the stack, including the app-specific logic. Virtually no one knows how to write asynchronous web apps of any complexity. It's a very hard problem.


This is why greenlets exist.
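For readers unfamiliar with the term, a minimal sketch of the greenlet idea, assuming the third-party gevent package and an illustrative URL: after monkey-patching, ordinary blocking-style code yields cooperatively whenever it waits on I/O, so you get high concurrency without rewriting the app in callback style.

    from gevent import monkey
    monkey.patch_all()  # sockets, DNS, time.sleep, etc. become cooperative

    import gevent
    import urllib.request

    def fetch(url):
        # Looks like plain blocking code, but under gevent this greenlet
        # yields to others while it waits on the network.
        return urllib.request.urlopen(url).read()

    # Ten "blocking" fetches run concurrently in a single OS thread.
    jobs = [gevent.spawn(fetch, "http://example.com/") for _ in range(10)]
    gevent.joinall(jobs)
    print([len(job.value) for job in jobs])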


As of Rails 4, the current release of Rails, you're incorrect.


Yes, Rails 4 has threading turned on by default now. That eliminates the absolute stupidity of needing one process per concurrent request (finally!).

It's nice that you guys are adding this stuff, but it doesn't invalidate the larger point.



