Is it possible in this context to replace Node.js with EventMachine without any loss of expressiveness, abstraction, or performance? Or, in other words, what are the advantages of Node compared to an event programming library for another language like Ruby or Python?
Since Ruby supports fibers, it should actually be simpler to carry state across event firings, at least in theory, but since I'm everything but a node.js expert I would love to hear your opinion on the matter.
As I mentioned in the post, the real bottleneck occurs when it's used with other supporting libraries (which would be the practical case). Most libraries in Ruby (or Python) are implemented in a blocking manner. This defeats the purpose of using an evented programming library.
In contrast, most Node.js libraries are written in a non-blocking manner. This may be because of the initial path the ecosystem took. Also, JavaScript, being a naturally event-driven language, encourages writing non-blocking code.
Hello laktek, I agree that Node.js, being an evented environment from the ground up, got most of its libraries written in the same paradigm.
On the other hand, I fail to see how JavaScript is a more event-driven language than Ruby. In the context where it's most often used (the web browser) it supports callbacks, but without any special semantics. Just callbacks: anonymous functions.
From this point of view, I think languages with coroutines are probably more powerful as event-driven programming languages, because it's possible to save state across calls in a much more high-level way, leading to code that resembles Erlang code, for instance.
The author states that a difference between Realie and Etherpad is that he uses WebSockets for client/server communication. I'm fairly certain that Etherpad uses WebSockets too. Am I mistaken?
Etherpad had 3 modes of transport between client and server: short-polling, long-polling, and streaming, each with varying support for different browser/OS/proxy settings. The client starts the connection with the most widely supported method (short-polling), and then attempts to "upgrade" the connection by testing out faster connection types.
I think the messages from client to server always had to go through an HTTP POST, so we did incur the overhead of all the HTTP headers with each post from client to server, which happened at most twice per second.
You can view the client-side JS code for handling these 3 connection types here:
I didn't write this, but that's my understanding of how it works.
You can also get around the overhead of HTTP headers by using a Flash shim, but we never got that to work well through weird proxy configurations and misbehaving ISPs.