Even non-evented ones. The interesting part really isn't Node itself (despite what the blog says) but the ability to pipeline streams without having to touch every byte yourself.
It should be possible to do something similar using, e.g., generators (in Python) or lazy enumerators (in Ruby).
In fact, Python's WSGI handlers return an arbitrary iterable which is then consumed, so that pattern is natively supported (string together iterators and generators, then return the complete pipe, which performs the actual processing as WSGI serializes and sends the response). Ruby would require an adapter to a Rack response of some sort, as I don't think you can return an enumerable OOTB.
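A minimal sketch of that idea, assuming a plain WSGI server (the file path and the helper names `read_chunks`/`uppercase` are illustrative, not from the thread): the handler returns a generator pipeline, so each chunk is transformed lazily as the server iterates and sends the response, and no stage ever holds the whole body in memory.

```python
# Sketch: a WSGI app whose response body is a generator pipeline.
# Helper names and the log path are made up for illustration.

def read_chunks(path, size=64 * 1024):
    """Lazily yield raw chunks from a file instead of loading it all."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(size)
            if not chunk:
                break
            yield chunk

def uppercase(chunks):
    """Example transform stage: touches each chunk, never the whole stream."""
    for chunk in chunks:
        yield chunk.upper()

def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    # Return the assembled pipe; WSGI consumes it chunk by chunk as it
    # serializes and sends the response.
    return uppercase(read_chunks("/var/log/example.log"))

if __name__ == "__main__":
    from wsgiref.simple_server import make_server
    make_server("127.0.0.1", 8000, app).serve_forever()
```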
Yeah, 100% true. I've spent many man-years writing entire systems like this in C and Java.
It's just that in Node, doing it the evented way was actually simpler and quicker to implement than the 'ghetto' way. This isn't usually the case, and I always recommend doing the simplest thing that works first. It's just nice here that the simplest thing is also a tight solution.
We use it to mean the quick and dirty solution that we write first, the one that's usually good enough but that you're a little embarrassed to admit to. It's not elegant or crafted, but it works.
Our original solution was literally a shell exec, and it was perfectly fine (...for a while).
What is the connection between evented and streaming? It seems like a thread-per-request server would have to do exactly the same thing (except that it would not have to worry about giving back its event-loop thread).
Evented streaming doesn't tie up a process waiting for specific data to come in. While the process is waiting, other things can get done (other streams can be processed, etc.).
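A hedged illustration of that point, using Python's asyncio rather than Node (the stream sources here are simulated with sleeps, not real sockets): while one stream's read is pending, the event loop services the others, so a single process interleaves many slow streams instead of blocking on each one in turn.

```python
# Sketch: one process handling three slow "streams" concurrently.
# The sources are fake (asyncio.sleep stands in for waiting on I/O);
# in Node this role is played by stream events / pipe().
import asyncio

async def slow_stream(name, chunks, delay):
    """Pretend source that produces a chunk every `delay` seconds."""
    for i in range(chunks):
        await asyncio.sleep(delay)   # waiting for data; loop runs other streams
        yield f"{name}:chunk{i}"

async def consume(name, chunks, delay):
    async for chunk in slow_stream(name, chunks, delay):
        print(chunk)                 # process each chunk as it arrives

async def main():
    # Three streams of different speeds handled by one process; total time
    # is roughly the slowest stream, not the sum of all three.
    await asyncio.gather(
        consume("a", 3, 0.3),
        consume("b", 3, 0.2),
        consume("c", 3, 0.1),
    )

asyncio.run(main())
```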