Gary was using a much more powerful instance (in both CPU and memory) than this post uses when he reached 2 million with Phoenix. But Phoenix is also doing quite a lot more per connection than Node. For multiple reasons they're incomparable experiments, so there's really no basis here for drawing conclusions like "1/4 of what Elixir/Erlang can handle".
You have a fair point, but if by "more difficult" the grandparent meant "more difficult to maintain", I'd have to agree with him. The failure modes of processes running on the BEAM are much more transparent and understandable than those of the usual ad-hoc Node.js server.
IMO they're on the same level, but I think that's just down to how I manage my Node.js applications: my try/catch blocks, exception handling, and primarily process isolation.
> usual ad-hoc JS nodejs server
Yep - that's the key to why I feel they're on the same level. I heavily rely on the cluster module and IPC to get my work done, which gives me true process isolation/safety, though I admit it takes more code in Node.js to make it rock-solid ;)
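For anyone curious what I mean, here's a minimal sketch of that cluster + IPC pattern (my own illustration, not anyone's benchmark code): the master forks one worker per CPU, replaces any worker that dies, and workers report back over IPC.

```javascript
// Minimal sketch of cluster + IPC process isolation (illustrative only).
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  // Fork one worker per CPU and listen for IPC messages from each.
  for (let i = 0; i < os.cpus().length; i++) {
    const worker = cluster.fork();
    worker.on('message', (msg) => {
      console.log(`worker ${worker.process.pid} says:`, msg);
    });
  }

  // Process isolation: a crashed worker takes down only itself,
  // and the master simply replaces it.
  cluster.on('exit', (worker, code, signal) => {
    console.log(`worker ${worker.process.pid} died (${signal || code}), restarting`);
    cluster.fork();
  });
} else {
  // Each worker runs its own server; the kernel distributes connections.
  http.createServer((req, res) => {
    res.end(`handled by pid ${process.pid}\n`);
  }).listen(3000);

  // Report readiness back to the master over IPC.
  process.send({ pid: process.pid, status: 'ready' });
}
```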
Anecdotal, but I find that the typical Node.js developer is 100% "single-process" with weird supervisors like PM2. I also see over-engineered fleets of EC2 instances behind ELBs/ALBs, with health checks kicking bad instances out plus a way to replace them... this is stupidly common in Docker/docker-compose/K8s setups as well.
For the record, I think the development pattern described in the above paragraph is total crap, and I put it on the same level as the idiots giving modern PHP7 a bad name. Devs who work like this completely blew off the content of their CS100/200-level courses (yeah, yeah, I say this as a dropout myself, ha).
> NodeJS performance is hampered by its single-threaded architecture, but given that fact it performs very well. The NodeJS server was the smallest overall in lines of code and extremely quick and easy to write. Javascript is the lingua franca for web development, so NodeJS is probably the easiest platform for which to hire or train developers.
Did you actually read the blog post before making that claim? The author isn't using the cluster module with the sticky-session library to scale connections across CPUs. He also doesn't mention the runtime flags with which he launched the VMs being compared, nor any OS-level optimizations.
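For context, the missing pattern looks roughly like this. A sketch assuming the sticky-session (indutny/sticky-session) API as shown in its README, where `sticky.listen(server, port)` returns false in the master and true in a worker; the actual WebSocket handling is omitted.

```javascript
// Rough sketch of cluster + sticky-session scaling (illustrative only).
const cluster = require('cluster');
const http = require('http');
const sticky = require('sticky-session');

const server = http.createServer((req, res) => {
  // A WebSocket library (e.g. ws) would attach to this same server via its
  // upgrade handling; sticky-session keeps each client pinned to the same
  // worker so per-connection state stays local to that process.
  res.end(`worker ${cluster.worker.id}\n`);
});

if (!sticky.listen(server, 3000)) {
  // Master process: sticky.listen has forked the workers for us.
  server.once('listening', () => console.log('listening on 3000'));
} else {
  // Worker process: handles the connections routed to it by the master.
  console.log(`worker ${cluster.worker.id} started`);
}
```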
I expected that all benchmarks using the (Linux) kernel for I/O events would perform similarly. The code is so small that interpreter overhead shouldn't degrade performance too much. Maybe it's the JSON parsing/serialization?
That benchmark is from 2016, but the `golang.org/x/net/websocket` package used for Go should be replaced with `github.com/gorilla/websocket` and the benchmark re-run, if it hasn't been already. Based on its git repo, gorilla/websocket has existed in some form since at least 2013, though I'm not sure how production-ready it would have been in 2016.
https://phoenixframework.org/blog/the-road-to-2-million-webs...