Which, ironically, is the model with probably the best performance and scaling characteristics. If you want high performance, you should avoid sharing mutable state as much as possible.
The bigger problem is that JavaScript disallows (or at least, browsers don't currently provide) efficient sharing of immutable values between threads/workers.
Because JS has no notion of immutability, sharing is done by copying, and what's worse, copying via full-blown serialization (structured clone), not just an in-memory blit or copy-on-write. At least, that was the state of things when I last checked a year ago.
Modern JS does allow you to send large buffers from one worker to another with just a pointer copy (so-called "transferable objects": the buffer becomes inaccessible on the sending side).
Yes, but even with that restriction, it doesn't let you transfer ownership of structured messages, only buffers and images; so anything else still needs a serialization round trip.
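To make the distinction concrete, a minimal sketch (the worker.ts module is a hypothetical stand-in; only the message semantics matter here):

```ts
// main.ts -- what is and isn't copied when messaging a worker.
// worker.ts is a hypothetical worker script.
const worker = new Worker(new URL("./worker.ts", import.meta.url), {
  type: "module",
});

const pixels = new ArrayBuffer(64 * 1024 * 1024); // e.g. 64 MiB of image data

// Structured clone: the whole message, buffer included, is serialized
// and copied for the receiving side.
worker.postMessage({ kind: "clone", pixels });

// Transfer: pass a transfer list and ownership moves with a pointer copy.
// Only the ArrayBuffer moves cheaply; the wrapping object is still
// structured-cloned, and `pixels` is detached here afterwards.
worker.postMessage({ kind: "transfer", pixels }, [pixels]);

console.log(pixels.byteLength); // 0 -- the buffer is gone from this side
```

Anything outside the small transferable set (ArrayBuffer, MessagePort, ImageBitmap, OffscreenCanvas, ...) still pays the serialization cost, which is exactly the limitation being pointed out above.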
I disagree. Play with Go's goroutines and channels and see how easily you can spread a simple or complex job across 16 cores, and then bring that data all back together. JS has nothing on that.
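For comparison, the closest Web Worker analogue to that fan-out/fan-in looks roughly like the following sketch (worker.ts is hypothetical, assumed here to post back a processed copy of the chunk it receives), and the amount of plumbing is exactly the point:

```ts
// main.ts -- fan-out/fan-in across one worker per core, the rough
// Web Worker analogue of `go` statements plus a channel receive.
const cores = navigator.hardwareConcurrency ?? 4;
const input = Array.from({ length: 1_000_000 }, (_, i) => i);
const chunkSize = Math.ceil(input.length / cores);

const results = await Promise.all(
  Array.from({ length: cores }, (_, i) => {
    const worker = new Worker(new URL("./worker.ts", import.meta.url), {
      type: "module",
    });
    const chunk = input.slice(i * chunkSize, (i + 1) * chunkSize);
    return new Promise<number[]>((resolve) => {
      worker.onmessage = (e) => {
        resolve(e.data);
        worker.terminate();
      };
      worker.postMessage(chunk); // the chunk is copied, not shared
    });
  })
);

const combined = results.flat(); // "bring that data all back together"
```

In Go the same shape is a loop of `go` statements feeding results into a channel, and the slices crossing that channel share their backing memory instead of being serialized.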
It's important to distinguish performance (making the most out of a single computer) from scalability (making an overall system that continues to perform well when you throw lots of hardware at it). Data-sharing is essential for performance. It's anathema to scalability, not least because different processes in different datacenters aren't going to be able to share data without explicit messages between them anyway.
Node.js (and PHP, for that matter) often don't get enough credit for their scalability in the large. Perhaps this is because the surrounding distributed-systems ecosystem hasn't grown up around them yet the way it has around, say, Erlang or Java or even C++, but architecturally they're pretty sound if you want to build distributed systems that scale across many computers.
If you're looking at things solely from a distributed-systems perspective, sure. But I work on graphics. It'd be irresponsible for me not to embrace shared memory: it's so essential for performance that it's practically impossible to contemplate not having it. (Imagine if multiple fragments couldn't read from the same texture!)
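And even on the web, genuinely shared mutable memory between threads does exist now: SharedArrayBuffer plus Atomics, available when the page is cross-origin isolated. A minimal sketch (worker.ts is again a hypothetical worker script):

```ts
// main.ts -- genuinely shared (mutable) memory between JS threads.
// Requires cross-origin isolation (COOP/COEP headers) to be enabled.
const shared = new SharedArrayBuffer(4 * 1024);
const view = new Int32Array(shared);

const worker = new Worker(new URL("./worker.ts", import.meta.url), {
  type: "module",
});

// Only the handle crosses the boundary; both threads now see the
// same bytes, with no copy and no serialization.
worker.postMessage(shared);

// Atomics provides the synchronization; plain reads/writes to shared
// memory are racy without it.
Atomics.store(view, 0, 42);
Atomics.notify(view, 0); // wakes a worker blocked in Atomics.wait(view, 0)
```

It's mutable shared memory rather than the immutable sharing asked about upthread, but it's the closest the platform currently gets.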
We should use all the hardware resources available to us. If our programming models don't allow us to do that, then we should replace them with models that do.