
> I did not mean to disparage it just because it was developed for a particular domain.

Fair enough, that's all I was saying. I'd agree with you that the increased use of Node everywhere to build applications, with tools like Electron, is an utterly stultifying trend.

It's fantastic for I/O-heavy 'server'-type code, and for using JS as an at least surprisingly performant interpreted language, but it's disappointing to see it replacing languages like Swift or C++ in computation-heavy systems programming - or domains which should be the preserve of systems programmers.

> However, if "An embedded system uses the internet to communicate" has become synonymous with "It runs linux and uses web services and nodejs" then that would be a perfect example of contorting the application to match the technology that more programmers are most productive with.

I absolutely agree. No quibble with you there! I'm a fan of using the best tool for the job. And, much as it's a meme, I absolutely adore Rust for lots of systems programming applications.

PS: If what you love about Rust is its safety even more than its speed, then I'd recommend looking into Ada, which could be called a precursor of Rust. It has none of Rust's modern trendiness, but it's a mindblowing accomplishment: TypeScript/Haskell-grade type expressivity (with Rust-grade safety) while still retaining, I believe, greater-than-Rust speed.




> It's fantastic for I/O-heavy 'server'-type code

I have found Node.js frustrating for CRUD APIs. The "fantastic for heavy I/O" claim, while true, ends up guiding you toward complex microservices (or toward using external services for everything), because you don't want to slow down the event loop as you add more and more CPU processing to your app.


Interesting! I'm not a Node expert specifically, just a general systems programmer, but I might be able to give you some pointers.

Is your application I/O-bound or CPU-bound? i.e. which resource is the bottleneck, the one you'd have to increase in order to speed it up? Your comment is a bit ambiguous, given your remark about adding "more and more CPU processing".

If you're I/O-bound, then you're free to do more CPU processing. If/once you're CPU-bound, there are a few questions:

- Are you using all your cores? Node runs your JavaScript single-threaded, so you may need to run one Node process per core; see the sketch after this list. This obviously depends on how parallelisable your program is. Edit: u/eyelidlessness has given some more Node-specific suggestions that may allow true shared-memory concurrency, or even shared-memory parallelism across several cores.

- Are you able to increase your CPU's clock speed? (This obviously assumes a cloud environment or something similar, where you can easily swap out CPUs. I'm not talking about overclocking.)

- Have you profiled exactly what is using so much CPU? Is there some wasteful computation you can remove? Try `node --cpu-prof` to generate a profile. If you're unfamiliar with analysing profiles, Brendan Gregg's blog is the place to go. This article by the author of Sled is also a very good long read: https://sled.rs/perf.html
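
To make the first bullet concrete, here's a minimal sketch using Node's built-in cluster module (assumes Node 16+ for cluster.isPrimary; the port and trivial handler are placeholders for your real server):

    // fork one worker per core; each worker gets its own event loop
    const cluster = require('cluster');
    const http = require('http');
    const os = require('os');

    if (cluster.isPrimary) {
      for (let i = 0; i < os.cpus().length; i++) cluster.fork();
      cluster.on('exit', () => cluster.fork()); // replace crashed workers
    } else {
      // every worker listens on the same port; the primary balances load
      http.createServer((req, res) => res.end('ok')).listen(3000);
    }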

I'd be surprised if you're really using so much CPU in a Node application, at least a typical CRUD one. I'd strongly suspect that you're doing some wasteful computation, either in an algorithm in your business logic or in inefficient JSON parsing. Let me know if you can give any more info :)

Edit: It looks like u/eyelidlessness has given some more Node-specific tips for improving CPU saturation. I'd definitely check out the pointers that he/she gave.


If your workload is actually CPU-heavy, Node’s solutions for this are several:

- worker threads

- child processes/IPC

- native bindings (N-API/gyp/etc.)

- WASM/WASI

Maybe so many as to cause choice/research paralysis. If you’re primarily interested in writing JS/TS, worker threads are a great option. And for most use cases you don’t need to worry about shared memory/atomics; postMessage has surprisingly good performance characteristics.
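
To illustrate, a minimal worker_threads sketch; the recursive fib is just a hypothetical stand-in for whatever CPU-heavy work you'd offload, with the result coming back via postMessage:

    // main.js: offload a CPU-heavy call so the event loop stays free
    const { Worker } = require('worker_threads');

    function runInWorker(n) {
      return new Promise((resolve, reject) => {
        const worker = new Worker('./fib-worker.js', { workerData: n });
        worker.once('message', resolve); // result arrives via postMessage
        worker.once('error', reject);
      });
    }

    runInWorker(40).then(console.log);

    // fib-worker.js: runs in its own V8 isolate (hence its own GC)
    const { parentPort, workerData } = require('worker_threads');
    const fib = (n) => (n < 2 ? n : fib(n - 1) + fib(n - 2));
    parentPort.postMessage(fib(workerData));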


Worker threads spawn a new JS VM (which implies a new GC) for each worker! We tried it, and the gains from parallelism stopped when there were half as many workers as CPUs.


They do create a new VM isolate. That’s a cost your workload will need to exceed before you get much, if any, benefit.

As far as thread count, my default guidance is 50% of cores if your CPU has SMT/hyper-threading, coreCount - 2 otherwise.

Those numbers can go higher depending on how much of your workload is CPU-heavy. If you have an even mix of compute and I/O, for example, your threads will frequently be idle and less contentious (same principle as the single-threaded event loop).
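
As a rough sketch of that heuristic (Node only sees logical cores via os.cpus(), so whether SMT is enabled is an assumption you have to supply yourself):

    // pool sizing per the guidance above; hasSMT must be supplied by
    // the operator, since Node can't reliably detect hyper-threading
    const os = require('os');

    function poolSize(hasSMT) {
      const cores = os.cpus().length; // logical cores
      return hasSMT ? Math.floor(cores / 2) : Math.max(1, cores - 2);
    }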

And if your workload is a queue of short-lived, isolated steps, I also recommend pre-spawning idle threads (potentially well beyond that active maximum). Pre-warming those isolates can help as well, or isolating with the vm APIs (e.g. SourceTextModule) instead.

As with, well, everything: your mileage may vary, it depends on your actual bottlenecks and constraints, as well as your tolerance for tuning.


“More and more CPU processing” doesn’t sound like a CRUD API.


A CRUD API doesn't mean "just talks to the database and nothing more". A simple example, one of many: PDF generation can f*ck up your event loop.
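
A toy demonstration; the busy-wait below stands in for any synchronous CPU-heavy step such as PDF layout and rendering:

    // while the synchronous "render" runs, every other request stalls
    const http = require('http');

    function fakePdfRender(ms) {      // stand-in for sync PDF generation
      const end = Date.now() + ms;
      while (Date.now() < end) {}     // monopolises the single event loop
    }

    http.createServer((req, res) => {
      if (req.url === '/pdf') fakePdfRender(2000); // 2s: everyone waits
      res.end('done');
    }).listen(3000);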


My CS advisor in undergrad stirred a love of safety in me with Ada. I've always wanted to get back into it. Recent headlines have made me think that Rust will move into that domain. In 10 years, a lot of safety-critical systems will probably use a lot of Rust. Ada was (to my knowledge) never particularly well accepted by "the masses", whereas Rust is. It was just too early.


Oh, nice! No, Ada doesn't seem ever to have achieved the same mainstream adoption, or at least awareness, that Rust has. Which is startling to me, because it's comparable to Rust in performance while being far, far ahead on safety, including built-in support for design-by-contract (honestly, I feel like I'm shouting at a brick wall trying to get people to understand the benefits of DbC; Rust's type system is only a first-order approximation).

I agree that the problem seems to be that most 'practical programmers' don't understand the benefits of PL-research-y features until they're forced to use them, and then suddenly they take off (cf. the growth of ADTs or dependent typing after TypeScript introduced people to them).

The other problem is the perennial 'trendyism' in programming, which I hate. People won't investigate interesting languages from the 80s with unusual features; only once something's added to the JS framework du jour does it achieve wide adoption.


Back when I learned Ada, it didn’t have a solution for use-after-free. Does DbC fix that by expressing ownership and sharing?


There was a thread a while back which covered the various variants of Ada and how their memory management contrasts with that of Rust: https://news.ycombinator.com/item?id=14210258

Basically: no, it likely hasn't changed much since you learned it, unless that was genuinely decades ago. But there are various solutions (or, at least, ideas which are considered solutions, depending on what you consider the problem to be) that you might have missed!



