
Erlang does a different thing called "reduction-counting": the active process in a scheduler-thread gets a budget of virtual CPU cycles (reductions/"reds"), tracked in a VM register, and part of the implementation of each op in the VM's ISA is to reduce that reduction-count by an amount corresponding to the estimated time-cost of the op. Then, the implementation of the call and ret ops in the ISA both check the reduction-counter, and sleep the process (scheduling it to resume at the new call-site) if it has expended all its reds.

(If you're wondering, this works to achieve soft-realtime guarantees because, in Erlang, as in Prolog, loops are implemented in terms of recursive tail-calls. So any O(N) function is guaranteed to hit a call or ret op after O(1) time.)
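To make that concrete, here's a toy sketch (in C, and emphatically not BEAM's actual source) of what reduction accounting in a bytecode dispatch loop can look like: every op debits a per-process budget, and the call/ret handlers are the places that check whether the budget is spent and park the process. The opcode names, the flat cost of 1, and the scheduler stub are all made up for illustration; the 4000 figure matches BEAM's CONTEXT_REDS default.

    enum op { OP_ADD, OP_CALL, OP_RET };

    struct process {
        int reds;   /* remaining reduction budget for this timeslice */
        /* ... registers, stack, instruction pointer ... */
    };

    /* Stand-in for the real scheduler: park this process, run another one,
     * and refill the budget (CONTEXT_REDS is 4000 in recent OTP) so this
     * process can resume later. */
    static void yield_to_scheduler(struct process *p) {
        p->reds = 4000;
    }

    void step(struct process *p, enum op op) {
        switch (op) {
        case OP_ADD:
            p->reds -= 1;             /* cheap op: just debit the budget */
            /* ... perform the add ... */
            break;
        case OP_CALL:
        case OP_RET:
            p->reds -= 1;
            /* ... set up the call / return ... */
            if (p->reds <= 0)         /* budget spent: yield at the call */
                yield_to_scheduler(p);
            break;
        }
    }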

If you're writing an Erlang extension in C (a "NIF"), though, and your NIF does more than O(1) work, then you have to call into the runtime's reduction accounting yourself to preserve the nonblocking behavior. In that sense, Erlang is "cooperative under the covers": you explicitly decide where to (offer to) yield. It's just that the Erlang HLL papers over this by having one of its most foundational primitives do that explicit yield-check for you.
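Concretely, a long-running NIF is usually written to do its work in chunks, reporting its cost to the scheduler with enif_consume_timeslice() and rescheduling itself with enif_schedule_nif() once the timeslice is spent. Below is a minimal sketch of that pattern for a do-nothing counting loop; the module name, chunk size, and "1% per chunk" cost estimate are arbitrary placeholders, not anything prescribed by the API.

    /* Hypothetical yielding NIF: count from I to N in chunks, telling the
     * scheduler roughly how much of the current timeslice each chunk used,
     * and rescheduling the NIF itself when the slice is spent. */
    #include <erl_nif.h>

    #define CHUNK 100000ULL   /* arbitrary amount of work per timeslice check */

    static ERL_NIF_TERM
    count_to(ErlNifEnv *env, int argc, const ERL_NIF_TERM argv[])
    {
        ErlNifUInt64 i, n;

        if (argc != 2 ||
            !enif_get_uint64(env, argv[0], &i) ||
            !enif_get_uint64(env, argv[1], &n))
            return enif_make_badarg(env);

        while (i < n) {
            ErlNifUInt64 stop = (n - i > CHUNK) ? i + CHUNK : n;
            while (i < stop)
                i++;                   /* stand-in for real per-item work */

            /* Claim this chunk cost ~1% of a timeslice; if the scheduler says
             * the slice is now spent, yield by scheduling a continuation call
             * with the current position as its arguments. */
            if (enif_consume_timeslice(env, 1) && i < n) {
                ERL_NIF_TERM newargv[2] = {
                    enif_make_uint64(env, i),
                    enif_make_uint64(env, n)
                };
                return enif_schedule_nif(env, "count_to", 0,
                                         count_to, 2, newargv);
            }
        }
        return enif_make_uint64(env, i);
    }

    static ErlNifFunc nif_funcs[] = {
        {"count_to", 2, count_to}
    };

    /* "my_nif_module" is a placeholder Erlang module name. */
    ERL_NIF_INIT(my_nif_module, nif_funcs, NULL, NULL, NULL, NULL)

The yield decision is still yours: the runtime only tells you the slice is spent, and your code has to actually return.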




If loops are implemented as tail calls, doesn't the call opcode get optimized away, thus preventing the reduction checker from being run?


The compiler marks the point just before the call as a yield point, and that yield point survives the optimization: tail-call optimization elides the stack-frame push, not the call instruction itself, so the reduction check at the call site still runs.



