
Doesn't Linux have something similar? System V message queues, or even Unix datagram sockets using named paths for queue names.
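For reference, the System V flavor on Linux looks roughly like this (a minimal sketch; error handling omitted, and the /tmp key path is just a placeholder):

    #include <stdio.h>
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>

    struct msgbuf_t { long mtype; char mtext[64]; };

    int main(void) {
        /* Derive a key from an existing path; "/tmp" is illustrative. */
        key_t key = ftok("/tmp", 'q');
        int qid = msgget(key, IPC_CREAT | 0600);

        struct msgbuf_t out = { .mtype = 1 };
        strcpy(out.mtext, "hello");
        msgsnd(qid, &out, sizeof out.mtext, 0);   /* does NOT block the sender */

        struct msgbuf_t in;
        msgrcv(qid, &in, sizeof in.mtext, 1, 0);  /* blocks until a type-1 message */
        printf("got: %s\n", in.mtext);

        msgctl(qid, IPC_RMID, NULL);              /* remove the queue */
        return 0;
    }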



System V messages weren't bad, but they aren't used much.

The real trick is tight IPC and CPU scheduling integration. You want a send from process A to process B to result in an immediate transfer of control from process A to process B, preferably on the same CPU. The data you just sent is in the CPU's cache. QNX is one of the few OSs where somebody thought about this.

With unidirectional or pipe-like IPC, the sender sends, which unblocks the receiver, but the sender doesn't block. So the OS can't just toss control to the receiver. The receiver goes on the ready-to-run list and, quite likely, another CPU starts running it. Meanwhile, the sending process runs for a short while longer and then typically blocks reading from some reply pipe/queue. It takes two extra trips through the scheduler that way. Worse, if the CPU is busy, sending a message can put you at the end of the line for CPU time, which makes for awful IPC latency under load.
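For contrast, here's a hedged sketch of what the synchronous QNX model looks like from user code, using MsgSend/MsgReceive/MsgReply from <sys/neutrino.h> (prototypes from memory of the QNX docs, so details may be off):

    #include <sys/neutrino.h>

    /* Server side: create a channel and service requests. */
    void serve(void) {
        int chid = ChannelCreate(0);
        char msg[64], reply[64] = "pong";
        for (;;) {
            int rcvid = MsgReceive(chid, msg, sizeof msg, NULL);
            MsgReply(rcvid, 0, reply, sizeof reply);  /* unblocks the client */
        }
    }

    /* Client side: attach to the channel and make a call. */
    void call(int server_pid, int chid) {
        int coid = ConnectAttach(0, server_pid, chid, _NTO_SIDE_CHANNEL, 0);
        char req[] = "ping", resp[64];
        /* MsgSend blocks the caller (SEND- then REPLY-blocked), so the
           kernel can hand the CPU straight to the server, like a
           subroutine call across processes. */
        MsgSend(coid, req, sizeof req, resp, sizeof resp);
    }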

It's one of the classic mistakes in microkernel design.


> With unidirectional or pipe-like IPC, the sender sends, which unblocks the receiver, but the sender doesn't block. So the OS can't just toss control to the receiver.

Interesting. I remember looking at the API, but I just didn't have enough experience or context at the time to dig deeper and answer those questions. I stayed away from message queues and opted for shared memory, mostly because they seemed obscure and I was afraid I would hit some corner-case bug and be stuck debugging low-level kernel code on my own.

> You want a send from process A to process B to result in an immediate transfer of control from process A to process B, preferably on the same CPU.

I can see a message-passing-centric system having some specific optimizations in the scheduler. Say once a few messages have been sent, a DAG forms of which senders send to which receivers. Topologically sorting that DAG might be interesting, then making scheduling decisions based on it. That is, if sender1 sends a message to receiver1 and receiver1 then sends to receiver2, maybe it is more efficient to run them in that order: sender1, receiver1, receiver2.

I saw that done in a realtime system that processed low-latency data. The graph was static, but the sorting trick sometimes allowed data to be processed with a latency of only one frame.
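A toy version of that ordering idea (Kahn's algorithm over a static sender-to-receiver graph; the three-node graph is just the example from above):

    #include <stdio.h>

    #define N 3  /* 0 = sender1, 1 = receiver1, 2 = receiver2 */

    int main(void) {
        int edge[N][N] = {0};
        edge[0][1] = 1;   /* sender1   -> receiver1 */
        edge[1][2] = 1;   /* receiver1 -> receiver2 */

        int indeg[N] = {0};
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                indeg[j] += edge[i][j];

        /* Kahn's algorithm: repeatedly schedule a node with no
           unscheduled predecessors; the result is a valid run order. */
        int done[N] = {0};
        for (int k = 0; k < N; k++) {
            for (int i = 0; i < N; i++) {
                if (!done[i] && indeg[i] == 0) {
                    printf("run process %d\n", i);
                    done[i] = 1;
                    for (int j = 0; j < N; j++)
                        if (edge[i][j]) indeg[j]--;
                    break;
                }
            }
        }
        return 0;
    }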


There's a discussion of this in an old QNX design document.[1] See the IPC section. Integration between scheduling and message passing is essential if you're making lots of little IPC calls. When control passes from one process to another via an IPC call, the receiving process temporarily inherits the priority and remaining CPU quantum of the caller. This makes it work like a subroutine call for scheduling purposes: nobody goes to the back of the CPU queue. So IPC calls don't incur a scheduling penalty.

[1] http://www.qnx.com/developers/docs/6.3.0SP3/neutrino/sys_arc...
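In scheduler terms the hand-off amounts to something like this (my own toy sketch of the idea, not QNX code):

    struct thread {
        int base_prio;   /* thread's own priority */
        int eff_prio;    /* priority it currently runs at */
        int quantum_us;  /* remaining timeslice */
    };

    /* On a synchronous send, the receiver borrows the sender's
       effective priority and leftover quantum, so the pair is
       charged like one thread making a subroutine call. */
    void ipc_handoff(struct thread *sender, struct thread *receiver) {
        receiver->eff_prio   = sender->eff_prio;
        receiver->quantum_us = sender->quantum_us;
        /* The sender now blocks REPLY-blocked; switch directly to
           the receiver on this CPU, message bytes still warm in cache. */
    }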


Solaris has a mechanism called Doors that works like this. It's used primarily by gethostbyname()/getaddrinfo() to make a synchronous cross-process RPC to nscd, the name service cache daemon.
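A minimal sketch of the Doors API as I remember it (door_create/door_call from <door.h>; the /tmp path and payloads are made up, and in practice the server and client are separate processes, as with nscd; they're combined here only to keep the sketch short):

    #include <door.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stropts.h>
    #include <unistd.h>

    /* Server procedure: runs in the calling thread's scheduling context. */
    static void server_proc(void *cookie, char *argp, size_t arg_size,
                            door_desc_t *dp, uint_t n_desc) {
        char reply[] = "pong";
        door_return(reply, sizeof reply, NULL, 0);  /* control returns to caller */
    }

    int main(void) {
        int fd = door_create(server_proc, NULL, 0);
        /* Publish the door in the filesystem; path is illustrative. */
        close(open("/tmp/mydoor", O_CREAT | O_RDWR, 0644));
        fattach(fd, "/tmp/mydoor");

        /* A client would then do: */
        char req[] = "ping", buf[64];
        door_arg_t arg = { .data_ptr = req, .data_size = sizeof req,
                           .desc_ptr = NULL, .desc_num = 0,
                           .rbuf = buf, .rsize = sizeof buf };
        int dfd = open("/tmp/mydoor", O_RDWR);
        door_call(dfd, &arg);   /* synchronous: the thread migrates to the server */
        printf("%s\n", (char *)arg.data_ptr);
        return 0;
    }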



