“DBus is seriously screwed up” (gmane.org)
281 points by deng on April 28, 2015 | 199 comments



This whole pulling crap out of the Linux kernel mailing list is doing the industry a disservice.

Quotes are taken out of context and used by people who lack the required technical understanding; they then reach the ears of managers, who promptly act on overblown concerns. All of it is somehow backed by Linus' word, even when he doesn't actually mean it.

I've started a stopwatch to see how long it will take for some of my former coworkers to contact me with concerns about DBus. We used it pretty heavily.

As for DBus itself: guys, it is useful. It works. You can count on it being there and working in most distributions, and you can talk to most things nowadays. It's easy to use (for most use cases), and language bindings are everywhere.

So, it turns out it's slow. It's so cool that we have identified that. Let's fix it.

Even if the slowness doesn't seem to bother many of its use cases.


This whole pulling crap out of the Linux kernel mailing list is doing the industry a disservice.

The worst part is that I sorta understand why some low-rent blogo-journalist pulls that nonsense. Hacker News should be better than this.

A juicy pullquote can make an attractive headline and a cheap, easy-to-write article. Start with a basic technical introduction that paraphrases Wikipedia but doesn't get the reader anywhere near ready to understand the LKML post. Then pad out the rest with some hemming and hawing about: has Linus gone TOO FAR!?

On HN, these kinds of submissions are just as trashy, but because they point directly to LKML we can be fooled into thinking they're something loftier.

It should be called "kdbus patch review: dbus performance overhead may be in userspace library, not context switching." But nobody would submit that because it's not catchy. I'd still click that link, but the title would be boring and technical, just like the post it links to is. I actually flagged the other front page LKML post "Big-Endian is effectively dead."


"On HN, these kind of submissions are just as trashy, but because they point directly to LKML we can be fooled into thinking to be something loftier."

No, it's because of the cognitive dissonance in play on HN. It's a fucking news-re-blog, for crying out loud, but because of the stated goals[0] of the site there's some idea that we're rising above the trash and nonsense of sites like Slashdot.

Don't get me wrong; articles get posted here before they get reposted on Slashdot or The Register etc., but that's all there is to HN. It's not a better place for discussion than either of those two sites.

[0]https://news.ycombinator.com/newswelcome.html


I would call it "dbus performance overhead is userspace library AND context switching". See: http://article.gmane.org/gmane.linux.kernel/1939651

Or really, it isn't "context switching"; it's that there's the bus daemon in the middle which multiplies the userspace library overhead by 2, since messages are read and written twice as many times if you have the daemon in the middle.

Speeding up read/parse and marshal/write is completely separable from doing fewer read/parse marshal/write operations. Two different performance issues that aren't related as far as I can tell.


Sure, but you don't need to defend dbus in this branch of the thread. I'm showing the difference in kind from "DBus is seriously screwed up." I knew my title wasn't great but in the context of that comparison, my proposed title and yours are effectively the same.


So totally this; it's what I've said time and again. Is Linus abrasive? Yes, he can be. He usually has reason. But people seem to rarely read the whole thread, and even if they did, they are usually outsiders who lack both the technical chops to make valid criticisms and familiarity with Linus' management style. I count myself as lacking the former, as I've been outside of kernel land for quite some time. I'll say this though: the N900, with 256MB of RAM and a single-core 600MHz ARM chip from 2009, had DBus, and it didn't seem to suffer too much from it. Hell, it's kind of impressive to be able to make phone calls from the command line, say over SSH:

https://wiki.maemo.org/Phone_control#Make_a_phone_call

Can improvements be made? Probably. Should they be? Why the hell not? Should clickbait headlines stop distracting us from real work? I only wish they would.


Strigi desktop search and the NEPOMUK semantic desktop relied extensively on D-Bus. The D-Bus background daemon often hung, and NEPOMUK itself was known to be slow and wasteful with hardware resources. Maybe it wasn't D-Bus's fault at all, but the years-long problems left a bad impression. http://en.wikipedia.org/wiki/Strigi , http://en.wikipedia.org/wiki/NEPOMUK_(framework)

Comparing a PPC (WinCE), the iPhone 1, and the N900 with similar hardware specs, there was no clear winner.


As much as I loved the N900 for its system transparency and malleability (I still use it), it was hardly a paragon of performance. I don't know how much D-Bus played into it, though.


While I enjoyed the N800 at the time, the waves that Maemo has since created across the Linux ecosystem have me wondering if the project should never have happened.


I don't lack the required technical understanding. The headline is correct; Linus identified several embarrassing and obviously stupid problems with kdbus, including security concerns, in that thread.


I am wondering if there are "Linux Insurgents": groups of people with a particular agenda who find small bits of data that can be made to serve their cause and expose them. Or maybe they just don't like DBus. Or maybe they just want to feel like they are having a larger debate.

If it is any consolation, while I worked at Sun in the kernel group and later, this sort of 'cherry pick a message to start a debate' technique was also present. Sometimes it is just someone who believes something is broken and they use the message to bolster their argument.


I also wonder if the reverse happens, with people sabotaging the system by breaking things that work, making easy things harder, and replacing fast things with slower things.


I don't think it's fair to call systemd or dbus sabotage. It's entirely possible to construct explanations that rely on blinkered, tasteless incompetence, naivete, and personality cults rather than maliciousness.


I was thinking more along the lines of reimplementing the Windows registry in Samba...


> I am wondering if there are "Linux Insurgents": groups of people with a particular agenda who find small bits of data that can be made to serve their cause and expose them. Or maybe they just don't like DBus. Or maybe they just want to feel like they are having a larger debate.

D-Bus is related to systemd (in that systemd relies on it), so this issue seems to have garnered a fair bit of spillover from the larger systemd debate.


No doubt the spillover is real.

This clearly highlights one of the challenges of Linux userland. I guess in the 'Bazaar' model this might be equivalent to trying to ban the selling of meat from endangered species amongst all the vendors.


The bazaar works in the sense that each vendor sticks to their own product.

But with systemd, one vendor is "branching out" and offering (tightly) interconnected products.

And for a certain subset of customers, distro maintainers, this offer is tantalizing, because it may make their lives easier.

Or, to go back to the meat: it is not the kind of meat that's the problem, but that now you have one vendor selling whole meals.

And to stretch it a bit further, they are not selling it to the eater, but to the chef.

Say you have a favorite restaurant, and when you sit down you can combine this meat with that potato etc.

But then one day you come in and you find the menu has changed to pre-set meals. Now if you want your favorite kind of meat, you can only have it alongside a potato you barely tolerate (if not downright dislike).

This is because the restaurant switched suppliers to one that only sells pre-packaged meals rather than individual components.


This is the best analogy for the systemd project I've seen so far.


I understand the whole point of putting dbus into the kernel was to fix performance. The objections are mainly because the performance gains haven't really materialized, but the costs are very much present.


What's being taken out of context here? It seems the conversation is about dbus being slow, and you agree that it is?


Probably referring to the link title, which could imply quite a bit more than it simply being slow.


Scroll up to the earlier discussion there; it's about security implications and conditional byte swapping, so it's not all about performance.


Well how can it be slow but not screwed up?


By working?


We've always known it's slow. It has a ton of other problems as well, not least of which is pointless duplication. But its speed has always been terrible.


>Quotes are taken out of context, used by people who lack the required technical understanding

What? This is literally the maintainer of the Linux kernel on his mailing list voicing his opinion on something kernel-related. Funny how when he bashes things HN hates, like Nvidia drivers, suddenly he's a truth-teller and his words are repeated from the mountaintops. When he's bashing something you like, suddenly it's "OMG GUISE IGNORE LINUS HE DOESNT KNOW WHAT HES TALKING ABOUT!!"

Come on.


The thing is, Linus does know what he is talking about, and he can separate the performance concerns from functionality, etc. I think the real point of the message is to change dbus, not the kernel, to address the performance issues.

But if someone without the proper technical understanding looks at just the pull quote, they might think it is a lost cause or not worth using.


It blows my mind how such a simple operation as passing messages from process to process can balloon to waste the measured half a million CPU cycles. People manage to have full-blown HTTP servers service a request with less than that.

Heck, I have worked on an algorithmic trading platform that, within a limit of 5us, receives market data, dedups it (multiple multicast streams for redundancy), uncompresses it (fricking zlib), parses it, analyzes it, sends it to multiple algorithms which decide if the current market situation matches certain rules, decides on and fills a market order, has the order inspected by an independent mechanism that stops the algorithm if it malfunctions, and only then sends it to market, over TCP, which is another form of IPC.

All in the span of fricking 5us, which is 40 times less than the benchmark suggests for this simple task. Granted, the algorithmic trading world goes to great lengths to avoid overhead, including kernel overhead, any kind of task switching, branch prediction failures, etc. But still, come on, guys...


> But still, come on, guys...

libnetsnmp used to make close to a million calloc() calls when responding to a query for the number of CPU sockets on the machine. That was kind of amazing in its sheer brazenness.


> It blows my mind how such a simple operation as passing messages from process to process can balloon to waste the measured half a million CPU cycles. People manage to have full-blown HTTP servers service a request with less than that.

You're comparing a lightweight thing with a security policy driven message bus. Obviously the latter is going to be more expensive.


Nothing obvious about it.

If you look at Linus's trace, it's all heap and mutex operations. That's just sloppy internal design, full of concurrency bottlenecks and lots of in-memory cloning. You certainly don't mean to imply that this is the only way to implement "security policy driven message bus" software, do you?


> If you look at Linus's trace, it's all heap and mutex operations. That's just sloppy internal design, full of concurrency bottlenecks and lots of in-memory cloning.

Far from it. One of the core designs of kdbus (which dbus cannot do because it's not in the kernel) is that you can seal the payload buffer from the sender, so the receiver can use it safely and concurrently without having to clone anything.

There is obviously a lot of general inefficiency in the userland libraries but not in the design of it.


You're talking about memfd, I think? That has nothing to do with kdbus in particular. It's an independent syscall that replaces many use cases for splice/vmsplice, even though it was introduced as part of the kdbus project - it's still a separate thing.
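For the curious, the sealing pattern it enables looks roughly like this (a minimal sketch; assumes Linux 3.17+ and a glibc that wraps memfd_create, otherwise go through syscall(2); error handling omitted):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Build a sealed, immutable payload buffer and return its fd. The fd
       can then be handed to a receiver over an AF_UNIX socket (SCM_RIGHTS);
       once sealed, neither side can modify the contents, so the receiver
       can map and read it without a defensive copy. */
    int make_sealed_payload(const char *msg, size_t len)
    {
        int fd = memfd_create("payload", MFD_ALLOW_SEALING);
        ftruncate(fd, len);

        char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        memcpy(buf, msg, len);
        munmap(buf, len);

        fcntl(fd, F_ADD_SEALS, F_SEAL_SHRINK | F_SEAL_GROW | F_SEAL_WRITE);
        return fd;
    }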


> You're talking about memfd, I think? That has nothing to do with kdbus in particular. It's an independent syscall that replaces many use cases for splice/vmsplice, even though it was introduced as part of the kdbus project - it's still a separate thing.

That was created for KDBUS but has more uses than being used for KDBUS exclusively. It still was written for KDBUS.


And so it was accepted and merged. That something good came out of kdbus is excellent, but that's no justification to merge in the whole package. A ton of proposed kernel additions end up like this - a few good ideas refined and accepted, and the rest thrown out.


The trace isn't dbus itself; it's a particular client program using the gdbus binding. The gdbus binding uses a lot of malloc and threads, and this particular client program is all blocking round trips.


>You're comparing a lightweight thing with a security policy driven message bus. Obviously the latter is going to be more expensive.

The real problem is he doesn't give any data for volume/bandwidth processed in his case.

I don't see why a "security policy driven message bus" would be more expensive than what he describes (unzipping, business rule enforcement, analyzing, etc).


> I don't see why a "security policy driven message bus" would be more expensive than what he describes (unzipping, business rule enforcement, analyzing, etc).

It was in comparison with an HTTP server. I did not read much about the mail thread but from what I can tell it essentially benchmarks the uninteresting part of DBUS which is the part that runs in userspace. That's old and definitely needs improvements, but not the design of it. In fact, the design of KDBUS or the DBUS dispatching core is from what I can tell quite intelligent and makes sense from a performance point of view.

Without a doubt there are a lot of old bells and whistles attached to DBUS which need rewriting.


Moore's Law works to make it eventually more than fast enough.

Moore's Law does not help it get more secure.

How about we work on things that matter in this post-Snowden world, like increasing security -- and let Moore do the performance improvements! Seems a more reasonable and efficient approach, no? Adding more attackable surface area to the kernel is not a good idea at all. It is an NSA wet dream.


Moore's law makes it faster, but Wirth's law compensates for that dearly. Let's also not forget that pesky Amdahl making our bottlenecks stay bottlenecks, as some things are just fundamentally wrong for performance.


Kinda good points; however: Wirth's law is more true for commercial software where profit is the motive. Amdahl's law only really makes sense in parallel computing. All that is beside the real point; which is that in this post-Snowden world, we should not be sacrificing security for performance.


Wirth's law is more true for commercial software where profit is the motive.

How I wish. Sure, for proprietary and commercial consumer-facing apps, it might be the case. There's a ton of proprietary/commercial development tools and infrastructure (hyper-optimized JVMs, k/kdb...) that are inverses of Wirth. FOSS has its opuses as well, but it's just as full of crap.

Amdahl's law only really makes sense in parallel computing.

Sure, but that doesn't mean one can just magic away blatant bottlenecks, and even if some speedup is possible, it does not retroactively make these decisions correct.


> Amdahl's law only really makes sense in parallel computing.

Contemporary computing is parallel computing. Even phones have multiple cores.


Nice job missing the point.


Security is usually correlated with efficiency, not inefficiency. Think djbware. If something is inefficient, it means nobody has bothered thinking through the design, and well...


So let us make the user space implementation more performant instead of going down the more inherently risky path in regards to security.


I can make two observations and hypotheses from the profiling results, which agree with Linus' conclusion of "bad user-level code":

- Memory allocation/deallocation are taking the most time.

- All the percentages for each function are very small.

The former is a characteristic of code which heavily abuses dynamic allocation. It's surprising to see how many programmers are not aware of the overhead it adds and would malloc()/free() frivolously when something simpler would suffice. This is also often accompanied by copious amounts of unnecessarily copying data around. I've worked with small embedded systems where every use of dynamic allocation would need to be justified thoroughly in code reviews; perhaps these developers would benefit from being put through the same process.
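To illustrate with a hypothetical (not from the dbus code): the first version hits the allocator on every call, the second lets the caller provide storage, often just a stack buffer:

    #include <stdio.h>
    #include <stdlib.h>

    /* Heap-happy: one allocation per call, and the caller must free(). */
    char *make_reply_heap(const char *name)
    {
        char *buf = malloc(64);
        snprintf(buf, 64, "hello %s", name);
        return buf;
    }

    /* Simpler: caller-provided storage, no allocator traffic and no
       free() to forget. */
    void make_reply(char *buf, size_t len, const char *name)
    {
        snprintf(buf, len, "hello %s", name);
    }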

The latter is a phenomenon which arises from "excessive modularity": the functionality of the system has been split into so many little pieces that the time each function contributes to the overall total is tiny. Instead of seeing an obvious "80% of the time is being spent here" that could easily be targeted for optimisation, that 80% is scattered amongst several dozen functions each taking 1-2% each. The bottleneck isn't concentrated in one area --- the whole system is uniformly inefficient. It's extremely difficult to optimise a system like this because nothing in particular stands out as being optimisable. I've had to optimise some large Java applications that were like this, and the solution was basically to remove most of the code and rewrite it to get rid of many chains of indirection.


Frequently the second problem is exacerbated by cache misses. Every operation becomes (much!) slower, but nothing stands out.

Not a popular opinion here, but excessive modularity like that is practically the definition of bad, hard-to-follow code, IMO. Ignoring performance issues, everything happens somewhere else, and large changes now take several times longer, since the changes are not local to a function. The code is several times longer than it otherwise would be due to all the function declarations...

It's a nightmare. It tends to be the kind of code you feel productive while writing ("I'm cleaning up this 200 line function"), but that is really just making the codebase worse (is there a general term for this kind of false productivity? It's a common problem I see).


> It's a nightmare. It tends to be the kind of code you feel productive while writing ("I'm cleaning up this 200 line function"), but that is really just making the codebase worse (is there a general term for this kind of false productivity? It's a common problem I see).

This issue is pervasive when Desktop/Web developers try to improve embedded software. I've achieved a thousandfold increase in performance by converting an embedded data logger from using printf to using a dedicated formatting function (most of the time was spent on parsing the format string and performing allocations).
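A rough sketch of what I mean by a dedicated formatting function (hypothetical, not the actual logger code): emit a fixed-format record directly instead of going through printf's format-string machinery:

    #include <stdint.h>

    /* Append an unsigned decimal to p; returns the new end pointer. */
    static char *put_u32(char *p, uint32_t v)
    {
        char tmp[10];  /* a uint32_t has at most 10 decimal digits */
        int n = 0;
        do { tmp[n++] = '0' + v % 10; v /= 10; } while (v);
        while (n) *p++ = tmp[--n];
        return p;
    }

    /* Format "timestamp,value\n" with no format-string parsing and no
       hidden allocations. */
    void log_sample(char *buf, uint32_t t_ms, uint32_t value)
    {
        char *p = put_u32(buf, t_ms);
        *p++ = ',';
        p = put_u32(p, value);
        *p++ = '\n';
        *p = '\0';
    }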


The compiler doesn't do the parsing for printf at compile time for the common cases? That's semi-surprising.



Or rather: compilers exist that do. The embedded world has a lot of strange toolchains.


The percentages are small because

a) This is the output of perf top, a system-wide trace of the last few seconds

b) The application uses lots of library functions

c) Due to lots of locking in the application, the cpu doesn't spend much time in the app itself

To find out which high-level functions (that in turn call library functions) take the most time, use perf record to get call stacks.


The way you generally deal with the second problem is to have a profiler that can give you the inclusive time spent in a subroutine, not just the exclusive time. The inclusive time is the time spent in that sub and in subs it calls, the exclusive time is time spent only in that sub.

When you have inclusive time it usually becomes easy to find that function that's spending 80% of the time in either itself or in the things it calls.


Indeed, decomposition is easier than extracting modules. This is often an argument in favor of modularity! :) As others have mentioned, the second issue can be addressed with tooling.

But, is excessive modularity really the culprit? Are you saying you read the code and found that to be the case, or it's just a hunch? Reading through the comments here, it seems that whatever one's personal hobby-horse happens to be is what will get the blame.


For the second problem, sounds like the dbus devs should follow sqlite's lead: http://permalink.gmane.org/gmane.comp.db.sqlite.general/9054...


Somebody should modify perf to log "time spent in call", "time spent in stackframe setup", "time spent in mov", "time spent in branches", etc.


So, you can get some of these kinds of cycle counts by changing the event that gets used for record. https://perf.wiki.kernel.org/index.php/Tutorial#Events

So you could use record -e branch_instructions, for example.

Past that, things like "time spent in stackframe setup" are not sane to measure on an optimized binary, because the instructions that set up the stack frame may be all over the place (due to shrink wrapping, etc).

This means the overhead of counting them would probably require instrumentation and be quite high (and perf is sampling based, not instrumentation based)


> Past that, things like "time spent in stackframe setup" are not sane to measure on an optimized binary, because the instructions that set up the stack frame may be all over the place (due to shrink wrapping, etc).

Doesn't debug info already account for stuff like this? So we can give line number info if you break on any instruction? Seems to me you'd "just" need to augment debug info with performance classes, then let the profiler display worry about classification after the fact.


Linus is just being Linus. He's brusque and nonchalant. He's inflammatory to get people to listen. His thesis here is that bad DBus performance isn't due to context-switching overhead or buffer copies (which can be solved by moving the daemon into the kernel), but instead due to malloc-intensive, utf8-parsing-intensive marshalling.

Secondly, he's saying that if performance is being used as an argument for kdbus, then that's an invalid argument.

He's totally right, by the way. In this pure message-passing benchmark, where the message-passing overhead is the majority of the work, the slowness is not in kernel scheduling, system calls, or kernel buffer copies. People mistook a potential impossible-to-overcome bottleneck for the most relevant one.

But that doesn't mean there isn't a reason for kdbus. Kdbus allows for much better authentication than UNIX sockets do (you can authenticate with pid, pgid, uid, gid, kdbus token, etc.). Also it allows for message-passing security policies to live in the kernel which is crucial for security applications. The tangential performance benefits are nice too, even though the bottleneck wasn't in the kernel to begin with.
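For comparison, what plain AF_UNIX sockets already give you is SO_PEERCRED, the peer's pid/uid/gid as captured at connect time (see unix(7)); a minimal sketch, error handling aside. The richer per-message metadata is the part kdbus claims UNIX sockets can't match.

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/socket.h>

    /* Print the credentials of the process on the other end of a
       connected AF_UNIX socket. */
    void print_peer(int connfd)
    {
        struct ucred cred;
        socklen_t len = sizeof(cred);
        if (getsockopt(connfd, SOL_SOCKET, SO_PEERCRED, &cred, &len) == 0)
            printf("peer pid=%d uid=%d gid=%d\n",
                   (int)cred.pid, (int)cred.uid, (int)cred.gid);
    }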


Responses like yours are why I enjoy visiting Hacker News.

Sorry for going meta - I'm just a bit disappointed because your tone used to be the normal thing on HN, even when there were just as many disagreements. People were able to keep it friendly. Today not as much; I find HN has become too mainstream and "reddity", and the tone in general too emotional and aggressive, about who or what is right.

This wasn't unexpected, but still, it is good to see not everyone has moved into that direction.


Some form of notification publish/subscribe functionality is necessary in modern Linux. But dbus isn't very UNIX-y. It's also strangely opaque. With pipelines, you can see how the plumbing works. With dbus, programs can interact in nonobvious ways. Even worse is polkit, a poor replication of Windows Group Policy.

The most unixy solution is buried at the bottom of this thread, from Plan 9: https://news.ycombinator.com/item?id=9450988 ; but if that's not popular, then I think what people actually want for desktop purposes is an equivalent of the Windows PostMessage system.


Plan 9 in no way implies performance. Just because you can make it look like a filesystem, doesn't mean it actually is going to achieve anything special performance wise.


And in fact the performance of Plan 9 tended to be extremely poor when compared apples-to-apples with Linux.

It's just that everything was so simple it felt fast, even if your filesystem throughput was crap.


[flagged]


The Unix philosophy has proven inadequate to meet the needs of real-world software integration

I'm assuming that's why it's so trendy to talk about "microservices" these days.

What's needed is the COM philosophy, wherein services register themselves under globally named endpoints and present typed structured discoverable APIs.

The Unix philosophy doesn't say anything against implementing service discovery mechanisms.

...if you look at the Plan 9 developer community it's a) very small and b) packed with the pathologically neckbearded.

My God, "pathologically neckbearded". That's just rich. Let's shit on everyone who shows us different ways of building systems that allow for greater composability and emergent features.


The Unix philosophy doesn't say anything against implementing service discovery mechanisms.

It's less about service discovery mechanisms per se. It's more about using objects communicating via APIs as units of composition, rather than processes communicating via data (usually text) streams. The object-based approach hews closely to how programmers actually think, makes protocol specification as easy as declaring a method (and checked by the compiler!), and enables much more sophisticated interaction between components. See Miguel de Icaza's essay "Let's Make Unix Not Suck" for an in-depth look at how the COM model improves on Unix philosophy for software integration: http://baizid.org/literature/bongo-bong.html


See also "A Note on Distributed Computing"[1] by Jim Waldo, Geoff Wyant, Ann Wollrath, and Sam Kendall.

[1] http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.7...


Grandparent comment has already been zapped, but:

What's needed is the COM philosophy, wherein services register themselves under globally named endpoints and present typed structured discoverable APIs.

There is a real tension between RPC-orientated systems and protocol-orientated systems. The RPC and distributed object model (COM, DCOM, CORBA) produces tightly coupled systems that provide more functionality if all parts are from the same vendor/team. The protocol-orientated style gives us the RFC protocols, with a much wider diversity of implementations.

The question is which application developers you make happy. The Windows/GNOME style is a single system with lots of moving parts that can't be individually replaced. One of the attractions of UNIX has been its heterogeneity.


Actually... I can't think of a better word for people who evangelise Plan 9 on comment sites but never actually use it themselves.


> Sorry to say, but "does this feature make application developers happy" trumps "muh unix" when considering whether to add something to the kernel.

Which application developers? A hell of a lot of developers are really really happy with "muh unix". Why should a bunch of hipsters with half baked ideas get to rob those developers of that happiness because pushing through half baked ideas makes them happy?


It seems to me that more and more of this comes up as people are moving onto Linux for "cloud services", bringing with them mental models honed on Windows (and to some degree, OSX).


Gross oversimplification.

The Unix workstations were by and large simply a great deal more expensive, and there weren't any compelling options on x86 other than Windows, DOS, maybe OS/2, and a few other clones.

It was pretty much pure business--the technical merits don't enter into it. That said, Microsoft took on a heroic burden to attempt backwards-compatibility.

If you want a vindication of the Unix philosophy, look at the server space.


It all started with a microbenchmark:

http://thread.gmane.org/gmane.linux.kernel/1930358/focus=193...

DBUS implementation aside (which seems to leave a lot of room for optimization), my initial reaction was that the gain was rather minimal compared to the claims of the systemd folks.


That was my first reaction as well. But to be fair, the kdbus people always said the real speedup will be seen for large packets.

However, the real issue is how slow overall DBus seems to be according to those numbers. As Linus says earlier in the thread: "No way should it take 4+ seconds to send a 1000b message to back and forth 20k times." This is indeed rather shocking.
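Back of the envelope: 4 s for 20k round trips works out to roughly 200 µs per round trip, which at a few GHz is on the order of half a million cycles per message exchange -- presumably where the half-million-cycle figure elsewhere in the thread comes from.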


I would expect any half decent HTTP server to be able to handle more messages per second than that. If that is going to be the future fabric underlying all application ecosystem on Linux I would be as wary of it as Linus is.

I will put it another way: if it can't be more performant than a simple HTTP client/server it doesn't deserve to be included into mainline kernel especially when the costs are so high.


Well, he's not claiming that kdbus is that slow. He's claiming that kdbus is faster not because it's in the kernel, but because it's not shitty code like userspace dbus (completely different implementation).


So what's the use case for sending large packets of data over dbus where the performance gains warrant integrating the dbus server to the kernel? Are they planning to migrate the X server to dbus?


Large packets, I'm not sure. For a long time, you've been able to attach a file descriptor (e.g. one end of a pipe) to a message, and then send bulk data over the file descriptor.
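That fd attachment rides on SCM_RIGHTS ancillary data over the AF_UNIX transport; a minimal sketch of sending one fd (error handling omitted):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Send fd over a connected AF_UNIX socket as SCM_RIGHTS ancillary
       data; the kernel installs a duplicate of it in the receiver. */
    void send_fd(int sock, int fd)
    {
        char dummy = 'x';  /* at least one byte of real data must go along */
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
        char ctrl[CMSG_SPACE(sizeof(int))];
        struct msghdr msg = { 0 };

        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = ctrl;
        msg.msg_controllen = sizeof(ctrl);

        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

        sendmsg(sock, &msg, 0);
    }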

My understanding of this whole thing was that most of the car companies thought they should try using Linux for their console / entertainment / etc. systems. Electrical and mechanical engineers seem to all only use Windows and not know anything about Linux, so they asked themselves "what's the bus on Linux?" and found D-Bus, the desktop bus used by desktop environments and utilities for a handful of messages a minute.

They used it to send frequent sensor data to all sorts of services that might be interested, like digital gauges, sound distribution, etc. They didn't realize that there are lots of good low-level IPC mechanisms available on linux, and they should have probably used 0mq, or a scheme based on unix sockets with filesystem permissions based access control, etc.

Anyway, they realized that D-bus had way too much overhead, and instead of porting to some other IPC, got to work accelerating D-bus with additional kernel functionality, such as the AF_BUS which was rejected by the linux networking maintainers. (Though that was probably a better idea than kdbus.)

How exactly GregKH got into this project, I'm not entirely sure. He is relatively friendly with the systemd developers. He's paid by the Linux Foundation, and many of those car companies are members of the Linux Consortium, or something like that, I think. I dunno. Apparently a lot of people used D-bus accidentally, and now the thing to do is make the kernel make it faster.


The same car companies that put the entertainment system and the brake servos on the same bus...


X is going to be migrated into the recycling bin. Lennart wants to migrate PA to dbus, and the possibility exists for doing the same for some future version of Wayland.


Ah, Wayland. Yet another example of the successor to a system being less capable than the system it replaces.

"Wayland does not currently provide network transparency, but it may in the future. It was attempted as a Google Summer of Code project in 2011, but was not successful. Adam Jackson has envisioned providing remote access to a Wayland application by either 'pixel-scraping' (like VNC) or getting it to send a "rendering command stream" across the network (as in RDP, SPICE or X11). As of early 2013, Høgsberg is experimenting with network transparency using a proxy Wayland server which sends compressed images to the real compositor."

[1] https://en.wikipedia.org/wiki/Wayland_%28display_server_prot...


It turns out that in this day and age, desktop users much prefer pixel-perfect tear-free rendering to network transparency. Wayland is X11 without the legacy cruft -- and it turns out that the only thing left in X11 that's not legacy cruft -- like seriously, bleeding-edge tech for 1986 -- is a centralized way to hand out direct-rendering frame buffers to user processes and composite them together for the final image. Which is exactly the extent to which modern apps use X11.

Remote access can be achieved by running a special Wayland server on the remote side that connects to a Wayland client on the local side and exchanges framebuffer and event data with that client. In this setup, apps can talk Wayland and not even be aware that their display is being networked; they are also completely decoupled from the remoting protocol which can be anything. RDP and streaming h.264 would be good choices, certainly better than legacy X protocol.

Wayland is objectively a huge win.


Why do you say X has tearing and pixel errors?

I know some hardware (e.g. nvidia optimus) has tearing, but these are driver bugs and not a property of X (or freedesktop's implementation thereof).


It's historically been basically impossible to guarantee a tear- or glitch-free display under X because it was impossible in the protocol to sync display updates to vblank.

The XPresent extension may ameliorate this, but adding yet another extension to a protocol already lousy with them, such that they have to be tested for at runtime, is pretty much polishing a turd at this point. Starting from scratch is the correct approach and even the X.Org foundation advocates Wayland as X's replacement.


"It turns out that in this day and age, desktop users much prefer pixel-perfect tear-free rendering to network transparency."

People keep telling me that.


Ugh, and here I thought Wayland was doing good...


IIRC the use case is to stream media over DBus, which currently no one is doing because it is too slow.


I don't get the use case, why can't the programs use DBus to negotiate the streaming over some low-overhead transport? Why are people so insistent on ramming a square peg into a round hole?


because they want the square hole for a reason, the square hole being a unified in-kernel IPC system.

Greg KH has given a few talks/blogs about the design goals of kdbus. https://lwn.net/Articles/551969/


"because they want the square hole for a reason, the square hole being a unified in-kernel IPC system."

Another one?


No, a first one. The others aren't unified, they are niche solutions. Those specific shortcomings are covered in those KDBus talks.

If the existing solutions were sufficient, why would the 'new' way ever get merged into mainline?


Well, actually that's what kdbus would do anyway; all the large transfers will happen over memfd buffers...


I'd much prefer an IPC that lets me pass a socket and use that socket on both ends, instead of this.

I really still don't see a good-enough scenario for this.


I'm not terribly convinced by this point, since it's relatively easily solved by the "transparent direct connection" idea that's been suggested in LKML. That is, rather than making the application set up its own Unix socket connection for the media, the DBus library would have a simple call to switch to a direct socket connection as a transport for a given DBus connection. This wouldn't work for multicast, but who needs to multicast raw media data to whatever local process feels like receiving it?


So: if DBus was suddenly fast enough to stream media over, what applications would be using this feature, for what sets of streaming endpoints?


KDBus was designed with sandboxed applications in mind. So use-case wise think android-style "use the camera app to take a picture/video", where the camera app transfers the media to another app over kdbus, thus without leaking information about the system.


You don't fling uncompressed video between apps like that, it goes straight to the hardware-accelerated h.264 (or whatever) encoder and then a low bandwidth stream comes out. Local AF_UNIX sockets (aka userspace dbus) provide ample bandwidth for that use case.


I think the idea is "why use dbus to coordinate creation, rights assignment, ... of local sockets when you can push the data over dbus"?


Yes, that's what I was talking about. Current dbus is using local AF_UNIX sockets for transport -> the current transport is plenty fast enough to push encoded video over.


If the point is to have Android style IPC, why not just pull a version of Binder into the mainline kernel?


Android's Binder has been in the Linux kernel (not just "staging", the real thing) since 3.19... half a year now.

https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux....

But not many people are aware of that; since neither systemd nor Lennart Poettering have anything to do with it, the usual armchair architects did not paste their, uhm, well considered opinions about how much it sucks and is the end of the UNIX philosophy all over the comment threads when it happened.


As far as I can remember, on the mailing list one of the kernel developers mentioned that "android IPC (its name I think is binder) is completely broken". He didn't give any evidence for his claim.


I don't know. I'm not advocating kdbus for multimedia, I'm just repeating what I've heard. I think one of the driving forces behind kdbus is Samsung who are interested in kdbus for Tizen.


PulseAudio


I think wayland was one possibility.


In that case, they should implement just the functionality they need in kernel, not a full DBUS implementation.


So, basically, systemd-bus would solve the problems even without kdbus, if it avoided all the extraneous memory allocations.


I don't think Linus is up to date on libgio-2.0 (and the GDBus implementation).

It does in fact use a slab allocator, based on early kernel versions, with per-thread allocation caching.

As for avoiding utf8 validation, that would require being able to trust the sender/receiver. That, I believe, is a major bullet point of both memfd (sealing) and kdbus.


Linus' trace shows that 12% of the runtime is spent in libc's malloc and free. So yeah, there's a problem there.


kdbus really addresses a different performance issue than tuning mallocs and locks would. By tuning mallocs/locks/validation/parsing, you can reduce the overhead of each dbus send/receive, but with kdbus you remove half of the send/receives. So however fast your sends/receives are, kdbus will still make things faster by not doing as many of them.


Here is the long version of that point: http://article.gmane.org/gmane.linux.kernel/1939651


He's not saying DBus as a concept is bad, just that the implementation sucks. If he's right, that's something that can be fixed relatively easily (compared to a major architectural change).


He does seem to be saying that kdbus is pointless since those fixes would be in userspace where it's spending most of its time. Or am I misunderstanding?


The point of kdbus is to do dbus in kernel space. The one spending lots of time in userspace overhead (not doing actually useful work) is regular dbus.

Linus is saying that kdbus is pointless because its performance gains don't come from being in-kernel, they come from the code not being a complete shit-show, and he believes the same performance should be achievable by fixing regular userspace dbus.


Well, the code review of kdbus in the rest of that thread would argue that its code is a shit-show, it just happens to be a faster shit-show than userspace dbus.


I agree, the OP title is a bit misleading.

I use dbus every day on my laptop and it works totally fine - I don't notice its existence. I'm all for improvements, but I'm quite patient and quite happy to wait this one out.


> I use dbus every day on my laptop and it works totally fine

My laptop appears to run fine despite dbus, if the hundreds of daily 'Failed to connect to socket' messages are any indication.


I made a silly desktop app some time ago and it used DBUS to get notifications from NetworkManager when the system went online/offline. Nothing too fancy, very few lines of code.

When I was implementing that, I managed to get several segfaults from my Python code. All together seemed a little bit fragile to me :(

That was 5 years ago, and things are probably better now (to be fair, I don't know what was causing the crashes; NM was 0.8 back then), but when I read Linus' comments I can't help but think he's probably right.

This is an anecdote and all, but my point is that until I had to use it... DBUS was pretty good and was working fine :)


I'm not a software engineer or gifted hacker.

Given that, I was able to use dbus to come up with a quick solution for a department feature in an afternoon using Pidgin's libpurple dbus bindings through purple-remote to create a presence tracker and simple announcement bot with bash.

A little digging into Pidgin's DBUS Howto gave me all the documentation that I needed. It just took a lunch hour to have something functional up and running without any real in depth programming knowledge required or needed. In an afternoon, I'd hacked up an RSS feed that showed everyone in the department's current presence as reported by their IM status and that feed could then be consumed by the required apps.

I don't know about dbus' other merits or flaws, but it did help me solve a specific problem quickly. This was a dead simple use case though.

At the time, I assumed that people more knowledgeable than me could do a hell of a lot more with it.


Two years ago I had to use DBus to connect to a Bluetooth device from Java.

In the end, after a few minutes of good work, the connection between the adapter and the peripheral would time out (or someone would go out of range), the whole thing would stall, and we'd never get another message from BlueZ or be able to connect to any device ever again. It was without doubt the worst development experience I ever had.

Worse, you couldn't reload the dbus library and start over because then Java would scream and crash. So we had to restart the JVM, and BlueZ, etc.

I'm sure I was doing something wrong, but if I can't get it to work properly in a matter of weeks then it's not just my fault.


Were you using threads? Python really should not crash, but combine it with library imports implemented in C and threads, and you get really subtle race conditions which result in segfaults. I have myself been forced to debug why Python segfaulted, and it was a standard library call which internally used threading, and that caused a conflict with a C library that was not thread-safe.


Sorry, it wasn't Python. It was a system component that crashed as consequence of my Python code using DBUS.


> When I was implementing that, I managed to get several segfaults from my Python code

Well, it wasn't really DBus' fault then, was it?


I didn't mean to say my Python code was the one crashing with a segfault. Apologies if my comment wasn't clear enough.

I just did a search in my bugzilla account at Red Hat (Fedora distro) but I couldn't find a report for that specific crash. The closest I can find is a report on a crash of notification-daemon (could be related though, as it uses DBUS to advertise a service).

I'm surprised I didn't file a bug report, but there you are.


Maybe your laptop would be much faster if dbus was faster.


No, both the concept and the implementation are bad. This thing shovels and computes a LOT of data per call; you can't optimize that out because it's what the authors of kdbus intended. It's the typical 'look at all that CPU we have now, let's use it' mentality that keeps Wirth's law true.


> Since then I'm convinced that people who are inventing RPC solve non-existing problems.

Many RPC solutions ultimately failed because they were slow and overdesigned/complex (e.g. CORBA, Network OLE/DCOM, Java RMI, XML-RPC/SOAP and other XML-based protocols, etc.).

Whereas e.g. REST (if you even call it RPC) is just very simple and enough for most purposes.

It's more like the concept of Component Object Model failed. For example OLE (the base technology behind DCOM) doesn't fly beside the legacy Office usage. It goes without saying that binary-based implementations of such a Component Object Model are faster than XML-based ones - a fad of the last ten years.

The component format (if you call it that way) that just works is HTML5.

Edit: okay, it seems it's a controversial topic (my comment used DCOM and OLE 1+2 (the original MS Office "compound document" thing) as examples - http://en.wikipedia.org/wiki/Object_Linking_and_Embedding and http://en.wikipedia.org/wiki/Compound_document - not COM directly)


You do realize that pretty much the whole of Windows is, underneath, based on COM, including most of the .NET-exposed system APIs?

I'll agree that COM and DCOM are... hairy, if you're implementing COM objects 'by hand'. But there's a reason for all of it, and I have yet to see an easier and more flexible way to write cross-language components. I could write a COM object in C++, call it from VBScript in a very simple command line program, as well as use it directly in a Web application in (here it comes!) 1999! That's 15 years ago! And while COM is universally reviled now, and none of the cool kids will want to be seen within 100 feet of it (actually, the 'cool kids' don't even know what COM is any more, it's so 2005 to hate on COM...), for those who spend 6 months on understanding it, it worked very well, and has been supported for close to 20 years now (on Windows, that is).


COM is indeed an interesting beast and very useful. Being able to integrate other programs into yours, relying on COM to do so is very helpful. I don't know of a way on Linux or OSX to embed a word processor to work on documents but never show the user (but this might be due to my lack of knowledge about those systems - I would welcome being enlightened). I once wrote something that processed a plethora of Word documents using Word via COM to extract data from them and shove it into a database.

I can't fully remember how I did it but I recall type libraries, including generated headers etc. but I remember being impressed by it.

And as you state, all calls within Windows rely on COM. Stop the RPC service and observe as your system becomes unusable.


LibreOffice uses UNO, which is their own version of COM: http://en.wikipedia.org/wiki/Universal_Network_Objects


Mozilla has its own XPCOM too: http://en.wikipedia.org/wiki/XPCOM

Read the Criticism section:

"XPCOM adds a lot of code for marshalling objects between different usage contexts (e.g. different languages). This leads to code bloat in XPCOM based systems. This was one of the reasons why Apple forked KHTML to create the WebKit engine (which is now used in several web browsers in various forms, including Safari and Google Chrome) over the XPCOM-based Gecko rendering engine for their web browser.

The Gecko developers are currently trying to reduce superfluous uses of XPCOM in the Gecko layout engine. This process is commonly referred to as deCOMtamination within Mozilla."

But my original top comment was about RPC and the compound document format (OLE as example), not about COM.

Apple had a compound document format too (death since 1997): http://en.wikipedia.org/wiki/OpenDoc

"OpenDoc's flexibility came at a cost. OpenDoc components were invariably large and slow. For instance, opening a simple text editor part would often require 2 megabytes of RAM or more, whereas the same editor written as a standalone application could be as small as 32 KB. This initial overhead became less important as the number of documents open increased, since the basic cost was for shared libraries which implemented the system, but it was large compared to entry level machines of the day. Many developers felt that the extra overhead was too large, and since the operating system did not include OpenDoc capability, the memory footprint of their OpenDoc based applications appeared unacceptably large. In absolute terms, the one-time library overhead was approximately 1 megabyte of RAM, at the time half of a low-end desktop computer's entire RAM complement.

Another issue was that OpenDoc had little in common with most "real world" document formats, and so OpenDoc documents could really only be used by other OpenDoc machines. Although one would expect some effort to allow the system to export to other formats, this was often impractical because each component held its own data. For instance, it took significant effort for the system to be able to turn a text file with some pictures into a Microsoft Word document, both because the text editor had no idea what was in the embedded objects, and because the proprietary Microsoft format was undocumented and required reverse engineering.

It also appears that OpenDoc was a victim of an oversold concept, that of compound documents. Only a few specific examples are common, for instance most word processors and page layout programs include the ability to include graphics, and spreadsheets are expected to handle charts. [...]

But certainly the biggest problem with the project was that it was part of a very acrimonious competition between OpenDoc consortium members and Microsoft. The members of the OpenDoc alliance were all trying to obtain traction in a market rapidly being dominated by Microsoft Office. As the various partners all piled in their own pet technologies in hopes of making it an industry standard, OpenDoc grew increasingly unwieldy. At the same time, Microsoft used the synergy between the OS and applications divisions of the company to make it effectively mandatory that developers adopt the competing OLE technology. In order to obtain a Windows 95 compliance logo from Microsoft, one had to meet certain interoperability tests which were quite difficult to meet without adoption of OLE technology, even though the technology was largely only useful in integrating with Microsoft Office. OpenDoc was forced to create an interoperability layer in order to allow developers to even consider adoption, and this added a great technical burden to the project."

And there were others too: http://en.wikipedia.org/wiki/Compound_document


Thanks! I'll take a look!


I don't know of a way on Linux or OSX to embed a word processor to work on documents but never show the user (but this might be due to my lack of knowledge about those systems - I would welcome being enlightened). … I once wrote something that processed a plethora of Word documents using Word via COM to extract data from them and shove it into a database.

This is more of an ideological difference in how things should, and can, be done. For a long while, the only way to reliably edit Word documents was using Microsoft Word (half of it being undocumented, and half of it being documented as "this is supposed to do whatever Microsoft Office does"), so you effectively needed to embed Word within your program in order to reliably edit Word documents. A lot of that has since changed.

In the open source world, there are canonical tools, but traditionally it's been about the format being open and accessible, explicitly so you don't need to rely on or run a third-party program to do the manipulation; rather, you either roll your own manipulation routines following the documented standard (thereby strengthening the ecosystem with multiple implementations, hopefully) or use a library (that the canonical tool may be based on or also use).

Both schemes have their advantages and disadvantages. It would be an interesting study in why and how the different ecosystems evolved to favor one over the other, how that has changed over time, and how that's influenced the size and robustness of the respective ecosystems. There's probably a lot of influence on the Windows/RPC side coming from the canonical tools being primarily GUI interfaces, which it is then natural to want to "drive" or "control" remotely, vs. "headless" interfaces that are able and meant to be scripted as a first-order requirement.


> I don't know of a way on Linux or OSX to embed a word processor to work on documents but never show the user

KDE has KParts and GNOME has Bonobo. The latter is apparently deprecated, but it used to be used by the desktop panel for embedding widgets running in separate processes.


Check what KDE does. For example, KDevelop uses Kate as its internal text editor.


I believe that is currently called KParts. Both GNOME and KDE had CORBA models early on and looked to be following the OpenDoc model. I think both ORBit and MICO are now orphans. They seemed to scale it all back and pivot. I think people want widgets with behavior, not full-blown browsers and word processor components.


It is still working very well indeed. I was recently trying to add photos to iCloud programmatically on Windows. Guess what, there is a COM interface for that.

https://gist.github.com/tobiasviehweger/7a302b7179efb99082d8


COM sort of works, on Windows, when one or preferably both sides of the connecting components are made by Microsoft. You never see non-Microsoft code talking to non-Microsoft code from a different vendor via COM. This is not a successful component interface standard.


Huh? ArcGIS is wholly extendable, and complete businesses are based on the Arc extension ecosystem, using COM. AutoCAD can (could?) be extended using COM. I've personally written extendable systems using COM. COM is a hugely successful component interface standard. Name any other standard with the same speed, flexibility and adoption.

Again, I'll agree that it's not trivial to implement a COM object. You need good understanding of C(++), the Windows API, threading, and some more things. There are tools to make it easier (e.g. to write a COM object in Visual Basic) but they generally don't make for robust components. In the hands of a competent, experienced developer, COM is a massively powerful tool.


It's not about COM.

COM is: "Unlike C++, COM provides a stable ABI that does not change between compiler releases. This makes COM interfaces attractive for object-oriented C++ libraries that are to be used by clients compiled using different compiler versions." http://en.wikipedia.org/wiki/Component_Object_Model

Others are talking about the compound document format OLE, and about DCOM (Distributed Component Object Model). Both OLE 1+2 and DCOM are not that great implementation-wise, in retrospect. COM, on the other hand, is fine.

D-Bus is like DCOM, and has similar/other deficits, as do the competition: CORBA, RMI, XML-RPC, SOAP.

MDI and the compound document format were an oversold concept that failed. Today we have HTML5 with iframes.


I've written non-microsoft code that talks to non-microsoft code via COM. It was a painful experience, but you do in fact see it happen. I also know people who do this regularly, and seemingly fairly painlessly. I wouldn't call it a standard by any measure, more of an underdocumented and arcane mechanism.


It's more like the concept of Component Object Model failed. For example OLE (the base technology behind DCOM) doesn't fly beside the legacy Office usage.

True, but the idea of the self-registering COM object eventually seemed to limp across the finish line.

Anyways. "Two key kdbus developers" (ahem) have said that the entire kernel signal mechanism should be deprecated because it's "too brittle and complex". When I read that, a big light bulb turned on in my head about what the actual problem is here.


Signals are too brittle and complex. That's why you have to read three whole manpages to figure out what happens if a process gets the same signal twice in rapid succession.

They are also un-Unixlike: they are used to communicate three or four different kinds of information, and they do most of them badly.


Exactly. The signal mechanism makes sense for notifying synchronous errors that arise from the thread's own execution, like SIGSEGV or SIGILL or SIGFPE.

Most of the rest of the traditional UNIX signals are events that should be communicated asynchronously via file descriptors that a process can poll at its leisure, which would be more UNIX-y.

Well at least Linux has signalfd(2) now.
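
Which gets you exactly that file-descriptor-based delivery. A minimal sketch (untested, but these are the real signalfd(2) calls): block normal delivery, then read the signal off a pollable fd instead.

    #include <sys/signalfd.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        sigset_t mask;
        sigemptyset(&mask);
        sigaddset(&mask, SIGTERM);

        /* Block normal delivery so the signal is only readable via the fd. */
        sigprocmask(SIG_BLOCK, &mask, NULL);

        int sfd = signalfd(-1, &mask, 0);
        struct signalfd_siginfo si;

        /* read() blocks until SIGTERM arrives; the fd also works with
         * poll/epoll alongside sockets and pipes. */
        if (read(sfd, &si, sizeof si) == sizeof si)
            printf("got signal %u from pid %u\n", si.ssi_signo, si.ssi_pid);

        close(sfd);
        return 0;
    }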


Except they're not wrong. We have a potentially infinite set of signals and data we'd like to communicate to programs and a very limited set of signals to do it with. I mean, Apache uses SIGWINCH as a graceful-stop signal!


We have a potentially infinite set of signals and data we'd like to communicate to programs

Ah, I think I've identified your problem.

Signals are for alerting programs to a specific (small) set of changes of external states. There's nothing "potentially infinite" there (frankly having 2 different SIGUSR's is generous).

For transmitting arbitrary messages, Linux provides message queues, which are very nearly sockets (and the ways they aren't sockets make using them for IPC easier). What's the limitation there?

It lacks multicast, sure, but then receivers can just make their own queues and have the broadcaster send to them if you want that.
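
E.g., a rough sketch of the POSIX message queue API (the queue name is made up; link with -lrt):

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Create (or open) a named queue: 8 messages of up to 128 bytes. */
        struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 128 };
        mqd_t mq = mq_open("/my_queue", O_CREAT | O_RDWR, 0600, &attr);

        const char *msg = "hello";
        mq_send(mq, msg, strlen(msg) + 1, /* prio */ 0);

        /* Receive buffer must be at least mq_msgsize bytes. */
        char buf[128];
        unsigned prio;
        ssize_t n = mq_receive(mq, buf, sizeof buf, &prio);
        if (n > 0)
            printf("received: %s (prio %u)\n", buf, prio);

        mq_close(mq);
        mq_unlink("/my_queue");
        return 0;
    }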


You answered it yourself. It lacks efficient multicast.

What dbus really needs is an efficient kernel-level capability-aware multicast IPC mechanism, and a total rewrite of the user-space daemon to not be rubbish.

All the user-space daemon needs to do is provide a registration point for other processes to find and connect to each other via (existing) unicast or (new) multicast mechanisms.

My initial thoughts were that kdbus /was/ that IPC mechanism, but it sounds like that might not actually be true.


Apples and protocols.

REST is an architectural approach, and it can very well use the same overdesigned XML envelopes, binary-encoded payloads or anything else, really.


The problem with REST is that it's great for what it was intended to do initially, but there are too many engineers that don't take their medication that actually use it to do RPC :P


Naive REST (HTTP, JSON) isn't exactly known for its performance


Seems to be doing better than the speeds cited by the KDBUS thread -- https://www.techempower.com/benchmarks/#section=data-r10&tes...


The fact that one slow implementation is slower than another slow implementation says nothing.


> Naive REST (HTTP, JSON) isn't exactly known for its performance

Naïve XML isn't known for its performance either. JSON is usually faster to parse than XML.

While D-Bus uses a binary wire protocol, it uses XML for its "Introspection Data Format". Object instances may implement Introspect, which returns an XML description of the object, including its interfaces (with signals and methods), objects below it in the object path tree, and its properties. Objects may be introspected at runtime, returning an XML string that describes the object. The same XML format may be used in other contexts as well, for example as an "IDL" for generating static language bindings.

More info: http://dbus.freedesktop.org/doc/dbus-specification.html
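
For illustration, the introspection data looks roughly like this (abbreviated; org.example.Frobber is a made-up interface):

    <node>
      <interface name="org.example.Frobber">
        <method name="Frobnicate">
          <arg name="input" type="s" direction="in"/>
          <arg name="result" type="i" direction="out"/>
        </method>
        <signal name="Frobbed"/>
        <property name="Verbose" type="b" access="readwrite"/>
      </interface>
      <node name="child"/>
    </node>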


Agreed. That said, I'm kind of wondering what non-naive (smart?) REST is, no sarcasm :P


It's called Plan 9.

In Plan 9, each process has its own filesystem tree, and other programs can expose themselves to this process as file servers, meaning that data internal to the programs can be accessed via the same read, write, delete, etc. calls as files. For example, when running under rio, the Plan 9 window system, the current window contents are available at /dev/window and you can write draw calls to /dev/draw to do graphics.

REST is pretty much applying Plan 9 principles to the Web and http.


You can use REST architectural principles with a faster transport and serialization. For example, using a REST style API for a messagepack or thrift RPC service.


Isn't Mozilla's XPCOM (the basis of Firefox and all Mozilla applications?) also based on COM?

https://en.wikipedia.org/wiki/XPCOM


The Linus profile is mostly a red herring here, because it is 1) a bad benchmark with a bunch of blocking round trips and 2) mostly profiling the gdbus bindings which are just one binding.

For a more in-depth performance discussion of dbus, check out http://lists.freedesktop.org/archives/dbus/2012-March/015024...

Above noted on linux-kernel here: http://thread.gmane.org/gmane.linux.kernel/1930358/focus=193...


The PDF link is sadly dead... (The ML message references the PDF heavily.)


And https://lwn.net/Articles/636997/

But, let's stop bitching and come up with alternatives, solutions, or closure. I'm tired of hearing about KDBUS (all love to gregkh) and DBUS.


Good link.

As a side note, I see sorokin's posts about DCOM, RPC and multi-threading:

> Probably some unique thread-id can be propagated through the calls. And if incoming call has the same thread-id as one of our outgoing call it is handled inside this outgoing call as in nested message loop, otherwise a new thread from thread pool is used. This will create an illusion that two processes share the same set of threads and it works well with mutexes.

I think he is slowly driving towards an actor-style message-passing model like Erlang, Akka or Orleans without even realizing it.

Which reminds me of Virding's law:

http://rvirding.blogspot.com/2008/01/virdings-first-rule-of-...

---

Any sufficiently complicated concurrent program in another language contains an ad hoc informally-specified bug-ridden slow implementation of half of Erlang.

---


alternatives, solutions, or closure

The link you posted has what I think is the right one:

The problem is that probably this code is not needed at all.

To me, DBUS seems like a solution looking for a problem: The majority of the time there is no real reason to have such a complex layer of abstraction for IPC, when the system already provides much simpler alternatives.


To me, DBUS seems like a solution looking for a problem: The majority of the time there is no real reason to have such a complex layer of abstraction for IPC, when the system already provides much simpler alternatives.

Then you need to read and reread and reread Havoc Pennington's posts on why he wrote d-bus in the first place:

https://news.ycombinator.com/item?id=8648995

https://news.ycombinator.com/item?id=8649459

D-Bus provides a service discovery system that lets you not only send messages to named endpoints, but also monitor whether there's something on the other end of the endpoint, and if necessary, request that it be started before you send the message in a non-racy fashion.

It also provides a common, typed, structured, discoverable, auditable serialization layer which is necessary for doing API-style IPC, and a benefit besides since when you "just use sockets" you have to do all the serialization and deserialization yourself, leading to possibly buggy code.

Oh, and it also does multicast, which no other commonly used Unix IPC can do.

People who say "I don't see the problem that d-bus solves" aren't looking hard enough and/or are ignorant of the issues of how software gets developed in a modern environment. D-bus solves many problems with IPC under Linux, and makes developers' lives a whole lot easier when they have to connect with other applications or services.
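
To make the "named endpoints" point concrete, here's a rough sketch using the low-level libdbus API (real functions, but with error handling mostly omitted): a method call addressed to a well-known bus name rather than a pid or a socket path.

    /* Compile with: `pkg-config --cflags --libs dbus-1` */
    #include <dbus/dbus.h>
    #include <stdio.h>

    int main(void)
    {
        DBusError err;
        dbus_error_init(&err);

        DBusConnection *conn = dbus_bus_get(DBUS_BUS_SESSION, &err);

        /* Addressed by well-known name; the bus routes (and could
         * activate) the service behind it. */
        DBusMessage *msg = dbus_message_new_method_call(
            "org.freedesktop.DBus",    /* destination (named endpoint) */
            "/org/freedesktop/DBus",   /* object path */
            "org.freedesktop.DBus",    /* interface */
            "ListNames");              /* method */

        DBusMessage *reply =
            dbus_connection_send_with_reply_and_block(conn, msg, -1, &err);
        if (reply != NULL)
            printf("got a typed reply (an array of strings)\n");

        dbus_message_unref(msg);
        if (reply) dbus_message_unref(reply);
        return 0;
    }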


Is there a document that explains the whys and wherefores of dbus? Aside from those posts.



Is there a paper/article that discusses and categorizes the differences between IPC mechanisms? Their pros and cons, etc. What mechanisms are there? What is used on Windows? OS X/iOS? QNX? BeOS? Android? Solaris? etc.

The Wikipedia article on IPC https://en.wikipedia.org/wiki/Inter-process_communication is really incomplete.


This article compares kdbus (and thus dbus) to Android's binder: http://kroah.com/log/blog/2014/01/15/kdbus-details/


> DBUS seems like a solution looking for a problem: The majority of the time there is no real reason to have such a complex layer of abstraction for IPC, when the system already provides much simpler alternatives.

Indeed. I just use shm_open()+mmap(MAP_SHARED) and go about sharing my data between processes, and have had no problems whatsoever. Then throw in sem_open() so only one process is locking the data at a time. Wrap all that in a 2KB header and it's trivial to use in any project.
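
Roughly like this (untested sketch; the names and struct are made up; link with -lrt -lpthread):

    #include <fcntl.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    struct shared { int counter; char note[64]; };

    int main(void)
    {
        /* One process creates; the others just open the same name. */
        int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, sizeof(struct shared));
        struct shared *s = mmap(NULL, sizeof *s, PROT_READ | PROT_WRITE,
                                MAP_SHARED, fd, 0);

        sem_t *lock = sem_open("/demo_sem", O_CREAT, 0600, 1);

        sem_wait(lock);                      /* one writer at a time */
        s->counter++;
        snprintf(s->note, sizeof s->note, "updated by pid %d", (int)getpid());
        sem_post(lock);

        printf("counter = %d, note = %s\n", s->counter, s->note);

        munmap(s, sizeof *s);
        close(fd);
        sem_close(lock);
        return 0;
    }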

It would seem that modern computing is all about building abstractions on top of abstractions, to solve 'problems' nobody ever had. Nobody ever seems to consider the costs of such complexity (in terms of maintenance, understanding the system as a whole, attack vectors, etc.)

By all means, if you need a super complex system with tons of features like D-bus, great! But I am pretty confident that Mousepad (Xfce text editor) doesn't need D-bus just to open a text file in a new tab of an existing window.


>Indeed. I just use shm_open()+mmap(MAP_SHARED) and go about sharing my data between processes, and have had no problems whatsoever. Then throw in sem_open() so only one process is locking the data at a time. Wrap all that in a 2KB header and it's trivial to use in any project.

Is that sarcasm, or you don't know what dbus is used for?


> Is that sarcasm, or you don't know what dbus is used for?

I know what it's used for in the desktop application space; but you might want to ask the authors of Xfce components like Mousepad that make their software rely on it for simple tasks like reusing an existing window when opening a file. Because I'd sure like to be able to run my Xfce desktop with D-Bus off in FreeBSD.

"D-Bus is a message bus system, a simple way for applications to talk to one another."

With shared memory (IPC) or sockets (RPC), applications can talk to each other.

Yes, D-Bus gives you an API interface for this (though I personally can't stand it); which apparently comes with tremendous overhead, if the linked article is to be believed.

Yes, with raw memory/sockets, you have to do the queue work yourself. But that's CS101 stuff; you shouldn't touch IPC if you can't implement a simple shared queue.

For most use cases, D-Bus is like using a jackhammer to nail in drywall.

"In addition to interprocess communication ... it makes it simple and reliable to code a "single instance" application or daemon"

Again, very easy with shared memory and a semaphore. If your attempt to open the semaphore fails, you create it and become the daemon. When the daemon closes, you delete the semaphore. When a second instance opens and connects to the semaphore, and then the shared memory, it writes the command-line arguments into the queue, and the daemon handles them. Voila, "single instance" application or daemon.
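
Something like this sketch (untested; the semaphore name is made up). Note that using O_CREAT|O_EXCL makes the "create and become the daemon" step atomic, avoiding the race in a separate open-then-create ordering:

    #include <errno.h>
    #include <fcntl.h>
    #include <semaphore.h>
    #include <stdio.h>

    int main(void)
    {
        /* Try to create the named semaphore exclusively. */
        sem_t *s = sem_open("/myapp_single", O_CREAT | O_EXCL, 0600, 1);
        if (s != SEM_FAILED) {
            printf("no instance running: becoming the daemon\n");
            /* ... run main loop; sem_unlink("/myapp_single") on exit ... */
        } else if (errno == EEXIST) {
            printf("daemon already running: hand off argv and exit\n");
            /* ... open the shared memory queue, write argv, exit ... */
        }
        return 0;
    }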

"and to launch applications and daemons on demand when their services are needed"

And this would be a more complex use case that would be harder to solve. It's also a much less common use case. I don't really feel it's directly related to basic IPC, it's D-Bus trying to solve multiple problems at the same time.

I'd rather follow the Unix way and split that into a separate component, perhaps built on top of the IPC protocol.


>With shared memory (IPC) or sockets (RPC), applications can talk to each other.

The whole idea is the "standard" part, not merely the "talk to each other" part.


I think what you're looking for is API standardization? If so, dbus does nothing more to help you than POSIX IPC, since each dbus-speaking application can define arbitrary methods with arbitrary signatures and arbitrary side-effects. Dbus is basically a way to do library calls across address spaces--it doesn't help me write programs that are loosely-coupled to one another.


No, looking for a standard message bus that works across applications.

I don't mind that "each dbus-speaking application can define arbitrary methods with arbitrary signatures and arbitrary side-effects", as that's the whole point.


If you don't know the API the application uses, you can't speak with it anyway. D-bus or not. If you want to talk to an entirely separate application, use a header for that application with its IPC/RPC functions defined in it. That's the same thing you'd do with or without D-bus.

All D-bus gets you is an official version of the header I spoke of. Along with so much overhead that you can't use it for any kind of media streaming, apparently. And now they want kdbus; and I definitely don't want their code in my BSD kernel.


there is no real reason to have such a complex layer of abstraction for IPC, when the system already provides much simpler alternatives

This. So many things that freedesktop does I just kind of look at and think "this just isn't remotely a problem I have, so I can't judge whether it 'solves' it or not."


Freedesktop was an interesting idea at the outset, but has since grown into what seems like another branch of RH alongside Fedora and Gnome.


DBUS is a "weird" one.

It came out of the initial freedesktop project to improve interaction between KDE and Gnome.

KDE already had a system called DCOP for internal use (it allowed, for instance, Konqueror to be a web browser, file manager and multi-pane FTP/SFTP client all at once).

But as with much of KDE it was created using C++. And Gnome is made using C.

So DBUS is something like a reimplementation of DCOP in C.

And the rest is history...


People have come up with alternatives. They are:

* AF_BUS, a new socket address family that provides a "generic" bus

* adding multicast and order guarantees to AF_UNIX sockets

Neither of these has the political traction that kdbus has. Kdbus, systemd, etc. get mainstreamed because they have top developers gunning for them being mainstreamed. The people on the sidelines going "but muh unix" are also-ran haters, nothing more. So let's just throw the switch and fry this sucker.


Linus' explanation is here:

http://thread.gmane.org/gmane.linux.kernel/1930358/focus=193...

"Just to make sure, I did a system-wide profile (so that you can actually see the overhead of context switching better), and that didn't change the picture."

"The real problems seem to be in dbus memory management (suggestion: keep a small per-thread cache of those message allocations) and to a smaller degree in the crazy utf8 validation (why the f*ck does it do that anyway?), with some locking problems thrown in for good measure. "


I still fail to see why we need DBUS in the kernel, so please correct me and clear up my lack of understanding. It is claimed that userspace dbus performance is bad, but people want to use it at a much larger scale; what I deduce from that is that nobody is using dbus at that scale right now, since its performance sucks.

The reason for getting kdbus into the kernel (specifically kdbus, not a generic IPC) is that userspace which depends on dbus will continue to work with kdbus.

What I still don't understand is: what is the point of keeping compatibility with currently non-existent userspace, if software that depends on dbus right now doesn't need performance enhancements and can just keep using regular dbus, and software that needs a high-performance bus doesn't use dbus anyway since it's slow? Under these circumstances, why do they insist on making kdbus strictly a dbus implementation rather than a more generic IPC mechanism?


Now is the time for Linus to disappear for a few days, and re-emerge with a sane IPC design that both binder and dbus can build on.

Bootstrapped in itself, of course :) (I have no idea what that would even mean in this context. But hey, it's all wishful thinking, right?)


But this doesn't benchmark systemd's userland dbus implementation. That one is much better, right?


I hope so, but no one except systemd is using it yet. Last time I checked, the sd-bus.h header wasn't installed with systemd, due to the systemd folks' concern that the sd-bus API wasn't finalized yet.

It would be very interesting to benchmark it against the glib implementation (which most people, even if they aren't using glib directly, are using these days).


Can someone point out where the quote comes from? I'm not a DBus developer but I'd like to know why the subject is what it is, and not something like "Issues with capability bits and meta-data in kdbus" or "Kdbus needs meaningful review".


It's Linus being hyperbolic. Read follow-ups in the linux-kernel thread to see more specifically what these benchmarks mean.


What's "potato", as used in the mailing list?


Internet slang for a device, often a camera, of poor quality or outdated technology.

http://www.urbandictionary.com/define.php?term=potato&defid=...


Jargon for a slow computer or old smartphone.


Let's talk about the elephant in the room.

This is quite obviously the NSA adding massive additional attackable surface area to the Linux kernel.

They did it to SSL/TLS, they did it to many others, they are now doing it in earnest to Linux.

First systemd, now kdbus. Keep this up and OpenBSD, or something else, is going to kill the Swiss cheese that Linux is becoming. I love Linux, so it is sad to see this happening.


systemd, kdbus and pulseaudio - wow r00t lol. kthxbai /nsa.

Seriously, it's already a huge mess to administer a GNU/Linux system with systemd and dbus and polkit as it is. They're the only parts of my system where I feel I can't really inspect the inner workings. It's a black box.


I absolutely agree it is a mess for other reasons as well; but let us attack it from all angles, and I think security is the most important one these days.


Should be rewritten in Erlang or Go, if you'd ask me.


Are you missing '/s'?


Another great Linus quote:

"The people who talk about how kdbus improves performance are just full of sh*t."

http://thread.gmane.org/gmane.linux.kernel/1930358/focus=193...


You are not quoting the part where he shows his benchmark results, analysis of the problem, and suggested solution. I don't necessarily agree with Linus' tone or choice of words, but what you are doing is far worse. I'm not even sure if you are trying to be sarcastic, but that does not even matter: You are not contributing anything either way.


I have made an observation from Linus' comments, which is sad especially for the people who worked on kdbus all this time. If you think your comment helped anything, maybe you should consider making comments like it when you see comments like the next one:

quote: """ amelius 3 hours ago

Should be rewritten in Erlang or Go, if you'd ask me. """

otherwise you are full of sh*t. :)


Linus is mixing together two different performance issues.

http://article.gmane.org/gmane.linux.kernel/1939651

kdbus is solving an "architecture" issue that would affect any binding, while his profile is essentially of the gdbus binding.


As entertaining as it might be to people who are entertained by such things ("not me" would be an extreme understatement), it's pretty much exhibit A for Linus' inability to manage people in addition to code. It's embarrassing, honestly.


Well, as Eric put it

At this point the strongest possible language and the strongest possible push back are being used because everything else is routinely swept under the rug.



