The destructive desktop — Linux in trouble? (ngas.ch)
141 points by larelli on Feb 15, 2012 | 76 comments



"If you don't use NetworkManager, a number of programs will refuse to connect to the Internet or behave in various weird ways. More so, a lot of system services now depend on NetworkManager and won't start unless it is running. And if you run NetworkManager, it starts periodically messing up any local system configuration. So you're basically bound to use NetworkManager. "

...

"The effect of all those changes are numerous. For one, it is no longer possible to run the system without a graphical user interface unless you plan to invest a huge amount of work and to throw out most of your system support. If you want to get vendor support, this is not the way you will want to go."

Not true. Ubuntu Server does not use a GUI. Nor does it use Network Manager.


This is factually wrong: "Then you try to figure out how to configure NetworkManager from the command line. There's no tool in the entire distribution which lets you do that."

nmcli lets you configure NetworkManager from the command line: http://manpages.ubuntu.com/manpages/maverick/man1/nmcli.1.ht...
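For reference, roughly what that looks like (nmcli 0.9-era syntax; the exact commands vary between versions, so check your man page):

  # show overall state and list saved connections
  nmcli nm status
  nmcli con list
  # bring a saved connection up or down by name
  nmcli con up id "Wired connection 1"
  nmcli con down id "Wired connection 1"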


Factually wrong, effectively right.

Between 1998 and 2008 I always configured my networks by hand on Debian machines. Then I switched to Ubuntu for my desktop. After the first upgrade, X wouldn't start. As a result, I had no network connection, because that is set up by the NetworkManager applet (which is completely braindead: you don't need an applet to have a network connection start automatically).

Then I found there just wasn't any CLI interface to configure the network. Or at least: I didn't know which one it was. When I found out (it was network-manager-cli or something back then), it wasn't installed. Well gee, thanks, it's not like it doesn't happen regularly with Ubuntu upgrades that you get stuck on the cli...


Not on any current Ubuntu LTS (we are talking primarily about servers here).


You still don't need to configure NetworkManager to get a network connection. You can use the old methods (/etc/network/interfaces or /etc/sysconfig/network-scripts) instead.
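For example, a minimal static setup in Debian/Ubuntu's /etc/network/interfaces (addresses made up):

  auto eth0
  iface eth0 inet static
      address 192.168.1.10
      netmask 255.255.255.0
      gateway 192.168.1.1

On Debian-style systems, NetworkManager's ifupdown plugin is supposed to leave interfaces declared here alone (at least with managed=false, the default).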


Those are not the old methods - were they ever portable beyond Debian? The old method is ifconfig. I don't care about Linux any more, but on FreeBSD ifconfig still works, as it has for at least 20 years. If it becomes impossible to use KDE programs without using NetworkManager, that will be a real irritation for me.


/etc/sysconfig is Red Hat. /etc/network is Debian.

Of course ifconfig (or better yet, ip) still works.
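A quick illustration of both, for a hypothetical eth0:

  # iproute2 way
  ip link set eth0 up
  ip addr add 192.168.1.10/24 dev eth0
  ip route add default via 192.168.1.1
  # traditional way
  ifconfig eth0 192.168.1.10 netmask 255.255.255.0 up
  route add default gw 192.168.1.1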


I've never done any systems programming on Linux (or any other OS for that matter) so I can't comment on most of the article, but I HAVE configured a network on a Fedora box on many occasions and it's simpler than the author makes it out to be.

On my Fedora computer, I can just run system-config-network from the terminal and have a text-based UI pop up where I can configure network devices and DNS addresses, no X11 needed whatsoever. This is enough for a stable internet connection with no further action from the administrator of the computer, such as "periodically calling ifconfig and ip route add until you finally managed to fetch all the data before NetworkManager would mess it up again."

Of course, if your router issues the configuration via DHCP, then you don't even need to do this much. You can just boot it after install and access the internet. I believe Red Hat and CentOS work the same way. Contrary to his perception, configuring a network is not really a big deal on a Linux computer! :-D


I suspect that the author is used to the old interface, involving manually writing shell scripts that call ifconfig and friends. And now he's pissed off because he can't do that anymore and concludes that the system must suck, or that his "freedom" is in jeopardy because his old way is no longer supported.

Well I see things differently. Users should be free from interference from ancient and non-user-friendly cruft. Yeah so some /etc scripts don't work anymore, but the new way allows displaying GUI dialogs which the old way didn't allow so there are legit reasons supporting the new way.


You can still do all of that. NetworkManager is far from the only way to get your box to connect to the internet.


It does have a tendency to intrude if you don't explicitly configure it not to, or remove it if you're not going to use it. I always forget to tell it not to configure /etc/resolv.conf, and always get confused when DNS lookup breaks.
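One blunt workaround I've seen (not NetworkManager-specific, just a filesystem trick) is making the file immutable so nothing can rewrite it:

  chattr +i /etc/resolv.conf

Crude, and you have to chattr -i it again to make legitimate changes, but it does stop the silent overwrites. Whether your NM version has a proper "hands off resolv.conf" setting depends on the release.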


apt-get purge it!
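On Debian/Ubuntu that would be:

  sudo apt-get purge network-manager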


Operating systems with much more reliable and user-friendly network configuration interfaces (Windows and Mac) continue to permit advanced users to fiddle with their settings on the command line without actively impeding them.

Desktop Linux has gone from assuming users are developers to assuming users are morons who must be protected from themselves. This, and some other imbalances (such as surrendering good sense to a deified few designers) brought on by attempting to imitate the established "friendly" systems without actually understanding them, is at the root of the continued failure to gain serious traction.


As others have pointed out, NetworkManager has a CLI.


If you think nmcli qualifies as command line network configuration, you've never done any advanced network configuration at all. nmcli is a bad joke, with all the limitations of NetworkManager itself, and a really crappy interface to boot.

iproute2 is the gold standard of command-line network configuration on Linux, and for good reason. That NetworkManager conflicts with it instead of taking proper advantage of it is the whole problem.
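To make "advanced" concrete, here is the kind of policy routing iproute2 does in two lines and NetworkManager has no concept of (table number and addresses are illustrative):

  # send traffic sourced from a secondary address out via its own routing table
  ip route add default via 192.168.1.254 dev eth0 table 100
  ip rule add from 192.168.1.10/32 table 100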


Yes. I was never advocating advanced configuration. Since when did Windows and OS X allow advanced network configuration through the command line? NetworkManager is supposed to be a simple, user-friendly tool. My mother does not know what "static IP addresses" or "DHCP" or "gateway" are, she just wants the damn wifi to work. Can you imagine her typing in iproute2 commands?


Windows allows some, I've had to mess with it in the past. Mac OS X allows all the command line network configuration you would expect of a BSD, since, well, it basically is a BSD, while at the same time having a great, user-friendly configuration interface for the 80% case.

I don't care about your mother, she shouldn't be on the command line in the first place, and if she were with nmcli, you'd still have to explain all those terms to her (and they're very basic terms, I refuse to believe your mother is actually that stupid, mine sure isn't).

You've entirely missed the point -- repeatedly. Your mother gets to configure from a friendly GUI limited to the functionality she needs, but anyone who needs anything more advanced is screwed out of it because they can't have the GUI and direct access to underlying functionality, because the former actively impedes the latter. This is a uniquely Desktop Linux state of affairs, no other OS behaves like that.


NetworkManager is particularly annoying. It doesn't seem to like static IP addresses at all: sure, the GUI will let you assign them, but it doesn't actually seem to affect the system at all.

I tried disabling NetworkManager, since on my development box I like to have a number of static IPs assigned for testing on the LAN. However, I also want to be able to easily connect to wireless networks, and I can't find an easy way to do that without NM (I tried wicd but had other issues with that).

The upshot of this is that I have to use NM with some bash scripts that run on login to overwrite some of its settings.
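Such a script doesn't need to be more than a few lines; a sketch under the setup described above (addresses hypothetical):

  #!/bin/sh
  # re-add the extra test addresses that NetworkManager keeps dropping
  for addr in 192.168.1.50 192.168.1.51 192.168.1.52; do
      ip addr add "$addr/24" dev eth0 2>/dev/null
  done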

Don't get me started on sound..


Did you file a bug for that behavior? It sure doesn't seem to behave like that for me.

While I cannot and won't claim that you're wrong, this is not supposed to be the case (and isn't in my private and professional experience).

You acknowledge that NM is 'easy' for wifi, but try hard to abandon it for reasons that are unclear. Except for tracking down the real error/filing bugs: Why couldn't you just set up a simple rule on the dhcp server (which I assume you have, at least for your wifi connections) to hand out static leases for your dev box?

Mentioning 'sound' in the end, in this way and out of context, is borderline trollish (I might be wrong, but I assume this is a suppressed pulseaudio rant there).


The issue with static IPs and NetworkManager is posted all over the internet (just google "network manager static ip"), so I assume they know about it already. Frankly, I don't know enough about how NetworkManager works to dig into the source code and isolate the issue; since it's been ongoing for a while and I'm not the only person experiencing it, I'm going to assume it would not be a simple fix.

I could use DHCP rules, but then I sometimes have to move my development machine to other places where of course I will be on a different DHCP server which I may not have admin access to. The workaround of having a shell script that sends a bunch of ifconfig commands does the job for me whilst still allowing me to use NM for wifi config, but it just seems like something that should be unnecessary.

I sort of assume that most experienced Linux people have experienced some sound issues at some point. My problem with pulseaudio has always been the slight latency between sound being sent by the program and played by the speakers, which means, for example, that when an MP3 file is paused the sound continues for a fraction of a second. Sure, this may not be a huge issue, and I didn't even notice it for a while, but as soon as I noticed it, it really started to grind on me. Not to mention that it makes the system terrible for doing any kind of audio editing.

The solution for this is to uninstall pulseaudio and go back to ALSA, which is a bit of a pain in itself, but once you've achieved this you get odd problems: (in Ubuntu anyway) the sound control panel applet disappears and you have to run alsamixer to change the volume, and volume control for some programs (such as Spotify) stops working completely.

There are a number of other small issues that can make desktop Linux a pain to use sometimes, which is why I am often surprised that Canonical seems to prioritize redesigning the GUI every few releases over fixing stuff like this.


Duckduckgoing 'network manager static ip' really did come up with a variety of reports, but also a good number of explanations of how this is supposed to work, right at the top.

I'm sure you tried a couple things before you decided that it's just not working the way you like it, but for me it seems that static IPs, your way, should be no problem [1,2,3] with the on-board tools.

Sound: Never had that (or any related) issue since I stopped running weird apps under wine and I light a candle for Lennart every other day for a sound solution that just works in my world, whatever I throw at it.

1: http://wiki.debian.org/NetworkManager#Enabling_Interface_Man...

2: http://manpages.ubuntu.com/manpages/oneiric/man5/NetworkMana...

3: http://wiki.debian.org/NetworkManager#FAQ


Yes, I've been down that route of setting things in the NetworkManager config, and had them wiped out when doing Ubuntu updates.

The real problem here is that if you set multiple static IPs in the NM GUI, it will lie to you and tell you that it has done it when it hasn't. This is pretty dire UX.

The audio issues with spotify were from running the native Linux client, oddly the volume control in the Windows/Wine version works fine with alsa (although a recent update to spotify seems to have broken wine compatibility almost entirely).


Kind of limited with VPNs too. It would be great if you could simply import an OpenVPN configuration file, but in some cases the only solution is to skip Network Manager and start/stop the service from the command line.


It's 2012. Who cares?

I think Linux on the desktop hit its peak in the early 2000s, when 'Windows, Mac and Linux' was in people's minds: we had Linux companies like Loki and TransGaming, commercial games from Epic and id, proper UX-focused companies like Eazel and Ximian, etc.

I think most people have given up, but that's OK: Linux on the desktop, back then, still made a huge difference to where we are today. GNOME had GtkHTML, which spawned a rival in KHTML, which became WebKit, which now seems to be the app platform for the thing that came after the desktop - the browser.

Now I have more apps open in Chrome than I do in the dock, taskbar, or gnome panel. So do many users. And the direction is more in the browser than ever.

It's not just Linux: worrying about the desktop per se is irrelevant - like worrying about the dominant groupware platform or the dominant LAN manager. OS/2 might be better than NT, but nobody cares anymore.


I'm not sure; if more apps move to the browser then surely the desktop OS becomes more of a commodity, as long as it is able to support running the browser itself.

One thing Linux does lend itself well to is providing a kernel and basic services for interacting with hardware etc. and leaving a fairly blank sheet to build other stuff on top of, which could be a very basic consumer system with just a browser or a full-fat dev environment. Android is a good example of this.

If all I want to do is run HTML5 apps using Chrome, I can't think of a good reason to justify a $100 purchase of a Windows license, or paying more for hardware in order to run OS X.

In the future I can imagine a huge amount of the population using Linux-based devices; they just won't know or care that they are Linux-based devices. However, I don't foresee a future in which everyone uses KDE or Unity and runs only "free as in freedom" software.


Some of us like Linux on our desktops (and laptops) and do not use web apps for anything intensive.

The only web app I make intensive use of is Google Reader, and I plan to move away from it. I am glad that so many programmers and startups can make a living on a platform (the web) that is not controlled by any gatekeeper, but I would be sad if I were forced to move to something like Chrome OS.

Yes, only about 1% of users of desktops and laptops run Linux, but that situation is probably sustainable given how knowledgeable and resourceful that 1% is.

If you do not want to see discussion among that 1%, maybe you could just avoid articles like this?


"Who cares?"

The guy who wrote it clearly does.

Please do not assume that your personal opinion is shared by everyone.


> Please do not assume that your personal opinion is shared by everyone

Sorry if you thought I was doing that.

Maybe I should explain things better: the guy who wrote the post is indeed concerned with Linux on the desktop. I'm just suggesting that, whether the Linux desktop is good or bad, the desktop OS itself isn't particularly relevant to the state of computing right now and isn't worth being concerned with.


I think to a large extent "the desktop OS itself isn't particularly relevant to the state of computing right now" is because of Linux (and OSS in general).

We now have a commodity OS and toolset that can be adapted to a huge number of devices from smartphones to servers and thus used for a huge variety of purposes.

Imagine if we had a world where MS (or some other proprietary vendor) was the only game in town on desktop and server, would we have the same number of startups creating MVP webapps? In fact would the web even exist as it does today?


I would like to differ on this - I just don't get the recent consensus on Linux UIs being broken. What is so bad about gnome / kde / xfce / xmonad?


In the Linux world more so than anywhere else, you have an odd and "at-odds" mix of users. There exists the "1993 was a great year for Linux" crowd, and the "Linux on the desktop" crowd. And somehow, you have major overlap between the two. While there are distros and packages contributing to both mindsets (meaning the two can coexist just fine), the biggest problem I see in the Linux world is the users who outright reject experimentation and change.

Personally I love where Gnome and KDE (and even Unity) are going. I might not use them daily, I might not find all their features helpful or productive, and I sure as hell am not using them on my server, but there is nothing preventing any Linux user from simply not using the new software. It's easy to find a distro that uses Gnome 2 or KDE 3 or doesn't have PulseAudio. But it's hard to use modern day software without modern day packages or backends. Without this rote experimentation that is at the heart of open source, I wouldn't be using Linux on my desktop. There are a lot of very controversial ideas put forth in the mainstream distros over the years which have contributed more to the adoption of Linux than they've taken away from the traditional users.

If someone doesn't like Unity, doesn't like Gnome 3, doesn't like KDE Plasma, doesn't like bleeding-edge distributions... there is a choice. Debian, Slackware, Arch, Gentoo, the list goes on. I don't much like the interface of OSX, but I don't create mailing lists to pooh-pooh it, I simply don't use it. If you have a point, make it. If you just want to complain, get a cat.


Personal experience only:

KDE - mostly fine, I use it, but it still won't let me set a default transparency the way I could in v3.5. If I could find a supported way to run 3.5 I would.

Gnome/xfce - gtk-based so backwards button order, requires magic typing to let you enter a file path in an open dialog, probably other things that would annoy me if I put up with those any longer. Gnome also has registry-based configuration, and neither has a decent well-integrated browser or email client; firefox/thunderbird/evolution work as standalone apps but e.g. drag-and-drop doesn't work as reliably as on KDE (or didn't when last I tried), proxy settings have to be configured individually, etc.

xmonad - urgh. Underdocumented if you don't want to learn haskell, and doesn't even have a browser etc. last I looked.

There's not a whole lot wrong with KDE, but for my use cases it's still worse than it was on version 3.5 (the aforementioned transparency problem, and the lack of a music player with the features of amarok 1.4 are my main issues). That's the most irritating part.


Having NetworkManager on the server just doesn't make sense in most cases. I'm running CentOS 6 (community rebuild of RHEL 6) on my VPS without NetworkManager and don't see any problems with this setup. I also don't know of any service which would not work without NetworkManager; I'm aware only of the opposite (e.g. Red Hat Cluster Suite won't work with NetworkManager). Moreover, I'm quite sure that you can install RHEL, Fedora or Debian in headless mode.


I have also been annoyed by the way Ubuntu handles network management. I was setting up an NFS server at home, a task that should've taken no longer than 10 minutes, and it ended up eating a few hours - not to mention the fact that I had to reconfigure the network after reboot, which I blamed on myself but now blame on NetworkManager.

I have been using Solaris 11 at work for the past few months. Even though I dislike Oracle, I was surprised by the way they implemented their networking; it's a pleasure to use and it's the most flexible and configurable networking of any OS I've used before.

I will still use Linux at home for personal use, but I still envy enterprises that have the financial ability to get servers running Solaris 11. I know that it's not open source, and that Oracle is the most evil company; I still love their product (which was developed by Sun, and was open source till Oracle stuck their nails into it) and I hate myself for loving it.


This article is not well-informed. I worked on or sat next to people who worked on a lot of the stuff mentioned. So you can take me as biased or as having a clue or both as you wish.

A general point: the changes described here have happened over the course of something like 15 years. So the article seems to be making a "stuff keeps changing!" point... but we are talking about over 15 years. Think about changes to hardware, the Internet, etc. over that time. And most indicators are that the Linux desktop has moved much too slowly compared to, say, Windows, Mac, Android, and iOS.

Some examples of errors:

"So the Gnome developers wanted to reduce the complexity of their protocol as well and started working on a protocol which was supposed to join the advantages of DCOP and CORBA. The result was called the Desktop Bus (dbus) protocol. Instead of complete remote objects it just offers remote interfaces with functions that can be called."

This is false on several levels. dbus was mostly a kind of cleanup of DCOP for general use, with no intent to "join the advantages of CORBA" which were essentially none. I can make no sense of "instead of objects it offers interfaces" - it has both objects and interfaces, and pretty much can implement the same kind of API that DCOP does (I believe KDE even did that). Basically this paragraph doesn't mean anything I can relate to the actual technology.

"APIs to abstract the uses of OSS, esound and ALSA: gstreamer for Gnome and Phonon for KDE"

This is wrong. GStreamer is for making graphs of elements, where elements are decoders, encoders, effects, filters, etc. and can be both audio and video. There is one kind of element ("sound sink") that does abstract sound output, as you would imagine. There are some other elements that use sound APIs too. But GStreamer is not the same thing as a sound API like ALSA, in any way shape or form. It's for building multimedia _apps_, sort of a media toolkit.

Moreover, the main reason to replace the older tech here (OSS, esound) was just that it didn't work very well and didn't support a lot of the things sound cards do. It's not like keeping that old stuff was an option, since it could barely play beeps.

"it is no longer possible to run the system without a graphical user interface"

I'm just not sure what planet that's on. There sure are a lot of headless Linux servers out there in the world, and it's pretty obvious that the large Linux distributions care about this intensely.

Re: NetworkManager, if it's somehow needed when headless and not configurable headless, that would be considered a bug by all involved. Just a matter of tracking down the details and reporting them if they have not been. All the Linuxes aspire to (and in my experience do) support headless operation.

"they don't implement the original X11 protocol directly and rely on so-called window manager hints."

This sentence is total word salad. X11 has had window manager hints for two decades. What's new is "extended window manager hints", which are some new hints in the same spirit... in order to do new things. They don't "wrap" anything, so "directly" is just gibberish. Kind of like how CSS 2.0 isn't the same as CSS 1.0, you know? This complaint is equivalent to bitching because you can't use IE5 on the modern web anymore. The protocols are documented, and you have to use an implementation that implements something from within the last 5 years. The extended window manager hints range from 6 years old to 10 years old, so that's how old the stuff we're talking about is.

An almost exact translation of this claim to the web is: "they don't implement the original CSS 1.0 directly and rely on so-called CSS 2.0 properties" ... see how that makes no sense?

"Writing X11 programs with xcb and proper RPC APIs like SUNRPC or Thrift should be more than good enough."

This 100% misunderstands why dbus is used. The first goal of dbus is not to send a message from process A to process B; it's to keep track of processes (help A find B, have them each know when the other goes away). The messaging is important but in many ways secondary.

Overall, the article doesn't understand the big picture of why all this new stuff was needed. I think there's one big reason: dynamic change. The old ways of doing things almost all involve editing a text file and then restarting all affected applications. But to implement the UIs that people expect (as you'd find on iOS, Android, Windows, Mac), everything has to be "live"; you change a setting in the dialog, and the whole system immediately picks up on the change. You unplug a cable, everything notices right away. etc. The daemons are because so many pieces of dynamically-updated live state are relevant to more than one process or application. That's why you have a "swarm of little daemons" design. And guess what: some other OS's have the same design.

That's (at least one of) the major problems being solved. And the author here gives no indication he knows it exists, let alone how his proposed alternative approach would address it.

I sort of get the inspiration for the article: Linux has been trying to keep up with modern UI expectations without having enough staffing for that really, and certainly regressions have been introduced and there have been bugs and things that could have been better. On the 6-month distribution release cycles, users are going to see some of that stuff. It's software, people. And it's understaffed open source software to boot. So yeah, legitimate frustration, shit changes, sometimes it breaks. I get it.

But there's no need to wrap that frustration up in pseudo-knowledge as if it were a technical problem, or say inane things about getting back to the "unix way"; if someone could show up and make the desktop UI stuff behave well with the "unix way" they would have done it. Or maybe they did do it, and the critics understand neither the problem requirements nor the "unix way." Just saying.


Totally agree; other inaccuracies are:

"However, the Linux incarnation of OSS was a particularly simplicistic one which only supported one sound channel at the same time and only very rudimentary mixing."

That's incorrect. The sound channel limitation depended on the hardware you had installed. So did the mixing capabilities. If the hardware supported it, OSS exposed the additional capabilities.

Those of us with SoundBlaster cards remember very well why we liked using them on Linux (because, unlike most cards, they supported multiple applications outputting audio simultaneously).


Despite all the incorrect or misleading facts (you spotted quite a few more than I did) I can totally relate to what I think was the author's intention in writing it:

Magic.

Modern linux distributions are doing many things in a way that is, at best, surprising and, at worst, undebuggable.

"Any sufficiently advanced technology is indistinguishable from magic" comes to mind quickly.

Working with Linux in the 90s was surely not as easy as it is today, and probably for the better. But I also find myself longing for the old days at times. Examples are NetworkManager's shortcomings (like the lack of bridge support in the version on my laptop; no clue if it's been fixed upstream, I'm using distro packages) or certain hald/dbus automagic things. And no, I won't go into details, and that can be held against me, but there are frustrations and annoyances - surely partly to be blamed on me and partly on the software. Coming from that, I feel with the author.

Then again I'm also glad I don't have to wade knee-deep into config files every time I want to change something. :)


yeah, it's a tradeoff. If the software does more, then the software is more complex... OK, but, sometimes it's nice that it does more. People are used to other systems (iOS, Android, Windows, MacOS) and those are setting the bar pretty high. They are all extremely complex systems that do a lot.

Everyone wants a simple system... as long as it has just this one thing that they need... and this one other thing...

This author seems to feel there was some way in which the software could do everything it does and there would be no downsides... you know, here and there in some detail it's probably true that the tradeoff is wrong. But that's just saying "all software could be better" or "all software has bugs" or something - true, but not an actionable insight.

I get the guy's frustration. But you know, there's no need to wrap the emotion up in non-factual hypotheses about source code that one is not familiar with.

Software sucks. We all know it. Using your imagination to diagnose why isn't going to get anyone anywhere ;-)

There probably are some improvements possible if we all go look at the source and get the real info.


That's a nitpick really. Every major OS from Windows 95 onwards (BSD included) could/can do software mixing on cards that didn't have hardware mixing, except for Linux/OSS.


And that was only true if you were using the version of OSS included in Linux.

The commercial version of OSS supported software-based mixing when the hardware didn't support it.

Same as the version of OSS found in Solaris today.


Disclaimer: I'm a curmudgeon and I think you're clueless. And no, I don't consider it ad hominem to point out cluelessness; it's a form of ignorance, and ignorance can be cleared up by study. So rather than point out how wrong he is, ask 'what is he trying to say?' and deal with that.

This is an exemplar from your comment that leads me to this observation, you write (first quoting the author):

""Writing X11 programs with xcb and proper RPC APIs like SUNRPC or Thrift should be more than good enough."

This 100% misunderstands why dbus is used. The first goal of dbus is not to send a message from process A to process B; it's to keep track of processes (help A find B, have them each know when the other goes away). The messaging is important but in many ways secondary."

I would suggest that it points out a 100% misunderstanding of remote procedure calls. Your defense of dbus asserts that its primary function is to 'keep track of processes', which would suggest its name should be 'dlocate' or 'dmonitor'; but it's 'dbus' because most of the traffic on it is like a 'bus' where data goes from point A to point B.

The original author points out that all of the 'features' of dbus which are not directly related to interprocess communication could have been implemented on top of the existing architecture. People have done that; they called them 'location brokers' back in the 80's. And what the folks who invented dbus missed was all of the research about what makes for good network protocols, like Andy Birrell's seminal paper on RPCs or work done at Sun, Xerox, Apollo and elsewhere.

You wrote:

"Overall, the article doesn't understand the big picture of why all this new stuff was needed."

But that isn't what the article is about at all; it is asking the question of why all the substructure is re-invented every time. The author rants at how Linux's tendency to constantly recreate every wheel is hugely destructive and wasteful.

The real problem, which is not mentioned explicitly but I suspect is at the root of this entire rant, is that it is infinitely easier to create in Linux than it is to fix. When there is a problem with the way desktop events get delivered you can either fix the broken system or you can invent an entire new one. Too often, for reasons which are not well reasoned or supported, people create. I see three reasons for that:

1) It is hard to have two smart, outspoken, and opinionated people work on the same piece of code.

2) If you can choose between "the person who fixed Y" or the "the person who created Z" on your resume, inevitably people lean to the latter.

3) When all you want is feature 'X', which should notionally come from system 'Y', it takes less work to create a new system Z which does all the things you personally need from Y and has X as a new feature, than to understand all of the users of Y and what they need and then incorporate X into that.

And let's close with this bit, you wrote:

"if someone could show up and make the desktop UI stuff behave well with the "unix way" they would have done it."

They have; Motif and SunTools were both such then, and Windows and MacOS are examples today. I think you could successfully argue that Linux is on the brink of proving that Free Software is a fundamentally broken model of software development, and use the window system as the exemplar of that argument. The closest counter-example we have is Canonical which, as we all know, is well loved by all folks who work on Free Software.

The linked rant boils down to 'Linux sucks because nobody can be bothered to work with somebody else's code', which is of course an exaggeration (but what are rants if not emotional expressions of frustration through hyperbolic rhetoric?). If you cannot see the danger that poses to its livelihood then yes, you are by definition clueless.


Don't know what to tell you. I was in the room and writing the code on a lot of this, and what you're saying doesn't correspond to the whys and the whats that were on the whiteboard at that time.

"It is hard to have two smart, outspoken, and opinionated people work on the same piece of code."

But all the stuff discussed here - dbus, gstreamer, EWMH, GNOME, etc. - has had dozens (e.g. EWMH, dbus), hundreds (e.g. gstreamer) or even thousands (e.g. GNOME, Fedora, Ubuntu) of contributors. And that's not counting all the people that build on top of those things, it's only counting the ones who contribute to them directly.

"it is infinitely easier to create in Linux than it is to fix"

I've always found in open source that it's harder to find people to create, than to fix. I mean yeah, there's a background noise of a thousand 1-person projects being born and dying every day. But the big projects with momentum are full of dedicated people primarily interested in incremental change.

Most of the technologies we're talking about here are in the range of 6-12 years old, with no significant overhaul or replacement in that time. For perspective, Firefox (as "Phoenix") appeared 9 years ago, and Mac OS 10.0 is 10 years old. It feels tough to argue that Linux is moving faster than Apple, Microsoft, Google, web tech, etc. It's relatively stable as OS's go.

Sure, Solaris and IRIX are (were?) even older and there was prior art on all sorts of fronts. If you'd like to argue that the original Linux desktop efforts should have copied more from those: you're probably right on some of the specifics. It's easy to say this or that could be slightly better if you look at a huge piece of software like a full Linux distribution. What counts is the software that exists, not the software we all coulda woulda shoulda written.

There were a few hundred people who probably worked on or around Linux desktop IPC back then, and I think zero argued that SUNRPC was a good option. Maybe it was, and someone could have showed up to prove it in code. They did not. Instead, a number of other systems were coded and tried (MICO, ORBit, DCOP, IPC-over-X11, even SOAP), and in the end dbus caught on as a working solution. By that time everyone had a lot of hard knocks and knew what problems they were trying to solve. All the solutions people tried worked fine for sending a message. That was not what differentiated these approaches. The problems to solve included things like how to cross boundaries between systemwide daemons and user session; how to discover, activate, and track other apps and daemons; licensing issues; a least-common-denominator implementation that all the projects were willing to use; security model; etc. At some point dbus cleaned up everybody's ad hoc hacks and experiments, and now Linux is pretty uniform about using it and has been for years. Is it perfect? Not at all. It was just the first thing to be good enough and it stuck.

If someone comes along and does something legitimately better and worth switching to, then I'm sure Linux will do so, and take a lot of heat for it too.

"So rather than point out how wrong he is, ask 'what is he trying to say?' and deal with that."

Well, I think he's trying to say what he says, which is "please don't write software which requires any of the Gnome/KDE and DBus API. Writing X11 programs with xcb and proper RPC APIs like SUNRPC or Thrift should be more than good enough."

This is nonsense.

The idea to use raw xcb rather than GTK or Qt or HTML: come on. You'd spend months getting to the point where you had crappy buttons and scrollbars working. Replicating user-expected and mandated functionality provided by the toolkits is a multi-year task to do _poorly_. You'd never, ever finish writing your app (and it'd suck, too).

On the IPC front: you'd be adding yet another way to do it and thus more complexity. It's fine to say SUNRPC should have been chosen in 2001, but it wasn't, and rewriting hundreds of apps today is nuts. Whatever your dbus annoyances, you could solve them in one place and fix the whole system.

More importantly, most of the newfangled (= 6-12 years old) crazy ideas that this post complains about, exist for some good reasons that the author of the post doesn't seem to be aware of. You could certainly build a system _involving_ SUNRPC or Thrift that would work. But you'd have to innovate on top with an understanding of the problem space. And what's the end-user benefit of that, at this point in time?

I'd argue it's a big old zero.

But if someone shows that there's enough benefit, I hope a new idea wins on the merits (and the running code).


"And if you wonder how you were supposed to install these packages without network access: by periodically calling ifconfig and ip route add until you finally managed to fetch all the data before NetworkManager would mess it up again."

Doesn't make any sense either; NetworkManager is a service that can be stopped like any other one.
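Depending on distro and init system, something along these lines (service names differ slightly between distros):

  # Ubuntu (upstart)
  sudo stop network-manager
  # Debian sysvinit
  sudo /etc/init.d/network-manager stop
  # Fedora / RHEL 6
  su -c 'service NetworkManager stop; chkconfig NetworkManager off'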


While I don't quite understand the specifics the author describes, I can relate very well to the general problem: Open Source software often depends on packages that are common on the Linux Desktop. This quickly gets you into dependency hell if you try to compile the program/library on a different platform.

An example I was recently confronted with was libmdb, a library that reads Microsoft Access databases. For some reason it depends on glib2, which in turn depends on a few other libraries. In the end I needed to compile 5 different libraries because libmdb uses hash tables and arrays from glib2.


The alternative would be for libmdb to implement that stuff themselves. That would increase development time, increase the number of bugs, increase the size and decrease developer happiness.


Or they could link statically. They're both under the LGPL, and it seems like the sensible thing to do if they're only using the library for hash tables...
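A sketch of the idea, assuming your distro ships glib's static archive (not all do, and the exact link flags vary by system):

  # link glib statically, everything else dynamically
  gcc -o mdbtool main.c $(pkg-config --cflags glib-2.0) \
      -Wl,-Bstatic -lglib-2.0 -Wl,-Bdynamic -lpthread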


Didn't downvote you, but .. How?

The project authors certainly don't care about people with 'special' requirements. For them it just saves time. Now the packagers _could_ do what you suggest, but especially for such a central library it doesn't make sense. You want to have only one version of that thing for a gazillion reasons.

The original request for this thread? Well, it seems that person wanted to port a library to a completely different system _and created the binary for that system_. Right, he can statically link it. What does he gain? He still needs to gather all dependencies before and his complaint (as a developer/distributor) of having to meddle with glib would be the same. He didn't complain about distributing another couple .so files as far as I understand the issue.


"The project authors certainly don't care about people with 'special' requirements."

The source of all these problems in a nutshell. No one has a reason to polish the UX for a case that never happens to them.


As opposed to closed-source dependencies on the .NET runtime or DirectX or Java?


I suppose he meant, as opposed to standards-compliant libraries. For hash tables there's search.h, for example. But there might obviously be other reasons, like feature set and so on, to pick a different library.


How is modularization and code reuse a bad thing? Fortunately, both source and binary distributions are great when it comes to dealing with dependencies.


>How is modularization and code reuse a bad thing?

The problem he is complaining about sounds like a lack of modularity to me: being forced to pull in dependencies that he won't be using at all.


He's using hash tables, so he's using glib2, and that's far better than everyone implementing their own hash table.


False dichotomy. Libraries that provide basic data structures should not have external dependencies. The issue isn't whether or not to use libraries, it is how the functionality of the libraries is organised.


You can disable NetworkManager and use /etc/network/interfaces to manage your connections directly - this has the nice side effect of giving you internet connectivity even before you log in. I don't understand the thrust of this article - most people running servers are running Linux without a GUI and are doing so without any loss of productivity.


This looks like it's taken almost directly from 1990sLinuxUser: https://twitter.com/#!/1990slinuxuser.


tl;dr - Author tries to patch together his/her own distribution; and fails.

To provide a good user experience you need to know if the network is up or down, and when it changes. UNIX does not provide this. This is why NetworkManager is now part of the "Linux Platform".

The biggest myth in open source: "Linux is about choice".


if I could downvote.

Linux is always about choice. If you have enough information and time, you can choose whatever software stack you want with it. You can even replace any configuration software that comes with your distribution with any tool you want.

Also, the author does not rant about building his own distribution. The rant can even be considered good if you think Linux is only about graphical desktop applications. Linux software can be configured or used without these, but it's getting really hard to disregard their influence over other software.



Yes I was, but I couldn't find it. Thanks for the reference :)



tl;dr

Linux is getting to be a mess, with all kinds of dependencies.

I guess it really comes down to your distro (since Linux is just the kernel, it's the distro which adds in all the bits). He's complaining about using RHEL and Ubuntu on a router. I think those are meant to be run on big servers. I'd guess Slackware and Gentoo are better bets for running a small server. There are also distros designed to be used on routers, if that's what you're after.

Big distros are big. They sometimes have a few bits which are over-engineered, because over-engineering makes sense to the people who join GUIs together.


Speaking as someone who has a couple of legacy servers running Gentoo: Don't do that.

Usually there's no need to have a compiler on a production machine and certainly not on a router. Unless, well, your package manager wants one. Yes, I know that you can have binary packages for Gentoo, but that kind of defeats the point.

I haven't found a serious distribution that doesn't allow you to install a stripped down system and go from there.


Fedora is constantly working to reduce dependencies. They seem to be doing quite a good job.


Can you provide any links or sources which support this? I'm genuinely interested in how the Fedora people are tackling such a problem.


I really would like to understand/get feedback (if the author is reading here as well or someone agrees with the rant)

- what 'services' on a server really depend on NetworkManager

I cannot imagine that a server would actually need it (although, on my _desktop_ I most certainly want it and love it).

From my vps (granted, not LTS. But did they actually _remove_ a dependency here? I highly doubt that):

  darklajid@neuland:~$ apt-cache policy network-manager
  network-manager:
    Installed: (none)
    Candidate: 0.9.1.90-0ubuntu5.1
    Version table:
       0.9.1.90-0ubuntu5.1 0
          500 http://de.archive.ubuntu.com/ubuntu/ oneiric-updates/main amd64 Packages
       0.9.1.90-0ubuntu3 0
          500 http://de.archive.ubuntu.com/ubuntu/ oneiric/main amd64 Packages

  darklajid@neuland:~$ cat /etc/lsb-release
  DISTRIB_ID=Ubuntu
  DISTRIB_RELEASE=11.10
  DISTRIB_CODENAME=oneiric
  DISTRIB_DESCRIPTION="Ubuntu 11.10"
In fact, the Debian package description (couldn't find something for Ubuntu or RH, but the former is probably using the very same thing) says this: "NetworkManager attempts to keep an active network connection available at all times. It is intended only for the desktop use-case, and is not intended for usage on servers." [1]

- why you'd have a hard time configuring it

If you install it anyway it comes with some tools [2] called nm-tool and nmcli. The latter [3] is, while the man page admits that it isn't meant to replace the applet, described as "The main nmcli's usage is on servers, headless machines or just for power users who prefer the command line."

At this point we have the following: the author of this blog post writes up some evolutionary history of the Linux desktop technologies (arguably unrelated? Is he really missing CORBA or ORBit?) and then ends with a giant rant about network-manager (which I hope I've shown to be unnecessary and which, contrary to what he says, _comes_ with a tool to configure it) while punching dbus every now and then.

The title is link-bait and - well - weird. The article has no real point. I'm confident it took me more time to write this comment and provide the relevant links than the whole 'problem' of setting up a network connection on a more or less recent Linux system can possibly take.

If it takes so long - don't blame the tools.

1: http://packages.debian.org/squeeze/network-manager

2: http://packages.debian.org/squeeze/amd64/network-manager/fil...

3: http://manpages.ubuntu.com/manpages/oneiric/man1/nmcli.1.htm...


honestly i didn't get the NetworkManager thing either. since my Arch Linux days i remember that it's manageable from cli. although i never used that cli.

but the thing that touched me is that there are standards like POSIX for the cli world and there are standards like DBus for the gui world, and a lack of any sane standards spanning both of them. POSIX-based software and DBus-based software took quite different paths in their evolution - so different that they now look like creatures from two different universes. it's always nice to see how devs from GNOME and KDE develop standards that help interoperability. i wish someone would look into interoperability between the gui world and the cli world too.

i originally moved to Arch from Ubuntu because the latter looked quite noisy to me - too many desktop components that i never used, too many system resources wasted on activities that i consider useless or harmful. i've created a very minimalistic environment built around some cli tools and a few very simple gui components that didn't depend on a whole desktop. and it worked quite well until it started to fall apart - every program that didn't belong to the cli world tended to either crash or work with some limitations, mostly because of problems with DBus. most of these problems were caused by differences between the actual DBus interfaces provided and what those programs were expecting to see. and of course there were some missing components that depended on a ton of other components from either GNOME or KDE. so now i'm back to square one - i'm back on Ubuntu and i feel like an alien. most things indeed work out of the box, but it's not a linux that i understand, can easily learn, fix and customize myself.


Wait a moment. DBus is not a 'gui thing'. It is a protocol for inter-process communication. I'm not in bed with it in any way, but it seems to do a good job. You can use it from ~every~ language of your choice. Its adoption cannot (just?) be blamed on politics, it's just dead easy to use for a programmer.

Lately (and that seems to be something the author of the blog post resents) it pushed further into the system layer, for example with dbus activated services (systemd, but I'm pretty sure upstart had that as well). For as long as I can remember your distribution always started a system-level dbus instance and (only here we're talking gui/desktop environment heavy) one per session/login/user.

If you have problems with dependencies between a couple of programs that talk different protocol levels:

- Someone messed up packaging. Or you installed something in a ~weird~ way

- The same could easily happen with any other 'let's make these processes from totally unrelated projects and running using different underlying technology communicate' solution. Dbus cannot protect you from changing interfaces.

- .. except, maybe it _wouldn't_ easily happen, because without an easy way to do what dbus offers I guess you'd have less software and fewer integration points. Which you might label a Good Thing and I'd disagree.

Ubuntu still is easily manageable from the command line. It might be different from your LFS/Gentoo/Arch etc. solutions, but it's closer than FreeBSD and others. I've no love for Ubuntu, but claiming that you cannot easily (okay - define that) learn its ways and how to fix or customize it yourself? I think you should reconsider that part..


Yep, we've used DBus also on the server end: http://bergie.iki.fi/blog/interprocess_communications_in_mid...


> Wait a moment. DBus is not a 'gui thing'.

yeah, you are right. i just meant to say that it's deeply connected with gui stuff and not that much with cli. the only thing i desperately needed DBus for was hardware management. i don't mind at all having a dbus daemon running in the background for that very purpose. when it gets to managing system daemons, DBus makes me a little bit nervous - compared to the dead simple bash scripts used in Arch, DBus is a quite complex beast, mostly when it comes to debugging. but i can eat that.

what i can't stand is when an app that's only supposed to display image files depends on a whole lot of other completely unrelated components exposed via DBus and buried into some desktop so deeply that you need to install a huge part of that desktop just in order to make that little app happy.

what the hell is wrong with 'do one thing and do it well'? i don't need 'less integration points' - i need integration implemented on a higher, not lower level of abstraction. let me clarify this. let's suppose you want your image viewer to add paths to recently viewed pics to some database and also let you post them to your twitter account. you can write a viewer that depends on zeitgeist for storing paths and on some other funky DBus interface to work with twitter. you'll probably end up with a program that works in Ubuntu but not in another environment that doesn't provide zeitgeist or that funky twitter interface or DBus. but you could go another way:

1. write a program that only displays images passed to it as command-line arguments

2. write a wrapper that passes paths to zeitgeist

3. write a twitter app that handles posting

now your viewer works on systems with no DBus - good. the wrapper can be extended to handle any program - when the user selects a file it passes the path to that file to zeitgeist and invokes the program registered to handle files of that type. to post your pics to twitter you could write a program that displays a form for posting and uses your viewer (or any other program registered for image/png|jpg|etc) to display a preview on it. anyway just viewing and just posting are different actions and you don't need those mixed up. if while viewing an image you suddenly decide to post it you could close the viewer and open the app for posting - the path to the image file (recent file) can be read from zeitgeist. you can even integrate the 'recent documents' thing into the zeitgeist wrapper. or use some other approach to streamline things.
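to make step 2 concrete, a tiny sketch (a plain log file stands in for the zeitgeist call, and xdg-open for whatever program is registered for the file type):

  #!/bin/sh
  # record the opened file, then hand it off to the registered handler
  echo "$(date) $1" >> "$HOME/.recent-files"
  exec xdg-open "$1"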

that's the unix way and that's how DEs could implement their rich interfaces. this would make it easy to integrate programs into desktops and preserve their 'freedom'. but the DE guys decided it's much better to trap apps into their DEs. i see it as a failure to introduce proper abstractions.

> I've no love for Ubuntu, but claiming that you cannot easily (okay - define that) learn its ways and how to fix or customize it yourself? I think you should reconsider that part..

man, just stick with Arch for a while:) you'll know what i'm talking about. system configuration via simple plain text files, simple daemon management, nothing is forced on you, if you edit some configuration file there's little chance that some gui (or non-gui) tool will mess with that file without warning you... well, let's just leave it at that. now i'll go get some sleep. no, wait! one more thing:

i gave up trying to manage any system-level stuff the way i want. i don't care anymore how things are managed underthehood in Ubuntu, Fedora, OpenSUSE, etc. i just stick with my development tools that work everywhere and try not to write programs that depend on DBus interfaces (except hardware-related ones). for all the rest i just use what's provided by the distro.


> honestly i didn't get the NetworkManager thing either

I'm on Arch and I get by well enough with wicd (at the command line) for my network management needs. It could be that my needs aren't as great as the author's, though.

As for DBUS... A couple of years ago I started using wmii and missed having email notifications in my status bar. I didn't want to run a stand-alone status bar, so I tried to hack up something with dmenu and my GUI mail client via DBUS. It was horrible, just awful. A lot of the problem was a poor and undocumented interface, but using DBUS itself wasn't great.

I've heard people mention 9P as an alternative that's ready to play nicely with the rest of the landscape, but I can't believe a sensible UNIX-like solution will ever gain mainstream adoption.


The main conclusion of this article: Network Manager is utterly broken. I concur. It's quite good for laptops and basic desktop, and absolutely unusable for any development desktop or server.

The main and gravest sin of NetworkManager is that it absolutely ignores what's in /etc/network/interfaces (or similar configuration files for other distros, AFAIK), and makes most of the usual stuff (bonding, bridging, VPNs, etc.) nigh on impossible. It may be relatively easy to fix, dunno.
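For reference, the sort of thing that's a few lines in Debian's interfaces file but a fight (or impossible) in NM - a simple bridge, assuming the bridge-utils package:

  auto br0
  iface br0 inet static
      address 192.168.1.10
      netmask 255.255.255.0
      gateway 192.168.1.1
      bridge_ports eth0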

Fortunately from my experience, you can simply remove it altogether without any problem, at least under "reasonable" distros such as debian.


I agree with the general trend that the author points out, but he speaks as if this was happening to all Linux distributions alike (he almost always uses the word "Linux" with no qualifications). This is not the case: distributions that have a strong command-line and server-oriented community (think of Debian and Arch Linux) still do support "no DBus" operation.


He blames /dev/initctl not working on systemd - but wasn't that a change in sysvinit? They moved it to /run/initctl because /dev/initctl wasn't portable. Quite recently, too.

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=638019


i must say something on this topic. i fully agree with the author that linux is becoming a mess these days when you need to customize it, e.g. for embedded systems or headless servers. and the first thing I _always_ do is remove network-manager - that's an asshole app!


tl;dr Lennart is an ass.



