Arch Linux to migrate to Systemd (archlinux.org)
120 points by g-garron on Aug 14, 2012 | 125 comments



Systemd takes a reliable, known, thoroughly debugged process (init, or its various refinements, including Ubuntu's upstart and Debian's insserv) and converts booting from a deterministic, predictable process into one that's inherently unpredictable.

And the stated objective? "To reduce boot times".

The best way to reduce boot times is to not boot. The reason I reboot systems is to return them to a known good state (or, very rarely, to perform a kernel upgrade).

On server hardware, I perform boots infrequently, and really, really, really want them to work right.

On end-user hardware, I perform boots infrequently, preferring to use suspend/restore to quiesce my systems (suspend to RAM, occasionally suspend to disk). That is a process which I'd like to have very thoroughly debugged and not give me any unhappy surprises (say: crash my video, e.g.: interactive session, lose track of drivers/hardware, especially wireless).

Systemd is the wrong answer to the wrong problem.

Written much better than I can: http://blog.mywarwithentropy.com/2010/10/upstart-better-init...

Systemd loses the huge transitivity of shell scripting, and puts you in the position of needing to acquire a novel skill at the one time you least need to be learning and most need to be applying: when your systems won't boot straight: https://lwn.net/Articles/494711/

I'm also not much surprised that Red Hat, who've had such a historic problem with consistency and reliable dependency management within their packaging system (as compared to Debian/Ubuntu) are proponents of this technology (hint: it's not the package format, it's the policy, or lack thereof). And now Arch.


As an OS X, and thus launchd user... I think you guys are crazy! :)

For launchd, the service description is a few declarative entries: on OS X, ssh.plist is 37 lines, but only because XML plists are really verbose; it could be half that in a saner format. On my Debian system, /etc/init.d/ssh is 167 lines of almost entirely boilerplate shell script that has to be maintained separately for each service (and that isn't even enough to make the script standalone; it invokes the 1400 line start-stop-daemon). The only thing simpler about SysV init is that it's the legacy everything is compatible with: the simplicity of shell scripts doesn't hold up when you need over 100 lines for a simple daemon.
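
For reference, the skeleton of a launchd job definition looks roughly like this (abbreviated and from memory, not the exact ssh.plist Apple ships):

    <?xml version="1.0" encoding="UTF-8"?>
    <plist version="1.0">
    <dict>
        <key>Label</key> <string>com.openssh.sshd</string>
        <key>ProgramArguments</key>
        <array>
            <string>/usr/sbin/sshd</string>
            <string>-i</string>
        </array>
        <!-- launchd owns the listening socket; sshd is started on demand -->
        <key>Sockets</key>
        <dict>
            <key>Listeners</key>
            <dict>
                <key>SockServiceName</key> <string>ssh</string>
            </dict>
        </dict>
        <key>inetdCompatibility</key>
        <dict><key>Wait</key> <false/></dict>
    </dict>
    </plist>

All the daemonizing, pid-file handling, and restart logic that the shell script has to reimplement lives in launchd itself.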

launchd itself is many thousands of lines of code (too much?), but it provides cron and inetd-like services (i.e. generalized on-demand services - it is really nice to know that a daemon has zero effect on my system, no pages that had to be loaded from disk, when it's not being used, but still operates efficiently when it comes under load; this also makes the implementation for the daemon simpler in some cases), as well as automatic process termination/restarting. Its service-on-demand focused dependency model is nondeterministic in the same way that systemd is (?), but it's completely reliable, since it's standard by now so everything is designed to work with it.

Of course I usually use suspend and restore, but making rebooting really fast makes the system more fun to use.

And yes, I'm talking about launchd, not systemd, but from what I've heard systemd is pretty similar in design and goals.


>On my Debian system, /etc/init.d/ssh is 167 lines of almost entirely boilerplate shell script that has to be maintained separately for each service

That's a failure on Debian's part, not a fundamental flaw of init. Guess what the equivalent looks like on OpenBSD?

    daemon="/usr/sbin/sshd" 
                        
    . /etc/rc.d/rc.subr     
                        
    rc_cmd $1


I'm sorry, but shell scripts suck as a language for booting the system. You need to fork() and exec() for almost anything non-trivial, wasting precious CPU cycles in the process. It looks like every Linux distro has its own way to manage boot scripts. And when they fail, you have no idea what happened.

More importantly, init only handles starting and stopping of services. It doesn't manage them, like restarting them when they crash. Systemd can do that. The socket activation stuff also allows one to potentially save resources by not starting services until they're really needed.
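
A rough sketch of what that looks like on the systemd side (unit names, port, and paths are made up, and the daemon has to support having its socket passed in for the on-demand part to work):

    # mydaemon.socket -- systemd listens here; the service starts on first use
    [Socket]
    ListenStream=8080

    [Install]
    WantedBy=sockets.target

    # mydaemon.service -- restarted automatically if it crashes
    [Service]
    ExecStart=/usr/local/bin/mydaemon
    Restart=on-failure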

The best way to reduce boot times is to not boot? Have you ever heard of "laptops" and "average users"? Even on my servers, a shorter boot time is welcome.


I'm a daily laptop linux user, specifically Chakra Linux, http://chakra-linux.org/.

My wish list goes something like this: better wireless drivers, improved sleep/suspend/hibernate/resume, better power management, a better package manager, more up-to-date applications ...

At the very, very bottom of that list -- the very last item, so far at the bottom of the list that it's in danger of falling off entirely -- is "faster boot times".

Dredmorbius is spot on, at least for me and my daily usage and the couple dozen or so servers that I'm responsible for. If things are so pooched that I have to reboot it, then it doesn't really matter to me anymore whether it takes 30 seconds or a minute to start up. I would much prefer not having to reboot it in the first place.

Since Chakra was (I think) forked from Arch Linux, I'll have to check and see if they're gonna do this too.

I hope not.

(edit: none of this is intended as a criticism of Chakra's development team, who have been doing an amazing job of putting together a system that, despite its warts, I genuinely enjoy using every day.)


Note that a better way of putting it is that systemd deals with state changes a lot better. Booting is one big state change, but when you have a laptop you go through a heck of a lot of other state changes: suspend, hibernate, resume, docking, connectivity changes (e.g. wifi coming and going), storage added and removed, etc. You may let other people use your system (more state changes).

systemd can also ensure that only services you actually use get started. For example, printing is done via a server on Linux (cups), so systemd can ensure it doesn't start until you need it. This reduces power consumption.

Because of the way systemd manages services it can also do a better job of isolating them and dealing with unexpected issues. For example, if the print server crashes, or someone attacks it while you're in Starbucks, you'll be better off. (Its chrooting is easier to use, as is the way things are put into control groups.)

All the things you list require developer time and attention. If systemd lets developers spend less time on startup scripts, then they will have more time to devote to the things on your list. (If you've ever had to write startup scripts you'll know how long it takes to develop and debug them.)


You can add state-change management to your system without mucking with really solid, stable, low-level, critical code like init.

There's already hotplug support, xinetd, ifupdown's pre/post up/down stanzas, and the like (though networkmanager's screwing that bit up wonderfully). Chroot jails too. I'm not saying that these are perfect (and some are a very pale shadow of perfect indeed), but they're independent of init.

Systemd mashes a whole bunch of crap in one place. Most of which I really don't want to have to worry about.

Now, if Arch and Fedora want to serve as test beds for this stuff -- and either perfect it or reject it as nonviable -- well, yeah, I suppose I can live with that. Though I'm definitely not a fan.

You see, there's a few things here.

For my own systems, I really like not having to fuck with useless shit. Currently I'm managing networking manually on my laptop as NetworkMangler has gone to crap again. So I run "ifconfig" and "route" from a root shell (yay for shell history and recursive reverse search).

For servers, part of my performance evaluation is based on how many nines I can deliver. Not having shit get fucked up does really nice things to my nines. Having shit change does crap things to my nines. I like my nines. I really hate change. It's an ops thing. Where I've got to have change, I like to have it compartmentalized, modularized, with loosely-linked parts and well-defined interfaces.

Startup scripts are very much a mostly solved problem. Debian gives you a nice template in /etc/init.d/skeleton. Play with it. Yes, I've written startup scripts.
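
The core of that template is just a case statement around start-stop-daemon; a trimmed sketch (daemon name and paths are hypothetical, and the real skeleton adds LSB headers and more error handling):

    #!/bin/sh
    DAEMON=/usr/sbin/mydaemon   # hypothetical daemon
    NAME=mydaemon
    PIDFILE=/var/run/$NAME.pid

    case "$1" in
      start)
        start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON
        ;;
      stop)
        start-stop-daemon --stop --quiet --pidfile $PIDFILE --name $NAME
        ;;
      restart)
        $0 stop
        $0 start
        ;;
      *)
        echo "Usage: $0 {start|stop|restart}" >&2
        exit 1
        ;;
    esac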


No one is stopping you from making your own distro that meets your own needs. And servers generally don't go through state changes, and it would usually be acceptable to just reboot on any of them.

You may enjoy micro-managing your networking etc - good for you. Some of us don't like doing that. To give one example of stuff that certainly doesn't just work: I was trying to run Squid on my (Ubuntu) laptop, and it can't handle state changes well, and neither can Ubuntu's ifup/down and init system. I often ended up having to manually do stuff that the system should have been able to handle.

I'm personally delighted with systemd's functionality - the way it captures output from services would have saved me hours in the past, from services that wouldn't start up cleanly and provided no useful information as to why.

(Separately: my kingdom for a simple caching web proxy server)


Systemd on openSUSE as well as Mageia is pretty damn reliable. It is not a 'test bed'. Various distributions have been using it.

As a result, there are far fewer differences now between distributions. Which means configuration becomes easier.

In any case, if you really care about things not changing, then I assume you're using a distribution which doesn't change this suddenly. So I don't see why you're so awfully negative.


I'm a daily laptop linux user, and like you I know how to go a long time between reboots, so I can wait for my computer to start up.

But forget about us, we're already converts, we don't matter. My grandfather (93 years old) is also a daily laptop linux user. When he presses that power button, that laptop better be booted and ready /yesterday/. And when he pushes it again, it better be off before he closes the lid. Slow startup and shutdown times are simply not an acceptable user experience; they are literally the difference between enjoying and wanting to use the computer, and not wanting to bother with it.

And don't think for a minute he's going to learn about suspend, hibernate, power savings, battery life, or whatever. It's just not going to happen. His laptop lives in the closet, so it's going to be off (either by his doing, or the battery running out). When he sees something on tv and wants to read about it, he takes the laptop out, plugs it in, and turns it on. If it's not ready for him when he's ready for it (i.e. now) then he just won't use it.

However, since I've got that sucker booting from power button to firefox home page load complete in under 7 seconds, he uses it all the time. And it's amazing how it enriches his life. You simply can't get computer use to penetrate into lives like his without fast booting and an easy user experience.


The sane thing is to tie power management to the power button.

Light press: hybrid suspend -- suspend to RAM while also saving state to disk. The system spins down quickly and, so long as it hasn't been suspended long enough to drain the battery, restores in a second or so. Longer than that, and it will do a boot/restore from disk.

Long press: powerdown.

Many devices have separate "suspend" and "poweroff" hardware (or soft controls) as well.

The OS and tools do all the magic bits.
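
On a non-systemd Linux box, one way to wire that up is through acpid plus pm-utils; a sketch (paths and the helper script are illustrative, not any distro's defaults):

    # /etc/acpi/events/powerbtn -- acpid hook for the power button
    event=button/power.*
    action=/etc/acpi/powerbtn.sh

    # /etc/acpi/powerbtn.sh
    #!/bin/sh
    # Hybrid suspend: save an image to disk, then suspend to RAM;
    # fall back to a plain suspend if hybrid isn't supported.
    pm-suspend-hybrid || pm-suspend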


That's lovely, but you're not paying attention. It doesn't matter how it's set up. It matters how it performs.

To the non-enthusiast / casual user, closing the lid, pressing the power button, doing a system shutdown, inactivity sleep timeout, and the battery running out are all the same thing: the computer was "on", now it's "off". Asking someone like this to think about how the reason it came to be "off" affects how fast it will be ready for them later is a fool's errand. It needs to be fast in every circumstance.

Normal people just want to get something done. They judge their computer by how easy it is to use and how fast it responds to what they do. That includes cold boots, launching programs, and downloading webpages. Even if they're doing something "the wrong way", they will still judge it with the same criteria and the same harshness. I want my grandfather to use linux because I can quickly help him and fix things from afar, and because there are very few ways for him to mess it up. He uses it because he really thinks it's better than windows, and that's purely because it's fast and easy, every way he uses it.

For the record, I set it up so the power button does a shutdown, and everything else results in a hybrid sleep. What he understands is that he can shut it down if he wants; otherwise, no matter what happens (lid closed or not), everything will be the way he left it, even if he forgets about it for a few days or doesn't charge it.

That kind of simplicity is what allows people to think of linux as something they can use, not just some super complicated tool for "hackers" and "computer geniuses". I'm not saying it should be dumbed down or have options removed, but I am saying that making it enjoyable for everyone results in more people using it, and that benefits us all.


That is really cool, both your grandfather using a Linux laptop and the boot time. Could you name a few components you used?


Thinkpad T61, Ubuntu 12.04 LTS, SSD. Trim down the services you don't need. Cold boot and hibernate restore take about the same time.

Honestly, I think the SSD has the most to do with it.


Your message seems to have the hidden assumption that development resources are now being redirected from wireless drivers/suspend/power management to improving boot times. This is false. Different components are handled by different people, and axing a project does not mean that the people responsible will automatically work in one of the other fields that you prioritize.


I am not sure I understand how systemd's development inhibits progress of any of the things on your wish-list.


> My wish list goes something like this: better wireless drivers, improved sleep/suspend/hibernate/resume, better power management, a better package manager, more up-to-date applications ...

Use a different distro. None of these is a problem on a modern distro with reasonably modern hardware.


Exactly how precious are those CPU cycles? I mean, really. Can you put a dollar figure on them?

And then contrast that with the dollar figure for consultant / employee / remote hands time to figure out WTF went wrong?

There are numerous systems for managing services: monit is the best known; mon and several proprietary systems also exist. Nagios can tell you if a service is running or not (though it doesn't handle the start/stop logic).

These are small details and extensions on top of the existing SysV init foundation.

Ubuntu's boot time is already down to 8.6 seconds -- a restore from suspend is barely less than that (and restore from disk is considerably longer), though both restores preserve user state. You know, what applications / files you had open, and what was in them when you left off, positions of windows on your desktop. All that jazz. http://www.jamesward.com/2010/09/08/ubuntu-10-10-boots-in-8-...

The socket management is kind of nifty, but doesn't add a whole lot that xinetd didn't already offer (systemd does allow multi-socket services and d-bus-initiated services). I'm not convinced these couldn't be hacked into xinetd while preserving the simplicity and stability of init.
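
For comparison, on-demand activation of a service under xinetd is a short stanza like this (illustrative, inetd-style per-connection spawning):

    # /etc/xinetd.d/ssh
    service ssh
    {
        socket_type = stream
        protocol    = tcp
        wait        = no
        user        = root
        server      = /usr/sbin/sshd
        server_args = -i
        disable     = no
    }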

My desktop state (and its preservation) is worth a lot more than fast boot.

Yes. I've heard of inane gratuitous questions. As I said: if you're forcing average users to reboot with any frequency, you're Doing It Wrong.


No, monit doesn't manage services. Monit tries to follow clues you've given it about what's running, it polls them once in a while, and if something appears to be not running (as measured by the instructions you've given it), it runs the one-liner you've given it that should start the thing up again.

Monit does a thing that approximates managing a process, for certain values of "approximates", "managing", and "process". Supervisory process management is one of Linux's absolute weakest points. I cut my teeth on fault-tolerant HA minicomputers, and it pains me to think that 30 years later, we still don't have a way to say "make sure apache is always running. period."

As a great blog pointed out, there is exactly one process that KNOWS when a service has stopped running, and it doesn't need .pid files or polling or anything else to tell it: process 1.

I'm not a systemd advocate - I don't know enough about it, and we're using Ubuntu so I'll end up learning upstart anyway - but read this, it's way more eloquent than I can be:

http://dustin.github.com/2010/02/28/running-processes.html


Fair points. And thanks, by the way, for actually advancing the discussion.

Init can and does manage processes. Somewhat crudely, mostly via the 'respawn' directive. One thing it isn't particularly good at is telling if a process is doing something useful (say, serving out web pages successfully), but it will let you know that it's running. There was a semi-popular hack some years back to run sshd out of init (via respawn) to ensure you always had an SSH daemon on your box (Dustin mentions this). The downside is that while it will ensure sshd is running, it doesn't give you much flexibility over the process (you've got to edit inittab and 'init q' to make changes).
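
The hack in question is a one-line inittab entry (a sketch; the id field and runlevels are arbitrary, and sshd needs -D so it stays in the foreground where init can track it):

    # /etc/inittab
    ss:2345:respawn:/usr/sbin/sshd -D
    # then "init q" (or telinit q) to make init re-read inittab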

What monit and kin can do, above and beyond process-level monitoring, is check that the service attributes of a process are sane. That a webserver, say, kicks out a 200 OK response rather than a 4## or 5## error, and restart the service if this isn't the case. Checking for correct operation can be more useful than simply verifying a process is running (though going too far overboard in defining "correctness" can also cause problems).
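
A typical monit stanza for that kind of service-level check looks something like this (paths illustrative):

    check process apache with pidfile /var/run/apache2.pid
      start program = "/etc/init.d/apache2 start"
      stop  program = "/etc/init.d/apache2 stop"
      # restart if the server stops answering HTTP on port 80
      if failed port 80 protocol http then restart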

For realtime/HA tools, attacking things on the single-system level is probably the wrong way to roll. You want a load balancer in front of multiple hosts with response detection -- is host A still up or not? Whether or not this ties into mitigation (restart) or alerting (notifications to staff) is another matter.

There are also places other than init you can watch things from. /proc contains within it multitudes, including a lot of interesting/useful process state. Daemons can be written with control/monitoring sockets instrumented directly into themselves. Debuggers, strace, ltrace, dtrace, and systemtap all provide resolution inside a running process/thread. Creating something sane, effective, efficient, and sufficient out of all these tools ... interesting problem.


>Ubuntu's boot time is already down to 8.6 seconds

Well, Ubuntu doesn't use sysvinit either.


How long does it take your servers to finish POST? Shaving CPU cycles on boot is not something I ever worry about, because just getting to the boot loader takes minutes.

Also, shell scripts rock for an init system language. It's a language that almost everyone knows and can debug without being a CS major. The only reason you 'have no idea what happened' is because the scripts are written poorly, and code in any language would be hard to debug if it's written poorly.

Fork and exec, seriously? You're worried about functions that take microseconds to finish? Look again - the huge sleep cycles to wait for drivers to finish initializing takes up a lot more time.

I have written my own init systems three times in three languages, and examined countless distros' versions. Trust me, shell is the best compromise.


I have a Debian laptop.

ls -alh | wc -l returns 89. I can subtract "..", ".", and the "total" line, so that's 86 init scripts.

Big O for 86 scripts is 86 * n, which simplifies to "n". I'm not concerned.


'ls -A | wc -l' will spare you having to account for the '.' and '..' lines. Omitting the '-l' (redundant for your case) also spares the "totals" line.


"Oh, and because each script runs in less than an hour, we can just say n = 1 hour, and that's a constant, so it simplifies down to instant!"

Yeah, that's not how computational complexity works.


You're going to bash systemd but give a pass to upstart? Give me a break. Upstart is just as radical a departure from SysV init as systemd is, but the documentation (and IMHO, features) is much poorer.

As far as I can see your arguments are 1) you boot your systems infrequently, so any work in that area isn't valuable 2) socket-based activation is somehow not predictable 3) you're familiar with shell scripting, so a change that replaces shell scripting with something else must be bad. 4) and then you throw in some unclear Red Hat FUD for no apparent reason. None of those sound convincing to me.


How often do you reboot your systems?

What is giving rise to the need to reboot them when you do?


Boot times is one thing.

Upstart and systemd provide tons and tons of other features though. Restarting of crashed processes, dependencies, etc. They also generally have much simpler config files instead of startup scripts. I don't know how many crappy startup scripts I've seen over the years, when in practice "set these environment variables, execute this program as this user with these arguments" covers 95+% of what's needed.
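
That common case is pretty much the whole unit file (a hypothetical example, not any particular package's):

    # mydaemon.service
    [Unit]
    Description=My daemon

    [Service]
    Environment=FOO=bar
    User=mydaemon
    ExecStart=/usr/bin/mydaemon --some-flag
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target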

Much, much more straightforward to have some specially formatted comments (?! hahaha, that's the UNIX spirit!) to determine the boot priority, and then source some files to read some arbitrary variables and construct the command line that you're interested in running, with complete abstraction.


I mention boot time because it's what's pointed at specifically by Poettering in his arguments for systemd as its core benefit: http://0pointer.de/blog/projects/systemd.html

The other functionality may be nice, but 1) it's got no place in init and 2) it really complicates a key piece of system infrastructure. Complexity and change are the twin enemies of stability. As an old-fart ops type, with scars on my hide and notches on my belt, I really hate both change and complexity. They mess with my nines.

Arch and Fedora are relatively wide of my usual ambit, but I've learned in my years to be wary of what others ask for -- you may get it and have to live with the consequences (see: GNOME).

So. Yeah, I'm pretty skeptical.


One great benefit to stability is the number of users. Sysvinit was different in every distribution. With systemd, almost everything is shared.

This results in way more users and developers looking at systemd. As a result, fewer bugs.



My computer changes location at least twice a week as I commute between my place and my girlfriend's place. A Mac mini serves my needs very well because (including AC adapter) it weighs only 2.7 pounds, and I really appreciate having more ports than most laptops have and not having to pay for and carry around a bad keyboard and a laptop display. (I consider all laptop keyboards bad keyboards, and -- maybe because I am "far-sighted" -- much prefer my girlfriend's 32-inch TV to any laptop display.)

But since the Mac mini does not have a battery, S3 sleep mode does not survive unplugging the device. And since suspend-to-disk is not supported by the OS I run, shutting down is the only option.

P.S., I would have preferred something like a Mac mini, but with a small battery that powers S3 sleep mode. Sadly, I could not find anything like that on the market.

P.P.S., I run OS X on it. If I were to switch to Linux, would suspend-to-disk work reliably?


Well ... you're not running Linux, so systemd is moot (you've got launchd instead, which has certain similarities).

I'm a fan of small form-factor systems, though I suspect we'll start seeing these as G3 tablets (where the iPad was G1, and the current Android-and-others are G2). Which is to say, devices with integrated display and battery, to which other peripherals may be attached (physically or wirelessly, say, by Bluetooth). That said, we're not there yet.

And yes, small form-factor PCs (CPU, no battery, no display) are pretty slick. I'm something of a fan of the FitPC offerings: http://www.fit-pc.com/web/purchase/order-direct-fit-pc3/ (Googling "small form factor" will show you numerous other vendors).

I used a similar configuration under Linux for a time, and as of the mid-2000s found suspend-to-disk worked pretty reliably, though not perfectly. In the past 4-5 years on laptops and desktops, I've had very few problems, mostly traceable to display drivers.


>In the past 4-5 years on laptops and desktops, I've had very few problems [with suspend-to-disk], mostly traceable to display drivers.

Thanks.


You can do what you want. man pmset, or Google "hibernatemode".


Have you tried Deep Sleep for hibernating your Mac?

http://deepsleep.free.fr/


Just because you don't reboot often does not mean it has to be slow. In the open source community, people choose their own projects. You can't really expect, or force for that matter, them to work on your favorite things.


Except in practice, systemd works fantastically and you're all worried about nothing.

Systemd also does much more than that and handles stuff like daemonization and socket creation, so that these things don't need to be re-implemented in every program that requires them.

Bash scripts are overly verbose, repetitive, and awkward in comparison to unit files.

And you can always use sysvinit if you still aren't convinced; it's just that Arch will be optimised for systemd.


...except that those things do still need to be re-implemented in every program that requires them, since most POSIXy programs are portable to more than just systems using systemd.


This being my primary problem with the current upheaval in Linux system organization. Its instigators have mostly made it clear that they consider everything not Linux (or possibly preferably not their favorite flavor thereof) to be obsolete - throwing portability out the window.

Frankly, it's getting old.


I fail to see the problem. Either something is portable, or something is not. It is basically up to the developer.

If not being portable means way less time spent on development, then some people might choose that. Good for them.


Yeah, but then when developers on OSes other than GNU/Linux take the same attitude, they get bashed to death for not caring about portable software.


do they? do you have any examples?


Microsoft.

Apple.

IBM.

DEC. Oh, wait, that didn't work out so well, now did it?


https://bugzilla.redhat.com/show_bug.cgi?id=708572

Software has bugs.

Core, deep systems software has subtle bugs, or hidden bugs, or emergent bugs, or any of a whole host of things.

If Arch and Fedora want to ride this tiger, I guess they can.

Again: init is really, really stable stuff.

Add in hooks to journald, d-bus, and the equivalent of an xinetd replacement/upgrade. Too much change.

And a Really Bad Attitude from the developer. My experience (a few decades of beating around on various tech at various scales) says this doesn't bode well.


Your "much written better than I can" article is about Upstart, not Systemd. They're unrelated init daemons, and it seems like many of the complaints in the article relate specifically to upstart (event-triggered services with no dependencies, minimal scripting support, killing daemons, not all daemons managable by upstart) but do NOT apply to systemd. Systemd has service dependencies, support for old-style init scripts, and can stop daemons with arbitrary commands. Are you confusing the two init systems?

On the other hand, this would be nice: "There is no tool that will print out a dependency map." It's also pretty trivial to implement with a little shell script and graphviz.


> There is no tool that will print out a dependency map.

systemctl dot | dot -Tsvg > systemd.svg


Nice, here's the result on Mageia 2:

http://frammish.org/systemd.svg


I did a PNG of the Fedora boot process and it's 28 MB and 27852x4091 pixels :-(


And the article I had in mind to post:

Juliusz Chroboczek: A few observations about systemd http://lwn.net/Articles/453004/

Editorial / discussion at LWN: http://lwn.net/Articles/452865/

And for the record: I'm not particularly much a fan of upstart, but it annoys me somewhat less than systemd.


Point. I'd linked that off another discussion I'd had some months ago and failed to re-read it with full_comprehension bit set.


I work on embedded linux boxes in vehicles and boot times are hugely important to us. The time from when someone turns on their car ignition to the time when our box is usable is critical.

I understand that this feature isn't important to dredmorbius, but to some of us this type of improvement is fantastic.


Any reason you can't just sleep/suspend your system at ignition-off? I can see that there might be times when the system does go down hard and you've got no option but to reboot, but still, that should be rare.

I work with a fair number of embedded systems myself. Most avoid full boots where possible.


Depending on the app, sleep may be possible. But for some apps, like automotive, it is not surprising to see requirements like current draw being less than one milliamp in the sleep state.

I haven't seen or heard of any embedded apps which hibernate to flash. That would not be terribly fast, and would wear out the flash quickly.


That could raise some interesting engineering/maintenance considerations.

A local capacitor might provide the latent power to support sleep state. Or you could provision flash with enough ECC and reserve capacity (a 16 GB microSD drive fits on my pinkie nail) to survive years. Might even make swapping the storage a regular maintenance item, say 5-year cycle. Figure a high-end duty-cycle of 10 starts/day, 365 days/year -- that's 3650 read/write cycles a year. Even if that's a 100x low estimate, we're talking 365,000 cycles/year (that's assuming 1000 starts/day). As of 2003, AMD were discussing 1,000,000 cycle lifetimes for flash storage: http://www.spansion.com/Support/Application%20Notes/AMD%20DL...

Actually, in five years, controller technology would likely advance enough that, provided your unit production count is high enough, you'd just swap the entire controller for a new component with enhanced capabilities.


You wouldn't run Arch on an embedded system though. At least, I hope you wouldn't.


> Systemd loses the huge transitivity of shell scripting, and puts you in the position of needing to acquire a novel skill at the one time you least need to be learning and most need to be applying: when your systems won't boot straight:

What about those that can't debug when a shell script breaks?

Your answer is going to be that they have no business administering a server where a shell script is an integral part of the system working.

Conversely, someone that can't debug when a system can't start that uses systemd has no business administering a server where systemd is an integral part of the system working. If your system uses systemd, then you're going to need to learn a new tool. Get used to it.


If they can't debug it, they can find someone who can, and that skillset is, I can guarantee you, going to be far more widely available than Systemd debugging fu.

On which point, specifically: when Debian breaks during initrd execution, the system is dumped to a shell, "dash", a POSIX-compliant shell. It doesn't have all the niceties of bash, but it's usable.

When a Red Hat system breaks during initrd execution, the system shell doesn't handle terminal IO. You literally can't even fucking talk to the damned thing. It's a scripting-only shell.

The kicker: the RHEL initrd shell is larger than dash.

Guess which of these two systems is easier to troubleshoot / debug / rescue in a pinch?


    the system shell doesn't handle terminal IO
Does this help? http://en.gentoo-wiki.com/wiki/Initramfs#Job_Control

    the RHEL initrd
You haven't seen how this epic engineering artifact of wheel reinvention explodes in your face yet. See http://lwn.net/Articles/506842/ Now try to debug it with rd.debug and you will have debug info printed for the debug-info-printing functions.


If you want bash instead of dash:

    dpkg-reconfigure -p low dash
It's that easy to get bash back...


I can live with dash. It's living without terminal IO that sort of puts a damper on things.


> What about those that can't debug when a shell script breaks?

Are they more able to debug when a systemd setup breaks? If not, it seems like a moot point to bring up. They're hosed either way.

Although I do have to say, I like the systemd model. The use of sockets to do process activation and thus doing away with almost all of the need for dependency management is pretty cool. I haven't used it enough to pass judgement, but the concept has the potential to be a good deal simpler than the init hackery we have now.


How is it a moot point? If you are equally unable to debug it either way then dredmorbius' original argument about the failure point being a bad time to learn to debug is completely meaningless. The point is that either way you're going to have to learn how to fix problems before they happen but dredmorbius is complaining because ey just happens to already know how to debug one form.


> Are they more able to debug when a systemd setup breaks?

Yes, because no programming skills are necessary for editing systemd unit files.


While I understand why you don't want to boot all the time, some of us do.

Here are some reasons:

* Despite years of work on it, I find sleeping on laptops on linux is still flakey. My Thinkpad T420s failed to wake up about once a week (on Ubuntu), so I tend to shut down.

* I like having a clean desktop when I start on a morning. If I keep sleeping my machine, I just tend to gather up programs. Of course, you could argue I should get more sorted, but I don't really want to.

* One other problem you have is to do with Linux being used on both servers and desktops. I can see your problem. Personally, if my machine ever got in such a mess that it couldn't boot, I'd just reinstall, regardless of what had broken. I suspect most people are the same. However, I can understand if you want to be able to edit how your machine starts up, and fix it when it breaks.


1. File a bug report. As I said: if you want faster boots, boot less. We should be fixing the problems (like hibernate/restore flakiness) that cause people to reboot. Or the long-term power draw that forces embedded devices to power off. Or the flash read/write duty cycle limitations that limit embedded devices' ability to save state / the rate at which they can save/restore data. Etc.

2. You can bounce your X session. No need to reboot the full box (me? I prefer saved state).

3. My servers may be anywhere from several feet from me (stuffed into a closet with limited access and a crap POS keyboard and monitor) to tens to thousands of miles away. With varying values of ILOM / remote hands / virtual media support. "Reinstall" isn't generally a highly tenable operation. Being able to handle issues without having to dedicate one or more staff days to travel and unavailability for other tasks really sucks productivity down.


Is any of this really an argument against Arch adopting it? Arch and Gentoo and all the other rolling distros are the cowboy distros (I run Arch on my laptop). Literally anything can break with an update on Arch. Is Arch not the best place to try new ideas and see if they work out, with a bunch of fairly technical users? I'm not about to switch all my servers to it, but I'm more than willing to play around with it on my laptop.


Just for the record, if you leave keywords at their default setting rather than ~arch, you can run a perfectly stable system under Gentoo.


> Literally anything can break with an update on Arch.

I can confirm that. After the 3rd time such a thing happened to me I switched to Ubuntu. Arch is a testing bed for people who like screwing around with Linux. Nothing against that but from time to time I'd like to be able to do actual work on my workstation :)


Shell scripts are awful for boot. They have no expression of a dependency graph and truly pathetic notions of state. Hell, starting a process in the background takes ACTUAL THOUGHT in a shell script. How insane is that? systemd may not be as thoroughly tested but at least it's designed thoughtfully and will eventually be more reliable. For now, let it be relegated to arch and let them test it. What are you even doing with arch on a server anyway?


> For now, let it be relegated to arch and let them test it.

FWIW, Fedora moved to systemd a year or so ago.


insserv adds the dependency graph you're looking for.

Standard on wheezy. Allows for parallel launch of services.

I'll spend more time in hardware init (especially on servers) and fsck (even just journal replays) than in service startups, for the most part. Even my servers (minimal services starting) take a while to come alive, mostly due to the actual workload stack coming up. Then caches get to warm up and all that jazz.

Boot time is still a very small part of this.


> And the stated objective? "To reduce boot times".

boot times is only the third point mentioned:

Systemd has an overall better design than SysV, lots of useful administrative features, and provides quicker boot up


Dependency problems on Red Hat? News to me, and I admin various RHEL servers.

I do know that there were issues maybe 10+ years ago. Bringing things up that were solved 10+ years ago is a bit pointless.

Also, your systemd summary is inaccurate. You give the impression you just don't like Red Hat, because you don't really say anything concrete about systemd aside from some very generic remarks.


> I'm also not much surprised that Red Hat, who've had such a historic problem with consistency and reliable dependency management within their packaging system (as compared to Debian/Ubuntu)

Yawn, this again. Give an example please.


I think "historic" is important here. RPMs were still hard to use (mainly dependency hell) for a while after .debs were easy. This hasn't been true for years though.


I don't think it's relevant at all.


> On server hardware, I perform boots infrequently, and really, really, really want them to work right.

My philosophy is different. Make a change to a server, reboot.

The goal is to eliminate surprises if the server is restarted unexpectedly. I'd rather have them during the maintenance window than at 03:30 after a power outage.

Anyway - to systemd.

I was appalled when we moved up to Solaris 10 and the SMF facility started to replace init scripts. It felt wrong.

I adapted. It's not wrong, it's just different. Better in some respects: you can still use bash scripts, but you have better control over them, a standardized way of managing things.

Now we're abandoning Solaris for Linux and ... I'm appalled that the "linux" default method is still ... init scripts. And a hodge-podge of stuff like djb, systemd, etc, all with competing fanboys and advocates.


I'm a big fan of knowing my systems will initialize properly as well.

AT&T ran into a little restart issue, as I recall, in 1990 when a software upgrade gone wrong crashed much of the phone network. Among the problems were that most of the switches had been upgraded in place, many over decades, and there had never been a cold-boot restart. There was some uncertainty as to whether the system would start up properly or not.

While long uptimes are nice, I generally prefer seeing a few reboots annually just to be sure things will come up right. There's a balance between "restart for every change" and "restart regularly enough to not be surprised at 3am".

http://catless.ncl.ac.uk/Risks/9.62.html#subj2


developers boot more often than normal users


I've been through phases where I've done frequent reboots. As noted above, hardware POSTs are usually the bulk of the cycle. There used to be other annoyances (Sendmail's 2-3 minute timeout on non-networked hosts was a real PITA).

Now it's other stuff. With chef and automated system management, there are repeated 'apt-get update' runs, which even with local caches and other tricks add about 120-150s per startup.


The opposition that is seen from some people to moving away from SysV style init is amazing to me. I don't have much experience with systemd, but upstart has been a refreshing change from the old shell scripts.

Having been both on the packaging side, as well as the admin side, I can't imagine not abandoning the daemonize-and-PID-file paradigm. The number of packages I've seen that have init scripts that don't properly stop or start the daemon, or don't check the pid file and/or subsys lock file; or daemons that don't properly chdir, or don't release an errant file descriptor, make me want to scream. Not to mention the process monitoring and full lifecycle management; it just seems like a no-brainer decision.

There's a lot of noise coming from people saying that their laptop doesn't need it, booting isn't that slow, etc; the driving force isn't targeting laptops/desktop, it's targeting the largest use of Linux -- servers. The process management is the big win, and boot time is just a bonus.


On every server I've seen, firmware initialization, generally networking and storage controllers, takes far more time than the OS boot sequence. To the tune of 4-5 minutes for hardware vs. ~1-2 minutes (or less) from kernel boot to console login.

At which point actual workload stack initialization (webserver, application server, database, caches) generally takes additional time. Depending on where you're starting from, a few seconds to many minutes or hours (DB init/restore/replication from snapshots/backups/master).

Again: the few seconds you're going to save swapping out really stable infrastructure 1) isn't the problem and 2) introduces change and complexity (and hence uncertainty and unreliability) to a very critical system component.


Did you actually read the comment you replied to, or do you keep copying and pasting your strawman argument blindly in this thread? If you did read it, which part of "The process management is the big win, and boot time is just a bonus" was unclear?


1: boot time is Poettering's own argument in favor: http://0pointer.de/blog/projects/systemd.html

If the systemd team wants to drag goalposts all across the field, that's fine. I'm just going to note their original location.

If you want to build a better xinetd, or a better SysV-init-based dependency system (insserv), or an alternative (upstart), then do it. OK, upstart also fucks with init, but with a lot less whack than systemd.

As to the "I've seen poorly written init scripts": on my distro of choice (Debian), package maintainers do a very good job of providing sane scripts (which are a lot easier to follow than RH scripts, something I noticed when first cutting over to Debian), in part because the distro provides a solid, 18-years of evolution, SysV init based process, and a policy that tends to iron out occasional bouts of dumpth.


I agree, systemd seems to be awesome for embedded computers and servers while just a minor improvement for the desktop. I could care less about simplifying process management for my desktop. I do not manage the processes on my desktop, and do not write my own init scripts there either.

I believe though that systemd is actually targeted at the desktop. :)


systemd is so obviously better than anything out there, I'm surprised there is any controversy. I've yet to see a valid complaint.

"Poettoering sucks! PulseAudio!" That's not much of a technical argument against systemd, now is it? Pretty much everyone who complains about PulseAudio doesn't even know what it is; they just blame it when their audio doesn't work (usually for some unrelated reason).

"It's not deterministic!" You're probably talking abot the socket activation. That part is pletny deterministic - a message comes in for a service, that service gets started. Are the messages coming in not deterministic enough for you? You can add your own unit files that starts the service at boot, and you can even control what starts before and after the service.

"Shell scripts are so simple!" You know what's simpler than a shell script? A unit file. They are also more consistent. The various shell scripts are all written by different package maintainers and are rediculously diverse. Some are full-featured init scripts that can send signals to the service to make it do stuff; others can't even restart the service. Unit files are so simple that it's pretty hard for any different styles to really matter.

Also, the method for enabling a service is different in all the distributions with shell scripts. Debian is update-rc.d; Red Hat is chkconfig; Arch is vi /etc/rc.conf. With everything moving to systemd, we finally have one way: systemctl.
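
For the record, that one way looks like this (service name illustrative):

    systemctl enable httpd.service    # start at boot
    systemctl start httpd.service     # start now
    systemctl status httpd.service    # is it running, and if not, why not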


It doesn't matter if systemd is written by Poettering or somebody else, and it also doesn't matter that we don't have shell scripts any more. The real reason systemd should never be accepted is that I need programs to configure or talk to other programs. I even need a special program to read logs. This is the replacement of files with APIs. It feels like an enterprise-class product, which means it sucks all the fun out of computing.


Technically, init is also a program that launches other programs with its own special config file; it's just so much more limited than systemd that it has to hand startup over to scripts.

You don't need a special program to read logs; you can use whatever syslog daemon you want; you just don't have to. I really enjoy the journal's ability to filter the logs precisely by all sorts of fields.

Actually, it's just the opposite of "replacement of files with APIs". A program's command line is an API that you call using a program; that program is being replaced by a file.

I don't think you've actually used systemd if you claim that it "sucks the fun out of computing". I find the "systemd-analyze" and "systemctl dot" tools to be a lot of fun.
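
A couple of concrete examples of what that filtering and analysis look like (unit name and PID are made up):

    journalctl _SYSTEMD_UNIT=sshd.service   # only this unit's log entries
    journalctl _PID=1234                    # only this process's entries
    systemd-analyze blame                   # which units took longest at boot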


The lack of hackability is not fun. And some strange new programs won't make up for it. Bash might indeed not be the perfect thing for an init system, but having no scripting capabilities at all forces systemd to implement everything hard-coded. What if that hard-coded blob lacks a feature I require? What if that hard-coded blob contains errors? What if that hard-coded blob contains security risks?

systemd opens a lot of doors for potential new errors. I agree sysvinit sucks, but worse is better in this case. Ideally an init system would be a lean and smart Turing-complete scripting language, with every feature implemented on top of it.


1) What's not hackable about C? 2) When was the last time you hacked on an init script?

> Ideally an init system would be a lean and smart turing complete scripting language and every feature is implemented on top of it.

You would probably really like NCD[1] as an init system. I was considering doing that in an embedded system I make until systemd came around.

1: http://code.google.com/p/badvpn/wiki/NCD


Hi, I'm the developer of NCD. I've experimented a little with using NCD as the init process, with some success. It's a very simple system now: http://code.google.com/p/ncdinit/ I think using NCD as init or otherwise makes a lot of sense in embedded systems, and with some work it could work for desktops and such too (consider adding services on the fly without reboot).


>1) What's not hackable about C?

Several tens of thousands of lines of C code are a lot less hackable than a few lines of shell script.

> 2) When was the last time you hacked on an init script?

A few months ago, writing an intelligent battery monitor for my notebook.

NCD looks great, btw, but it is not an init system.


It's not intended to be an init system, but it would make a damn fine one.


I've got a bash (erm, POSIX shell) interpreter on ALL my POSIX systems. I can guarantee you that.

C compiler? Not so much. Servers (security risk, more moving parts), embedded (space/power requirements).


Oh yeah. I'd totally managed to repress my memories of the systemd journal:

http://linuxkommando.com/2011/11/systemd-journal-will-revolu...

#whatcouldpossiblygowrong?


> I've yet to see a valid complaint.

Perhaps this is the main problem - the people pushing technology are not willing to acknowledge that people have different use cases to them. There are complaints, and they are valid - you just do not see them as valid, because they don't concern you.

Since you brought PulseAudio up, there's a very legitimate, common complaint about it - latency. Yes, the people who care most about audio - the creators and audiophiles - are pushed aside as "not relevant," because Lennart is only interested in "appealing to the majority," not some minority fringe groups.

Really, the only reasonable solution for some people is Jack - but any hope of bridging the gap between Jack and Pulse is lost in a barrage of people pushing for a "single audio solution" - and prematurely claiming Pulse as the victor.

As a result, people write applications to only use PulseAudio, which are then unusable to someone using Jack (aside from hacks to make pulse act as a jack client). Professional audio software is still basically required to code for both Jack and Pulse/ALSA or whatnot - we're nowhere near a single audio solution.

But don't tell that to Poettering - because he is adamant that his solution is the only solution, and anyone who it doesn't suit is just playing with toys.

That's literally how he referred to Debian's choice not to push systemd, because Debian targets multiple kernels, which systemd does not work with. (Which, btw, Arch does too - although perhaps not the same people.)

Have you still yet to see a valid complaint? Of course kFreeBSD is not a valid complaint to you, because you've never used it, and never plan to - it's of no concern for you.

If we only cared about what's popular, Linux would not be what it is now - and you'd be on Windows. Linux is not the end-all solution to every problem anyway, as other lesser popular kernels have some great technology in them which is lacking in Linux. It's the only solution for Red Hat though - so you can see why they're happy to push such agenda. If you're not a Red Hat customer, your opinion is invalid.

Arch was built with a different mentality - the one of personal freedom to do what the hell you want with your desktop, not for the benefit of some company. You are free to use or reject any software you don't want.

Well, not any more. Pushing systemd on users breaks that mentality - because the choice is stripped from you. The choice is already there in Arch though - and has been for a while. If you want systemd, you can use it. If you don't want it, don't bother. Moving the other way is not really possible though - because if you build your system around systemd, you can't revert back (without taking the time to rewrite everything that depends on systemd.)

The dependency problem in itself is a complaint against systemd. Should udev users be forced to use systemd for example? Normally we would introduce another layer of abstraction to our code - such that we can share common code between systemd-udev and non-systemd-udev, and have a solution where everyone wins - the systemd users benefit from improvements in systemd integration, whilst everyone benefits from improvements to udev which aren't systemd dependent. This is programming 101.

Well, not if you're the package maintainer and have more political motives. Any proposal to introduce such a split with common code will probably be met with: NO, we're not interested - It's too much work - It's pointless supporting non-systemd - Fork it.

A fork it will be - and because the fork will be much less popular - it will obviously be a toy.

Just to be clear - I'm not against systemd and Pulse from a technical point of view, and I can very clearly see the advantages they have over alternatives, for linux. I'm not really against fragmentation either.

What I am against though, is the politics of it. The constant pushing of systemd down everyone's throat like it's a fucking panacea. One day saying "don't use it if you don't want," and the next blogging "you're fucking idiots for not using it." (Lennart's approach to Ubuntu.)


This takes away no choice, I'm sure if you want to use old style init that'll be available too, you'll just sacrifice easier config, faster boot, better diagnostics, logging and profiling of boot.


The old init will still be available, but the point is that anything built to run on systemd only, will not be easy to use with older systems without effort to backport them. This isn't much of a problem if you do choose to use systemd though, because it's backward compatible with older scripts.


It would be awfully hard to make a service that didn't work without systemd. Your software shouldn't depend on systemd, just as it shouldn't depend on init.d, rc.d, or anything else unless it's a tool specifically for working with that (like chkconfig or update-rc.d).


> Since you brought PulseAudio up

I brought it up primarily as an example of what is NOT relevant to systemd.

> But don't tell that to Poettering - because he is adamant that his solution is the only solution, and anyone who it doesn't suit is just playing with toys.

That's not what he said about PulseAudio. He specifically mentions the need for the other APIs and that the situation sucks. http://youtu.be/9UnEV9SPuw8

> That's literally how he referred to Debian's choice not to push systemd, because Debian targets multiple kernels, which systemd does not work with. (Which, btw, Arch does too - although perhaps not the same people.)

The advantages the Linux kernel provides (particularly cgroups) are so useful that it would be stupid not to use them just because they don't exist everywhere. So either stick with your current init system, make a similar init system that is compatible with unit files (which are extremely simple), or add the necessary features to the BSD kernel. I'm not seeing a valid complaint here, because systemd doesn't affect you unless you want it to.

>Arch was built with a different mentality - the one of personal freedom to do what the hell you want with your desktop, not for the benefit of some company. You are free to use or reject any software you don't want.

> Well, not any more. Pushing systemd on users breaks that mentality - because the choice is stripped from you. The choice is already there in Arch though - and has been for a while. If you want systemd, you can use it. If you don't want it, don't bother. Moving the other way is not really possible though - because if you build your system around systemd, you can't revert back (without taking the time to rewrite everything that depends on systemd.)

There is no "everything that depends on systemd" other than the init process itself. You say this choice is stripped from you, but what you want is to force the Arch developers to maintain init scripts. Take responsibility for it yourself - get together with other people who want those init scripts maintained and maintain them. You won't get things you want by just demanding that the world give it to you.

> The dependency problem in itself is a complaint against systemd. Should udev users be forced to use systemd for example? Normally we would introduce another layer of abstraction to our code - such that we can share common code between systemd-udev and non-systemd-udev, and have a solution where everyone wins - the systemd users benefit from improvements in systemd integration, whilst everyone benefits from improvements to udev which aren't systemd dependent. This is programming 101.

Where are these systemd-dependent parts of udev? They don't exist. There is no need for an abstraction layer to deal with the differences because there are no differences from a user perspective.

>What I am against though, is the politics of it. The constant pushing of systemd down everyone's throat like it's a fucking panacea.

Really, you're more political than anyone I've seen on the pro-systemd side of things.

>One day saying "don't use if if you don't want," and the next blogging "you're fucking idiots for not using it." (Lennart's approach to Ubuntu.)

Are people not allowed to express an opinion?


Well put.


I've been running systemd on my computers for a few months now, and it's great. Dramatically accelerates my boot times, easy to configure, and easy to use. The only problem I've ever had is when a daemon doesn't include a systemd unit file, and I imagine official migration will include resolving all the remaining cases of that.


systemd supports standard init scripts as well, allowing an incremental migration.


In most cases the needed file is so simple that it's an easy patch to send to the author or package maintainer.
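
To give a sense of how small that patch usually is (the names and paths here are made up, not from any real package), a complete unit file for a simple foreground daemon can be as little as:

  # /etc/systemd/system/mydaemon.service  (hypothetical example)
  [Unit]
  Description=My example daemon
  After=network.target

  [Service]
  ExecStart=/usr/local/bin/mydaemon --no-fork
  Restart=on-failure

  [Install]
  WantedBy=multi-user.target

Then systemctl enable mydaemon.service and systemctl start mydaemon.service, and that's the whole port.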


There's a whole bunch of tools groping awkwardly in a single direction here:

1. Give a graph to the computer

2. The computer makes the graph a reality

http://chester.id.au/2012/06/27/a-not-sobrief-aside-on-reign...

Puppet, Chef, Cfengine come at it from an on-disk direction.

Upstart, SMF, systemd, launchd come at it from a runtime direction.

They're still talking past each other. And it's annoying.

What I would really like is a system that does both as first-class citizens. I may be waiting a while.


  There's a whole bunch of tools groping awkwardly in a single direction here:
    1. Give a graph to the computer
    2. The computer makes the graph a reality
The good ol' fashioned make tool does exactly this and, in the spirit of Unix, pretty much nothing else.

The problem is in determining whether the dependencies of a node in the graph are satisfied. Make does this by comparing file times. All the other 'graph resolving systems' you mentioned do something different. Obviously the solution is to abstract the dependency test from the graph itself. Then all these systems become pretty much the same.
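
A toy Makefile makes the pattern obvious (the file names are invented for the example): the rules are the graph, and mtime comparison is the test for whether a node is satisfied:

  # make re-runs the recipe only when a prerequisite is newer than the target;
  # note the recipe line must be indented with a tab
  site.tar.gz: index.html style.css
  	tar czf site.tar.gz index.html style.css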


> The good ol' fashioned make tool does exactly this and, in the spirit of Unix, pretty much nothing else.

Make still requires a manual step. I'm thinking of systems that do this themselves: active control systems, constantly comparing the state of the world to the reference graph.

I don't want the unix tradition. The unix tradition is a pain in the rear end to actually administer. I want a single management framework with a single DSL that does it all.


I don't really understand the problems these "init" replacements wish to solve. I mean, actual, real-world problems. To me, it's just another instance of the same worrisome trend that brought us NetworkManager and makes everything evolve to integrate with D-Bus: do everything to make desktop-Linux better, even if it negatively impacts the use-cases where Linux is actually successful (everything else).

I don't care much for the desktop, but I do care for servers.

Boot times on servers are irrelevant. Reducing them brings little benefit. Servers shouldn't have to be rebooted often enough for it to matter, and besides, servers spend most of their boot time POSTing and initializing firmware for the various cards - in many cases, twice as long as it takes to boot a normal SysV init Linux install.

Not only that, but it is much more important to be able to effectively troubleshoot boot problems than to get some dubious features that nobody really felt missing for all these decades.

I don't care if Fedora or Arch do this, but I do care if more server-oriented distributions do. I still haven't gotten over the fact that RHEL6 now gives you the option of using NetworkManager (bleh) or configuring interfaces through (badly designed) configuration files. What's wrong with the old system-config-network?


I have little experience with servers, but... I must agree with your sentiment. One of the great things about Linux is how simple things are. GUIs are great, but not if I have to start a desktop just to get a network connection!


It's this kind of complaint that suggests all the unease results from resistance to learning new systems, rather than from any actual flaw in those systems. NetworkManager operates headlessly, and can be manipulated just fine with nmcli.
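
For example (the connection name is a placeholder, and this is the nmcli syntax from the NetworkManager 0.9 series, so treat it as a sketch):

  nmcli dev status                   # list devices and their state
  nmcli con up id "office-ethernet"  # bring up a saved connection by name
  nmcli nm wifi off                  # toggle the wifi radio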


It's too bad that opposition to changes like this is often expressed so trollishly. There's a real tradeoff here (of simplicity and of portability) that gets glossed over.


It's certainly true that the "minimal summary" of the features of systemd vs. sysvinit (or Busybox, or a BSD init script, which is even simpler) is hugely skewed against systemd. Understanding how systemd's process tracking works requires understanding cgroups, for example. Things that used to be one liners in a script ("mount hugetlbfs -t hugetlbfs /dev/hugepages") suddenly become first class "mount" objects with status. Systemd introduces a bunch of new jargon and new tools.

But at the same time systemd really does do much more than a classic "launch some stuff and call wait()" style init. And that stuff is pretty nice -- in a systemd world, no one needs to worry about writing a "daemon" any more. Any program that sits in a loop writing to standard output can be started, stopped and syslog'd.

And making this work isn't bad at all. You configure systemd with straightforward .ini file syntax and clear fields (e.g. "ExecStart=/path/to/my/program").
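
The hugetlbfs one-liner above is the same story: as a first-class mount object it becomes a short unit file. This sketch approximates the dev-hugepages.mount unit systemd ships with, so treat the exact contents as illustrative:

  # dev-hugepages.mount (illustrative sketch)
  [Unit]
  Description=Huge Pages File System

  [Mount]
  What=hugetlbfs
  Where=/dev/hugepages
  Type=hugetlbfs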

Basically it's complicated in structure (and the task of porting a whole distro to it strikes me as pretty scary) but simple in interface, and that's pretty much the right place to be. Most "systemd is too hard!" rants don't survive long past the initial implementation phase.


> Basically it's complicated in structure [...] but simple in interface, and that's pretty much the right place to be.

Thanks for posting this, this is a great analysis. It gets right to the heart of many of the disagreements I have with system design orthodoxy.

I think it's exactly the wrong place to be.


The trade-off isn't simplicity, it's familiarity. The shell-based init system is a stunningly complex series of shell scripts that keep implementing the same basic functionality over and over.


See here the discussion of systemd vs. upstart (the default on Ubuntu):

http://unix.stackexchange.com/questions/5877/what-are-the-pr...

And here are the design docs for systemd:

http://0pointer.de/blog/projects/systemd.html

Overall I think systemd is a much better implementation of the same goal. Some distros have already switched from upstart to it - not that upstart isn't an extremely popular choice as well.


Allan McRae had an interesting, relevant post recently - "Are We Removing What Defines Arch Linux?"

http://allanmcrae.com/2012/08/are-we-removing-what-defines-a...


Here's the (much more readable) mailing list in gmane: http://news.gmane.org/gmane.linux.arch.devel

Currently (Friday 3:48 Pacific time), this thread is at the top.

Edit: Lots of +1s in the early thread messages; some elaboration surrounding migration issues in later emails.


In my biased opinion, once you have gotten over the learning curve, nothing beats daemontools for running services. It is a fantastic set of tools. Why some OS doesn't just embrace djbware I'll never understand. It compiles, smoothly, in seconds. (There's no need for distributing binaries.) And the chances of the author initiating lawsuits (as some Linux foundations are known to do) over something placed in the "public domain" are close to nil. He's got better things to do.
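
For anyone who hasn't used it: a daemontools service is just a directory containing an executable run script, something like (the daemon and paths here are placeholders):

  #!/bin/sh
  # /service/mydaemon/run - supervise execs this, and restarts it if it dies
  exec 2>&1
  exec /usr/local/bin/mydaemon

Then svstat /service/mydaemon shows its status and svc -t /service/mydaemon restarts it.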

BSD's rc system is fine. Sometimes the scripts are too verbose, but the whole idea is that the system is simple enough to understand that you can write your own scripts - more concisely, if you wish. You don't need to read a book (e.g. Linux From Scratch): keep most things disabled by default and let the user turn stuff on as they need it.

I recently used Debian's live USB, the rescue version, for a little while and was amazed at how much stuff is turned on by default. I guess if you understand each and every choice that's been made for you it's OK. But if not, that approach is not very conducive to learning.

As for Apple, never mind all the XML fluff, good luck trying to understand what's going on behind the scenes with their computers anymore. They can't even manage to let you have an nsswitch.conf or equivalent.


Debian (and other apt-based systems) are the Lego-block systems of the Linux world. If you install a service, the assumption is that you want it to run (if you don't want it to run, you can either uninstall it or deactivate it). Bootable / live versions tend to have more comprehensive lists of installed packages to allow for greater utility/flexibility -- though some (Knoppix) actually allow you to install additional packages (yes, even when booted RAM-only) into the booted system.

This is not the case on BSD systems (generally an integrated whole, though they've got package management) or RPM-based systems (poorer package management leading very frequently to a "kitchen sink" installation paradigm).

Yeah. RHEL's even got a package you can install to enable/disable postfix vs ... oh, whatever the default MTA is, I can't keep track (smail still? I know they've moved off of sendmail, right? Right?).


> And the chances of the author initiating lawsuits (as some Linux foundations are known to do), over something placed in the "public domain" are close to nil.

Could you give some examples of those lawsuits?


The link is to a proposal, rather than an announcement, but the replies seem very positive, which bodes well.


A proposal followed by a discussion without any opposition to the proposal, and then a decision.

Announcement will surely be made shortly :)


Which resulted in a discussion on a different thread? http://mailman.archlinux.org/pipermail/arch-dev-public/2012-...


They should just migrate to OpenRC; I heard Debian might actually do that. It would be their second nice choice this month, after switching to Xfce as the default DE.



