Debian GNU/Hurd Status Update [pdf] (debian.org)
121 points by Tsiolkovsky on Aug 25, 2015 | 54 comments



The preliminary rump kernel support that arrived just a bit more than a week ago is promising, if still quite rudimentary. That said, a patched MPlayer linking to rump libraries has been able to play OGG files on a Hurd system: https://lists.gnu.org/archive/html/bug-hurd/2015-08/msg00027...

Though obviously in the very initial stages, the implications of this development taken to its end could be enormous. Have the GNU Hurd implement translator or server interfaces to the rump drivers, plug it on top of an immutable package and configuration management suite like Guix (actually already being done), and for the first time in history you have a complete, general-purpose, microkernel-based, Unix-compatible (but beyond plain Unix) system that could rival the likes of GNU/Linux. The great thing about it is that the Hurd isn't afraid to break POSIX semantics where beneficial. This gives it, among other things, processes running under multiple uids, true Plan 9-style namespacing, unprivileged mounts, token-based authentication, and so forth.
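
To illustrate the multiple-uids bit: if memory serves, glibc on the Hurd exposes a geteuids() call via <hurd.h> that returns the whole set of effective uids a process holds (which can be empty, one, or several). A rough, untested sketch:

  #include <hurd.h>    /* declares geteuids(), if memory serves */
  #include <stdio.h>

  int main (void)
  {
    uid_t uids[32];
    /* Unlike POSIX geteuid(), this returns the whole set of effective uids
       the process currently holds -- it can be empty or contain several. */
    int n = geteuids (32, uids);
    if (n > 32)
      n = 32;
    for (int i = 0; i < n; i++)
      printf ("euid[%d] = %d\n", i, (int) uids[i]);
    return 0;
  }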

I've been playing with it on and off (including cross-toolchains) and pondering Mach kernel emulation on Linux. A pure userland server would be most desirable, but it faces real challenges: modeling port rights as file descriptors sent over ancillary data, reliably handling user-level page faults on a per-application basis, mapping VM regions across address space boundaries and properly emulating RPC. Kip Macy did successful work on porting OSF Mach from the MkLinux sources as a FreeBSD kernel module for use with launchd, XPC and other OS X components, so that might end up being a concession I'll look into taking one day. I'm occupied with other projects in the meantime.

It'd be also funny to have Linux "eat" itself in such a manner by becoming a host for the Hurd. Poetic justice, I suppose. But I'll wait to see if the rump kernel stuff pans out first.

(MINIX 3 is great too, and a much better microkernel, but it's aimed more at reliability than at exposing end-user flexibility the way the Hurd does with translators. It also doesn't come close to Hurd's ~81% of the Debian archive building as-is, and the MINIX devs themselves are positioning it as an embedded platform rather than a general-purpose one.)


Guix is becoming increasingly ready for prime time. There's one big HPC install in Germany already running it. I think it's past the toy stage.

I guess Guix + Hurd is a great setup, with innovative features in the kernel and userland. Both very pure in their respective ways, with Guix having functional package management, and Hurd keeping the kernel minimal so that crashes or security breaches are not catastrophic.


Very cool to hear. Where can we read more about this? Or is this an insider comment?



davexunit, one day I will need to track you down in real life. I think we would geek out all day on Scheme.

I love when I see you and others mention your FRP Scheme in games work. Keep it up!


Thank you! I really appreciate the kind words. Happy hacking!


What's the difference between Nix and Guix anyway, and which is 'better'?


Nix is what Guix is based on. You can think of Guix as the 'GNU version' of Nix/NixOS, meaning it provides only free software packages, and IIRC it uses Guile Scheme, rather than the Nix expression language, to describe all the packages and OS configuration.

It's a toss-up as to which you like. I think Nix has a bigger community, more packages, etc. But Guix is, strictly speaking, something of a 'superset': changes from Nix often flow into Guix (and Guix can use Nix packages directly), since Guix is built directly on the same source code as Nix - but not the other way around. There are some other differences, like the fact that Guix uses GNU dmd while NixOS uses systemd, and so on.

It's all just personal preference, I think. I use NixOS because it's reliable, has a decently sized community, and a lot of packages. It also has really, really good Haskell support, and being a Haskell developer, that's a pretty big plus for me. Having non-free packages available isn't much of a sticking point for me either. I think the Guix people are doing good work, though, and a distribution for free software and the GNU project is very important - so I wish them the best even if I'm further away from it.

You'd have to try both of them and make a decision for yourself IMO. But I warn you: the rabbit hole is deep, and will require learning. And when you come out - you may be immensely displeased with the current state of affairs. :)

(Full disclosure: IAMA NixOS developer.)


I'm giving both a try. Nix is more polished, has many more packages, and is better documented.

I prefer Guix's choice of using a real programming language (Scheme!) instead of a DSL, but I really like Nix anyway.

Something that annoys me sometimes is that Nix has a few really bloated packages, compiled with all optional dependencies enabled. For example, if I try installing mutt I eventually get Python as a dependency. This is a bit ugly. I'm aware it's easy to change this, but I'd still love to get thinner binaries from Hydra. Otherwise, a really neat piece of software.

Guix tends to be a bit more like Slackware or Arch: very vanilla packages. I would love it if Nix went a bit in that direction too with regard to packaging policies. It's more secure and nicer to humble devices, like cheap Chromebooks.


I don't get all this hate on Linux: it has basically sustained the GNU project for almost 25 years...


Diversity is a good thing. Systems-level research is in a woeful state right now because the status quo is Good Enough for a lot of people. Thus, the number of interesting ideas to try, to fail, to succeed is a trickle compared to what it should be. Database-based and object-based filesystems are little more than research projects because of the hard-to-shake belief about what stored items should look like and what their properties are.

We need to work on projects with little to no preconceived notions, so we can start poking at things, figuring out what can be done better. We indeed have a working model, but I'm certain we can do better than just working.


Are you saying that Linux folks should do a terrible job to trigger advances in systems research?


No one is hating Linux. But Linux isn't exactly of much help towards advancing the status quo of OSes, either.


That's highly debatable: for example, the BeOS recreators/Haiku devs CHOSE to use a new, exotic kernel instead of the Linux or FreeBSD kernel.

But their reasons seem to be mainly NIH syndrome: those who like to create a new OS also like to tinker with "their own kernel" instead of reusing something that already exists.


Why the hell would you recreate BeOS on a Unix-like kernel? Are you even listening to yourself? That's not NIH, that's common sense.

It's far more effort to learn the ins and outs of a large kernel like FreeBSD or Linux and then try to shoehorn completely different OS semantics onto Unix than to just roll your own.

Also, Haiku's kernel was forked from NewOS, which dates back to 2001 and was written by a former Be engineer.

Besides, how does their choice signify hatred for Linux?


> Why the hell would you recreate BeOS on a Unix-like kernel?

Because you wouldn't have to code all these drivers to make the result a useful OS?

> Are you even listening to yourself? That's not NIH, that's common sense.

1) You're being rude. 2) 'common sense' is not an argument.

> It's far more effort to learn the ins and outs of a large kernel like FreeBSD or Linux and then try to shoehorn completely different OS semantics onto Unix than to just roll your own.

Well, it depends whether your goal is to have an OS that runs in very few configurations or one that can be run by many.

> Besides, how does their choice signify hatred for Linux?

It doesn't. I was replying to the sentence "But Linux isn't exactly of much help towards advancing the status quo of OSes, either." An OS isn't a kernel, and there WERE discussions about whether to use the Linux or FreeBSD kernel or NewOS in Haiku.


> Because you wouldn't have to code all these drivers to make the result a useful OS?

Writing a driver framework compatibility layer is quite a separate and doable activity from adopting an entire kernel. Haiku actually does have some FreeBSD driver compatibility, and efforts like DDE have been around for running Linux 2.6 drivers on other systems. More recently, rump kernels serve the same purpose.

> You're being rude.

You're being ignorant.

> Well, it depends whether your goal is to have an OS that runs in very few configurations or one that can be run by many.

Absolute nonsense, as elaborated above.


Linux is required to support a lot of legacy at this point; to try truly new paradigms you need to be in a codebase with no backwards compatibility requirements to break.


Then again, if it is supposed to be a Unix/POSIX system, you've already got plenty of compatibility requirements…


Compatibility is mostly at the interface level. The underlying semantics are often different or extended, though kept reasonable. This emerges from the fact that you're not actually trapping into the kernel, but sending RPC messages to servers.
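
For the curious, here's a bare-bones, untested sketch of what such a message send looks like at the Mach level, assuming you already hold a send right to some server's port (real Hurd RPCs go through MIG-generated stubs rather than hand-rolled messages like this):

  #include <mach.h>   /* GNU Mach spelling; OS X uses <mach/mach.h> */

  /* Send an empty message to a server port -- the primitive underneath
     every Hurd RPC.  MIG-generated stubs normally build these for you. */
  kern_return_t poke_server (mach_port_t server)
  {
    mach_msg_header_t msg;

    msg.msgh_bits        = MACH_MSGH_BITS (MACH_MSG_TYPE_COPY_SEND, 0);
    msg.msgh_size        = sizeof msg;
    msg.msgh_remote_port = server;           /* destination: the server's port */
    msg.msgh_local_port  = MACH_PORT_NULL;   /* no reply port in this sketch */
    msg.msgh_id          = 0;                /* operation id; dummy here */

    return mach_msg (&msg, MACH_SEND_MSG, sizeof msg, 0,
                     MACH_PORT_NULL, MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);
  }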


Exciting! "processes running under multiple uids" Why?


https://www.gnu.org/software/hurd/hurd/authentication.html

It's a form of capability-based security and it makes some forms of sandboxing or access control trivial (i.e. removing rights from processes). If you want to block a process from accessing a certain subsystem, just rmauth its session token to the server.


I remember seeing a demo of a very early version of HURD in 2002, and I thought this was the most interesting thing about it. I think in the demo, Marcus Brinkmann showed how in the midst of editing a file in vi in user mode, you could open another file in superuser mode, do an edit there and then go back to the old file in user mode.


the editor of the beast. Now with root.


The reference, for the uninitiated: https://en.wikipedia.org/wiki/Editor_war#Humor


The biggest limiting factor with the Hurd at the moment is its choice of base microkernel, currently Mach, which hasn't had substantial updates for years.

While the Hurd improves its userspace support, they should also revisit the idea of rebasing the OS on L4 (where you can also load a Linux instance in userspace for compatibility) or MINIX 3, which has been working on its own kernel features.


> which hasn't had substantial updates for years.

Not true. GNU Mach has been undergoing lots of refactoring recently with regard to locking, protected payloads for a threefold reduction in RPC lookup cost, the VM subsystem and so forth.


So what sort of performance improvements have been made? Is Mach now in the same bracket as other, newer microkernels?


It's awesome to see Hurd continuing to progress. It doesn't matter that "Hurd isn't ready for mainstream use" or any of that stuff. Even if it never becomes "ready for mainstream use" it's valuable to have people experimenting and researching alternative approaches. This and Plan9 both still play important roles, and I expect that, at a minimum, developments coming out of one or both projects will continue to influence Linux and other "mainstream" OS's.

Besides, it's just plain fun to play around with operating systems. I think I have an extra box lying around here idle that would be a good place to experiment with Hurd a bit.



The "top dumb issues" page for porters has value even for non-hurd users. In terms of "don't do this". Its on page 13.


Practical question: if I were to get some cheap hardware, like a Chromebook, could I get a Debian GNU/Hurd development environment dual-booting on it? What would I have available? I'd at least want a decent shell, tmux and vim, and support for the display's full resolution.

I suppose the easier route would be a virtual machine, but I think being immersed in it would be good.

I've wanted to get into kernel or other low-level development for a while and this seems like a perfect entry point. Linux seems too complicated on the surface, which is intimidating. And the politics suck (see systemd).


bash, tmux and vim are all supported. You can even have a full Xfce desktop.

Hardware support is unfortunately flaky, in part because the GNU Mach code lacks some features like PCI MSIs and uses drivers from the Linux 2.6 era through DDE. Virtualization is definitely the route through which people run it.

The rump integration will hopefully change this situation, assuming it keeps advancing.


I figured this would be about the size of things. I suppose I'll start with a VM. I realized I mentioned Chromebooks in my first post -- reminder to anyone that ARM is not supported, so get an Intel-based model.


The last document linked to at the end is a really good read too: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.37....

Also, I agree with him on fdisk/mke2fs; I've often asked myself the exact same question. Apparently it's not such a popular opinion though, since they're always tucked away in sbin...


Not all Linux systems do that. Arch symlinks everything to /usr/bin, I believe.


I was hoping they would mention something about crosshurd, but there was nothing, and crosshurd doesn't work on recent Debian anymore. I'd love to see some way to install from an existing system.


You can either follow my guide [1] based on updated cross-gnu scripts, or try out gnuxc [2] for bootstrapping a full Hurd distribution, with the caveat that it requires your host to be Fedora.

[1] http://blog.darknedgy.net/technology/2015/07/25/0/

[2] https://github.com/dm0-/gnuxc


I found this bit in your post interesting: "kdbus (which was called "neutered Mach IPC" by Neal Walfield)" - does that mean Mach IPC provides typed interface registration/binding between userland processes? I admit I know very little about Mach and thought it was for kernel/userspace communication only (both directions).


> does that mean Mach IPC provides typed interface registration/binding between userland processes?

Yes, practically all Mach subsystems other than virtual memory have ports implicitly created for them. This includes tasks (address spaces) and threads (units of CPU time), which together form processes.
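
A tiny illustrative sketch, off the top of my head (header names differ slightly between GNU Mach and OS X), showing that your own task and thread are just ports you hold rights to:

  #include <mach.h>   /* <mach/mach.h> on OS X */
  #include <stdio.h>

  int main (void)
  {
    /* The task (address space + port namespace) and the current thread
       are themselves named by ports; holding the right is holding the object. */
    mach_port_t task   = mach_task_self ();
    mach_port_t thread = mach_thread_self ();

    printf ("task port: %u, thread port: %u\n",
            (unsigned) task, (unsigned) thread);

    /* Anyone holding these rights can drive the objects via RPC,
       e.g. task_suspend (task) or thread_abort (thread). */
    mach_port_deallocate (mach_task_self (), thread);
    return 0;
  }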


I'm not drunk on wine at an OS research conference but I have to ask anyway: Aren't microkernels obsoleted by hypervisors?


Not really, no. A microkernel can be a self-reliant, real-time, event driven OS which can be embedded in any number of specific appliances with a specific skillset for a specific job.

A hypervisor is generally used as a way to run another bloated OS on top of a microkernel. Even if you were to run with the idea of containers directly on top of a hypervisor, you're still looking at more layers of indirection that are meant to generalise the use of the entire stack.

Microkernel = specific.

Hypervisor = microkernel with specific purpose of running general stacks on top.

That's how I would distinguish the two in this instance.


● Top dumb issues

● Not linux or BSD? #include <windows.h>

Can someone explain the above to me? The slides didn't make sense.


Poorly written preprocessor bits in C code that assume that anything anyone ever runs is either Linux, BSD (including OS X), or Windows. They basically write the following:

  #ifdef LINUX
  // include Linux headers
  #elif defined(BSD)
  // include BSD headers
  #else
  // include Windows headers. What else would it possibly be?
  #endif
The "mach.h" one is similar: people make a bad assumption that OS X is the only Mach kernel people use. (Hurd is obviously a counterpoint, just an unpopular one.)


I think it's wrong to call it poorly written just because the programmer only accounted for ~99.99% of desktop and mobile operating systems in their preprocessor code. The diminishing return for avoiding this approach is so impalpable it hurts. Windows, OS X/iOS/BSD, and Linux/Android literally are everything anyone ever uses, within rounding error.


The diminishing return approach is this:

  #ifdef LINUX
  // include Linux headers
  #elif defined(BSD)
  // include BSD headers
  #elif defined(WINDOWS)
  // include Windows headers
  #else
  #error "unsupported system"
  #endif


If you develop for different platforms then it may actually make sense to do an "if linux / elif bsd / elif windows / else #error unsupported". At least it gives you a proper answer in case you forget you have some wild cross-compiler in your path. But I understand it's not a popular concern...


Linking to the slides without the actual presentation should be punishable by death. The entire point of slides is to supplement the presenter's speech, so if they're completely self-explanatory without any talking, then you're doing your presentation wrong.


> Hardware support
> ● i686
> ● start of 64bit support

WTF? They started 30 years ago and they're still on a 32-bit architecture nobody cares about anymore?

Besides that, I really don't get the point of all this, seriously. Not from a technological standpoint, mind you, but from a user perspective the benefit is almost intangible. Why should the average sysadmin care? Linux is a rock-solid kernel that does everything, so what niche is Hurd filling?


> Besides that, I really don't get the point of all this, seriously. Not from a technological standpoint, mind you, but from a user perspective the benefit is almost intangible. Why should the average sysadmin care? Linux is a rock-solid kernel that does everything, so what niche is Hurd filling?

Well, that's debatable. Linux just crossed the 20 MLOC mark. Granted, this is mostly driver code, of which you will only need a small subset, but still: Linux is a huge, complex blob. Linus himself said it was bloated years ago[1], and that it has a high entry barrier for new developers due to its complexity[can't find the link right now].

My guess would be that this is partly due to its classic monolithic architecture and partly due to their development model[2].

After working with Plan9 for a while I can't help but take my hat off to the beauty and simplicity of the system. Especially as a sysadmin your life would be so much easier. No LDAP, no Kerberos, no NFS, no need to update or deploy software on/to terminals. But as it stands we're living in a land where every computer thinks it is a Mainframe.

Linux has done a great deal for OSS and will without question be around for quite some time, but you have to wonder how computing would look today if history had taken a few different turns.

FWIW: Is it just me, or is OS R&D gaining a little more traction lately, after it almost stagnated in the 2000s?

[1] http://www.cnet.com/news/linus-torvalds-linux-is-bloated/

[2] https://en.wikipedia.org/wiki/Criticism_of_Linux#Kernel_code...


I think there was some serious experimentation in 2000s, just not in the area of popular systems. Plan9 started to hit the news properly around then. A few OSes in managed languages were developed: MS Singularity, and smaller experiments like JNode and SharpOS.


You realize i686 is a catch-all term for 32-bit x86 processors, right? It's arguably the only 32-bit architecture anyone cares about.


ARMv7?


A lot of phones are 32-bit.



