
The preliminary rump kernel support that arrived just a bit more than a week ago is quite promising, if still rudimentary. Case in point: a patched MPlayer linking to rump libraries has been able to play OGG files on a Hurd system: https://lists.gnu.org/archive/html/bug-hurd/2015-08/msg00027...

Though obviously in the very initial stages, the implications of this development, taken to its end, could be enormous. Have the GNU Hurd implement translator or server interfaces to the rump drivers, plug it on top of an immutable package and configuration management suite like Guix (already being done, actually), and for the first time in history you have a complete, general-purpose, microkernel-based, Unix-compatible (but beyond plain Unix) system that could rival the likes of GNU/Linux. The great thing about it is that the Hurd isn't afraid to break POSIX semantics where beneficial. Among other things, this gives it processes running under multiple uids, true Plan 9-style namespacing, unprivileged mounts, token-based authentication, and so forth.

I've been playing with it on-and-off (including cross-toolchains) and pondering Mach kernel emulation on Linux. A pure userland server would be most desirable, but it faces real challenges: modeling port rights as file descriptors sent over ancillary data, reliably handling user-level page faults on a per-application basis, mapping VM regions across address space boundaries, and properly emulating RPC. Kip Macy did successful work porting OSF Mach from the MkLinux sources as a FreeBSD kernel module for use with launchd, XPC and other OS X components, so that might end up being a concession I'll look into taking one day. I'm occupied with other projects in the meantime.
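To make the first of those challenges concrete, here is a purely illustrative sketch (my own toy code, not anything from the Hurd or a real Mach emulation) of what "port rights as file descriptors over ancillary data" means: on Linux you can hand an open descriptor to another process via a Unix-domain socket with SCM_RIGHTS, which is roughly the primitive such an emulation would build on.

```c
/* Illustrative only: a "port right" transfer modeled as an fd passed
 * over a Unix-domain socket with SCM_RIGHTS ancillary data. */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

/* Transfer descriptor fd (our stand-in for a port right) to whoever
 * holds the other end of the Unix-domain socket. Returns 0 on success. */
int send_right(int sock, int fd)
{
    char byte = 'p';  /* SCM_RIGHTS must accompany at least one data byte */
    struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
    char ctrl[CMSG_SPACE(sizeof(int))];
    struct msghdr msg;
    memset(&msg, 0, sizeof msg);
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl;
    msg.msg_controllen = sizeof ctrl;
    struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
    cm->cmsg_level = SOL_SOCKET;
    cm->cmsg_type = SCM_RIGHTS;
    cm->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cm), &fd, sizeof fd);
    return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}

/* Receive the descriptor; the kernel installs a fresh fd referring to
 * the same open file, much as Mach installs a right in the receiver's
 * port name space. Returns the new fd, or -1 on failure. */
int recv_right(int sock)
{
    char byte;
    struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
    char ctrl[CMSG_SPACE(sizeof(int))];
    struct msghdr msg;
    memset(&msg, 0, sizeof msg);
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl;
    msg.msg_controllen = sizeof ctrl;
    if (recvmsg(sock, &msg, 0) != 1)
        return -1;
    struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
    if (cm == NULL || cm->cmsg_type != SCM_RIGHTS)
        return -1;
    int fd;
    memcpy(&fd, CMSG_DATA(cm), sizeof fd);
    return fd;
}
```

Hand each side one end of a socketpair(AF_UNIX, SOCK_STREAM, 0, sv) and the received fd aliases the sender's open file. That covers the easy part; Mach's send-once rights, dead names and no-senders notifications have no such direct analogue, which is where the real challenges start.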

It'd be also funny to have Linux "eat" itself in such a manner by becoming a host for the Hurd. Poetic justice, I suppose. But I'll wait to see if the rump kernel stuff pans out first.

(MINIX 3 is great too, and a much better microkernel, but it's aimed more at reliability than at exposing end-user flexibility the way the Hurd does with translators. It also doesn't come close to the ~81% of Debian that builds as-is on the Hurd, and the MINIX devs themselves are positioning it as an embedded platform rather than a general-purpose one.)




Guix is becoming increasingly ready for prime time. There's one big HPC install in Germany already running it. I think it's past the toy stage.

I guess Guix + Hurd is a great setup, with innovative features in the kernel and userland. Both very pure in their respective ways, with Guix having functional package management, and Hurd keeping the kernel minimal so that crashes or security breaches are not catastrophic.


Very cool to hear. Where can we read more about this? Or is this an insider comment?



davexunit, one day I will need to track you down in real life. I think we would geek out all day on Scheme.

I love when I see you and others mention your FRP Scheme in games work. Keep it up!


Thank you! I really appreciate the kind words. Happy hacking!


What's the difference between Nix and Guix anyway, and which is 'better'?


Nix is what Guix is based on. You can think of Guix as the 'GNU version' of Nix/NixOS: it provides only free software packages, and IIRC it uses Guile Scheme, rather than the Nix expression language, to describe all the packages and OS configuration.

It's a toss-up as to whichever you like. I think Nix has a bigger community, more packages, etc. But Guix, strictly speaking, is something of a 'superset', because changes from Nix often flow into Guix (and Guix can use Nix packages directly), as Guix is built directly on the same daemon source code as Nix - but not the other way around. There are some other differences, like the fact that Guix uses GNU dmd while NixOS uses systemd, etc.

It's all just personal preference, I think. I use NixOS because it's reliable and has a decently sized community and a lot of packages. It also has really, really good Haskell support, and being a Haskell developer, that's a pretty big plus for me. Having non-free packages isn't much of a sticking point for me. I think the Guix people are doing good work, though, and a distribution for free software and the GNU project is very important - so I wish them the best even if I'm farther away.

You'd have to try both of them and make a decision for yourself IMO. But I warn you: the rabbit hole is deep, and will require learning. And when you come out - you may be immensely displeased with the current state of affairs. :)

(Full disclosure: IAMA NixOS developer.)


I'm giving both a try. Nix is more polished, has many more packages, and is better documented.

I prefer Guix's choice of using a real programming language (Scheme!) instead of a DSL, but I really like Nix anyway.

Something that annoys me sometimes is that Nix has a few really bloated packages, compiled with all dependencies enabled. For example, if I try installing mutt, I eventually pull in python as a dependency. This is a bit ugly. I'm aware it's easy to change this, but I'd still love to get thinner binaries from hydra. Otherwise, a really neat piece of software.

Guix tends to be a bit more like Slackware or Arch: very vanilla. I'd love it if Nix went a bit in that direction too with regard to packaging policies. It's more secure and kinder to humble devices, like cheap Chromebooks.


I don't get all this hate on Linux: it has basically sustained the GNU project for almost 25 years...


Diversity is a good thing. Systems-level research is in a woeful state right now because the status quo is Good Enough for a lot of people. Thus, the number of interesting ideas to try, to fail, to succeed is a trickle compared to what it should be. Database-based and object-based filesystems are little more than research projects because of the hard-to-shake belief about what stored items should look like and what their properties are.

We need to work on projects with little to no preconceived notions, so we can start poking at things, figuring out what can be done better. We indeed have a working model, but I'm certain we can do better than just working.


Are you saying that Linux folks should do a terrible job to trigger advances in systems research?


No one is hating Linux. But Linux isn't exactly of much help towards advancing the status quo of OSes, either.


That's highly debatable: for example, the BeOS recreators/Haiku devs CHOSE to use a new, exotic kernel instead of the Linux or FreeBSD kernel.

But their reasons seem to be mainly NIH syndrome: those who like to create a new OS also like to tinker with "their own kernel" instead of reusing something that already exists.


Why the hell would you recreate BeOS on a Unix-like kernel? Are you even listening to yourself? That's not NIH, that's common sense.

It's far more effort to learn the ins and outs of a large kernel like FreeBSD or Linux, and then try to shoehorn completely different OS semantics onto Unix, than to just roll your own.

Also, Haiku's kernel was forked from NewOS back in 2001, and was written by a former Be engineer.

Besides, how is their choice signifying a hatred for Linux?


> Why the hell would you recreate BeOS on a Unix-like kernel?

Because you wouldn't have to write all those drivers to make the result a useful OS?

> Are you even listening to yourself? That's not NIH, that's common sense.

1) You're being rude. 2) 'common sense' is not an argument.

> It's far more effort to try and learn the ins-and-outs of a large kernel like FreeBSD or Linux, then try to shoehorn completely different OS semantics onto Unix, than to just roll your own.

Well, it depends on whether your goal is to have an OS running in very few configurations or one that can run on many.

> Besides, how is their choice signifying a hatred for Linux?

It doesn't; I was replying to the sentence "But Linux isn't exactly of much help towards advancing the status quo of OSes, either." An OS isn't a kernel, and there WERE discussions about whether to use the Linux or FreeBSD kernel or NewOS in Haiku.


> Because you wouldn't have to code all these drivers to make the result a useful OS?

Writing a driver-framework compatibility layer is quite a separate and doable activity from adopting an entire kernel. Haiku actually does have some FreeBSD driver compatibility, and efforts like DDE have been around for running Linux 2.6 drivers on other systems. More recently, rump kernels.

> You're being rude.

You're being ignorant.

> Well, it depends on whether your goal is to have an OS running in very few configurations or one that can run on many.

Absolute nonsense, as elaborated above.


Linux is required to support a lot of legacy at this point; to try truly new paradigms you need to be in a codebase with no backwards compatibility requirements to break.


Then again, if it is supposed to be a Unix/POSIX system, you've already got plenty of compatibility requirements…


Compatibility is mostly on the interface level. The underlying semantics are often different or extended, though kept reasonable. This emerges from the fact that you're not actually setting traps in the kernel, but sending RPC messages to servers.
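A toy sketch of that distinction (an invented message format of my own, not the actual Hurd/Mach protocol): the "system call" is just a message round-trip with a server, which is free to implement extended semantics behind the same interface.

```c
/* Toy illustration only -- an invented message format, not real Mach
 * IPC. The point: the "syscall" here is a message round-trip, not a
 * trap into the kernel. */
#include <sys/socket.h>
#include <unistd.h>

struct request { int op; int arg; };
struct reply   { int result; };

enum { OP_ADD_ONE = 1 };  /* a stand-in for some service operation */

/* Server side: read one request, apply whatever semantics the server
 * chooses, send the reply. A real Hurd server loops over mach_msg. */
void serve_one(int sock)
{
    struct request rq;
    struct reply rp = { .result = -1 };
    if (read(sock, &rq, sizeof rq) == sizeof rq && rq.op == OP_ADD_ONE)
        rp.result = rq.arg + 1;
    write(sock, &rp, sizeof rp);
}

/* Client stub: what would be a syscall wrapper on a monolithic kernel
 * becomes a send plus a receive on a channel to the server. */
int call_add_one(int sock, int arg)
{
    struct request rq = { .op = OP_ADD_ONE, .arg = arg };
    struct reply rp;
    write(sock, &rq, sizeof rq);
    read(sock, &rp, sizeof rp);
    return rp.result;
}
```

In practice the server would run in its own process (or thread) looping over requests; since the server is ordinary userland code, it can extend or reinterpret the interface's semantics without touching the kernel, which is the flexibility being described above.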


Exciting! But "processes running under multiple uids": why?


https://www.gnu.org/software/hurd/hurd/authentication.html

It's a form of capability-based security and it makes some forms of sandboxing or access control trivial (i.e. removing rights from processes). If you want to block a process from accessing a certain subsystem, just rmauth its session token to the server.


I remember seeing a demo of a very early version of HURD in 2002, and I thought this was the most interesting thing about it. I think in the demo, Marcus Brinkmann showed how in the midst of editing a file in vi in user mode, you could open another file in superuser mode, do an edit there and then go back to the old file in user mode.


the editor of the beast. Now with root.


the reference, for those uninitiated: https://en.wikipedia.org/wiki/Editor_war#Humor


The biggest limiting factor with the Hurd at the moment is their choice of base microkernel, currently Mach, which hasn't had substantial updates for years.

While the Hurd improves its userspace support, they should also revisit the idea of rebasing the OS on L4 (where you can also load a Linux instance in userspace for compatibility) or on MINIX 3, which has been working on its own kernel features.


> which hasn't had substantial updates for years.

Not true. GNU Mach has been undergoing lots of refactoring recently: locking, protected payloads for a threefold reduction in RPC lookups, the VM subsystem, and so forth.


So what sort of performance improvements have been made? Is Mach now in the same bracket as other, newer microkernels?



