The Use of Name Spaces in Plan 9 (1993) (hu-berlin.de)
63 points by vezzy-fnord on July 5, 2015 | 28 comments



I've been reading about Plan 9 lately, but I'm still not clear on why exactly (or even roughly) Plan 9 (and, similarly, Research Unix) fell out of use. From what I've read it seems like the lack of ubiquity forced the Plan 9 guys out of Plan 9 and back into Ubuntu or OpenBSD. From their discussions of rc, etc., it seemed like the lack of users caused them to move away from it until it just completely fell apart... but I'm not sure.

And where is Plan 9 now? If I want to get involved, should I look into 9front? Inferno? Or this guy's GitHub mirror [0], since the Plan 9 website is down?

I understand these are some smart guys, but the cat-v website makes a ton of divisive statements and "leaves it as an exercise for the reader" to figure out why they hold these views. The trolling isn't awful, but in general I'm having a really tough time fighting my way into the Plan 9 circle.

How have you dealt with this? Aside from my initial (more historical) question, any suggestions for getting started? Is it even worth getting familiar with Plan 9 or rc or acme or sam or the plethora of ported tools? Or should I just wait until Russ Cox and Rob Pike come up with a production-ready OS?

No disrespect meant, just have a lot of questions.

[0] - https://github.com/0intro/plan9


Plan 9 didn't catch on for a lot of reasons -- bad licensing, bad marketing, being too different from Unix, which was "good enough", etc. The fact that your average *nix sysadmin, who can manage pretty well between BSDs, Linux, AIX, HP-UX, etc., would be utterly puzzled when given the task of managing a Plan 9 system probably discouraged a lot of businesses from adopting it.

If you are not married to your current editor of choice, give Acme a try -- it's quite different, but I've been using it as my primary editor for about 5 years (I mostly write C, Bash, Python, Go, and Puppet; I'm hesitant to use it for more editor-dependent languages like Java and C#), and I've been very happy with it. It's available for Linux, BSD, and OS X as part of plan9port. For me, the secret to being productive in Acme was learning to write my own plumbing rules, which let you turn plain text into hyperlinks based on pattern matching.
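To give a flavor of what that looks like, here's a rough sketch of a plumbing rule, modeled loosely on the ones shipped with plan9port's plumb/basic; the pattern and the choice of acme as the editor are just illustrative, so check plumb(7) before copying it:

    # sketch: send file:line references (e.g. main.c:42) to the editor
    editor = acme

    type is text
    data matches '([.a-zA-Z0-9_/\-]+):([0-9]+)'
    arg isfile $1
    data set $file
    attr add addr=$2
    plumb to edit
    plumb client $editor

With a rule like that loaded, button-3-clicking a compiler error such as main.c:42 in any Acme window jumps straight to that line.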

Plan 9 does not have to be a purely historical endeavor; if you manage to write a 9P file server, you can actually mount it on other OSes, either through the native 9P support in the Linux kernel or through a wrapper program like 9pfuse. For example, I recently saw a 9P file server that turned JIRA into a file tree of tickets that you could edit with a text editor.
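To make the "mount it on other OSes" part concrete, here's roughly what it looks like from the Linux side; the host name, port, and ticket paths are made up for the example, and the exact v9fs options can vary by kernel, so treat this as a sketch:

    # native kernel 9P client (v9fs) over TCP
    mount -t 9p -o trans=tcp,port=564 jira-server /mnt/jira

    # or, without kernel support, plan9port's 9pfuse wrapper
    9pfuse 'tcp!jira-server!564' /mnt/jira

    # after that, tickets are just files (hypothetical layout)
    cat /mnt/jira/PROJ-1234/description

The nice part is that every tool that already understands files -- grep, rsync, your editor -- then works against the ticket tree for free.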


With the garbage ticketing systems I'm used to, JIRA as a file tree, over 9P or FUSE on Linux/OS X, has me almost aroused.

People wonder why I can't do things fast in their GUI world, and then I show them a bash script that replaces the ten-odd per-page operations of filing a ticket, and they still don't understand, but they respect my annoyance at the "user-friendly" way.


http://9front.org/ is the most actively developed fork, and is the most likely to work on your hardware. They don't take themselves seriously, though, and are perfectly fine with being niche, so they don't put much effort into being welcoming to people who don't do their reading.


I got curious when you said they don't take themselves seriously. There are some pretty funny files in the /lib/ directory of their source tree. https://code.9front.org/hg/plan9front/file/0d00dd23c9db/lib/... for example.


I think at least part of the problem was the hardware required to run a more complete installation where you'd start to see the benefits of the approach. As the paper outlines, a Plan 9 network needed at least a CPU server, file server and one or more terminals, plus possibly an authentication server (the CPU server might have been able to do this too). I seem to remember that the installation instructions assumed you'd have this kind of hardware available too, so getting a working system was quite an investment.

The file server was somewhat specialised too: the Plan 9 one used an optical WORM jukebox to provide its long-term storage. If you didn't have one of those you could simulate it with disk storage, but there's a cost trade off there.

Without this investment in hardware it was like trying to understand how NFS or web infrastructure works with only a single machine to work with.

In some ways it's similar to the obstacles that hobbyists face investigating the Hadoop ecosystem today: the hardware required to build a realistic installation on which to experiment is quite costly. With Hadoop you can use virtual machines and/or cloud hosting to try things out. When Plan 9 came out you didn't have that option so you needed to assemble physical hardware yourself.


> As the paper outlines, a Plan 9 network needed at least a CPU server, file server and one or more terminals, plus possibly an authentication server (the CPU server might have been able to do this too).

... which can just be run on the same machine.

> Without this investment in hardware it was like trying to understand how NFS or web infrastructure works with only a single machine to work with.

Network transparency is an important feature, but not the only one. You are saying running X makes no sense because your X server and client are on the same machine.


> You are saying running X makes no sense because your X server and client are on the same machine.

No, I'm saying that it's hard to properly understand the advantages Plan 9 (or NFS or X or Hadoop) brings if all you have is one machine to run it on.


> From what I've read it seems like the lack of ubiquity forced the Plan 9 guys out of Plan 9 and back into Ubuntu or OpenBSD

That's… interestingly ahistorical. No one was forced back from Plan 9 to Ubuntu or OpenBSD, because when Plan 9 "lost" neither existed. Plan 9's window of opportunity was the early '90s, when Linux and the web weren't yet firmly established.

Unfortunately, Plan 9 wasn't released under a proper license until 2002. By that point Linux had already won the war (and had for half a decade), the problems Plan 9 solved had been solved in other ways (perhaps less elegantly) by the rest of the world, Plan 9 lagged behind in a number of important areas, and Bell Labs was being downsized significantly.


I said this after reading this conversation [0] on comp.os.plan9 about rc, though it does appear that I misread that as well. However, my point was that it seems like the shift away from Plan 9 or 9front or whatever took place in the late 2000s, when they moved to Ubuntu or OpenBSD for work reasons, because it was easier to support the masses. (I cannot find the conversation on comp.os.plan9 to support that, though.)

[0] https://groups.google.com/forum/#!topic/comp.os.plan9/g2qBh0...


> my point was that it seems like the shift away from Plan 9 or 9front or whatever took place in the late 2000s

I didn't see that happen.


> From what I've read it seems like the lack of ubiquity forced the Plan 9 guys out of Plan 9 and back into Ubuntu or OpenBSD.

Sigh. Really?


Imagine what might happen if HN spent as much time maintaining Plan 9 as we spend talking about it.


Funny, I was thinking the same thing. I think that's because there were two mentions of Plan 9 in one day. Otherwise, we only see it every few days or so.


I was inspired to post this paper by one user's puzzlement over Plan 9 in the Scheme 9 interpreter thread, since it explains the core concept of per-process namespaces.

And since no one else had done so before, bafflingly.

Also keep in mind that Plan 9 has evolved significantly since the publication date. 8½ and the WORM file server have been superseded by rio and Venti, the 9P protocol is a bit more extensive, ndb and /net plus the auth subsystem have become more prominent since then, etc.
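For anyone who hasn't gotten to the paper yet, the per-process namespace idea it describes boils down to a couple of shell commands. A rough sketch in rc on a Plan 9 terminal (the machine names are invented for the example):

    # mount another machine's file server under /n/otherbox
    9fs otherbox

    # union its binaries into this process's /bin, after the local ones
    bind -a /n/otherbox/386/bin /bin

    # use the gateway's network stack: this process's /net,
    # and hence its TCP connections, now come from 'gateway'
    import gateway /net

Children inherit the modified namespace; unrelated processes on the same machine never see any of it.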


This paper still reads as revolutionary now, but imagine how it felt to read it in 1993 and compare it with your average "system", either at home or at work.

Windows for Workgroups was released the same year, while we were struggling to set up IPX networks to play Doom between two machines at home...


The early '90s had a lot of interesting OS theory work going on. Andrew S. Tanenbaum, of Minix fame, was working on Amoeba at the same time, which was also designed to present network transparency to the end user. Sprite was another, which also offered things like seamless process migration.

But it all stopped. Research budgets dried up, the computer got bigger than any one OS team could handle, and some OS out of Finland came in and scooped up all the mindshare. Since then, we've been stumbling in the dark, using scant shards of the tools they made, like Python, with overall systems not much better than what they had. Lispers talk of the AI Winter, but the ongoing OS Winter is far, far worse in terms of the absolute stagnation of an entire force of society. We really do deserve better than what we have now.


My exact sentiment.

Earlier this week there was a thread on HN about Taos, another very interesting OS from that time.

https://news.ycombinator.com/item?id=9806607

Today, with multicore, heterogeneous cores, a huge range of communication types, IoT and the physical world becoming connected from the tiniest of pieces up to the largest structures, there should be ample areas where OS research could find interesting challenges and solve problems.

The most interesting thing I've seen in a few years is the library OS work from MSR. But coming from Oberon, the most interesting feat was doing it to Windows and getting it to work. Unfortunately it doesn't seem to be becoming part of Windows.

To be fair, though, the amount of virtualization tech developed over the last decade could be perceived as OS-related.

One professor I met once claimed that Mach and microkernels were the bane of OS research: suddenly there was an OS easy enough to work on that a PhD student could implement and test a feature and get a degree.


The library OS work from MSR has fed into Windows Nano, so it will be a shipping product soon.

The revival in systems work has already started, and there is a lot going on; e.g. see some of the talks in [1]. Projects like CHERI show you can do research on a real production OS (FreeBSD), not a toy one, while seL4 shows you can even do correctness proofs on a small OS. Unikernels and other library OS projects, along with performance-oriented projects, are putting what was system code into userspace where it can be iterated on faster, and are moving to high-level and scripting languages. Containers and microservices are causing a huge rethink of the monolithic architectures too.

[1] http://operatingsystems.io/


Exactly. I've spent years telling people about all these projects and activities under way. Thanks for the link to some I hadn't heard of. Microsoft's VerveOS was clever. Genode is the most innovative & practical of the L4 family. Separation kernels (e.g. INTEGRITY-178B, LynxSecure) lowered the TCB for full virtualization (esp. Windows). Minix 3 is nicely doing the self-healing thing. IBM's K42 pushed cluster schemes pretty hard. Oberon still gets developed (e.g. A2 Bluebottle) & Wirth recently put it on custom hardware (again). Also notable that the Oberon OS has garbage collection. Azul's Vega processors basically provide hardware, GC, and an OS for Java apps. MirageOS is building a smaller, safer TCB on Xen using OCaml.

And so on and so forth. Much stuff being done in directions that might actually achieve something. There's still hope for those of us wanting something other than monolithic garbage whose uptimes still can't beat a VMS cluster from the '80s and whose security is a measure of how convenient it is to hackers.


seL4 is great, but it's still an incremental evolution. L4 has been a research interest for a couple of decades now (some Japanese students even ported Plan 9 on top of L4), so something like a formally verified microkernel was a long time coming.

CheriBSD is more of a hardware research platform. We've had capability-based hardware and similar protection schemes for decades, but they never caught on.

Containers are an absolute disappointment the way they were popularized with Docker, and I think it's naive to assume that "microservices" are anything but a buzzword. The microservice architecture itself is exceptionally old and basically a reapplication of OO principles to high-level software components.

Unikernels and libOS are kind of interesting, though.


seL4 isn't supposed to be innovative: it's supposed to be verified correct to a higher level than EAL7. It seems to have succeeded. Go to Genode.org to see innovation in the L4 space, theirs and others'.

I agree on containers and microservices. IT rarely learns the good lessons of the past but often repeats its failures. The first good containers were Burroughs' apps, where you compiled source against a good API with checks at the interface level (compile-time & runtime) and full reuse of code in memory to avoid duplication. Hardware-enforced, optional isolation & duplication if you wanted a brick wall. That has both an efficiency and a security argument. This new shit might be an improvement on typical Windows or UNIX/Linux deployment, but it seems to be improving in the wrong direction.

The best example is still them standardizing on complicated HTTP- and XML-based middleware instead of simpler formats (e.g. s-expressions) with a grammar I can auto-generate with validation checks, on TCP/UDP-based middleware which I can also auto-generate with validation checks. Designing robust, efficient solutions with mainstream stuff is like walking through a friggin' minefield of problems... where it can even be done! Starting to think they're all closet masochists...


Besides the sibling reply, MSR's work on Singularity was used for Windows Phone 8's .NET native compilation (e.g. MDIL) and the upcoming .NET Native.


I've been giving them credit over recent years for doing a lot of good work in OS design, programming languages, and verification technology. Unlike Microsoft Engineering, Microsoft Research is pretty kick-ass. Truth be told, much of what they do copies other work in some way and tries to improve on it. Example: VerveOS's Nucleus copied the old IME mainframes, which had a central component called Nucleus with similar features. They pretend like they were original cuz everyone forgets about the old systems. ;) Yet their copying that approach to split development and verification was a really smart move. It's how they got results.

I encouraged Microsoft Research to continue investing in tooling and verification tech that represents The Right Thing approach to systems. That pieces of it drift to their commercial products is even better. :)


That is why I appreciate Microsoft's work with WinRT, Singularity, Barrelfish, Drawbridge.

The unikernels work going on.

Android's Java userland.

Apple's hybrid approach with containers.

This is one of the topics where I actually agree with Rob Pike. UNIX is past its due date. There is so much to explore, especially since POSIX just perpetuates C's insecurity model.


Anyone who likes Plan 9 should also look at Amoeba [1] and Globe [2]. Amoeba did a lot of interesting work that could be considered when doing a modern, Plan 9-like effort. Globe did for WANs what Amoeba tried to do for LANs. I had hoped solutions like Globe would get more traction, given that Globe applications would be way better than the then-upcoming Web applications. The Globe team even proved it by doing the Web on top of their own stuff: doing the Web better than the Web, lol. Unfortunately, the mainstream led us down a path where we end up trying to accelerate Web applications powered by protocols from the time-sharing days on monolithic UNIX and Windows OSes. A little less fun...

[1] http://www.cs.vu.nl/pub/amoeba/amoeba.html

[2] http://www.cs.vu.nl/~philip/globe/


Amoeba indeed had cool ideas, like most things A.S. Tanenbaum has worked on, but I always thought Sprite was more novel.

I haven't looked into Amoeba too deeply, I must admit, but other than its support for network transparency and a single system image across heterogeneous machines, I don't think it's very Plan 9-ish at all.


I'm glad you brought Sprite up. Been meaning to look at it closely. As far as Amoeba vs Plan 9 goes, that was intentional on my part: get HN readers seeing something a bit different that happened in the same field. Probably just phrased the comment badly again (sigh).

Sprite homepage for HN readers following along:

http://www.eecs.berkeley.edu/Research/Projects/CS/sprite/spr...

Edit to add: The retrospective is definitely worth reading. I think most administrators, to this day, have it harder in some ways than the people running Sprite did, thanks to its clever design choices in networking, storage, and single system image.



