Rob Pike: Reflections on Window Systems (2008) [video] (utoronto.ca)
117 points by pmarin on Sept 28, 2014 | 47 comments



"To be honest, looking back on it, Unix today is worse than the systems that Unix itself was created to get away from."


He said similar negative things about Unix in a 2004 interview with Slashdot.

"I didn't use Unix at all, really, from about 1990 until 2002, when I joined Google. (I worked entirely on Plan 9, which I still believe does a pretty good job of solving those fundamental problems.) I was surprised when I came back to Unix how many of even the little things that were annoying in 1990 continue to annoy today. In 1975, when the argument vector had to live in a 512-byte-block, the 6th Edition system would often complain, 'arg list too long'. But today, when machines have gigabytes of memory, I still see that silly message far too often. The argument list is now limited somewhere north of 100K on the Linux machines I use at work, but come on people, dynamic memory allocation is a done deal!

"I started keeping a list of these annoyances but it got too long and depressing so I just learned to live with them again. We really are using a 1970s era operating system well past its sell-by date. We get a lot done, and we have fun, but let's face it, the fundamental design of Unix is older than many of the readers of Slashdot, while lots of different, great ideas about computing and networks have been developed in the last 30 years. Using Unix is the computing equivalent of listening only to music by David Cassidy."

http://slashdot.org/story/04/10/18/1153211/rob-pike-responds...


> The argument list is now limited somewhere north of 100K on the Linux machines I use at work, but come on people, dynamic memory allocation is a done deal!

This is a shitty argument. Dynamic memory allocation is a done deal, but it's trivial to bring down a system that doesn't limit it, so you still need a limit beyond which users will get errors.

And usually, if you keep hitting it, it's down to bad habits (globbing instead of find + xargs, which also lets you take full advantage of multiple processors).
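To make the limit concrete, here's a minimal C sketch (assuming Linux/glibc; /bin/true as the exec target is arbitrary) showing that the message comes from the kernel's execve(2) budget, not from any memory allocator:

    /* Sketch: "arg list too long" is execve(2) failing with E2BIG
     * once argv + envp exceed ARG_MAX, however much RAM you have. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        long arg_max = sysconf(_SC_ARG_MAX);
        printf("ARG_MAX = %ld bytes\n", arg_max);  /* ~2 MB on modern Linux */

        /* Build a single argument bigger than the whole budget. */
        size_t big = (size_t)arg_max + 1;
        char *huge = malloc(big);
        if (huge == NULL)
            return 1;
        memset(huge, 'x', big - 1);
        huge[big - 1] = '\0';

        char *argv[] = { "/bin/true", huge, NULL };
        execv("/bin/true", argv);
        perror("execv");  /* prints: execv: Argument list too long */
        return 1;
    }

xargs sidesteps exactly this: it batches its input into several exec calls that each stay under ARG_MAX, and (in the GNU and BSD versions) its -P flag runs those batches in parallel across processors.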


Find + xargs needs a lot more syntactic sugar before people start using it on a regular basis.


Pike also said: https://news.ycombinator.com/item?id=3075550

"..the Unix/POSIX/Linux systems of today are messier, clumsier, and more complex than the systems the original Unix was designed to replace.

It started to go wrong when the BSD signal stuff went in (I complained at the time), then symlinks, sockets, X11 windowing, and so on, none of which were added with proper appreciation of the Unix model and its simplifications."


> "To be honest, looking back on it, Unix today is worse than the systems that Unix itself was created to get away from."

That's my feeling about Go and C.


That quote stood out for me as well. Great talk though, one of the best I've seen going over all the window manager issues he worked on over the years.


Can anyone here corroborate that there are better alternatives to Unix-like operating systems? Maybe anyone who agrees with Rob Pike's view on this in particular? All I seem to hear is that Unix-y OSes are great, but I don't hear much about any potential alternatives. So I'm curious about them.


What Rob Pike is referring to is of course Plan 9 and its derivatives (say, Inferno). Whether you want to consider them Unix-likes or not is your choice (many HNers will say it's more Unix than Unix).

If you're looking for desktop alternatives to Unix, OSes to look at are BeOS (an open source clone is being developed under the name HaikuOS) and the Windows NT kernel (don't hurt me: the NT kernel is IMHO far more elegant than, say, Linux or Mach. What's crappy is mainly the WinAPI; look deep below the surface and Windows gets nice).

The reason why you won't find many desktop/server alternatives to Unix can be read here: http://herpolhode.com/rob/utah2000.pdf. Thus lots of OS research focuses on embedded stuff or virtualization instead of the desktop or server.

Accepting this, have a look especially at the L4 kernel family (a family of very small and fast microkernels). If you like highly secure designs, you'll probably like seL4, the first formally verified kernel. QNX is also worth a look (QNX is a Unix, admittedly; nevertheless a very elegant kernel design).



In the context of the NT kernel, I suppose ReactOS.org might warrant a look (though I don't really know how closely the kernel design matches NT, beyond being compatible with its drivers...).


I'm not sure what huge technical advantage the BeOS model offers over contemporary Unix-likes in this day and age, other than its file system support for extended attributes, which are a controversial topic and nonetheless still used by things like XFS.

I'd also like some clarification on what you think is good about the NT kernel. It seems far too entangled with other cruft that forms the Windows stack.

That said, I'd also like to mention the Hurd. At this point, it's really a fragile ad-hoc reimplementation of 9P file servers on top of a modded Mach, but such a model was still quite daring for general purpose computing back during the Hurd's original window of opportunity (1989-1995, or so). Probably would have advanced the state of modern OS at least a bit, if it weren't for managerial incompetence.

It's still a Unix at heart, though, despite many important extensions.


> I'm not sure what huge technical advantage the BeOS model offers over contemporary Unix-likes in this day and age, other than its file system support for extended attributes, which are a controversial topic and nonetheless still used by things like XFS.

BeOS was heavily optimized for multimedia, which is an interesting property, I think. Also, the GUI library used multithreading from the beginning. That surely isn't interesting for servers, but it is for desktop computers.

> I'd also like some clarification on what you think is good about the NT kernel. It seems far too entangled with other cruft that forms the Windows stack.

For lack of time I'll give just one example: how to do fast asynchronous I/O (you know, asynchronous I/O: the hot thing that node.js is about ;-) ;-) ). Under FreeBSD/Linux, asynchronous I/O is really just synchronous non-blocking I/O: FreeBSD had to implement kqueue to even allow doing this quickly, and the Linux developers implemented epoll (which is incompatible with kqueue :-( ). Under Windows NT it's a problem that was solved properly (from the beginning?). See

http://sssslide.com/speakerdeck.com/trent/pyparallel-how-we-...

for details.
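To illustrate the difference, here's a minimal sketch of the Linux side (not from the linked talk; the event_loop function is hypothetical, and it assumes epfd and its non-blocking descriptors were registered elsewhere with epoll_create/epoll_ctl). epoll only reports readiness, so the actual read still happens synchronously in user space; NT's completion ports instead report I/O the kernel has already finished:

    #include <sys/epoll.h>
    #include <unistd.h>

    /* Readiness-based "async" I/O: wait, then do the read yourself. */
    void event_loop(int epfd) {
        struct epoll_event events[64];
        char buf[4096];

        for (;;) {
            int n = epoll_wait(epfd, events, 64, -1);  /* block for readiness */
            if (n < 0)
                continue;  /* e.g. EINTR; a real loop would check errno */
            for (int i = 0; i < n; i++) {
                int fd = events[i].data.fd;
                /* Notification, not completion: we were only told this
                 * read won't block; the data is copied here and now. */
                ssize_t got = read(fd, buf, sizeof buf);
                if (got <= 0)
                    close(fd);  /* EOF or error: drop the descriptor */
                /* ... otherwise handle `got` bytes ... */
            }
        }
    }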


> BeOS was heavily optimized for multimedia, which is an interesting property, I think. Also, the GUI library used multithreading from the beginning. That surely isn't interesting for servers, but it is for desktop computers.

BeOS claimed to be optimized for multimedia, but that does not mean that it was. I remember that I was able to get fluid DivX (3.11) playback on a PII-300 running Windows and Linux, but not on BeOS.

What BeOS did have was a DirectShow-like media architecture, using nodes and pipelines. But at the time, it was not an effective architecture.

(And yes, the BeOS engineers never managed to support VESA GTF in their display drivers, meaning that the picture on my monitor was always shifted compared to other OSes).


DivX?

Back when BeOS was still on sale, Windows' latest versions were Windows 2000 and Windows 98, with XP around the corner.

I never remember using DivX on those systems.


Yes, DivX.

The original hacked codec appeared in 1998. It got a boost in popularity when the movie The Matrix came out (1999).

This happened in the Windows 98 timeframe. Windows 2000 was in early beta, not yet on sale, and XP was unheard of. The current Linux releases were Red Hat 5, 5.1 and 6; BeOS was at 4 and 4.5.


Actually I think I was still only using Real back then, but cannot really remember.

I would need to go dig into my Zip floppies collection, a few thousand kilometers away from my current location.


Ha, BeOS. Whenever I watch one of these https://www.youtube.com/results?search_query=beos+demo I feel as amazed as I do sad.


They're of course not a desktop option at all, but there's also some very interesting work happening with unikernels, which have only recently become a viable option now that Xen can present a unified hardware layer instead of the endless treadmill of device-specific drivers. I've been getting into OpenMirage [1] quite a bit recently, largely due to the energy of the team [2], but I've also looked a bit at HaLVM [3]. I definitely recommend [2] for anyone interested in plausible ideas of what one of the next generations of VMs/apps might look like.

[1] http://www.openmirage.org/ [2] https://www.youtube.com/watch?v=UjonFD-2ATo or http://www.se-radio.net/2014/05/episode-204-anil-madhavapedd... (audio only) [3] https://github.com/GaloisInc/HaLVM


While I agree with you regarding Windows: with respect to the WinAPI, the OS/2 API shares part of the pain, and Xlib is way worse.

I am looking forward to WinRT fixing some of the WinAPI issues.


Yes, while UNIX was being developed, these crazy guys at Xerox PARC created what has become, partially, the idea of modern computing.

A Lisp-based workstation (Interlisp-D), a Smalltalk-based workstation, and a workstation with the first GC-enabled systems programming language, Mesa/Cedar.

Those systems offered a REPL interaction with the operating system that UNIX to this day still can't match, with powerful UI workflow concepts that OLE on Windows and the ill-fated Taligent are based on.

They also had the first IDEs, unit testing, AST code transformations and live debugging capabilities.

Some of the Mesa/Cedar ideas were re-used by Niklaus Wirth on his Oberon and Lilith OS research, and corresponding derivatives.

OS/2 offered an OO-based UI, and the OS component model (SOM) was way better than COM, allowing for inheritance of implementation and meta-classes.

The Amiga UI could show multiple resolutions at the same time, and thanks to its chipset it could chew through more graphics than X Windows can dream of. Additionally, it had a concept of libraries whereby developers could extend the OS and existing applications via plugins: suddenly application X would be able to work with new file formats.

RISC OS also had quite a nice set of features, but I'll let someone else jump in on that one.


I often wish we could rewind to 1990 and start OS development again from there, using the ideas of AmigaOS, RiscOS, TOS etc.


> I don't hear much about any potential alternatives. So I'm curious about them.

Pike was talking about Plan 9. It was created as a more modern successor to Unix by the research group that developed Unix itself. From Wikipedia: "Plan 9 from Bell Labs was originally developed by members of the Computing Science Research Center at Bell Labs, the same group that originally developed UNIX and C. The Plan 9 team was initially led by Rob Pike, Ken Thompson, Dave Presotto and Phil Winterbottom, with support from Dennis Ritchie as head of the Computing Techniques Research Department. Over the years, many notable developers have contributed to the project including Brian Kernighan, Tom Duff, Doug McIlroy, Bjarne Stroustrup and Bruce Ellis."


He wasn't just talking about Plan 9. Rob was part of the generation that built Unix, and at that time operating systems research was an exciting frontier of new developments. Today, and at the time this talk was given, operating systems research is a stagnant, niche pursuit.


> Today, and at the time this talk was given, operating systems research is a stagnant, niche pursuit.

All that effort has shifted over to the browser and the web as a platform on every device. What's the point of putting such a large effort into a complicated, multi-tasking and multi-user desktop operating system if it spends 90% of its time running a browser for one user?


Well, operating systems are used for more than just end-user workstations. Servers are one example.


i loved plan9. i was into it before the 3rd edition was released and i built a small career out of it. it was and still is the most enjoyable system to work with.

i'm happy it's getting its dues in retrospect and is now considered a cool thing, but i can't help but remember how just after the first open release everybody and their uncle kept complaining about trivia: "how can I become root?" "it doesn't do x11?" "its license isn't gnu-open?"

rob is probably right: licensing killed plan9, but so did everybody who couldn't see the forest for the trees. the linux juggernaut was too popular back then.


I'm not sure licensing was the only reason. A pitch that goes "we work like X, but better" is unconvincing. Especially when it ends with "in an incompatible way."


their motto, if i recall, was never that. they had two demos about being Unixier... one involved tar-ing a process on one machine, sending it to another (or mounting that machine's cpu over the network, i don't remember), untar-ing it, and the process continued from where it was first packaged, ui state and all.


That was Plan B. I've never used it, or read the papers or the manual, but everything was centralised on a single box. The reason the tar thing works at all is because all it contains is pointers to your centralised state. In case anyone is trying to wrap their heads around the above comment.

The closest Labs equivalent would be Protium, which is for writing programs like he mentioned with the sam/samterm split, but with reattachability. Like all the most interesting application-level software for Plan 9 that actually does something for you, it wasn't released.


The Blit video he mentions: https://www.youtube.com/watch?v=emh22gT5e9k

A similarly interesting talk by him on CSP-inspired languages (Occam, Erlang, Squeak, Newsqueak, Limbo, Alef, Go): https://www.youtube.com/watch?v=3DtUzH3zoFo

Slides (not shown in the video): http://go-lang.cat-v.org/talks/slides/emerging-languages-cam...

Interesting as well: From Parallel to Concurrent (on Sawzall and Go): http://channel9.msdn.com/Events/Lang-NEXT/Lang-NEXT-2014/Fro...


In the middle of the talk, Pike said that the idea of a workstation in every office was one of the stupidest things he'd ever heard, but then decided not to digress and explain why. Has he discussed that at length in any other paper or recorded talk?


They're more expensive, first of all, and each one has to be administered individually, especially if it has a local disk. If the kernel is kept locally, then you have to go around to each machine every time you compile a new one. Same thing for applications. Same thing for, let's say, /usr/dict or something stupid.

Plus it's another mechanical component which can fail.

If the user directories are local instead of on a central box, then you

1) need to be at a specific one to get your stuff, and you

2) can't just cd into someone else's /usr/*/src and collaborate.

The Labs word for timesharing was "communal" computing, if that helps.

For a school or company, in which the computers are provided by the organisation itself, the Plan 9 way is very obviously better than insular systems.

There's more, but it's another whole tangent.

Since you asked, the main Plan 9 paper ("Plan 9 from Bell Labs", 1995) does say

"the early focus on having private machines made it difficult for networks of machines to serve as seamlessly as the old monolithic timesharing systems. Timesharing centralized the management and amortization of costs and resources; personal computing fractured, democratized, and ultimately amplified administrative problems."

but even more to the point would be: with a terminal, you just plug it in and turn it on.


Instead of having a common /usr/src/* for a dev team (I remember having one in the 80s), we now have our sources on GitHub, which allows for larger and more distributed dev teams. Not exactly the same way to collaborate, but it solves the same problem with a bigger central server. Time is round; old solutions always resurface with some differences.


"For a school or company, in which the computers are provided by the organisation itself, the Plan 9 way is very obviously better than insular systems."

Only if you can guarantee 100% (or close to it) access to high-speed network connections at all times. The value of a computer with meaningful local storage and execution is that it continues to work when the network is gone or isn't performing well. And it takes a lot less bandwidth to send data to a user's computer and let local applications worry about dealing with the data than to send a full screen to the user, if you're talking about GUI. You can also smooth out latency issues a lot more easily.

So I don't think it's obviously better in those cases. But it's very obviously worse OUTSIDE of those institutions. Here's Pike's description of his dream setup[1]:

"I want no local storage anywhere near me other than maybe caches. No disks, no state, my world entirely in the network. Storage needs to be backed up and maintained, which should be someone else's problem, one I'm happy to pay to have them solve. Also, storage on one machine means that machine is different from another machine. At Bell Labs we worked in the Unix Room, which had a bunch of machines we called 'terminals'. Latterly these were mostly PCs, but the key point is that we didn't use their disks for anything except caching. The terminal was a computer but we didn't compute on it; computing was done in the computer center. The terminal, even though it had a nice color screen and mouse and network and all that, was just a portal to the real computers in the back. When I left work and went home, I could pick up where I left off, pretty much. My dream setup would drop the "pretty much" qualification from that."

That's fine, so long as all you want to do on your home computer is do more work. But that's such an incredibly narrow vision of computing. The truly personal computer enabled a lot of things that weren't possible under the old dumb terminal model Pike pines for, and has made computers accessible to many more people. We'll continue to get some of the benefits of this as the "cloud" continues to be integrated into things, but I don't think we're going back to the "communal" computing model, and if we do, it's going to be because someone looks at the benefits of the personal computer model and finds a way to provide them in the communal model, rather than sitting there and pining about how it was in the old days before the peasants ruined everything.

1) http://rob.pike.usesthis.com/, thanks to tjgq for providing the link below


> dumb terminal

> send a full screen to the user

I think you're confusing Plan 9 with thin clients. Even the Blit in the video was a smart terminal ("you can run programs on it"). The way Plan 9 works is

> send data to a user's computer and let local applications worry about dealing with the data.

It was apparently different in the 90's ("terminal was a computer but we didn't compute on it") but nowadays in Plan 9 you do nearly everything on the terminal. For different reasons: the WM, the browsers, the editors run on the terminal for responsiveness; image and music decoders run on the terminal because they need a higher bandwidth to the screen/speakers than to the fs (jpegs, mp3s, etc are compressed); plumber and factotum run on your terminal so that you get exactly one instance of them per session, and that they die as soon as you end your session.

The central boxes are only really used for FS/backup, auth, cron, maybe mail servers or whatever.

So nearly everything runs on the terminal, but it's a Plan 9 terminal, so it's still just a matter of plugging it in, turning it on, and setting up netbooting (once).

> it's very obviously worse OUTSIDE of those institutions.

What I was going to say before I decided that it was too much of a tangent was: Plan 9 NOT being a proper distributed OS could be a plus: it's possible to use recover(4) or lapfs(4) and bring the laptop outside the network. No-one actually does that, and lapfs isn't even a real Plan 9 program in C. But it's still made feasible by the fact that your editor and so on is not on the far side of a connection and isn't going to go anywhere, unlike on e.g. an Amoeba terminal, which really is just a window manager: no editors locally, no browsers or viewers.

I dunno about using "backup" to describe the Plan 9 file server, but that's even more unrelated to my original comment.


Not really "at length", but he goes a bit into why he thinks computation should be decentralized here: http://rob.pike.usesthis.com/


Who's the guy at the end saying "let's forget where I work and where you work", asking the "hard" question?


Bill Buxton. He works for Microsoft now but used to work for Alias|Wavefront (also in Toronto as I understand it.)


Interesting, I'm gonna watch this just for him. As a Maya user, I think the people at AW were as high-end as a team can possibly be. They were doing magic on my poor Pentium II.


I don't know how famous he is in programming circles, but in HCI and Interaction Design he is well known and respected for his fantastic research on two-handed interfaces.


I encountered him in a talk at Siggraph 2001. He is a charismatic guy and used a lot of creative examples of interface design. In particular he showed http://en.m.wikipedia.org/wiki/Ammassalik_wooden_maps

It is sad that Maya is now an Autodesk product: Autodesk is such a square, white bread, corporate outfit. The interface to their flagship AutoCAD is ludicrously clunky and antique, while something like Revit (another acquisition) is clever but really dumbed-down and limited in scope. Oh well.


For anyone else wanting to watch this on Android, or just have an mp4 and slides in pdf: the archive is linked near the bottom of http://genius.cat-v.org/rob-pike/


and more talks from DGPis40 here (Pike's link to the event is broken): http://hciweb.cs.toronto.edu/DGPis40/webcasts.html


but I guess we're stuck with it until we come up with a better way to install emacs


An interesting bit of trivia about http://en.wikipedia.org/wiki/Plan_9_from_Outer_Space: The film's title was the inspiration for the name of Bell Labs' successor to the Unix operating system.


Is there a mirror of this somewhere that doesn't require Flash?



