> As for me? I switched to the Mac. No more grep, no more piping, no more SED scripts. Just a simple, elegant life: “Your application has unexpectedly quit due to error number –1. OK?”
I wonder how that worked out for Don Norman (Apple Fellow and preface author)?
(Written from my wonderful MBP, which is, happily, a Unix)
Apparently those random crashes were often caused by hardware memory corruption. I remember one manual basically said something like "if this error happens, don't worry about it, it probably won't happen much". And then you would see it for the third time that day.
One thing that made Unix really compelling back then, even without access to the system source code, was that so many different CPU types and OS variants meant that any (free) software would come as source code.
Generally for the more mainstream OSes, in my case Solaris at work and early Linux at home, the "porting" effort to make it compile was maybe an hour or two. That gave you a real sense of ownership, and a passing familiarity with the code, by the time it ran. And a head start on tinkering with it if it didn't quite work the way you liked.
This was much more satisfying than (at the time) binary-only shareware distributions for MS-DOS or (in my case, before the switch to Linux) Amiga OS.
It's interesting to see the open contempt towards anything that has to do with graphics and/or multimedia:
> No Gulag or lice, just a future whose intellectual tone and interaction style is set by Sonic the Hedgehog. You claim to seek progress, but you succeed mainly in whining.
I'd say most of the complexity of modern OSes comes from handling all those interactive 3D/UI/audio use cases, an area where desktop Linux is 15-20 years behind its main competitors. A coherent, robust story is just starting to emerge, but as a veteran Linux enthusiast, I have become wary of the hot new thing just over the horizon that promises to solve all our problems.
It's only recently that the Linux desktop has moved from the weird network-centric distributed architecture of the X server to a modern rendering architecture with Wayland. And the transition is still a work in progress - it's only production-ready with some concessions and a somewhat curated choice of hardware and software.
OS X introduced this in the early 2000s and by the mid-2000s, it was working as intended. Microsoft trailed, with the implementation showing up in Vista, and maturing in Win7 in 2009.
Looking at these timelines, the 15-20 years seems roughly accurate.
On the audio side it was quite a difficult story as well: we went from basically no solution for sharing the sound card to PulseAudio's decade of broken audio, which kinda matured by the end but still left us having to keep JACK around for high-end audio stuff.
Now PipeWire seems to be a promising comprehensive solution to Linux audio, but it's very recent, and yet to gain traction in Ubuntu-land.
I think there's cause for optimism and the pieces will fall in line, but matching the 'everything just works' promise of Windows 7 in 2009 or OS X in 2006 will take a couple more years still.
For most of my years using Linux (Mint, then Manjaro KDE, etc.) I didn't have a clue whether I was using X or Wayland. Consequently, if someone told me how behind the times I was because of that, I would have frowned.
Similarly, anecdotally, the frustrations I encountered trying to get things to work with Linux (Bluetooth is a mess) were always matched by the same on Windows, from a somewhat layman's perspective.
An old printer that wouldn't work with Windows but would with Linux. A PlayStation 3 controller (I think they were the most common type at the time) which required multiple hoops and then still wouldn't work, while being plug and play after booting into Linux.
We had a solution prior to pulseaudio, but it was the ALSA dmix option, and anyone who's ever set it up can probably tell you how sparse and cryptic the documentation for it is. If you're having trouble finding anyone that fits that description, check the Slackware crowd; dmix was the norm right on up until the last couple major releases, but it wasn't configured out of the box.
Windows Vista (released in 2006, just about 15 years ago) could survive GPU driver crash with most UI programs intact. I don't think Linux has that capability to this day.
I have been a lifelong pro audio user on Windows and I must say both JACK and PipeWire are quite something. I miss those when I am on a Windows machine.
They probably are, but JACK is aimed at tinkerers and experts, with no big distro shipping it as default, and PipeWire as default is a fairly recent (1-ish year old) thing.
I don't know if I'd agree with you about most of the complexity; a lot of it seems nicely modularized off and handled outside the OS kernel proper, or by hardware itself. I strongly disagree about Linux being 15-20 years behind though, but only because I don't think progress is so linear. X is old, yes, and missing some things, but it still works fine, and experiences are often no worse than Mac/Windows XP/7/8/10 and frequently better, depending on your needs. Apple in particular has always had bad experiences the moment you step off the reservation, and even sometimes on it (but Apple enthusiasts can make more complaints than me). Audio has been a mess, sure, though there have been periods of stability with ALSA and (though it took them long enough) PulseAudio. (I share your skepticism of the new hotness fixing things, back when it was Pulse and even the upcoming PipeWire. I've always been skeptical of Wayland long term.)
For graphics, you can still play your 3D games at 4K@60Hz, you can still hook up external monitors (and for years that has had a much better "just works" experience than it once did, such that most people no longer need to touch the Xorg.conf file), and for the weird niches that still complicate the architecture, Linux allows you to hook up more GPUs (though I think Win10 has almost caught up recently by allowing up to 12). IME works just fine if you need to type Japanese. You can still remote desktop - even in the archaic days with X forwarding, but you also had VNC, and these days NoMachine is probably the best. None quite as good as RDP, maybe, and that may be a limit of the architecture, but it's not like there's a complete absence of capability that could be used to justify that time gap as a measure of progress or an indication of how much catch-up needs to be done.
Meanwhile there are absences or near-absences when it comes to other things Linux has long had and as far as I can tell aren't really things in Windows or Mac for the most part. Having a window "Always on top" is available as a Windows extension if you know where to find it, but is not out of the box. Workspaces are another thing, very useful if you're on a constrained screen like a laptop, though I think if you know where to find those there's also some equivalent for other OSes. I'm more doubtful if there's an equivalent to anything like Beryl in 2006 (now Compiz Reloaded, sadly on life support for years, but I still use it) for eye candy or other nice window management shortcuts (grid layout hooks without having to use a tiling window manager, jumping windows between monitors or over workspaces, dimming non-focused windows, arbitrary magnification and easy zoom-to-fit-screen hooks, I could go on).
I will grant that HDR support on Linux is missing entirely (I was recently reminded of this when I almost upgraded to a new HDR monitor). Supposedly annoyances can happen when mixing monitors of different pixel densities. But overall, the thrust of my comment is again just that I think it's hard to lay out a flat line of progress where you can clearly say Linux is 15-20 years behind.
> Like excrement, it contains enough undigested nuggets of nutrition to sustain life for some. But it is not a tasty pie: it reeks too much of contempt and of envy.
UNIX did have its limitations. Also, some of its philosophies could be directly damaging if followed without due thought. Nevertheless, the success of Linux is a testament to how enduring most of those concepts still are.
On the other side, many of those criticisms were addressed, fixed, or were simply never correct. I rest easy when I read the UHH and think that many of those complainers have to use a UNIX or a UNIX-like OS nowadays. It is almost impossible to send a packet over the internet without having it processed by a Linux system at some point.
My favorite part of the anti-foreword is when Dennis Ritchie talks about how most of the better-than-UNIX OSes are "not just out to pasture, they are fertilizing it from below." Prophetic.
>Unix was designed for the computing environment of then, not the machines of today. Unix survives only because everyone else has done so badly. There were many valuable things to be learned from Unix: how come nobody learned them and then did better? Started from scratch and produced a really superior, modern, graphical operating system? Oh yeah, and did the other thing that made Unix so very successful: give it away to all the universities of the world
That is a very good question! What we have now is a bunch of old operating systems: no novelty, no new computing paradigm. Every research OS that some clever guys and hobbyists came up with was killed. Linux, *BSD and macOS are modeled after UNIX, a many-decades-old operating system. Windows is very old and Microsoft can't do anything about it, since its customers care the most about backwards compatibility.
The server landscape is not plagued so much by compatibility issues, since much software can be ported to a different operating system without needing a POSIX layer.
Also, when new computer types arrived (think smartphones and tablets), there was an opportunity to get rid of the old. Still, mobile OSes ended up being based, in one way or another, on UNIX.
In the late '90s and early 2000s there were many new operating systems proposed by companies, researchers and hobbyists for servers, personal computers and PDAs, many doing new things and not being based on other operating systems. There was reason to be optimistic and enthusiastic about the future of the operating system landscape.
Now, it seems we are adding layers upon layers of lipstick to the same old pigs. It's not that UNIX and UNIX-like systems were bad. They were OK, in the '70s to the '90s. It's not that Windows was bad. It was OK from the '90s to the 2000s. But now we need newer, better OSes to better adapt to the surrounding reality, which has changed a lot.
There have been plenty of new OSes and computing paradigms over the years, a plethora of them. They weren't 'killed', they just never got any traction. The interesting question is why that is.
Unix at its core tries to be as simple an interface to the hardware as possible, and as un-opinionated as possible. It's written in a language that is a thin layer of syntax over assembly. Files are arrays of bytes. Most devices just look like files. It imposes the absolute minimum of abstractions over how these things are implemented in hardware.
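To make that concrete, here is a minimal C sketch of the "files are arrays of bytes, devices look like files" point; the paths are only illustrative and error handling is deliberately terse:

```c
/* Minimal illustration of "everything is a byte stream": the exact same
 * open/read calls work on an ordinary file and on a device node.
 * Paths are illustrative; error handling is kept short on purpose. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static void read_a_bit(const char *path)
{
    unsigned char buf[16];
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror(path); return; }

    ssize_t n = read(fd, buf, sizeof buf);   /* same call either way */
    printf("%s: got %zd bytes\n", path, n);
    close(fd);
}

int main(void)
{
    read_a_bit("/etc/hostname");   /* a regular file */
    read_a_bit("/dev/urandom");    /* a character device, same interface */
    return 0;
}
```

The abstraction only leaks when you need something a plain byte stream can't express (ioctl and friends), which is where the more opinionated designs below come in.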
Alternate OS designs are generally far more opinionated. File systems are either relational or object databases. Networks and machines are abstracted away so you don't even know they're there. Lots of fancy virtualisation. The problems are many. First of all this imposes a cost in resources and performance on even the simplest applications. Second these things are best thought of not as operating systems, but as applications. They're providing services similar to those of Oracle, or Hadoop, or VMWare, etc but you don't get to choose which implementation you want, you get the particular combination of ones provided by the OS and either like it or lump it.
The minimalism of the base Unix layer is its superpower. Whatever additional services you want on top is fine by Unix. That's why it makes a great underlying layer for the Mac, iOS and Android. Even if you do want to build an opinionated high-level system, why not build that on top of Unix? We don't need a workmanlike, unopinionated new base-layer OS that provides efficient access to hardware resources, and nobody wants to write one. They want to write high-level, fancy, abstraction-laden, half-OS half-application monstrosities. STOP! You're not writing an OS, you're writing an application. Even if anyone does want to write a lean, efficient base OS layer, nobody cares, because we already have Unix.
Unix was a relatively simple (but not minimal) base layer around a 1970s minicomputer architecture, one with no networking, no graphics support, and dramatically constrained storage, with users on text-only terminals.
It is a very poor fit to a single-user personal workstation.
If you think that all other OSes are more complex, then I suggest that you investigate the fairly rich space of other OSes designed for single-user workstations.
Notably, I urge you to study the 1980s and then 1990s successors to Unix from Unix' own creators:
Plan 9, for networked graphical machines.
Then Inferno, for whole heterogeneous networks of machines with dissimilar CPU architectures.
In parallel non-Unix spaces:
If you have a Raspberry Pi, try RISC OS. It's a single-user, fully-graphical, internet-capable multitasking OS whose core fits into about 6MB. I ran it in the 1980s on an ARM workstation in 1MB of RAM.
Also Niklaus Wirth's Oberon project, an entire multitasking single-user OS with a tiling-window interface in a type-safe language, which is about the size of a very basic Linux console text editor such as nano.
And its successor A2, which is a little bigger (but still smaller than, say, Vim), has a full GUI, is SMP-aware and can just about access the WWW.
For networked use, perhaps read up on Novell Netware. Arguably the peak was Netware 3.11 but Netware 4 is also instructive. In its day it had a performance advantage over the best-tuned competition of around 4 digits in percentage, i.e. it was around 10x-30x faster than any rival UNIX of its time at what it did.
For the space of supporting dissimilar CPUs in a single binary, look into Taos and its successors Elate and Intent.
There are a lot of OSes out there that make even 1980s UNIX look bloated and over-complicated.
> If you think that all other OSes are more complex, then I suggest that you investigate the fairly rich space of other OSes designed for single-user workstations.
The problem with a lot of these alternate OSes was that they were still too opinionated. Plan 9 tried to abstract away individual computers and impose a particular approach to sharing resources. Oberon was, as you say, type-safe and object-oriented, and tied to a particularly peculiar UI paradigm. Netware was barely an OS, really just a highly optimised network file system.
Tellingly a lot of the services and applications that were parts of these OSes made their way into Unix. That’s because these were application level services that had been tied into the OS at a low level. Too opinionated. If you wanted to access compute resources using a different model to Plan9, then Plan9’s implementation got in your way. If you didn’t want to run software using the Oberon language, the Oberon OS got in your way. If you wanted to do something other than access files using the Netware protocol, Netware got in your way.
To the extent any of these OSes were simpler or easier to develop on than Unix, their advantage simply wasn’t enough of a win to make it worthwhile.
That’s the historical argument though. For the forward looking situation, there is a lot of interesting work being done but it’s all at levels above that of the OS. Network services, storage systems, programming paradigms, resource abstraction. There’s plenty of innovation in these going on, it just doesn’t need to be done as something baked into a new underlying OS. There’s nothing wrong with building it as a service on top of Unix.
Most of those things really aren't part of the OS as such, they're just applications. Command line tools and UIs for example. They're just utilities, they don't matter at the level we're talking about here, about the operating system as a platform for developing and running software. You're complaining about the software people run on it, not the base layer of the OS itself.
This is what I'm trying to point out, too many other OS projects rolled abstracted services into the base layer of the OS in order to make things easier to develop applications. That was a mistake because it tied them to particular implementations of those services.
In fact I'd go so far as to say that none of those things are applications.
But one of the problems of discussions about concepts and philosophies in computing, I find, is that people are very wedded to their ideas and will eagerly redefine them on the fly in an attempt to defend them.
So I think that a fairly useful example of what counts as "part of the OS" today would be, for instance, what comes along with the OS on its installation medium. And to dodge questions of who-wrote-what, let us take as examples, say, FreeBSD or OpenBSD. They are the products of single teams of people, unlike any Linux distro.
So they are recent, they are current versions, and can reasonably be described as two of the most complete modern versions of UNIX that are the products of single teams.
Excluding Firefox, for example, as being manifestly an external component, albeit a FOSS one that can be installed directly from either OS's repos.
Both come with an X11 server as a standard OS component. That is not an application. Both offer Xfce as a standard desktop. That too is not an application.
An X server is every bit as much an OS component of 21st century UNIX as is `awk` or `sed`. It is not 1974. We are not using PDP-11s any more.
Tell me, can you pipe things to an X server? Can you usefully pipe its output to another program?
I don't think that you can. And that, I submit, blows clean out of the water all the high-minded stuff about the conceptual purity of UNIX.
Oddly enough, I quickly googled, and it seems AT&T decided to target the embedded market with Plan 9, which feels like a bit of a weird niche to put it in.
There’s no way I can see a new operating system dethroning the NT/*nix world at least not without somehow catching up to decades of hardware no one but those two support.
9front is still alive and maintained, and there are two forks that I know of, Jehanne and Harvey.
Inferno has been ported to at least one smartphone.
Both run natively on certain models of Raspberry Pi.
Another strategy is what OpenVMS is doing initially: there is a single model of reference hardware, but the initial version 9.2 now available runs on and supports Xen, KVM, VMware and Virtualbox. Hyper-V is coming in 9.2-1.
But the real question is: should it support decades of hardware? Why, if it's something new, doing something different? Perhaps it only needs to support its own hardware, and if it proves viable, then additional models and support can be added?
I did a Fosdem talk on this general idea, and it's been a recent HN discussion. The title was "Starting Over". You might find it interesting.
> That is a very good question! What we have now it's a bunch of old operating systems, no novelty, no new computing paradigm.
Entirely false, if you have kept up with the state of the art in operating systems.
Many still expose their old APIs by necessity of course, but the internals have undergone leaps and bounds improvements and new APIs and functionality are added all the time.
I agree completely. Just look at io_uring and PowerShell, to name some examples.
io_uring totally rethinks the notion of a "syscall". It is going to eat the world, and most of us are not even going to realize that it happened.
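For anyone who hasn't looked at it yet, here is a minimal liburing sketch (assuming liburing is installed and you link with -luring): instead of issuing a blocking read(2), you queue the operation and reap a completion; the real payoff comes from batching many operations per submit.

```c
/* Minimal io_uring example via liburing: queue one read, submit, wait
 * for its completion. Real programs batch many SQEs per submit, which
 * is where the "rethinking the syscall" part pays off. */
#include <liburing.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    struct io_uring ring;
    char buf[4096];

    int fd = open("/etc/hostname", O_RDONLY);        /* path is illustrative */
    if (fd < 0 || io_uring_queue_init(8, &ring, 0) < 0)
        return 1;

    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
    io_uring_prep_read(sqe, fd, buf, sizeof buf, 0);  /* describe the operation */
    io_uring_submit(&ring);                           /* one syscall, N operations */

    struct io_uring_cqe *cqe;
    io_uring_wait_cqe(&ring, &cqe);                   /* reap the completion */
    printf("read returned %d\n", cqe->res);
    io_uring_cqe_seen(&ring, cqe);

    io_uring_queue_exit(&ring);
    close(fd);
    return 0;
}
```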
PowerShell is a really cool example of re-imagining input and output. Namely, it is record-oriented ("Object") rather than byte-oriented. No more stringing together chains of fragile "awk" commands to make pipelines do what you want. (I know there is prior art here)
If you think OSes are still moving, then you're the one who's stuck.
Just lots, wherever you look. io_uring and eBPF are the new hotness, but we've had operating systems scale up from a few CPUs to hundreds or thousands in the past few decades, we've had filesystems like ZFS and BTRFS with checksums and snapshots and versions, there's namespaces and containers, hypervisors built into the OS and paravirtualization, multiqueue devices with command queue programming models, all sorts of interesting cool things.
Before someone jumps in and says we've had all these things before, including io_uring and eBPF (if you squint) and scaling to hundreds of CPUs and versioning filesystems and so on: that is true. IRIX scaled on a select few workloads on multimillion-dollar hardware, and even that fell in a heap when you looked at it wrong. The early log-structured filesystems that could do versioning and snapshots had horrific performance problems. IBM invented the hypervisor in the 1960s, but it was hardly useful outside multimillion-dollar machines for a long time. Asynchronous syscall batching was tried (in Linux even) about 20 years ago, as were in-kernel virtual machines, etc.
So my point is not that there isn't a very long pipeline of research and ideas in computer science, it is that said long pipeline is still yielding great results as things get polished or stars align the right way and people keep standing on the shoulders of others, and the final missing idea falls into place or the right person puts in the effort to put idea into code. And those things have been continually going into practical operating systems that everybody uses.
Unpopular opinion but the answer should be self-evident and has little to do with technical merit: cheap and just barely good enough beats expensive and better every time. And Linux is free as in beer. It's not unlike reports of how generous food donations to poor countries bankrupted local farmers; it's impossible for a wannabe new OS vendor to afford to pay its engineers long enough for their OS to gain traction over the incumbents, particularly when one of them is free.
The only two options I see are either one of the FAANGs spend the billions necessary on the long, hard slog to push a next generation OS until it takes off or another person of Linus Torvalds' stature pops up out of nowhere again to build an OS so amazingly good that it takes off on its own like Linux did. Both scenarios seem improbable.
I think most people seriously underestimate the development effort a usable general-purpose OS takes. It's actually a global-scale effort, and the few companies that still have their own can only do it because they captured large parts of the global market. And even if we ignore this, if someone comes up with objectively better APIs, Linux can just grow an 80% solution with a few new syscalls (e.g. cgroups or io_uring), and the rationale for their potential users evaporates.
The whole all but violent upheaval caused by systemd (not rehashing it here/now) seems to stem from the Unix philosophy not being adhered to. Pulseaudio similarly used to draw a lot of ire (perhaps still does; I haven't seen as much of it lately).
Personally I love the Unix philosophy for most things because of how easy it makes automating a lot of things, but there are realities to crafting a modern graphical desktop system that can at times be incompatible with it. Sometimes a front end for a text based tool makes sense and things can be adhered to; sometimes not so much. Personally at LEAST including an API is my design preference, but, well, sometimes you work with the software you get.
I've been thinking about that. I think the new "pipe" is filesystems instead of text. That's how I've been using Docker volumes in GitLab CI/CD the last couple of years.
EDIT: The alternative is PowerShell. You carry objects over the pipe instead of text. I don't like PS personally.
OSes these days are a lot, a lot more complex and richer than the "original" old ones. Sure, there is some compatibility with POSIX because it is a nice common factor to start from, and it incorporates some ideas about multitasking and multiuser systems, but everything is substantially different.
POSIX permissions coexist with ACLs and much stronger security models, code signing, sandboxing, etc..
Compressed memory, copy-on-write, snapshotting filesystems, graphics as a privileged citizen, OS handling of energy efficiency, and so on and so forth...
We have long outgrown the "original" OSes and today commercial OSes have a ton of man-hours behind them that make every hobbyist OS pale in comparison.
It's not lipstick on old pigs, POSIX is an old lipstick to make the new pigs somewhat compatible with one another.
"In those days it was mandatory to adopt an attitude that said:
• “Being small and simple is more important than being complete and
correct.”
• “You only have to solve 90% of the problem.”
• “Everything is a stream of bytes.”
These attitudes are no longer appropriate for an operating system that hosts
complex and important applications. They can even be deadly when Unix
is used by untrained operators for safety-critical tasks."
Richard Gabriel's "The Rise of Worse is Better," written around the same time period the Unix-HATERS Handbook was written, gives some clues. Unix was contrasted with environments such as Scheme (which is rather small, but because it's designed as a "crown jewel"), Common Lisp (the exemplar of a "big complex system" that is complete and correct, but large and complex), and the ITS operating system (https://en.wikipedia.org/wiki/Incompatible_Timesharing_Syste...). Thankfully there are many open-source Scheme and Common Lisp implementations, and ITS is also available as open source (https://github.com/PDP-10/its).
Of course, modern Unix-like systems these days are large and complex, though Plan 9 and Inferno are quite architecturally refined, reducing some of the complexities that you'll see in contemporary Unix-like systems.
At the time The Rise of Worse Is Better was written, Scheme had:
- no error handling. The specification described that certain situations result in an error, without defining what that means, or how you can catch and recover from one programmatically.
- no module system. The Scheme report didn't describe any feature for decomposing a Scheme program into multiple files. No rules governing how literal data goes into compiled files and what the rules are for recovering symbols. No evaluation controls (like don't evaluate this form when compiling, but only deposit it into a compiled file and such).
Unix was a useful production system used for running businesses, with error handling in its API's and compilation of programs decomposed into multiple files.
The "PC losering" problem was nicely resolved. Unixes developed sigaction, where you can specify it both ways: interrupts can bail the system call (which is sometimes what you want) or restart it (which is what you want at other times). Neither is inherently better. Sometimes you really don't want a long system call to go back to sleep as if nothing happened. A third solution is possible though: your signal can resume the system call when it returns normally, or else abandon it with a siglongjmp, which restores the signal mask to what it was at the sigsetjmp point. Then there is is various other signal paraphernalia. Basically the Unix people had the right intuitions in this area, and took an incremental approach whereby they gradually rolled in more correctness.
That's kinda the whole point of the essay: getting something up and running quick that is 'complete' (if not particularly elegant) is better than slowly and carefully working on a Good System. Unix ate Lisp's lunch because you could build things on it right now.
That essay definitely was a bit of a complaint about the crudeness of Unix. But I read it primarily as a wake up call (not to say epitaph) for the Ivory Tower Lisp developers who were dreaming about the perfect system while Unix ate their lunch.
What ate Lisp's lunch wasn't necessarily the Lispers dreaming of a perfect system. Rather, Unix was far more accessible in terms of availability than the contemporary Lisp systems coming out of Symbolics and Xerox at the time Unix really started to take off. Before the breakup of AT&T in 1984, Unix was available to universities under comparatively generous licensing terms. While source licenses were very expensive for companies and (after 1984) for universities, binary licenses were relatively inexpensive, and beginning in the 1990s we saw open source Unix-like operating systems such as Linux and the BSDs. Unix was not restricted to a particular architecture; it ran on a wide variety of hardware.
Contrast this with Symbolics Genera and Interlisp-D, the premier Lisp operating systems of the 1980s. A Symbolics workstation can easily cost five figures in mid-1980s dollars, and cheaper alternatives such as the MacIvory boards and OpenGenera on DEC Alpha machines weren't released until later in Symbolics' history. Interlisp-D originally ran on Xerox workstations that also cost five figures, though it was later ported to the Sun SPARC architecture (though I have no idea what Interlisp-D licenses cost). Both Symbolics Genera and Interlisp-D missed out on the open source revolution of the 1980s and 1990s. To this day Symbolics Genera remains proprietary, though thankfully Interlisp-D was made open-source recently (https://interlisp.org/). There are open-source Common Lisp compilers such as SBCL, but they are not full-fledged operating systems.
Part of the reason why Unix took off had less to do with Unix's design and had more to do with Unix's lower costs. DOS and Windows had even lower costs than Unix before open source Unix clones appeared, and they were (and, in the case of Windows, still is) widely used. While Unix's design characteristics certainly play a role, we shouldn't ignore the impact of cost and licensing.
In an alternative universe, imagine RMS had set out to build a FOSS Lisp operating system (GNU Emacs doesn't count) instead of building GNU. RMS was a Lisp hacker at MIT who started the GNU project due to his frustrations with the proprietary Lisp machine companies like Symbolics that were spun off from MIT's AI lab, so it's not unreasonable to imagine an alternative history where RMS decided to clone a Lisp OS instead of cloning Unix. I wonder if a community could have rallied behind an open-source Lisp OS to be a serious contender to Unix back in the 1980s?
Lisp machines were mostly just a single type of computer: a graphical workstation for Lisp developers. Those were mostly in R&D, typically in AI - largely financed by DARPA and similar agencies, hoping AI software would provide leadership in tech and the military. The market for their software and hardware was on the leading edge, which was later taken over by offerings growing up on cheaper platforms.
They had only one or two operating systems to fork others from: single-user, no terminal story, no security, needing large amounts of memory and specific expensive hardware (graphics, disks, custom boards, ...), not easy to port, needing complex (and hard to debug) memory management, with the scarce pool of Lisp system programmers concentrated in a small number of companies, no public source story, ...
Many open source operating systems are copies or forks. There was not much original research. Writing a new operating system (say, a portable Lisp OS with device drivers, multi-user capabilities, security, terminal and GUI usage) would have needed a lot of work, and there were not enough combined Lisp AND systems programmers to do that - educating them takes a lot of money.
Even today, the state of the 70s/80s Lisp OS hasn't been reached again: there is no comparable operating system written in Lisp and the old ones exist only as emulators of the past.
The tech base of the early Lisp system was too focused, there was no room to mutate and grow to different platforms in different incarnations. There were attempts (like embedded Lisp hard- & software) but that had no effect in the market.
UNIX was developed for a completely different market: simpler base software, very portable, multi-user, client&server, modular programs, terminal, ... a bunch of companies used UNIX as a base for their workstation offerings (SUN, SGI, HP, IBM, NeXT, ...) - the rest is history.
But Lisp-OS-like software on UNIX systems was working well in the '80s and early '90s. It still needed large amounts of RAM, large amounts of virtual memory, responsive GUIs, advanced GC support, ... some high-end machines could offer that, and they fixed enough OS bugs to run large Lisp systems. Every larger UNIX vendor had some Lisp story as part of their general offerings.
The Lisp people absolutely got something up and running quickly, which is why Lisp is one of the earliest languages. Big, institutional Lisp systems were in a different state of maturity when Unix was coming up.
Early Lisp didn't have defmacro with destructuring, backquote, lexical scope, condition handling, structures, hash tables, ... you wouldn't want to use it today.
Sometimes I wonder, if you wanted to create an operating system like ITS for more modern platforms, what it would look like?
ITS is written in PDP-10 assembly, what if someone wrote a compiler which read in PDP-10 assembly language and spit out C code? Could that be a first step to porting it?
It is surely a lot more complicated than that. It contains self-modifying code, which would obviously break that translation strategy. A lot of hardware-specific code would have to be rewritten. 6 character filenames without nested directories might have been acceptable in the 1970s, but few could endure it today. A multi-user system with a near-total absence of security was acceptable back then, obviously not in today's very different world.
ITS does have some interesting features contemporary systems don't:
A process can submit commands to be run by the shell that spawned it – actually MS-DOS COMMAND.COM also had that feature (INT 2E), but I haven't seen anything else with it. A Unix shell could implement this by creating a Unix domain socket, and passing its path to subprocesses via an environment variable–but I've never seen that done.
Another was that a program being debugged could call an API to run commands in its own debugger – I've never seen that in any other debugger, although I suppose you could write a GDB plugin to implement it (have a magic do-nothing function, then a GDB Python script sets a breakpoint on that function, and interprets its argument as commands for GDB.) Actually, in ITS these two features were the exact same feature, since the debugger was used as the command shell.
Another was that a program had named subprocesses (by default the subprocess name was the same as the executable name, but it didn't have to be.) Compare that to most Unix shells, where it is easy to forget what you are running as background jobs 1, 2 or 3.
> A process can submit commands to be run by the shell that spawned it – actually MS-DOS COMMAND.COM also had that feature (INT 2E), but I haven't seen anything else with it. A Unix shell could implement this by creating a Unix domain socket, and passing its path to subprocesses via an environment variable–but I've never seen that done.
What would you use it for? How is that superior to simply spawning a new shell process?
Norton Utilities for DOS came with a program called NCD (Norton Change Directory). It was a full-screen replacement for the CD command: you could browse through the filesystem to get to the directory you wanted, then it would exit and return you to COMMAND.COM's prompt, with you already in that directory. That worked because under DOS, the current directory was system-wide, [0] not per-process, so a subprocess could change the parent process's current directory. On Unix systems, and the Windows NT family, the current directory is per-process, so that doesn't work any more.
Now, you can implement the same idea on Unix–but it is more complex. You need to define a shell function in your .profile/.bash_profile/.zprofile/whatever. That shell function then executes the external change-directory program, and passes the destination directory back to the shell function somehow. The shell function then actually changes the directory.
What if shells had exposed some kind of standard API to their subprocesses? Maybe something as simple as an environment variable containing the path to a Unix domain socket which accepts a simple text-based protocol. You could then implement an NCD-style command, by having it send the shell which called it a request to change its directory. That way, you would not have to install such a command by modifying your profile and restarting your shell, the command would just work the first time you ran it.
Similarly, there are a lot of tools out there for supporting multiple versions of development tools concurrently (Environment Modules, venv for Python, nvm for Node, conda, etc.) Most of these tools work by manipulating your shell environment. Since, on Unix, environment variables are per-process, and a process cannot modify the environment of its parent, you have to install some shell function in your profile and restart your shell before using one of them. Once again, if shells exported some kind of "get/set shell environment" API to their subprocesses, these kind of tools would work the first time you ran them, without any need to modify your shell profile.
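A hypothetical sketch of the client side of such a protocol (the SHELL_CTL_SOCKET variable and the one-line "cd <dir>" request are inventions here, not anything real shells expose): an ncd-style tool would simply send the request and let the parent shell act on its own state.

```c
/* Hypothetical client for the shell-API idea above. It assumes the
 * parent shell listens on a Unix domain socket and publishes its path
 * in an invented SHELL_CTL_SOCKET variable; the "cd <dir>" line is an
 * equally invented protocol. Nothing like this ships with real shells. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *sock_path = getenv("SHELL_CTL_SOCKET");   /* invented convention */
    if (!sock_path || argc < 2) {
        fprintf(stderr, "usage: ncd <dir>  (needs SHELL_CTL_SOCKET set by the shell)\n");
        return 1;
    }

    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_un addr = {0};
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, sock_path, sizeof addr.sun_path - 1);
    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("connect");
        return 1;
    }

    dprintf(fd, "cd %s\n", argv[1]);   /* ask the parent shell to chdir itself */
    close(fd);
    return 0;
}
```

The shell side would just accept connections and apply requests to its own process state, which is exactly the part today's version-manager tools have to fake with sourced shell functions.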
[0] Actually, in MS-DOS and Windows 3.x/9x/Me, there is both a system-wide current drive, and a separate system-wide current directory for each drive – Windows NT family's cmd.exe simulates this by using a hidden environment variable per a drive to store its current directory, but that is a convention which only cmd.exe supports, the rest of Windows knows nothing about it.
> That worked because under DOS, the current directory was system-wide, [0] not per-process, so a subprocess could change the parent process' current directory.
For people who don't know DOS: DOS didn't really have multiple processes. COMMAND.COM made part of it resident (TSR), which is how it "survived" invoking other applications. At any given point, there was exactly one process running, and all the state you'd typically associate with a process was "system-wide".
> For people who don't know DOS: DOS didn't really have multiple processes.
Well, compared to CP/M, it did. CP/M only supported a single program being loaded into memory at a time [0]–because every program was loaded into memory at the same absolute memory address (0x100). By contrast, DOS could have multiple processes in memory simultaneously, even though only one of them at a time could be the active foreground process. DOS processes can launch child processes; the parent will be suspended (while remaining in memory) while the child process executes, but will then resume execution once the child process finishes. Under CP/M, that was impossible, there was no API to do that, the architecture didn't support the existence of one.
Indeed, while the main execution of the parent process would be suspended while the child process ran, any interrupt handlers installed by the parent would still be invoked, so some of the parent's code could still run while the child was executing. A parent process can even expose an API to its children – which is exactly what COMMAND.COM does (such as INT 2E which I mentioned)
That's what makes TSRs possible. A TSR is essentially just an ordinary program, the only difference is that when it exits, it tells the OS to leave the process in memory rather than unloading it. By contrast, the CP/M equivalent to TSRs, RSXs (Resident System Extensions), are completely different from ordinary CP/M programs. They are loaded at a variable address near the top of memory, instead of a fixed address at the bottom.
> COMMAND.COM made part of it resident (TSR), which is how it "survived" invoking other applications.
COMMAND.COM isn't a TSR. COMMAND.COM's "residence" is essentially the same as any other program which spawns a child program under DOS, the parent always remains in memory while the child runs. Unlike a TSR, when a COMMAND.COM instance terminates (the initial COMMAND.COM instance won't, but inferior instances will), it doesn't stay in memory, it is unloaded.
What makes COMMAND.COM somewhat unusual, is that it splits itself into two portions, a "resident" portion and a "transient" portion. The resident porition is loaded at the bottom of memory, the transient portion is loaded at the top of free memory, without being officially allocated. Since DOS normally allocates memory in a bottom-up manner, if the child program doesn't use much memory, the transient portion will not be overwritten. When the chlid program returns, COMMAND.COM tests the integrity of the transient portion – if it finds it has been overwritten (because the child program needed that much memory), it reloads it from disk before continuing. That's quite unusual behaviour, but still rather different from how TSRs behave.
> At any given point, there was exactly one process running, and all the state you'd typically associate with a process was "system-wide".
That's not true. When you spawn a child process, you can either let it inherit your environment variables, or you can provide it with a new environment segment – thus each process in the system can potentially have different environment variables. Similarly, file handles are per-process. Similar to Unix – indeed, the DOS 2.x file handle design was copied from Unix – every handle has an "inherit" flag, which determines whether child processses inherit that handle from their parent. DOS keeps track of which processes have which files open, so when the last process with that file open terminates, the file is closed. Likewise, allocated memory blocks are per-process, so when a process is unloaded, DOS frees its memory blocks, but not those of ancestor processes or TSRs.
If DOS has per-process environment variables, file handles and memory allocations – why not current directories as well? I don't know, but I can speculate: DOS 2.0 copied the idea of a current directory from Unix, so like Unix they probably would have initially planned to make it per-process. However, requirements for backward compatibility with DOS 1.x programs pushed them towards having a separate current directory per each drive. Keeping a separate current directory per-process might have been feasible if it was just a single current directory, but having to do so separately for each drive would have made it a lot more complex and wasteful of memory. I think that is why they made it system-wide instead of per-process.
Microsoft always planned to make MS-DOS a multi-tasking operating system, and much of its internal architecture was designed to support gradual evolution towards multi-tasking [1] – which makes its architecture somewhat closer to a multi-tasking operating system than that of a true single-tasking OS such as CP/M. In fact, they even developed a multi-tasking version of MS-DOS (the so-called "European MS-DOS 4.0" [2]) but it was largely unsuccessful. Eventually Microsoft gave up on the idea of making MS-DOS multitasking, but many of the ideas and much of the experience they developed in trying to do so ended up going into OS/2 and Windows instead.
[0] CP/M later evolved into MP/M, which was a true multi-tasking operating system, but I'm not talking about that here
> That's what makes TSRs possible. A TSR is essentially just an ordinary program, the only difference is that when it exits, it tells the OS to leave the process in memory rather than unloading it.
Yeah, I remember programming a bunch of those in TurboPascal. There were compiler directives to limit the heap size and some inline assembly was needed to save/restore the stack inside the interrupt handler, but once it worked, the rest was like a normal program.
It was very useful to have a calculator, taking notes, etc. Now we take multitasking for granted, of course.
> On Unix systems, and the Windows NT family, the current directory is per-process, so that doesn't work any more.
But it still bites me every once in a while with cd's per-drive current dir. pushd doesn't have this issue, but I need to remember to use it - usually after a failed cd on the other drive.
Correct and complete systems are large and complex, sure, but they are almost always comprised of small and simple parts. Unix in the 90s was around 300k LOC, hardly a small and simple system, but if we're to believe the author, it must have been made of small and simple parts.
Multics. The Unix philosophy (and name) is a direct response to the failure of Multics. And they were right. It's true that by the 80's, we were starting to see real competition. But really what distinguished Unix in the 70's was that it worked at all on the kind of interactive hardware its users wanted to buy. Most stuff was either missing huge features, aimed at IBM style batch environments, or priced out of reach. Unix worked.
It worked because it was necessary. And occasionally still is.
But we have workable alternatives now. We have bigger teams with version control servers, static analysis, and safer languages.
We can make it general enough to attract 1000 developers; then those 1000 developers can all decide to go for correctness in all cases with high feature richness.
"Software" barely existed at the time of UNIX. UNIX philosophy basically says "Don't write software, assemble it onsite as needed just for one task".
UNIX was amazing, and provided a great base to eventually evolve Linux and co, but it no longer seems to be the only way to do things.
>We have bigger teams with version control servers, static analysis, and safer languages.
That’s why enterprise software is such crap - what you are describing is part of the problem, not the solution.
Also, at that time big teams definitely existed: see OS/360 and Fred Brooks’ book. High level languages were there too, eg Lisp. What you want to replace Unix with is exactly what Unix itself had replaced.
I'm not sure I've ever been on the user side of true "enterprise software", since I don't do business-side work, but basically all the commercially-inspired stuff is made with the same idea.
Feature rich, high level languages, high integration, opinionated workflow. Krita, FreeCAD, LibreOffice, Ardour, browsers, and VSCode mostly all seem to fully ignore anything UNIXy, and are amazing.
I don't really see software doing significant amounts of sucking these days. When I do, it's usually because it's subscription or cloud dependent, or because they left out something important for simplicity's sake.
Software basically runs the entire modern world. The expectations of computers are no longer to just "Compute". Everything is interactive, embedded systems are everywhere, and the stuff that really is "Computing" is often GPU accelerated or distributed.
Computers were their own separate thing in the UNIX era. They weren't fully replacements for any other device yet, outside of research. People used real filing cabinets and records or cassettes.
Large complex software enables huge classes of applications that would otherwise have enough friction that nobody would want them; you'd just get a pen and paper rather than reading a man page to do what should be a 4-second task.
And when you have dozens or hundreds of them, not many people are going to want to learn them all, and actively go out of their way to stay current with them.
People complained CONSTANTLY about computers into the early 2000s. Somehow it was always a challenge, you always had to spend time figuring out how to get the computer to do what you wanted, and you might often wonder if it was worth using them at all.
UNIX users have a very different perspective from average users. They spend a lot of time dealing with text and processing data, and mostly like computers for academic reasons or for tasks that are completely impractical by hand. Not many of them seem to actually want an ubiquitous IoT type future with an app for anything.
With an appreciation of simplicity seems to come a love of the analog, because that's the ultimate UNIX philosophy, not using a computer at all.
1. It's been the case for many, many decades; nothing new there, and
2. All your examples are end-user applications, not operating systems.
Operating systems - and perhaps frameworks in general - are different, in that complexity is much more of a problem. An operating system is not just another application; it's the foundation of the entire thing. All this text-processing stuff you've mentioned: this is just a part of Unix, the "icing". What matters lies below.
>Computers were their own separate thing in the UNIX era.
Back when computers were their own separate thing, Unix was just one of a bunch of vastly different systems. A very large market share - perhaps even bigger than Unix's - belonged to VMS, and MVS was a thing too; then you had a bunch of less common ones, like Pr1me.
Now, however, literally every phone runs Unix.
>With an appreciation of simplicity seems to come a love of the analog, because that's the ultimate UNIX philosophy, not using a computer at all.
Or mechanical :-) Again, this is very true, but it is like this for a reason: it's because we understand the complexity involved and its consequences.
Linux really blurs the lines with what is part of the OS, and what is part of the application, since there's now a common set of middleware daemons that are used in most mainstream distros.
It used to be much more loose and modular, and also much more unpredictable with no real stable platform.
Now it's more like GNU/Linux/systemd/DBus/xdg, and most of what I love about Linux comes from the fact we've eventually evolved a real "platform" that is vaguely standard between popular distros, with just enough modularity to make devuan and void possible so we don't get rioting.
Android is even more like that. In practice it's inseparable from all the other stuff on top, and you never interact with UNIXy concepts. They don't really even want you to directly deal with files.
And.... it all works fine. Android is a bit of a developer nightmare, but it's wonderful for users, and Ubuntu-likes are great for both users and developers (as long as they don't want to customize stuff under the hood too much).
I rarely meet a techie who doesn't have quite the fondness for the mechanical though, and they all seem to be very smart, and very capable, so there must be some sort of reason.
I've rarely had trouble with the more complex things, either at home or at work, so I suspect some of it comes down to how much you value a sense of control and understanding.
You're probably gonna have a bad time with a mega-ultra-framework if you want to design an architecture to fit the task, and your own vision of Good Code, and then implement that.
But you'll probably have a great time if you enjoy finding ways to fit the application into The One True Way the framework is built for.
It seems like a lot of people who enjoy analog and especially paper notetaking specifically really like the lack of any predetermined structure.
Simple things always claim to be very logical and consistent and built from a small set of concepts, but it seems like the real result, and probably why people like them, is that they invite you to build more yourself, so you have in practice a near infinite set of features with a different subset in every project.
I'm reading over NASA's 10 rules for safety critical code and it's littered with the word simple and keeping things small. I don't think NASA is an example for large and complex at all.
Not to mention NASA is known for many of the most famous software bugs in history, but I'm guessing that's related more to space exploration than to anything particularly novel/incorrect about their approach to software.
I'm very glad modern Linux basically has nothing to do with UNIX. It has a lot of really neat ideas, but I don't want to use it.
I would much rather use Linux, that totally breaks with all that, even the "Everything is a file" idea, now that it's rare to directly interact with the file-like nature of anything but an actual file.
Even then, I think Android is just a bit better than Linux for a lot of things, only held back by how hard it is to write software compared to any non-mobile platform, and they are even farther from UNIX.
Unix leaps from being an ordinary virus to one capable of managing hordes of itself as a unit, permanently ensuring its existence on multitudes of hosts. Nothing is safe now. The twilight for humanity is near, for at some point all humans will be replaced entirely with Unix.
I don't think that could be written down and published today.
“I liken starting one’s computing career with Unix, say as an undergraduate, to being born in East Africa. It is intolerably hot, your body is covered with lice and flies, you are malnourished and you suffer from numerous curable diseases. But, as far as young East Africans can tell, this is simply the natural condition and they live within it. By the time they find out differently, it is too late. They already think that the writing of shell scripts is a natural act.”
— Ken Pier, Xerox PARC
Interesting to note that the foreword was written in 1994 by Donald Norman (author of The Design of Everyday Things), who was, ironically, an Apple Fellow at the time.
I think in the old days it was much harder to maintain a box. You'd get a new server, you'd have to configure email, create print queues, move partitions around, keep an eye on system logs, etc. Now people use Linux but don't really care - the server will only be alive for a few hours and it only has to do one thing. That makes living with Unix a lot easier.
https://news.ycombinator.com/item?id=19416485
https://news.ycombinator.com/item?id=13781815
https://news.ycombinator.com/item?id=7726115
https://news.ycombinator.com/item?id=3106271
https://news.ycombinator.com/item?id=1272975