> There is only one exception to the rule of identical treatment of files on different devices: no link may exist between one file system hierarchy and another. This restriction is enforced so as to avoid the elaborate bookkeeping which would otherwise be required to assure removal of the links when the removable volume is finally dismounted. In particular, in the root directories of all file systems, removable or not, the name .. refers to the directory itself instead of to its parent.
Is this saying that if I had a removable disk mounted at /mnt/foo and issued “cd ..” in that directory, I’d remain in /mnt/foo instead of moving up to /mnt? When did this change to the current behavior?
That is still the current behaviour, and no, the passage isn't saying what you state.
"no link" literally means no link. It's the straightforward meaning of (an ordinary, not symbolic) "link" in Unix filesystems. They cannot cross devices.
And on disc, ".." in the root is a link to the same directory. (POSIX allows for it to be this, which is the conventional Unix behaviour, or not to exist, which is the case on some non-Unix filesystems and operating systems where conceptually there is stuff "above" the root.)
Executing "cd .." ignores what is on disc at a mount point, and traverses the mount upwards.
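The self-referential ".." at the root is easy to verify: the names "/" and "/.." resolve to the same device and inode. A small Python sketch, assuming a conventional Unix system:

```python
import os

# On conventional Unix systems, ".." in the root directory is a link
# back to the root itself, so "/" and "/.." name the same inode on the
# same device.
root = os.stat("/")
dotdot = os.stat("/..")
same = (root.st_dev, root.st_ino) == (dotdot.st_dev, dotdot.st_ino)
print(same)
```

Run the same comparison on a mount point (say, stat of /mnt versus /mnt/foo/..) and you'll see the kernel traversing the mount upwards rather than consulting what's on disc.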
Remember that "filesystem" has three meanings: the on-disc format of a DASD volume, the overall tree abstraction presented by the operating system, or what is presented by an FS driver.
I love this paper and always recommend it to engineers I work with who might otherwise not read academic papers. It is a well written paper that introduces still relevant concepts like images, processes, the shell and pipes very succinctly, albeit with some amusing artifacts of the 1970s.
> A “crash” is an unscheduled system reboot or halt. There is about one crash every other day; about two-thirds of them are caused by hardware-related difficulties such as power dips and inexplicable processor interrupts to random locations.
The bigger ones definitely required decent periodic maintenance. Typically you'd contract DEC or a third party to come out semi-annually to vacuum things out, apply field fixes (yes, run wires on boards and the backplane), and run hardware diagnostics.
There were a lot of wire-wrap connections on those things. If they had been done right you were good. Done wrong, and your system was haunted until someone found the loose wire. We had an 11/45 with a flaky Unibus bay that could be "fixed" with a little percussive ablation. Finally got DEC to get serious about the problem, and they spent several days tracking down a bad socket.
I don't miss wire wrap. At all.
We have it good these days. Rack a bunch of servers, run them hard for years with only DIMM replacements or maybe the odd SSD, and recycle them once the bathtub failures start edging up. I don't want to think about the comparable compute power; my wristwatch runs rings around that 11/45. We live in the future.
At least one RSTS/E release amusingly came with patch notes that included wire wrap instructions. I distinctly remember a great deal of grumbling when the patch caused issues and the downgrade included undoing the wire wrap patches. If you thought sharing your screen and coding or debugging was high pressure, imagine pulling out one of the drawers holding your computer's guts and dealing with this while a room full of people hovered over your shoulder. https://upload.wikimedia.org/wikipedia/commons/a/ae/PDP-11-3...
I guess it's a matter of perspective. By the standards of the decade before that, a computer which could run for 2–3 days reliably without hardware glitches would have been remarkably reliable. For such a machine to sell for under $100,000 would have been stunning.
The reliability we think of with modern computers is, I think, mostly a consequence of very high integration. Few solder joints to go wrong.
The PDP-6 predates the integrated circuits of the PDP-11 models; its ALU path ran through 36 boards (one for each bit in the machine word), and a power cycle almost always blew at least one of them.
There are some interesting points where the current design is almost exactly the same, only slightly developed. Stderr is missing from this paper, for example.
I guess I can't really appreciate the benefit they describe here, because I'd need to know the alternatives and prior art:
> Another important aspect of programming convenience is that there are no “control blocks” with a complicated structure partially maintained by and depended on by the file system or other system calls.
> The discussion of I/O in §3 above seems to imply that every file used by a program must be opened or created by the program in order to get a file descriptor for the file. Programs executed by the Shell, however, start off with two open files which have file descriptors 0 and 1. As such a program begins execution, file 1 is open for writing, and is best understood as the standard output file. Except under circumstances indicated below, this file is the user’s typewriter. Thus programs which wish to write informative or diagnostic information ordinarily use file descriptor 1. Conversely, file 0 starts off open for reading, and programs which wish to read messages typed by the user usually read this file.
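The convention the paper describes survives unchanged: a program started by the shell can write to descriptor 1 and read from descriptor 0 without opening anything (descriptor 2 for stderr came later, as noted above). A minimal Python sketch of that, using the raw descriptor numbers rather than the buffered stdio layer:

```python
import os

# File descriptor 1 is already open for writing when the shell starts a
# program: bytes written to it reach the standard output, no open() needed.
n = os.write(1, b"hello from fd 1\n")  # returns the byte count written

# Descriptor 0 is likewise already open for reading (the user's terminal,
# unless redirected); a program would read typed input with os.read(0, ...).
```

Redirect the program's output in the shell and the same write lands in a file instead, which is precisely the "circumstances indicated below" the paper is alluding to.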
Linux distributions might be a lot smaller if a) the kernel only included drivers for a single hardware platform, single file system, text terminals, and no network devices or protocols, b) /usr/lib only included libc, and c) everything was still 32-bit.
There are certainly examples of very compact Unix-like OSes that nonetheless offer memory protection, paging/virtual memory, preemptive multitasking, and a system call interface.
> Linux distributions might be a lot smaller if a) the kernel only included drivers for a single hardware platform, single file system, text terminals, and no network devices or protocols, b) /usr/lib only included libc, and c) everything was still 32-bit.
As for that last: The PDP-11 was a 16-bit architecture, which got somewhat interesting to work with when software began to grow functionality beyond what would comfortably fit in that address space. The 32-bit PDP-11 was going to be called the Virtual Address Extension, or the VAX; instead, the VAX was turned into a machine rather more elaborate than a "better PDP-11".
The elephant in the room is more the conceptual simplicity, interface stability, and portability compared to the never-ending proliferation of Linux kludges ^H^H^H innovations that turn out to bring as many problems as they solve, or more, e.g. https://lwn.net/Articles/679786/
The real overhead now is in interfacing: your CLI screen still needs to output (probably) VGA graphics, you'll need USB host support, wired and wireless networking, more sophisticated file systems...
From my point of view that is still Linux kind of catching up with big-iron UNIX and mainframes, reinventing many ideas that were already a thing in the late '90s.
For example, my first experience with containers was with HP-UX Vault, likewise for most fancy filesystems that GNU/Linux might be getting.
The submitted title ("Classic Paper: The Unix Time-Sharing System. Highly Readable and Relevant") broke the site guideline against editorializing. If you'd please review https://news.ycombinator.com/newsguidelines.html and follow the rules, we'd appreciate it.
"Please use the original title, unless it is misleading or linkbait; don't editorialize."
Introduction and Overview of the Multics System https://multicians.org/fjcc1.html
Thirty Years Later: Lessons from the Multics Security Evaluation https://www.acsac.org/2002/papers/classic-multics.pdf