A Requiem for a Dying Operating System (1994) (umd.edu)
246 points by aphrax on Sept 4, 2020 | 276 comments



Pretty much every point raised in this post(?) is correct, current, and relevant even 26 years later.

POSIX is a monolith and really deserves to be improved. It's been around forever, yes. It will probably keep on being around forever, yes.

    Take the tar command (please!), which is already a nightmare where lower-case `a' means "check first" and upper-case `A' means "delete all my disk files without asking" (or something like that --- I may not have the details exactly right). In some versions of tar these meanings are reversed. This is a virtue?
Raise your hand if you've never broken Grep because the flags you gave it didn't work. Anyone? Congratulations, you've worked on a single version of grep your entire life. Have a cookie.

Pretty much the only consistent grep flag I know is -i. There's never been a standard for naming and abbreviating flags, which means that for EACH program you will have to learn new flags.

This becomes truly terrible when you get around to, say, git and iptables. Have you ever tried to read git documentation? It is the most useless godawful piece of nonsense this side of the Moon.

There's Google now, which means that the fundamental design issues of POSIX will probably never get fixed. "Just google it and paste in from stackoverflow" is already standard, and people are already doing that for 5-10-year-old code/shell commands. What about 10 years from now: will googling best DHCP practices still find that stupid post from 2008 that never actually got resolved? How about 20 years?

I have honestly no idea how to even start fixing the problem. A proper documentation system would be a start.


> Have you ever tried to read git documentation? It is the most useless godawful piece of nonsense

I ran "man git" for the first time ever.

https://www.man7.org/linux/man-pages/man1/git.1.html

Heey, that's actually pretty good! I don't think it's "godawful". In the second sentence it recommends starting with gittutorial and giteveryday, for a "useful minimum set of commands".

https://www.man7.org/linux/man-pages/man7/gittutorial.7.html

https://www.man7.org/linux/man-pages/man7/giteveryday.7.html

I must admit, I still occasionally (regularly?) search for "magic incantations", particular combinations of flags for sed, git, rsync, etc. But the man pages are my first go-to, and they usually do the job as a proper documentation system. It's better than most software I've worked with outside (or on top) of the OS, with their ad-hoc, incomplete or outdated docs.


The issue with git is that no matter how well documented, the user interface is horribly designed. For starters, how many different things does "git checkout" do, and how many of them actually reflect an intuitive meaning of "checking out"?


> the user interface is horribly designed

I see this type of remark against git quite often on HN and I think it's exaggerated.

I agree some of the porcelain commands are misleading and overloaded as convenience functions, such as checkout; however, a decent chunk of it is in line with the underlying data structure. Nothing is perfect, and git is pretty damn good. Horribly designed? No. Could it do with some breaking porcelain re-writes? Sure.


The git UI is absolutely horribly designed, as demonstrated by the mercurial or darcs UIs which, while completely different, were significantly easier to discover, intuit and remember.

> git is pretty damn good

UI-wise, it really is not.


The concept of the git staging area is utterly superfluous. All local changes that are propagated to the version control system should go directly into a durable commit object, and not to a pseudo-commit object that isn't a commit, and that can be casually trashed.

That commit object could be pointed at by a separate COMMIT_HEAD pointer. If you have a COMMIT_HEAD different from HEAD, then you have a commit brewing. Finalizing the commit means moving HEAD to be the same as COMMIT_HEAD. (At that time, there is a prompt for the log message, if one hasn't been already prepared.)

Your "staged" changes are then "git diff HEAD..COMMIT_HEAD", and not "git diff --cached".

Speaking of which, why the hell is "cached" a synonym for "in the index"? Oh, because the index holds a full snapshot of everything. But that proliferation of terminology just adds to the confusion.

I can't think of any other area of computing in which "cache" and "index" are mixed up.


I like the staging area, though–at any given time I always have code that I do not want to put in a commit object (perhaps I changed some build flags, or my IDE touched some files I don't care for, or…) However, I do agree 100% that all the terminology is pretty bad.


The sin of Git's staging area is that Git forces it into the default interaction path, requiring you to take it into consideration whether or not you're interested in only committing some of the changes. Git should default to including all changes, and only if you direct it to (e.g. by explicitly running `git add`) should you have to take into consideration the notion that some changes are staged and others aren't.


I'm generally also one of the git sceptics, though I loved staging for a while for the fine-grained control - `git add --all` just might not do the right thing, at least for paranoids like me. I only recently learned that you can skip the staging by appending the paths of the changed files you want to commit after `git commit -m "awesome commit"` - neat.
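
In other words, something like this (file names made up):

  # records the current contents of just these tracked files,
  # ignoring whatever else happens to be staged
  git commit -m "awesome commit" src/main.c docs/notes.md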


You can skip the explicit staging using

  git commit --patch
then interactively select the specific changes by diff hunk.

It combines with --amend.


Would you like the staging area less if it was an object of exactly the same type as a commit, referenced by CHEAD (commit head) instead of HEAD?


Only the workspace can be built and tested, so the workspace is what should be committed. We should be stashing anything we don't want to test and commit yet.


I'm trying to keep out of this fight, but how are you planning on just stashing one hunk without staging?


The stash is another feature that should use the regular commit representation. Well, somehow. Yes, the stash is different in that it preserves (or can preserve) uncommitted changes, as well as staged changes, and it can tell these apart.

However, if the staging area is replaced by a CHEAD commit ("commit head") whose parent is HEAD, then the problem of "stashing the index" completely goes away. You don't stash staged changes, because they are already committed into the staging commit CHEAD.

That said, the stash feature could work with this CHEAD. Stashing the staged changes sitting in CHEAD could propagate them into the stash somehow (such as by a reference to that commit). Then CHEAD is reset to point to HEAD, and the changes are gone. A single stash item consisting of work-tree changes and staged changes could simply be an object that references two commits: a commit of working changes committed just for the stash, and a reference to the CHEAD which existed at that time. It could be that one is the parent of the other. That is to say, a commit is made of the working copy changes, parented at the CHEAD. The stash then points to the SHA of that commit.

Intuitively, I know this would work, because in the existing Git I could easily implement this workflow instead of using the stash. Given a tree of local changes, I could "stage" some of them by creating a commit with "git commit --patch", then "stash" the rest of them into another commit with "git commit -a", then create a branch or tag for that two-commit combo, and finally get rid of it with "git reset --hard HEAD^^". Later, I could easily recover the changes from that branch, either by cherry-picking or by doing a hard reset to them.
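
As a minimal sketch of that workflow (the branch name "parked" is made up):

  git commit --patch -m "pseudo-staged"   # pick hunks to "stage" as a real commit
  git commit -a -m "pseudo-stash"         # sweep the remaining changes into a second commit
  git branch parked                       # keep a handle on the two-commit combo
  git reset --hard HEAD~2                 # back to a clean tree, as "git stash" would leave it
  git cherry-pick parked~1                # later: recover the "staged" part as a commit
  git cherry-pick -n parked               # ...and re-apply the rest as uncommitted changes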

Speaking of which, as an example of how stashes are limiting because they aren't commits, think of how you can't do:

   git reset --hard stash@{0}  # wipe it all away and make it like this stash
You can't do that because a "reset --hard anything" cannot reproduce a state where you have outstanding working copy changes and/or an uncommitted index, but "stash apply" or "stash pop" are saddled with that requirement.

The requirement of reproducing working changes and staged ones from a stash represented as a two-commit combo is very simple. You cherry pick one normally and make it the CHEAD (the aforementioned special head for pointing to a commit being staged). Then the other one is cherry-picked with -n, so it is applied as local changes.


“git stash push --patch” lets you choose hunks to copy into the new stash and clean out of the workspace. It’s pretty similar to “git add --patch” for choosing hunks to stage.



Git has weird terminology. Though a lot of commercial SCMs are also a bit strange. Example: Perforce which has depots and shelves. At least with git, I can create a branch without waiting 2 weeks for the IT department.


Perforce shelves make total sense. "Shelving" something means literally what the command does - setting them aside and saving them.


It “sort of” makes sense. When a company shelves a movie, they’re probably never finishing it. When I shelve my code, I’m probably coming back to it at some point soon. For example, I worked at a place that would have everyone “shelve” all their code before code review. In that context it didn’t make sense to me.


Single data point, but I used hg for three years at work and never warmed up to it the way I've warmed up to git (and that's the "porcelain" too, I've never done a deep dive into the plumbing).


> I see this type of remark against git quite often on HN and I think it's exaggerated.

Indeed, in more erudite forums with smarter users, more level-headed, less biased opinions of git circulate.


In recent versions of git (since 2.23) the two main `git checkout` functions have been split into two newly supported dedicated commands: `git switch` and `git restore`.

Of course the next step is unlearning `git checkout` muscle memory and moving to using `git switch` and `git restore` more regularly.
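
Roughly, the mapping looks like this (branch and file names are placeholders):

  git switch my-branch              # was: git checkout my-branch
  git switch -c new-branch          # was: git checkout -b new-branch
  git restore path/to/file          # was: git checkout -- path/to/file
  git restore --source=HEAD~1 file  # was: git checkout HEAD~1 -- file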


It's the command that does "Reset working directory/Discard changes/Revert to last commit"! You'd think that's what "git reset" would do, but of course not.


> You'd think that's what "git reset" would do, but of course not.

Ahem, the command for "Reset working directory/Discard changes/Revert to last commit" is "git reset --hard".

That's the one I use.

"git checkout -f" does the same thing, but only because their different functionality coincides when there are no other arguments. When given a non-HEAD commit-id or branch-id they do different things.


That’s a beautiful example of the problems of git :-)


Some time ago, a UI designer asked on HN which open source program they should build a UI for, to establish their reputation. I suggested "git". That was rejected as too hard. They just wanted to put eye candy on a command, not rethink its relationship with the user.


A few years back, a designer waded into the middle of the echo chamber on some HN thread about inculcating people from other disciplines. They wrote that "as a designer" they did not consider Git (GitHub?) to be thoughtfully put together or well-suited for the kind of work they do, or something like that. It was a short comment, about as long as that, and there was no flaw or faux pas or even anything incendiary about it. HN wasn't having it, though, and downvoted it mercilessly. (There were no responses to say why it had been downvoted; the subthread dead-ended there.) It's things like this that remind me of the now-infamous comment in the Dropbox thread.

I didn't think at the time to bookmark it with my "hn sucks" tag, and over the past year or two, I've tried several times to find it again, for reasons similar to[1], but I've been unable to.

1. https://news.ycombinator.com/item?id=22991033


It was a very good idea. Though perhaps establishing a reputation is not a good starting point.


Well they were a UI designer and not a UX designer.


The fundamental assumption is that people will either ask how to do something, or read the documentation/manual. It's not that we'll try to figure out how something works by experimenting.

When I first started using UNIX/Linux after learning the DOS shell, I never said that using commands like rm or mkdir were not intuitive and that it should be like using del or md instead. I just learned the different commands by reading through documentation.


And what if you don't have anyone to ask? The article very clearly states why the documentation is useless for a beginner.

Other OSes have a concept called forgiveness that allows you to easily reverse a change you made explicitly so that you can experiment with it and figure things out for yourself. The problem is that Unix fundamentally doesn't allow you to figure things out by yourself. You absolutely need either a manual (that you will never find if you haven't already been told how to find it), or you need to have someone you can ask questions to.


Is it fair to think that because they made some choices early on, those choices forever blemish its value, even if said choices are later addressed?

Much of the complaints about checkout have been split to other commands in newer versions. Does this make Git still invalid in your opinion?


I've had a lot of issues with git UI but git checkout seems among the more sane ones. Compared to how, say, git add can remove a file...


git has added `git restore` and `git switch`, intended to replace `git checkout` :)

https://git-scm.com/docs/git-restore https://git-scm.com/docs/git-switch


The one thing that all the negative commentary fails to acknowledge is that even in the face of this somewhat overstated inconsistency across all these command line tools and applications, for the knowledgeable and motivated it is quite simple to wrap the more complex invocations in simplified scripts or, at the other extreme, a completely functional native GUI.

They also fail to acknowledge that contemporary unix, aka Linux in its many derivations and flavors, is entirely malleable at the source code level by its users. That is a feature provided by literally no other operating system that is deployable at scale, and is, in fact, the singular feature that drives its adoption -- not only is it 'free', you can hack it together in any fashion you damn well please, and you can use it to build peer-grade native applications, typically with little more than a tip of the hat as 'overhead'.

tl;dr: Some folks might miss the point because they are not sufficiently motivated to engage the *nix world with the degree of articulation required to tap into its less than casually accessible capabilities.


Sigh. Why are there still lots, heaps, and tons of horrible, inhumane, broken legacy technology still around in active use? Because its users/proponents are "knowledgeable and motivated" enough to keep pushing through. Sort of a Stockholm syndrome of computing, really.

My brain is really quite small compared to all the knowledge about computers that is out there. And my willpower too is very limited. So I would rather learn things I'd rather like to know, and be motivated to do things I'd rather get done instead of spending those precious resources of mine (and time! I will die in less than 25,000 days, that's a pretty small amount of time, you know) on something of dubious value.


>>"It is very easy to be blinded to the essential uselessness of them by the sense of achievement you get from getting them to work at all."


It's one thing to learn something like physics for dumb engineers. Or thermodynamics. Mechanical dynamics. Differential equations. Subjects that are hard to get your brain wrapped around, but where there's light at the end of the tunnel.

Vs obtuse half broken shit people created out of whole cloth and refuse to fix.


They are still around because, through historical accident, they are what everyone knows and uses.

Making a special snowflake that fits your brain better is good for you, but not necessarily anyone else.

Making something good for everyone will, almost inevitably, become a design-by-committee monstrosity that is as problematic as the tool being replaced.

The truth is, I am skeptical that these tools can be replaced by something that requires no effort to learn. At least, for the tools we already have, if you don't want to learn them, you can roll the dice and copy/paste from google overflow.


> Making a special snowflake that fits your brain better is good for you, but not necessarily anyone else.

Yes, and that's why the parent's argument that "you can hack it together in any fashion you damn well please, and you can use it to build peer-grade native applications, typically with little more than a tip of the hat as 'overhead'" is not a convincing one. Yes, you can learn a lot about it and tinker with it and "harness its power", as opposed to using something that's less flexible but already pretty damn ergonomic, more accessible, and easier to learn about (maybe to the point you don't even realize you're learning).


Some of it is pretty good. But some of it is, or at least was, so legendarily bad that it inspired this:

https://git-man-page-generator.lokaltog.net/


> I ran "man git" for the first time ever.

> But the man pages are my first go-to

It's not strictly a logical contradiction, but doesn't make much sense either.


You got me there. I should have said, man pages are my go-to for POSIX commands.

For Git, I usually turn to online documentation (at https://git-scm.com/docs) or, more often than not, search for keywords and, yes, end up at StackOverflow.

That supports the root-parent comment's point, that there are "fundamental design issues of POSIX", if users new and experienced must resort to such channels. It also implies that "man" as the default documentation system is not sufficiently meeting our needs.


Completely agree. One of the problems is of course the freedom of choice a Unix system gives you. Instead of a single shell with a single set of commands, people can pick and mix. For beginners it's a nightmare but for power users it's, in general, very empowering.

Getting help on Unix commands, particularly in Linux, has always been a mess. On most Linux distros, typing "help" will give you help about the shell built-ins. Then, discovering "man", you soon find that the bundled GNU utils of course would rather you use their "info" system, which in turn may refer to a web page(!) for info.
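
E.g., on a typical GNU/Linux box the same question gets bounced between three different systems:

  help            # bash builtins only
  man grep        # the classic manual page
  info coreutils  # GNU's preferred hypertext docs, if installed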

I remember coming from the Amiga to Linux: I would not have gotten far without word-of-mouth help (and helpful computer magazine articles explaining a lot of the particularities of Unix). The Amiga, on the other hand, was a cheap home computer with an exceptionally thick manual detailing every single command clearly and succinctly.

The Open Group has the POSIX util spec published online[0] and also allows free downloads of it for personal use. Since I discovered it, I find myself using it much more often than man pages. I've made a little alias in bash that launches Dillo with the appropriate command page.

[0] https://pubs.opengroup.org/onlinepubs/9699919799/utilities/c...


> For beginners it's a nightmare

I've taught undergraduate students some basic shell use so they can compile their C programs. It's not really that bad. You have them use bash and you teach them some basic syntax and a few shell-usable programs, including man. You tell them that there are a lot of things the shell can do that we won't be discussing, so they have to be careful not to use arbitrary symbols, and to double-quote names. I also tell them that bash is just one kind of shell, and that some systems have other shells by default, but we are working on a system which defaults to bash. I then tell them to check with "echo $SHELL" if they're on another system than the one we are working on, and if they don't see "/bin/bash" or "/usr/bin/bash" then they should ask someone for help.

That's enough to satisfy newbies in my experience.


> I then tell them to check with "echo $SHELL" if they're on another system than the one we are working on, and if they don't see "/bin/bash" or "/usr/bin/bash" then they should ask someone for help.

Some systems will be weird. E.g. my default shell is bash, but my interactive shell for all my terminal emulators is fish. So "echo $SHELL" returns "/bin/bash", even from fish.

Of course I know this, I set it up this way deliberately so that only actual interactive shells (or correctly shebanged scripts) would be fish and everything else could be bash. But it would definitely confuse a beginner!


Hmm... interesting. But - echo $SHELL should tell you what the current shell is, not what the default shell is.


$SHELL should typically be set to reflect the configured login shell, i.e. the shell specified in /etc/passwd. Or, as POSIX[0] puts it: "This variable shall represent a pathname of the user's preferred command language interpreter." The currently running shell is not necessarily the "preferred" shell, for example if you're doing "xterm -e zsh" or running a specific ksh script when you normally prefer (log in to) bash.

Being vaguely defined, it is of course open to interpretation and might vary from system to system, thus being a prime example of the kind of frustrating Unix gotchas that spawned the original article.

[0] https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1...

Edit: To tie in to my earlier comment - where should I look for info on this variable in Linux? I know it exists, but how do I find out more about the behavior? 'apropos "\$SHELL"' gives me nothing. 'apropos SHELL' gives me a list of commands which doesn't include much of interest. Digging deeper, there's login(1), which briefly touches the subject, and environ(7), which gives a good description of it but of course dives in head-first and starts off by describing an array of pointers. Not overly helpful for the novice.


No, it tells you what program image file is going to be used when a program "shells out" or spawns a shell to run a shell command (e.g. via the ! command in mail or the :shell command in nvi).

It is a way for applications to know where the shell to invoke is, not a way for a user to find out what program xe is using to run commands. echo $0 is more informative on that score.
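
A quick illustration (exact output will vary by system and setup):

  $ echo $SHELL    # the configured login shell, inherited via the environment
  /bin/bash
  $ zsh            # start a different shell interactively
  % echo $SHELL    # still reports the login shell...
  /bin/bash
  % echo $0        # ...while $0 names the shell actually running
  zsh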


I didn't find it that hard. Just realize the different backgrounds and cultures that all these projects originate from. They all have their own ways of doing it.

man pages work pretty well for a lot of stuff that is POSIX. C interfaces and basic CLI programs you'll all find documented in man (with documentation beyond what is specified in POSIX).

The GNU folks have tried to push their info system, so maybe you'll find more details there for their commands. I guess most people use man because it's quick to access and a single page is easy to grep through. If that doesn't help, I'll just do a web search.


Been using “man” for years, understand its section system, and have made a few man pages for various utility programs I’ve made.

The man page for built-ins is never the quick reference I want, so I generally end up falling back to google for that stuff rather than try and figure out where it’s documented.

Anyway, TIL “help” is a command I can try.

I haven’t used it in a while, but “cht.sh” is basically what I expect “man” to give me, but comprehensively, for everything, and by the program author(s).

Public repos with easily searchable source and a good README are even better tho, of course.


I think the availability of "help" depends on what shell you're using. It's a builtin in bash.


In fish it launches a help web page if you're in a graphical session (but still has a fallback without).


> Completely agree. One of the problems is of course the freedom of choice a Unix system gives you.

This is because Unix (or large parts of the environment) evolved over time rather than being designed at the beginning.

We started with the Bourne shell, then got the C shell, with the Korn shell splitting the difference between the two. Bash then came along taking lessons from each of those three, and then Z shell.

That's a few decades' worth of changes.


The four things I love most in PowerShell, from the point of view of a maintainer of scripts, are the relative verbosity of commands (at least I don't have to go hunting for obscure acronyms, or recursive puns such as yacc, when I read code), auto-completion, auto-documentation of scripts (when I have to change something), and the object pipeline (as a maintainer I hate awk and regular expressions in general).

The only defect is that they did not go with Yoda speak (Get-<TAB> is a much worse filter than NetA<TAB> when searching for Get-NetAdapter, for instance).

It's still a bit green on Linux environments, but it already beats many of the alternatives.


You know, it's funny, the topic in this article is the forced abandonment of VMS.

Microsoft at one time had a video interview on their "Virtual Academy" with one of the PowerShell creators, and he talks about how they kept trying to create a "unixy" tool for managing Windows machines, and it never felt or worked right. So then they looked at VMS and realized it was the perfect inspiration.

Most of what you love about PowerShell comes from VMS.


Which also shouldn't be surprising, given that VMS was a huge inspiration for (and lent some key figures to) early NT kernel development. In some ways modern Windows is "son of VMS".


A friend of mine's company got bought because they had an Ethernet solution for VMS. The company that bought them wanted it ported to NT. He said it was 'almost trivial'.


I agree completely. Powershell is a pleasure to write in and read and the various modules for managing different services makes my life way easier.

For anyone who's been put in charge of managing Zoom for their organization, may I recommend: https://github.com/JosephMcEvoy/PSZoom


> the relative verbosity of commands (at least I don't have to go hunting for obscure acronyms, or recursive puns such as yacc, when I read code)

"yacc" is not a command. It's an executable file that's read and executed. In fact, most of the things in a shell script are not commands, but files that are loaded and executed. If someone built a shell with everything built in it'd be a bloated monster full of inconsistencies and incompatibilities.

OTOH, PowerShell has "commands" or aliases named after Unix executables, such as "curl", that don't replicate the switches one would expect from the curl program.

> It's still a bit green on Linux environments, but it already beats many of the alternatives.

Maybe for Windows transplants. In general, not at all.


> Maybe for Windows transplants. In general, not at all.

Care to elaborate?

In my experience, Powershell is so much nicer than bash that even on my work mac I tend to use it when doing stuff for me (not going to force it on my team)


Powershell originated on Windows and garnered a fan base on the platform. It's not much appreciated outside that niche and most people who like it are people who use or used it on Windows.


I get where you are coming from

I find it odd that you consider Windows and PowerShell a niche, when it seems to have been enthusiastically embraced by the community


yacc is not a recursive acronym; it stands for Yet Another Compiler Compiler.


One of my pet peeves along similar lines is the inconsistency among the ssh family of commands over whether it's -p or -P that specifies the target port.

Apart from the obvious compatibility and legacy factor, I think a major reason is that by the time someone has both enough knowledge and experience to formulate a proper solution and have felt the pain-points, they're already deep enough that they've internalized that this is The Way It Is and are somewhat comfortable with it, those annoying flags aside.

We tend to settle on the lowest common denominator, because consistency and time-to-ready trumps any actual improvements. For example, I'd so much prefer vim bindings in tmux but stopped using customizations like that completely since it turns out it's less of a hassle to just get used to the crappy standard ones instead of customizing it on every new host I start up.

If you can't get your friends off Facebook, good luck getting engineers off POSIX.


Argh yes, scp -P vs ssh -p. I get bitten by this regularly as we have something running on a non-standard port.
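
For anyone else who keeps tripping over it, the two spellings side by side (host and port are placeholders):

  ssh -p 2222 user@example.com              # ssh: lower-case -p selects the port
  scp -P 2222 file.txt user@example.com:    # scp: upper-case -P (lower-case -p preserves times and modes)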


Two points about bad documentation:

The documentation (and syntax, or lack thereof) of "tc" is significantly worse than git's. Unfortunately, the network management tool you are supposed to use these days, ip, is made by the same people, though it's somewhat less bad.

Second, take git documentation with humor: https://git-man-page-generator.lokaltog.net/


I don't find `man git` to be bad at all. Git is complex, and its man pages do the right thing by referring to sources for basics info like giteveryday, and referring to in-depth guides as well. Individual man pages are also pretty good, see `man git-rebase`. It starts with an overall explanation of what rebase does, with examples, and then covers configuration options and flags. It's a lot of stuff, but it's pretty good as far as documentation goes.

GNU packages often have documentation that's bad in the typical "Linux docs are bad" way. Try `man less`. First, it commits a grave sin in having a totally useless one-line summary, which reads "less - opposite of more". Funny, but totally useless and doesn't even remotely suggest what the command does (if you know what more is on UNIX, you surely know what less does). Or `man grep`. It's a reference page, very good for knowing what all the options do, but with no useful everyday examples, and with gems like these:

> Finally, certain named classes of characters are predefined within bracket expressions, as follows. Their names are self explanatory, and they are [:alnum:], [:alpha:], [:cntrl:], [:digit:], [:graph:], [:lower:], [:print:], [:punct:], [:space:], [:upper:], and [:xdigit:].

Self explanatory? Yes, if you used grep in the 90s. Is alnum alphanumeric or all numbers? Is alpha short for alphabet, as in what most people would intuitively call letters? What's xdigit? Extra digits? Except digits? Oh, it's hex digits. Pretty obvious that periods and commas are in punct... but also + * - {} are punct, among other stuff.
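
For the record, a quick check with GNU grep makes the xdigit case concrete (-o prints each match on its own line):

  $ echo 'cafe tea 42' | grep -o '[[:xdigit:]][[:xdigit:]]*'
  cafe
  ea
  42
(Letters a-f count as hex digits, hence "cafe" and the "ea" in "tea".)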

`man tar` is extremely comprehensive, an impressive reference, but very hard to figure out if you've never used tar.

I've been recently looking at FreeBSD documentation for common commands, and the source code as well. Both are so much better than the usual GNU versions you find on Linux.


> I don't find `man git` to be bad at all. Git is complex, and its man pages do the right thing by referring to sources for basics info like giteveryday, and referring to in-depth guides as well. Individual man pages are also pretty good, see `man git-rebase`. It starts with an overall explanation of what rebase does, with examples, and then covers configuration options and flags. It's a lot of stuff, but it's pretty good as far as documentation goes.

Having used both Mercurial and git, it is my general experience that Mercurial has a much better documentation system. Git's documentation has improved, but mostly only in the more-well-used commands; when you want to reach for more exotic stuff, you start to find that the documentation is too full of jargon.

As a recent example, I wanted to get a list of files managed by git. Since I know mercurial best, I wanted the equivalent of hg manifest. Its documentation is thus:

> hg manifest [-r REV]

> output the current or given revision of the project manifest

> Print a list of version controlled files for the given revision. If no revision is given, the first parent of the working directory is used, or the null revision if no revision is checked out.

This is unusually bad documentation for mercurial--the short description and command name are reliant on jargon, and it's not aliased to "hg ls" or something like that. Okay, how about the equivalent git command? git ls-files looks promising. Here's its short description:

> git-ls-files - Show information about files in the index and the working tree

But its description is, um:

> This merges the file listing in the directory cache index with the actual working directory list, and shows different combinations of the two.

Mercurial suffers from a bit of jargon, but reading its description would enlighten you as to what it does without understanding the jargon. Git's documentation here starts with jargon, and then doubles down on it so that the more I read, the less sure I am about what it actually does. [In the end, by actually running it, I did verify that it's basically the equivalent of hg manifest].
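
For anyone landing here with the same question, the commands line up roughly like this:

  hg manifest                        # Mercurial: files tracked in the current revision
  git ls-files                       # Git: files tracked in the index / working tree
  git ls-tree -r --name-only HEAD    # Git: files tracked in the HEAD commit itself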

Now both mercurial and git have a glossary (help glossary), but I've never seen anyone actually point a newbie to either one. Of course, here you can also see the world of difference in the documentation quality. Mercurial's glossary entry for the jargon term "manifest" says:

> Each changeset has a manifest, which is the list of files that are tracked by the changeset.

Now compare git's glossary entry for "index":

> A collection of files with stat information, whose contents are stored as objects. The index is a stored version of your working tree. Truth be told, it can also contain a second, and even a third version of a working tree, which are used when merging.

... and that is why people like me say that git suffers from poor documentation.


Git's documentation tends to tell you how it does what it does, without telling you what it's trying to accomplish. It's the equivalent of the classic beginner comment:

a += 5; // Add 5 to a

Entirely accurate, and totally useless.


> There's never been a standard for naming and abbreviating flags, which means that for EACH program you will have to learn new flags.

How is this different from web pages or GUI apps? Every one is different, and a button that does one thing in one app/page does something different in another.

Have you tried to read GUI help files? They are written for 5 year olds and provide nothing you need as a dedicated user. Have you had to inspect the DOM of a website to try and intuit what something does or does not do?

At least with command line apps you usually have a --help flag or a man page.


GUIs have "discoverability" and "affordances". I haven't read a single help file for any GUI application in 20 years (including apps on Windows, Linux, and Android) and somehow I can navigate and use them perfectly fine.

That's simply impossible with CLIs; you need to at least read a "How to Get Started Immediately" note.


PowerShell does an okay job at command line discoverability in my experience. When using a cmdlet I'll think "I hope there's an argument for X" and then I can hit tab after '-' and cycle through all the available arguments. As another post mentioned, this unfortunately falls down a bit with cmdlet names themselves because they start with the verb instead of the noun: Get-<tab> isn't helpful the way NetAdapter-<tab> would be.


Fish is similar in this way, if you're not on Windows and thus don't have access to PowerShell.


PowerShell is available for Linux.

There's also Elvish, Nushell, and a few other attempts in a similar vein.


I take 'similar vein' as a grand euphemism.


Besides Linux, PowerShell is also available on Unix.


Unix is a family of OSes that includes Linux in particular.


Cycling through tabs is not effective. PSReadLine supports CTRL+SPACE completion: just type - and then CTRL+SPACE and it will show a menu with ALL arguments. The same works if you start the argument (i.e. -P<CTRL+SPACE> shows all params starting with P).


Ctrl-Space on PowerShell is outstanding, as long as your devs are actually properly commenting their scripts (I assume your in-house PowerShell is put into modules that get installed on user machines).


It has no relation to comments.


grep was standardized around 1990 by POSIX.2. In the last 25 years, I haven't had any problems with the POSIX compatible flags of grep, so maybe those points are not so relevant.

Using grep for '$' without being aware that grep patterns are regular expressions ((g)lobal search for (re)gexp, and (p)rint), and redirecting the output to the printer without first seeing what it might be (e.g. with .. | head -50) is pretty stupid.

Consider that this person's idea of solving the problem of "move occurrences of $ character to a different location within the line" in a bunch of files was to begin by searching for lines containing those $ characters and sending that to a printer. What? How is the hard copy going to help? Are you going to sit there manually typing in those paths and looking for those line numbers, to do the edit? If that is really the VMS way, who wants anything to do with it?


> Using grep for '$' without being aware that grep patterns are regular expressions

There's always `fgrep` or, IIRC, the POSIX-compliant `grep -F`.

More often than not, people don't want regular expressions.
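
E.g., to hunt for a literal dollar sign:

  grep -F '$' report.txt    # fixed-string search: lines containing a literal $
  grep '\$' report.txt      # the same, escaping the regex metacharacter
  grep '$' report.txt       # matches end-of-line, i.e. every line -- the trap in the article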


I learned it as meaning—

Global Regular Expression Parser

that seemed plausible enough I never questioned it.


> POSIX is a monolith and really deserves to be improved.

Care to explain what's intrinsically wrong with monoliths? I'd have thought that the most important point of a solution is whether or not it solves the problem, not the architecture by which it solves it.


I'm not sure it is a monolith. It is a set of standards and that's about it.

But, as far as "what's wrong with monoliths" goes, the biggest issue, IMO, is security. The more code you have, the more likely you are to run into security issues. By their nature, most security problems end up granting all the access that a given program has. A monolith, by its nature, usually has a LOT of permissions and a LOT of code.

Of course, this only matters when security matters. If you are making an app that isn't exposed to the internet then by all means make it a monolith. Otherwise, the best thing you can do for security's sake is to push for microservices with as limited a permission set as possible. That makes it so the exposed surface area is relatively small if any one microservice is compromised. (It's about risk management).

This is also why microkernels are so interesting to me. It's the same problem, a compromised kernel driver can do a whole lot of damage. So how do you solve that? By keeping the "root" kernel at a bare minimum and force drivers to run in user space as much as possible. That keeps drivers with security holes from giving an attacker full system control.


> I have honestly no idea how to even start fixing the problem. A proper documentation system would be a start.

Have you ever heard about OpenBSD?


> POSIX is a monolith and really deserves to be improved.

I want it to be improved but I fear it is becoming irrelevant. There are very few OSes left to be compatible with...


It's depressing to think that the Cambrian explosion that led to a variety of hardware, software, operating systems, and web browsers, and great freedom and power for the end user, is gradually getting culled and turning into a monoculture of walled gardens, and the end users are just getting screwed.


Linux is considered a walled garden? Really?

I mean yeah, there were more operating systems before, some of which were open.. but I'm not convinced it's necessarily bad to have one open system win.

If it didn't, I'm pretty sure there would be a lot more people using windows servers, which I think would've been far worse for the open community.


To be clear, I wasn't calling Linux a walled garden. But I was talking about overall trends. For example, there are some efforts to push Linux in this direction, most recently with a centralized app/package store.

Also, the Linux Foundation was set up and is funded by big corporations like Microsoft, Google, etc. in order to find ways to exert influence over Linux's growth.


> Linux is considered a walled garden? Really?

Kinda, yeah. At least in the Desktop space it seems like it desperately wants to be and Canonical in particular works to push it in that direction. For instance, it is highly discouraged to install software from outside your distro's repository.


Talking about Canonical (which advocates Snaps as a supplement to the distro's repo) and "it is highly discouraged to install software from outside your distro's repository" in the same breath is rather odd.

As is thinking that Linux of all OSes is in any way a walled garden.


Snap is very Canonical-centric. You cannot set up your own snap store, automatic updates are mandatory, etc. It is for all intents and purposes a second Ubuntu repo with even stricter control.


Multiple repos using different applications then.

And the general proliferation of Appimages, Flatpak, Nix, Guix, Docker containers, and of course local building of software all tell against the "using software from outside the distro's repos is discouraged" representation.


Of those listed, only AppImage is as easy to publish and install as your average Windows software (Flatpak is a not-so-close second with significantly more limitations). And then you get prominent FOSS developers like Drew DeVault saying that those distribution methods are terrible ideas because they are dangerous. The way things work in the Linux desktop and its community is just not conducive to simply passing around software without middlemen, the way it has been in real personal computing systems since the 80s.


You can make assertions like this if you like, but they're simply untrue. Windows is a nightmare to install software on, while on Linux one usually has multiple, easy-to-install-and-keep-updated options (AppImage being the worst choice, because it is the most Windows-distribution-like, and requires the application itself to check for updates etc.).

You can also distribute binaries on Linux easily enough. There's just no general reason to want to do so.


git has quite extensive documentation, and it is, in principle, really useful. The problem is that it suffers from what Geoffrey Pullum called "Nerdview" <https://languagelog.ldc.upenn.edu/nll/?p=276>: It is written from the perspective of the author of the program, rather than the user, and therefore it is easiest to understand if you are already thoroughly familiar with the underlying architecture of git.


When I'm on a strange system for the first time and I need grep, the first thing I do is run grep --help or man grep to check what I'm dealing with.


Those articles from the mid-1990s are great to read; they are both funny and informative. Many of the points they make are still valid today, but even more valuable is the ability to look at successes and failures with 20+ years of hindsight.

I feel the pain of the user in this particular case. But I also understand the frustration of the people who wanted to write their own smaller programs with fewer restrictions than the well-architected but highly constrained VMS allowed. And ignoring such users can topple a better technology. That is why (a technically horrible) DOS spread like wildfire on personal computers, super-unreliable Windows (Win 95 had to be rebooted daily) killed a much more robust OS/2, etc.

We can call such users - the ones who want capabilities quickly, even if they are not fully reliable - "ignorant lemmings" or whatever, but ignoring them when a competitor does not is very risky. My 2c.


A similar argument has also been made about JavaScript. When it comes to market share I guess most users don't care how elegant the solution is under the hood.


I was about to post the same. It was the first thing that came to my mind after reading the "technically horrible" part of the comment.


We see it all the time with feature driven development. Users in general always want more.


We had to use VAX/VMS when I was a freshman, decades ago. The “Computer Center” had dozens of VT220s hooked up to it via serial “hubs” in the main building basement (the very stereotype of a nerd dungeon).

It wasn’t half bad. As multi-user systems went, it was actually quite good and we ran a number of projects on it before moving off to PCs, Sun Workstations and Linux in general.

I remember all the staples of the era: using Kermit to upload our assignments to the thing, dialing in from home at 2400bps, hacking our way “out” to the Internet, running out of our 50MB quota due to mailing-lists and uuencoded files fetched via mail gateways.

It was a lot of fun, and AFAIK there are some working VMS emulators around that I could install on a Raspberry Pi (and likely get a faster multi-user system than what we had then for hundreds of students).

I say it was an experience worth having, but largely (functionally) indistinguishable from a UNIX machine when accessing it via teletype (glass or otherwise).


> but largely (functionally) indistinguishable from a UNIX machine when accessing it via teletype (glass or otherwise).

I disagree. I first sat down at a VMS terminal in a library in North Carolina when the OPAC broke and dropped me back to a prompt. I knew Linux passably well at that point but had never used VMS before.

I typed ls, tried dir, that worked, and finally tried help. In half an hour I was investigating the university's network. It was a remarkably easy system to get oriented on. Conversely, no one can be expected to get anything vaguely comparable when sitting down unattended at a unix shell for twenty minutes.


I just tried 'help' on a linux system.

It is absolute garbage.


If you had tried it on an actual Unix system, your experience would have been rather different. AT&T UNIX System V Release 3.2 had a standalone help command that provided an interactive menu-driven help system. It looked something like this:

    $ help
    help:   UNIX System On-line Help

        Choices      description
        s            starter: general information
        l            locate: find a command with keyword
        u            usage: information about command
        g            glossary: definition of terms
        r            redirect to a file or a command
        q            Quit
    Enter choice >


One of my first tasks as a systems programmer for the Schlumberger corporation in 1986 was to rewrite the VMS help command for use on SunOS. As it turns out, my rewritten version ran nearly twice as fast as the VMS version on VMS.

That design (a nested hierarchy of help topics/subtopics, with some navigation hacks to make getting around easier) was quite good.

But then came hypertext in the form of the web, and at that point, the "help" command just looks like crap.


That's because 'help' on most Linux-based systems is actually a shell builtin which explains other shell builtin commands. This, in turn, is related to the fact that Linux-based systems are flexible - you could use a totally different shell which does not even have the _help_ builtin - while VMS was a comprehensive single-sourced top-down controlled system. Both have their strengths and weaknesses, the fact that you are nearly certain to be using several Linux instances right at this moment gives some indication of which of these systems eventually turned out to be the strongest, warts and all.


help on VMS wasn't a builtin. It was a system-provided program, with data files that could be accessed via other software.


The same was true on MS-DOS, DR-DOS, and OS/2. Microsoft provided a help file compiler with some of its DOS development tools, and IBM provided an INF/HLP compiler and (if memory serves correctly) a message file compiler with the OS/2 SDK (HELP using both hypertext help files and message files on OS/2).


In a normal-sized terminal the most useful part is pushed away by the list of built-in commands. IMO it should be printed last, so that it's guaranteed to be displayed to a user who doesn't know how to scroll up:

    GNU bash, version 5.0.18(1)-release (x86_64-pc-linux-gnu)
    These shell commands are defined internally.  Type `help' to see this list.
    Type `help name' to find out more about the function `name'.
    Use `info bash' to find out more about the shell in general.
    Use `man -k' or `info' to find out more about commands not in this list


> largely (functionally) indistinguishable from a UNIX machine

Although they were functionally similar, there were some practical considerations...

I will say that path names in unix were simple and elegant compared to what VMS used.

I recall VMS paths were something like [foo.bar.bletch]something.txt

At the time this was a little cumbersome, but looking back it is much worse.


That's a cosmetic difference.

The real difference is when you try to work out whether your binaries are (supposed to be) in /bin, /usr/local/bin, /sbin, etc, whether settings for a specific application and/or daemon are in $config or $.cfg or $.conf or $.cf or $d_config or .ssh and .bashrc in your personal directory, and where your web server and mail logs are.

Because they might be in /var/log - or equally they might not.

Unix was "designed" by hyperactive comedy racoons with ADHD. There's no reason - beyond lack of attention span and professionalism - why basic features and expectations couldn't have been standardised. But the Unix way is to get something sort of working without paying much attention to what other people are doing, lose interest in it, and move on.

Or to hammer away at something for decades adding more and more obscure edge case config options in a text file, all of which need to be set carefully because otherwise the application fails - probably silently, maybe leaving a log message somewhere completely unexpected - and most of which are irrelevant to 90% of users who just want Something That Works.


It's not a cosmetic difference. Unix has a single-rooted filesystem. VMS did not (like DOS). That doesn't invalidate your complaints about file naming and location, but it isn't a cosmetic difference.


Well for starters, those are different folders because they mean different things. If you’re ever trying to figure out where a binary is located, `which` and `whereis` will do that for you. Also programs will list in their `man` page where config files can be set etc, and sometimes they allow multiple options.

But yes, your inability to search anything up or to understand the basics of Unix constitutes a complete design failure.


And then there was DECnet ....


> that's a cosmetic difference

yes, mostly.

> Unix was "designed" by hyperactive comedy racoons with ADHD.

I have met the authors of lots of significant parts of unix and I have found them - universally - to be smart, competent and introspective.

I also found it interesting that most (85%) of top athletes were introverts.

Now to compare VMS with unix, you'll find there are some things that are better because capitalism works. Someone was in charge of VMS development, and people were paid to solve specific problems for the operating system or customers. But some things are worse because people were in charge of making trade-offs for current customers vs future direction.

On the other hand, unix also has some points in its favor. A lot of things have appeared that would not survive in a commercial business. Ideas seem to live or die depending on their merits more than by fiat.

The thing is - when people want "something that works" it usually involves tradeoffs that people don't want. You can run an older version of centos/rhel and you will find more 'it just works' at the expense of (relatively) older software. (or you can go look at the source and fix it yourself)


"[foo.bar.bletch]something.txt"

Why is that better though - isn't that just sheer familiarity?

NB It looks ugly to me as well but a lot of syntax looks ugly until you become familiar with it (C, PostScript, Lisp, Python all seemed pretty weird looking to me first time I used them and I came to really like all those languages over time).


Yeah, hence the “functionally”. SYS$[HOMES](something username) was weird as heck, but it worked largely the same way.


That looks similar to the first OS I used, RISC OS, which had paths like

  ADFS::Symbiote.$.SchoolWork.English.Essay
That's Filesystem::DiskLabel.$.Directory.Another.Filename. There were no file extensions, but there was a file type stored as metadata (I think on the directory node?).

Given a current filesystem and a current disk, just the path from the root ($) directory was needed.


Similar story here - as a physics student, I used VMS in the first year of uni, mainly for its Fortran compiler (WATFOR) and the NAG libraries, and Unix thereafter. Both had advantages and disadvantages, VMS was touted as more secure (as in, security was a fundamental consideration in its design, whereas it wasn't the case for Unix). I particularly liked the automatic versioning of files, making reversion to earlier known-good files simple.

VMS knowledge came in handy years later when I had to work in VMS Fortran on the company VAX - I even coded the VMS Fortran random number generator in g77 (thanks to the excellent documentation) for consistency when we ported our code to DOS after the VAX was shut down.


My freshman exposure to VMS was for an assembly class. The course also seemed to double as a history of computing. This was back in the early 00’s, for reference.


> [the name] Grep suggests to me that the author of this one had been reading too much Robert Heinlein (you grok?), or possibly --- and this is in fact quite likely --- was under the influence of psychotropic substances at the time.

As funny as this is, the actual origin for "grep" is even more interesting – and, at least to me, quite mnemonic. "grep" comes from ed, and stands for the command "g/re/p", that is

  global/regular expression/print
https://en.wikipedia.org/wiki/Grep
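
The construct still works in ed(1) today; a small demonstration (file name made up):

  printf 'alpha\nbeta\ngamma\n' > demo.txt
  printf 'g/a.*a/p\nq\n' | ed -s demo.txt
This prints "alpha" and "gamma": (g)lobally find the (re)gexp and (p)rint the matching lines.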


"Where GREP Came From" by Brian Kernighan from Computerphile:

* https://www.youtube.com/watch?v=NTfOnGZUZDk

For those unfamiliar, Kernighan is the "K" in K&R C and the "K" in AWK:

* https://en.wikipedia.org/wiki/Brian_Kernighan


Great interview with Brian on Lex Fridman's podcast - https://www.youtube.com/watch?v=O9upVbGSBFo


I listened to that one last night, and then a few more. He's been in rotation for a while now.

I don't know how I ended up subscribing to Lex Fridman's podcast, but it's both wonderful and somewhat bizarre to me.

On the one hand, he comes across as the kind of character you'd expect in a horror film. Meticulous, well-dressed, friendly, but affectless in his speech and oddly emotionless and formal.

But then the actual questions he asks, and the observations he makes, are IMO a step above most interviewers. I can think of a number of 'greater' interviewers, but he's definitely well in the 'very good' range.

My apologies if Fridman reads this comment. I definitely don't want you to stop doing things the way you do :). It's just somewhat different, in a strangely 'boring' way, that I'm not used to from most good podcast hosts that I'm familiar with. Most are, sometimes to the point of irritation, exceedingly affable and chatty.


> On the one hand, he comes across as the kind of character you'd expect in a horror film. Meticulous, well-dressed, friendly, but affectless in his speech and oddly emotionless and formal.

Are you aware of the The Report Of The Week channel run by 'Review Brah'?

* https://www.youtube.com/user/TheReportOfTheWeek


> was under the influence of psychotropic substances at the time

He's saying it as if it's a bad thing?...


Note that it's not "global" or "print" - the commands are the literal single letters "g" and "p".


I once asked my then-mentor if it was an abbreviation of "GNU rep". He was highly amused.


I really miss VMS. Adding a command wasn't just a matter of throwing an executable on the path; it had to be declared, with two options: you could declare a command that did its own argument parsing, or (the better option) you could configure the arguments externally using (iirc) a .cld file, which provided a way of specifying all the arguments and options for the command in a straightforward way. I did this for all the TeX programs around 1990 (I think this might have been the first TeX version which didn't use separate binaries for iniTeX vs TeX). It was nice because the OS automatically managed things like abbreviations, so you only needed to type as much of a verbose command or option as was necessary for it to be unique. So while the command might have been `DIRECTORY /FULL`, one could just type `DIR /FU` to get the same result. The VMS help system was also fantastic and allowed for easy browsing of available commands and their instructions in a way that man+apropos only approximates (not to mention that the tradition of using relatively verbose but understandable commands and options made it supremely usable).


never understood why so many people complain about the short commands, but nobody contributes a central list of aliases with verbose names to them.

alias copy_files_from_one_place_to_another=cp

this tells me short names aren't a problem to begin with. And everyone would have the same discoverability problems with longer ones.
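
To illustrate, a tiny hypothetical ~/.verbose_aliases one could source from a shell rc file (the alias names here are invented; there is no canonical list):

    # Hypothetical verbose synonyms for the classic short commands
    alias copy-files='cp'
    alias move-or-rename-files='mv'
    alias remove-files='rm'
    alias list-directory-contents='ls -l'
    alias search-file-contents='grep'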


Verbose is a relative thing. `copy` is an intuitive name, `cp` not so much. Likewise, it's a lot easier to remember how to do something like `copy /exclude=*.bak [.work] [.prod]` than the equivalent command in a Unix shell, apparently:

$ shopt -s extglob # to enable extglob

$ cp work/!(*.bak) prod/
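
For what it's worth, a rough extglob-free way to express the same thing using find (directory names as in the example above; this is a sketch, not a drop-in equivalent):

    # Copy everything in work/ except *.bak into prod/, without extglob:
    $ find work/ -maxdepth 1 -type f ! -name '*.bak' -exec cp {} prod/ \;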


You realize all that hinges on a huge IF: that you are a native English speaker, right?

The glob stuff is a case of power vs. convenience, but you can do something very similar with bash [] or {} syntax.

Not being an apologist, but I like the Unix approach of blurring the line between computer user and programmer. It makes everyone up their game both ways: users become more demanding, and code becomes more accessible.


Which is not to say there aren't things I like about unix. The ability to chain commands through pipes is very useful (assuming that the program is pipe-friendly), and the syntax for directories in unix feels more comfortable than the same syntax in VMS and both are by far much better than the hot mess which is the DOS/Windows shell.


I remain amazed that the #1 shipper of Unix systems today is .. Apple.

I grew up on Unix in the 80's, cut my teeth on MIPS RISC/os and then Irix and SunOS and all the joys of the very first days of Linux, oh my .. and I was fully prepared to be an SGI fanboy for the rest of my life - and then, they abandoned Irix and shipped NT. sadface

So when the tiBook came out, and it was promised to have a Unix on it, I jumped off my Indy and O2 workstations onto Apple - a company I'd never imagined, in my wildest 80's and 90's fever dreams, would become the one company still standing in the Unix wars. The tiBook was just soo good, and despite all of its warts in the early days, Mac OS X's underpinnings with Darwin were just good enough to swing the decision to use it as a Unix developer's machine. And it has been solid for 20 years as a platform in that regard, although the writing is definitely on the wall for us Unix geeks who nevertheless carry a Macbook.

If only SGI had made a laptop, and not been wooed away from sanity by the NT freaks. Can you imagine if SGI (Nvidia) had made that laptop before Apple did .. ? I sure can.


They made a rather strange o2 laptop that never made it to production.

A number of legendary companies with great potential misstepped: GRiD, Blackberry, Be, MasPar, Thinking Machines, GO Corp, Intergraph. Heck, go back to the Evans & Sutherland LDS-1 in 1969, or when BBN decided not to get into hardware after making the first internet hardware ever, the IMP. Or how about how SRI fumbled the ball after Engelbart's work, or SDS, who made a bunch of the NASA Apollo hardware.

The world is littered with great technology companies that didn't stick around because we determine success and failure by handshakes on the golf course.


Yeah, I saw the Indy-laptop once during a demo in Hollywood .. was definitely a clunky pile of junk.

You're right about that golfing handshake.


SGI hardware wasn't that great from about the O2 onwards. I used a lot of those machines and they were kinda dogs.


It's been said pretty often -- and I think it's true -- that the introduction and success of OS X drastically slowed the adoption of Linux as a mainstream desktop option.

I take no position on whether this was good or bad overall; it's just a statement of fact. Lots of us who would've otherwise needed to shift to Linux for LAMP development or whatever after the dot-com crash migrated to the Mac instead, because it meant a unix laptop that Just Works that also allowed us to run native MS Office, etc. That was a powerful value proposition (and remains one).

" the writing is definitely on the wall for us Unix geeks who nevertheless carry a Macbook."

I do not yet see this writing.


Were you amazed back in the days when one of the most popular Unix flavours was produced by Microsoft? Yes, Xenix.


No, because I was using a real Unix (RISC/OS) and saw the writing on the wall for Xenix soon enough to know it was a dead end. I knew absolutely nobody in my region (SoCal) who used it.


"Anyway, have you ever tried to use man? It's fine as long as you know what you are looking for. How would you ever find out the name of command given just the function you wanted to execute? You can't. "

One of my gripes with UNIX systems is how opaque they are


I learned Unix with SunOS in 1990. Man pages were excellent. The first command I learned was "man man". There was a paper version of the man pages available in the computer room (among lots of other Sun documentation). I only got annoyed with man pages when I switched to Linux. Back then, I discovered crypt(3) by looking at the index of the man pages. One week later, I had cracked 10% of the school's passwords (building my own dictionary).


I remember getting new Sun workstations at that time and part of the fun of getting new Suns was taking the boxes of printed documentation that came with them and slotting them into the supplied ring folders.

Later on they would ship the Adobe red/blue/green PostScript books with OpenWindows.


A full set of VAX documentation was at least the size of the VAX. I still have my 3-volume copy of HP-UX's man pages.

Including the famous bug for tunefs which is still current in FreeBSD (talk about slack in addressing the real issues!):

"You can tune a file system, but you cannot tune a fish."

https://www.freebsd.org/cgi/man.cgi?query=tunefs&sektion=8#e...


The docset was large, but it was not as large as a VAX.

I had to learn VMS in a hurry in the last two months of 1986. I had the full binder set on a shelf in my toilet (Cambridge, UK). You could not have put a VAX in there :)


> "You can tune a file system, but you cannot tune a fish."

Well of course not; that would be "tunefsh", which no one has gotten around to writing.


DEC documentation was always very good but then it was a product you paid for.


Lots of people paid lots of money to DEC and Sun and HP and IBM and others for Unix as well.


PowerShell takes the UNIX philosophy, cranks it up to 11, and makes commands trivially discoverable.


There are things that powershell does well, but I wouldn't say that it's really all that unixey. It relies far too heavily on the user having an understanding of the Windows object model, which frankly a lot of sysadmins don't have and refuse to learn. In unices the pipeline is just that, a way to send a stream of bytes from one place to another. In pwsh it's more complicated than that, since it passes objects along with the preceding command's metadata.

In my personal experience, powershell is fantastic for writing scripts and tooling, but not really so great for actual use as a shell. What makes a good shell is speed and 'muscle memory', imo.


That's nonsense, really. What Windows object model?

Your 'simple' pipeline becomes hardcore once you take into account all the other things you require, like grep, awk, sed, xargs, mount, etc. Heck, even basic boolean stuff is from another dimension, with executables like `[` or `true`/`false` (yeah, I know they're mostly builtins nowadays).
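
A quick illustration of how literal that is: `[` is just a command name whose last argument must be `]`, and on most systems it exists both as a builtin and as an executable on disk (the output shown is typical of bash on Linux and will vary):

    $ if [ -f /etc/passwd ]; then echo yes; fi   # '[' really is a command
    yes
    $ type -a [                                  # builtin plus an on-disk binary
    [ is a shell builtin
    [ is /usr/bin/[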


> your 'simple' pipeline becomes hard core once you take into account all other things you require like grep, awk, sed, xargs, mount etc.

That's the whole point of unix. Non-integrated tools that talk to each other using plain text.


It's exactly the same for PS.

Just as you have to look at a text output of a unix command to figure out how to parse it and extract the subset of information you need from it, so do you have to look at the help metadata of a PS command to figure out how to extract the subset of information you need from its output.

The advantage of having objects instead of text is that if you thought you could parse the filename out of grep matches by splitting each line of `grep -H`'s output on `:`, you failed to account for filenames with colons. With PS's sls, its help metadata tells you it outputs `Microsoft.PowerShell.Commands.MatchInfo` objects, and the documentation for that type tells you it has a `Filename` property of type string.
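
A quick shell illustration of that failure mode (the filename is invented for the example):

    $ printf 'needle\n' > 'notes:v2.txt'
    $ grep -H needle *.txt
    notes:v2.txt:needle
    $ grep -H needle *.txt | cut -d: -f1   # naive parsing truncates the name
    notes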

One PS command is not "integrated" with another PS command; they're all integrated to .Net and a bunch of built-in PS types.


Thanks for your answer, and I see where you are coming from. But your unix is not the same as my unix. For me, unix means programs that can be easily written using only the getchar and putchar functions. If you have "types" and whatnot, it's not unix.


It's not often you hear someone criticise a tool for not having enough unnecessary encoding and parsing steps involved.


Are you talking about people who promote JSON-RPC and whatnot? Sure, they are nuts, but what does it have to do with unix?


Funnily enough, given the linked post, PowerShell owes its philosophy to VMS as well, e.g. the verb-noun command structure.

https://devblogs.microsoft.com/powershell/verb-noun-vs-noun-...

https://ilovepowershell.com/2013/08/05/microsoft-virtual-aca...


How to discover commands?


Powershell pretty much has universal tab-complete. Typing a command, '-', and repeatedly hitting tab will cycle through all available arguments for that command, and arguments will be quite consistent between commands. But you can also tab complete variable names and the commands themselves.

Discovering the cmdlets is not as trivial as it could be, because they are named verb-first instead of noun first. Get-<tab> is not as useful as NetAdapter-<tab> might be, but since things are named sensibly you can do a little guess work and use tab-completion to find what you're looking for in the vast majority of cases.


> Discovering the cmdlets is not as trivial as it could be, because they are named verb-first instead of noun first

Irrelevant. Use

    Get-Command *network    #network anywhere in noun
    gcm *-network*          #network as first word in noun
PowerShell has it all, its just that people don't bother to learn it.


Which good documentation do you recommend?


"PowerShell in Depth", or "Learn Windows PowerShell in a Month of Lunches".

But experience is more important. Simply do everything in posh, even if there is a handy GUI. After a few months you'll be closer to a pro than with any book.


Shouldn't the first one be "network as last word"?


Indeed. My bad.


[flagged]


Yes it does. Learn it before commenting.

Furthermore, it's definitely way more unixy than any *nix shell.


Anecdote this jogged from my memory: my first Unix experience was NetBSD on a SEGA Dreamcast game console, of all things. This must have been some time in 2000 or 2001, but I remember sitting there trying every possible command I could think of to find one that did anything at all, so I could keep going and learn more. The DOS commands I knew obviously didn’t do anything. Neither did anything like ‘help’ or ‘?’. After five minutes or so of trying commands off the top of my head I tried ‘exit’, and that was the end of my Unix experience until I mail-ordered a Mandrake Linux CD-ROM set in 2002 :)


I also ordered one :) probably around 2003 - it came with a printed user guide.


Whilst you have to know it to be able to use it, man -k THING will let you search for a manual, today. (--apropos if long options are supported, which isn't exactly clearer.)


You should try 9front (Plan 9); it's kind of the evolution of the Unix philosophy.

http://9front.org/

https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs


And afterwards try Inferno, which is what Plan 9 wanted to actually be like.

http://inferno-os.org/inferno/limbo.html

http://doc.cat-v.org/inferno/


I tend to share your thinking on this, although at least one knowledgeable online-imaginary-friend vigorously disagrees with me.

I have looked but failed to find a VM image or even images of installable media to build one online. If you know of any, I would like to know -- and if you don't know of any, but have the skills, I think you could do a big service to the OS research community by making and sharing either install media or a VM or both.


The OS research community knows that it is located at http://www.vitanuova.com/inferno/downloads.html.

Ever heard of search engines?


> which is what Plan 9 wanted to actually be like

No, not really. Can you prove that in any way?


Yes, really.

It was developed by the same core team that created Plan 9, and Limbo is the evolution of Alef, which Rob Pike wasn't happy to have to drop for the Plan 9 3rd edition.

https://en.wikipedia.org/wiki/Alef_(programming_language)

> Alef appeared in the first and second editions of Plan 9, but was abandoned during development of the third edition.[1][2] Rob Pike later explained Alef's demise by pointing to its lack of automatic memory management, despite Pike's and other people's urging Winterbottom to add garbage collection to the language;[3]...

> The Limbo programming language can be considered a direct successor of Alef and is the most commonly used language in the Inferno operating system.

https://en.wikipedia.org/wiki/Inferno_(operating_system)

> Inferno was based on the experience gained with Plan 9 from Bell Labs, and the further research of Bell Labs into operating systems, languages, on-the-fly compilers, graphics, security, networking and portability.

For some strange reason many stop at the middle station, instead of going all the way to the end.


One of my favorite features of plan9 is how it handles the bin directory.

For one, you can have sub-directories inside the bin directory. This lends itself to a nice hierarchy of commands and encourages writing small, single-purpose commands instead of large monolithic commands. An example of this is the `ip/ping` command.

Another feature uses the union file system, which saves me from having to mess with $PATH to locate commands.


"man -k" (or "apropos") will let you search for commands. It is pretty primitive though, definitely not a Google quality search.


Read a blog, book or cheatsheet? Once you know the command it's much easier to remember (and explain) than "click here, then there, then double-click that", etc.


man -k

apropos

Are 2 options


apropos


man -k "key words of the thing i'm searching for"

^^ also works fine ..


Finding man, cold from a prompt, is hard. If you type help, even today, you might get the shell one which doesn't take you there. Ditto info.


Oops, looks like Bash at least accounted for this case. Really, I think you want to route new users to "man intro" first, if they just asked for help.

    $ help
    GNU bash, version 5.0.17(1)-release (x86_64-pc-linux-gnu)
    These shell commands are defined internally.  Type `help' to see this list.
    Type `help name' to find out more about the function `name'.
    Use `info bash' to find out more about the shell in general.
    Use `man -k' or `info' to find out more about commands not in this list.


Current macOS (Catalina with zsh):

   (base) user@myMachine ~ % help
   zsh: command not found: help


Are we going to completely ignore the fact that this purported Requiem for VMS tells us in great detail why Unix suxx - but literally (and I do mean literally) not a single thing about why VMS should be mourned?

If this is what VMS advocacy looks like, I'm not surprised it disappeared.


Isn't it?

What's more, and this keeps being repeated and emulated, I do not understand how being arrogant, condescending, insulting, full of bad-faith scorn and hatred is supposed to make me want to be part of a group or use a tech, API, stack, framework or what not.

"I'm a VMS user", they seem to say, "watch me be a nasty person."


The complaints here don't seem to be much about how Unix works in a deep way, but rather the particular language/syntax which is used to interact with it ("rm", etc.). While that's certainly annoying in many ways, so are almost all languages in common use. You could write a lot about how horrible English is (or German, as Mark Twain famously did). But English is a useful standard - politics, inertia, and the value of a common language are forces far stronger than the fact that the language is more annoying than it could be.

Am I being trolled? Probably :(


The back story to this is that the DIGITAL Command Language (more or less the equivalent of a Unix shell), with its excellent filesystem-level features and the environment around it (e.g. the well-written and extremely thorough documentation), was light-years ahead of anything you could get in most Unix environments at the time. Going back to the Unix shell felt a bit like a step back.

FWIW, there are plenty of complaints about things other than syntax after the fifth paragraph or so, too ;).


Ah yes!

Documentation for VMS and the whole shebang of layered products.

Until this day, in my book, the absolute gold standard when it comes to documentation.

Having the arguably best tools suite to develop (LSE, Debugger, what have you) also didn't hurt.

I still believe it the best operating system I ever worked with.

Ironically the article describes quite precisely why DEC failed.

Most of the company had a visceral hate of anything Unix and anybody involved with anything that even smelled faintly of Unix was a second class citizen.

Well, looking at Ultrix may have been the main reason for that. Ironically, decades later they had one of the best Unix offerings on the market with Tru64 Unix.

DEC was in many, many respects an awesome company. Shame that Compaq (and later HP) never really had a clue about what they actually got there.

Source: I worked for DEC from 90 - 94.


>Most of the company had a visceral hate of anything Unix and anybody involved with anything that even smelled faintly of Unix was a second class citizen.

Reminds me of ye olde UNIX-Hater's Handbook[0]

0:http://web.mit.edu/~simsong/www/ugh.pdf


> Until this day, in my book, the absolute gold standard when it comes to documentation.

Amen to that! I started my career in a VMS shop, did a lot of DCL scripting. I have never seen better technical docs, before or since.


DCL was so much better than MCR. In DCL I could use "DELETE sys$home:*.*;*", but with MCR I had to resort to "PIP/DELE [7,1]*.*;*". DEC just kept evolving for the better.

Of course, sh was just "rm -rf * .??*", and that is so much more discoverable, because nothing says "delete all your files" more than some line noise.
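
As an aside on just how much that particular bit of line noise asks of you: `.??*` deliberately skips `.` and `..`, but it also silently misses one-character dotfiles (example files invented):

    $ touch .a .ab regular
    $ echo .??*        # matches .ab but not .a
    .ab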


You are right. Some other complaints have not aged well either (especially things like case insensitivity, which we all know is extremely difficult to get right once you leave the confines of English).

Unix has a lot of issues on a lower level, but very few people discuss that, since it's a complicated topic.


Quote from the article:

Douglas Adams may well have had Unix in mind when he described the products of the Sirius Cybernetics Corporation thus: "It is very easy to be blinded to the essential uselessness of them by the sense of achievement you get from getting them to work at all. In other words --- and this is the rock solid principle on which the whole of [its] Galaxy-wide success is founded --- their fundamental design flaws are completely hidden by their superficial design flaws."


I just finished a contract at a client that still has a live OpenVMS system running a mission critical application. It was interesting to compare my 25 year old fond memories of the OS with the practical experience of using it on a daily basis. No command line history, crazy long file paths, and case insensitive passwords were a shock. On the other hand, the built in DCL programming language was a saving grace. My wife, amazingly, found my decades-old copy of the "Writing Real Programs in DCL" book. Because of that, I looked like a VMS superhero.


I think the author is arrogant and closed-minded. He underestimates the complexity of software systems and strives for some utopian "elegance of design" which I think doesn't exist. The only thing which matters is to deliver results and move forward. Requirements change: back in 1990 the internet existed in a much, much smaller form, and the WWW was just about to be invented. Linux had yet to be invented by Linus. Unix and Linux survived and adapted; VMS (or whatever it was named) didn't. Also, the argument about "speaking English" is again so arrogant and closed-minded: not all of us are native English speakers. To me rm or delete or del or banana doesn't really matter...


I have a feeling that Windows Nano Server w/ PowerShell + .NET Core might make this text relevant again soon. Having a consistent OS that scales from containers to servers and desktops is a big benefit in the corporate world. Now if it matched the performance of Alpine, played well with WSL, and Microsoft managed to push the major open source stuff for compatibility, I'd give Linux 5 years. But only time will tell...


I really, really hope that you're wrong.

I optimistically hope a not-too-future (within 10y?) major version update of Windows is in reality a *nix distribution. As long as they can keep runtime compatibility with older versions of Windows software, I think it'd be a big win for Microsoft for various reasons.

The only real challenge would be to get driver vendors in line.


I don't think either is likely. Linux is embedded into the cloud space and the embedded space.

People have been down the Windows CE and Windows RT rabbit holes before. Windows Nano does nothing there.

In the server space, it's a possibility, but why? Other than running MS specific software, what is the benefit?

On the other hand, even with WSL2, Linux is not going to have its mythologized "year of the desktop", unless you count Chromebooks. So Windows will maintain its niche there.


Why would you want that? That is the last thing I want.

It is like saying that in ten years we will only have Pepsi Cola. As the only soft drink. You can get Pepsi Cola Mint, Pepsi Cola Cherry, Pepsi Cola Regular, etc.

But no matter what it will always be Pepsi Cola.

I want more, a lot more viable operating systems than we have now. Now is a sad place to be.

Linux was never created to be a modern operating system.

Parts of Windows NT were derived from VMS, though more and more of that has been removed. Some parts were removed and had to be reinvented (WSL).

Back in the day, you could pick different hardware and you could pick different OSes (often tied to specific hardware).

I liked Atari ST TOS/GEM. I thought it was way ahead of its time.

I did not think that PCs at the time of the early Atari STs were even comparable. Lots of people loved the Amiga, a great machine. Some people had the Archimedes (ARM). You had Macs with PowerPC. Lots of choice and lots of competition.

Now you buy a computer of a specific design with little direct competition, though that is improving now.

And you can pick between Linux or Windows.

A single computer architecture. A single choice, plus one, for operating systems.

I would really like to own a POWER-powered Linux machine, but they are tragically expensive. For the most part they do share the same architecture still.

On mobile, we have Android or iOS. Android has a lot of shared architecture for obvious reasons and iOS most certainly does.

It is like the American election: which white geriatric misogynist would you like?

Linux is fully geriatric. Windows NT is getting there too.

Can we please have a couple of teenage operating systems? Some new viable babies?


Some that you forget: BSD (Net, Open, Free, DragonFly), macOS, OpenVMS, Haiku.

And hey, I’m sure if you work on implementing one maybe others will be interested in writing software for it. And that’s kind of the point, I guess: it’s a huge undertaking with little financial value for a business as opposed to extending on existing work. Standards give us a common language.


Haiku, maybe?


> I optimistically hope a not-too-future (within 10y?) major version update of Windows is in reality a *nix distribution.

I very much hope you are wrong, as this will signal the death of personal computing.


Why so? Imagine WSL eventually morphing into an LSW. The end-user experience needn't be affected, apart from the more technical power users.


Because frankly I think the Linux Desktop as it exists is incompatible with a good personal computing platform. I have rambled on at quite some length about this elsewhere on HN. Were Windows to become just another cobbled-together leaky abstraction of a desktop on top of Linux, it would suffer from all of the same problems.


How much have you worked directly with the WinAPIs? Because oh lord.

I imagine it'd be more holistic and maybe not so similar to current desktop distros. No X11 for sure, but not sure if it'd even be wayland.

May not even be Linux - BSD seems more likely of the two, especially considering licenses. Remember that Apple did the same transition with OS X (which has roots in BSD).


> How much have you worked directly with the WinAPIs? Because oh lord.

What does that have to do with anything? Yeah, they're often anachronistic, but that's to be expected of something with over 2 decades of ABI compatibility.

> No X11 for sure, but not sure if it'd even be wayland.

WDDM and DWM have supported features for over a decade that Wayland still doesn't support.


> There is no effective way to find out how to use Unix, other than sitting next to an adept and asking for help

That's a rather unfair claim. Hundreds of books and tutorials have been written to introduce people to Unix...


Now.

This was written decades ago, when computers mostly lived in universities and the only way to learn Unix really was to sit next to an adept, as the quote says.


Well it's written in about 1994 by the look of it. So, this is the point where I was learning Unix and it's after the point where a Finnish university student's "hobby project" is building one from scratch, a pretty good one it turns out.

It's also coming either after, or at the same time as books like "The Magic Garden Explained" which explains SVR4 in considerable detail.

You should read stuff like this as at least 90% sour grapes. Is there value from some of the criticisms along the way? Sure, but mostly they're just angry because the thing they liked is clearly going away.


You're kind of making the article's point. What's the need for hundreds of books and tutorials if the system doesn't provide access to great documentation with just a "help" command?


My first corporate job in computing, in 1994, was at TeleCheck. They used an all VMS environment, though by then the machines themselves were about 30% Alphas and not actual Vaxes.

The denial about the platform's future was SUPER strong. TeleCheck IT and software development, at least in those days, was staffed by people who would either move on quickly or stay forever. They mostly hired right out of college, too, so the lifers were people who had never worked anywhere else. (This kind of employment monoculture is pretty destructive, IMO -- get some new ideas in there!)

This created a super weird environment. Everywhere else I worked back in those days was rife with industry publications, curiosity about how other systems worked, excitement about developments in software or networking even if they were on stacks or platforms other than whatever the site used, etc.

Not so there. I think this may have been mostly because reading, say, InfoWorld in 1994 would have made it much harder to ignore how narrow a niche they were occupying. The nature of the systems and software there meant people who stayed were gaining skills not useful anywhere else; pretty much everything (even the database system) was built in-house.

People were out the door constantly, going to other big tech employers in the area to either (a) pick up more marketable skills or (b) pick up a huge bump in pay. That's what I did after 2 years. (Turnover in the dev group was something like 35% a year, which is HELL on institutional knowledge.)

All that said, you could see SOME of the appeal of staying on VMS from their POV. Clustering was a big deal, because downtime cost dollars. File versioning in the OS -- which I still haven't seen implemented the same way anywhere else -- was fucking genius and made rolling back a bad release almost trivial.

But when you decide to ignore where the market is going, and stay on a doomed platform, there are real costs to pay.


I remember I was upset when we moved off OpenVMS (because market pressure) but it was so long time ago, that I don't even remember why I was upset. But I definitely resented the incredible gamble they took with Itanium, even before the gamble crashed.


The last modified header suggests (2000)

    Last-Modified: Fri, 08 Dec 2000 14:02:09 GMT


The letter is from 1994. See Starlink Bulletin #13, pg : http://starlink.eao.hawaii.edu/starlink/Bulletins


It also mentions 1969 as being 25 years prior.

Which also makes this article closer to 1969 than the present day.


Meh... I think the author of this article was cherry-picking a bit. VMS is no cake-walk either, and has its fair share of idiosyncrasies.

So you're a system administrator, and you want to change a user's password?

$ set default sys$system

$ mcr authorize

UAF> modify jimbob /pass="whatever"

UAF> exit

...while on just about any *nix, one might simply:

# passwd jimbob

<< enter the new pw twice when prompted >>

For every example of the original author complaining about *nix, I could probably find a counter-example of VMS being awful. Really, I think it reduces down to this: people prefer to stick with what they're used to, and what they were trained on. Anything else is "awful" and "inferior."


VMS - Dying, dead, or reborn? For those who lament the loss of VMS, know that Windows NT carried on. [0] (By the way, did you know that WNT is a Caesar cipher for VMS?)

Long ago I had my personal trials and tribulations with Unix. Fortunately, OS/9 [1] was there to help with that journey of discovery.

[0] https://www.itprotoday.com/compute-engines/windows-nt-and-vm...

[1] https://en.wikipedia.org/wiki/OS-9


Some context might be useful.

Starlink was one of two somewhat similar subject-specific networks set up in a period of enlightenment by the UK Science Research Council (as I think it was then) operating in the 1980s, particularly for largely interactive analysis. The other was an un-named and ill-publicized one for nuclear structure as opposed to astronomy. I don't know about Starlink, not being an astronomer, but I guess it was also rather ahead of its time, like the nuclear structure one.

Obviously it had changed in astronomy by then, but I didn't see the attitude that physicists (and later, structural biologists) shouldn't write software or build the necessary hardware before or around that time. We had people and job titles like "physicist-programmer", and did what was necessary, and we did fit the facilities to the problem (when not working at foreign labs). The software systems were designed to be user-extensible anyhow, in our case.

Personally I was glad to have the lightning-fast interactive graphics system on the nuclear structure GEC systems, and not the stodgy performance -- even when they weren't running something like Macsyma -- of all the VAXen I used. VMS was somewhat inscrutable to a physics hacker anyhow, so I don't understand why Unix made it more difficult, though I hold no particular affection for Unix.


Came for a discussion of OS/2, stayed for the comparison of Ken Thompson to E. E. Cummings.


> One can only conclude that the makers of Unix held, and still hold, the ordinary computer user in total contempt, and this viewpoint seems to me to be mirrored in the attitudes of the people who are inflicting this awful system on the rest of us. Does Unix's enormously steep learning curve have any function other than to deter the faint-hearted, those who may want to use computers without necessarily dedicating their lives to them? It does not seem fanciful to suggest that Unix is primarily about separating out the elite from the proletariat, the real programmers from the quiche-eaters.

I think it's worth pointing out that in the 80s/90s (and in the research in personal computing that preceded that period), there was a very different attitude about what it could mean to "use a computer." The demarcation between a "user" and a "programmer" was not so clear. Today it is stark. The author is sort of joking with this line, but it's likely that the unixification of computing has contributed to this divide.


> but it's likely that the unixification of computing has contributed to this divide.

It took a long time until a meaningful part of humanity got access to computers. The fraction that had access to timeshared machines via serial terminals was tiny.

In the age of the PET, the Apple II, the C64, it was usual to boot up your home computer and be greeted by a BASIC prompt (REPL, if you allow me). It was an immediate introduction to programming - a language was there, built into the computer's firmware, and you could just start writing code one second after powering the thing up. You could write programs that looked comparable to what you could buy at a store or type in from a magazine.

CP/M and then MS-DOS were a bit different. You were dropped into an OS shell rather than a programming environment. In order to program, you needed to explicitly start an interpreter or a text editor. You could still write programs that looked like professional apps on the platform, but you had to get a language, which was not usually provided.

Then the GUIs came. Now, in order to write programs for Windows, or the Mac (oh boy!), or anything else graphical, you needed to get developer tools. Microsoft's came in a crate. A Hello World app would be hundreds of lines. In order to make something useful that looked decent, you'd need to learn a lot. By then, most timeshared system users were left behind, as terminals couldn't cope with user expectations.

What was once a small crack is now a vast chasm.


I'm with you, but want to add that this transition isn't the fault of GUIs. The group at PARC that invented the GUI that everyone else commercialized had invented the GUI equivalent of "booting into a BASIC" -- Smalltalk.

Which brings me to another point: this divide has gone hand in hand with the idea that parts of a computing system should be universally swappable. But that prevents any "holistic" nature to a computing system, and therefore we get caught up in the idea of individual languages rather than environment and how everything works together.


As someone who grew up with Win 95 and remembers the Atari and Amiga from the early to mid 90s, Linux would surely have seemed archaic even at that time. I'm sure my preteen cousin who was showing my even younger self cool games and apps on the Atari didn't write a single line of code, and I doubt a large percentage of ordinary computer users were writing much code at the time. Someone using a computer could be seen as more technical by people who didn't want to touch one, but not more than a school kid showing her grandpa how to use an Android phone nowadays.


I think about tools like Hypercard, Applescript, and other systems that really allowed "users" to easily do things that today we might view as strictly the purview of "programmers". These kinds of things are no longer popular, but it's not because they were never popular, nor is it because they didn't "work." Our culture changed.


Unix is the ultimate example of "worse is better." VMS, which is what the article is about, was definitely more elegant, easier to grok, and far better documented. So was IBM's VM which had what we now call "containers" working simply and reliably 30 years ago.


> VMS, which is what the article is about,

Interesting then that it barely mentions VMS.


Yes. It's poorly written and spends too much time talking about Unix without explaining VMS, which purports to be the topic.


> VMS was definitely more elegant

Please, can you offer an example for this?


After 30 years away from it, I can't. But others have echoed my comments. Perhaps Bitsavers has some VMS doc. One thing was clear: you could learn VMS from the manuals. That wasn't the case with any Unix manuals I've ever seen.


> One thing was clear: you could learn VMS from the manuals.

Is that a defining characteristic of "elegance"? So far from the threads here all I've come away with is that VMS was easier to learn and/or had better docs, not that it was necessarily any better at actually performing work.


Turing proved that the only difference between any two computers is how easy they are to program and how fast they run. All computers are functionally equivalent in that every computer that can be made can solve all the same problems as any other computer which has ever (or can ever) be made.

So, in that sense, yes. VMS was more elegant than Unix because its design, syntax, and documentation made it easier and faster for users to grok and complete their tasks. As far as how fast or well the programs run, that's all about implementation details.


This brings back memories (of ten years ago).

I worked in astrophysics for a while, and maintained a large codebase that included a ton of Fortran, and had a lot of references to Vax VMS systems within the comments as that's what the team I worked with used mostly through the 80s and early 90s before moving to Linux.

One of the things that didn't seem to happen was C - there was some code here and there (I wrote a module for IDL - IDL is a proprietary scripting language that mimics and interoperates with Fortran in a lot of ways). Fortran was able to stick around because of a lot of work on great compilers, and I think, in general, the great speed up of commercial hardware has made the need for hyper optimization less necessary for a lot of day-to-day physics study.


Funny, today one of my coworkers was talking about how she used IDL at one of her previous jobs, and later I happened to see this comment. Neat coincidence :)


I think there is a bias for that, but I always love hearing of others who’ve used IDL. You should ask her if she knows of the coyote guide to IDL - I lived by it around 2008. Not sure if it was niche to my field or more general, or if it wasn’t a great resource before or after it was great for me :)


I started at a company in 1984 where VMS was pretty standard, running on 80x25 terminals. (They also had Xerox Stars, some people ran 1980s-era Macs, and there were a few LISP machines.)

Then someone introduced me to a sort of minimal unix that was running on top of VMS. Although I had never seen Unix before then, I immediately saw the advantages and switched over. If I had to do something on VMS, it was painful.

Later the company got first generation Sun workstations, running unix on metal, with large (for the time) graphic displays. SO advanced...and so much fun (except when I had to do stuff in C--mainly our team used LISP or Prolog). Eventually we even got an ARPAnet connection. But I digress.


Unix was at first a quite different experience, I think. The Bell Labs Unix had only a few utilities, each one was simple, and the documentation was good. There's a nice relevant quote: "Cat went to Berkeley, came back waving flags." It's about how the commands (even cat) got many additional options after third parties got their hands on them (or reimplemented them).

There are manuals for old Unix versions here, maybe start with V7 if interested: http://man.cat-v.org

The paper "Program design in the UNIX environment" or the book "The UNIX Programming Environment", both by Rob Pike and Brian Kernighan, might be of interest.


One of the first multiuser systems I had access to, back in the 90's, was a VMS box. Many years later, I bought an Alpha off of ebay, and have a VMS box of my own! I rarely boot it up because it sounds like a jet engine. It's a whole different world.


You can run it on a Raspberry Pi these days:

https://blog.poggs.com/2020/04/21/openvms-on-a-raspberry-pi/


Yes, I know. There is something about having the actual bare metal machine that I enjoy. I am a bit of a retrocomputing enthusiast. I am also running Alpha, not VAX.


Well, maybe more people would have used VAX/VMS or OpenVMS if it had been open source.


Well, that is the only reason why UNIX won and we got stuck with C: it is hard to win against free beer with source code available.

The symbolic prices that Bell Labs was allowed to charge for UNIX licenses were a gift when compared against traditional commercial OS prices in the 70's.


One more advantage was that C and UNIX were designed for portability. They started on PDP-7 and spread onwards from there.

VMS was designed to sell mainframes, so your choices were limited outside of that until the pedestal and workstation market arose around '88 (Alpha, MicroVAX, etc.). Even then it was only DEC hardware. That's how you kill an OS.


There were already portable systems outside Bell Labs back then, and anyone who has written UNIX software knows how "portable" C actually was at the time.

Had UNIX been sold with a price tag similar to VMS, without source available for universities to play around with, it would have been long dead by now.


Unix in the 1970s was completely irrelevant to the context of the article. I don't know to what extent we paid separately for the OS, but we had source for at least MVT/MVS and OS4000, of the ones I used, which we didn't later for SunOS.


When I first learned of OpenVMS I was incredibly confused by the name. I guess I'm so used to OpenX projects being open-sourced.


There was a FreeVMS[1] project but as a complete rewrite according to specs. It didn't get very far...

[1] http://freshmeat.sourceforge.net/projects/freevms/


à propos means apt/appropriate as a noun in French and in English, but the phrase « à propos de … » means “about …”, which is much more (wait for it) à propos. It is kind of weird that it was used in Unix wherein only the noun form made sense; and even with French in mind the “apropos <command>” syntax (without « de ») makes it sound weird. In practice I don't think I've ever used it, and when I was a noob I often lamented the lack of high-level documentation for Linux that would introduce the appropriate man/info pages.


> In practice I don't think I've ever used it, and when I was a noob I often lamented the lack of a high-level documentation of Linux which would introduce the appropriate man/info pages.

It was introduced in 3.0BSD, it's pretty old :). As for usage, while I'm not a native French speaker, I don't think I've heard it used as a noun that often. I can't speak (heh :P) for Canadian speakers of French -- perhaps there are different norms for the use of various idioms there, and I learned French in Europe -- but over here I don't think anyone ever had trouble figuring out what the apropos command would do...


Not saying it didn't exist, just saying it's not an introductory high-level Linux user-space documentation.

Also not saying apropos is incomprehensible, just that (1) it sounds weird to a Frenchman, and (2) there could have been a simpler alternative: "about".


Lots of Unix commands could have had better names :).

FWIW, though, the noun form is even less common in languages that borrowed the expression from French. In my native language (also a Latin language, so borrowing it was straightforward), its use as a noun is very limited. Saying that a book is "full of à-propos" (as in "pleine d’à-propos"), for example, wouldn't mean it's apt, or very relevant; it would mean that it's full of subtle, possibly contrarian or lewd motives. That's more or less how it's used in English, too.

To non-French speakers -- given when apropos showed up, the Unix audience was very American-centric --, who've only heard it used in the other sense, it actually sounds very appropriate, possibly even more appropriate than about, given how apropos (the program) works.


Yeah, I guess it's not that bad. (Also it's not a noun, but an adverb, my bad.) Then my only gripe is that it doesn't really sound correct in the <verb> <noun> syntax, but then again many other commands don't either (like cd, more/less/head, man, du)


"apropos of nothing" is a phrase I've heard Canadian English speakers use, and so far nobody else.


Key quote for me :

> Actually, the answer is easy. Operating systems cost hardware manufacturers, and therefore consumers (us), money. Unix is free, and whatever its defects at least no one blames the hardware manufacturers


>is "rm" (pronounced "remove") really synonymous with "delete". Do I call the deleters when I want to move house. How would my first cousins once-deleted feel?

I don't think it's good to make the command name an English word. Perhaps it would make the command easier to discover, but it would also introduce certain unpredictable connotations.

For instance, if the command were called 'remove', then someone might presume that the file was being 'removed' and placed somewhere else. 'Remove' has the connotation that you are taking something and necessarily putting it somewhere else, which is not what 'rm' does.

'delete' has less ambiguity, but I'm not confident that similar mixups wouldn't happen. It's better for the command to be something idiosyncratic so that the user goes in with no assumptions about its function.

To paraphrase Rob Pike, the command is not remove, but 'rm'. It's called 'rm' because rm is what it does.

http://mail.9fans.net/pipermail/9fans/2016-September/035429....


But that's really nonsense, right? It's clearly intended to be a shorthand for "remove" that saves a few keystrokes, just as "cp" is a shorthand for copy. And remove is ambiguous compared to delete. These are petty complaints, but Rob Pike's attempt to dodge them seems a bit disingenuous.


> Unix seems to consist largely of arcanae which be learnt only by taking instruction direct from a priesthood who seem largely to be stuck in the anal-retentive stage.

Tell that to linux nerds who encourage people on forums to install arch from scratch if they want to learn about how computers work.


> install arch from scratch if they want to learn about how computers work

15 years ago it was "install gentoo to learn how computers work" but all it really teaches you is how the gentoo/arch install procedure works, or, more likely, how to type in commands from a HOWTO one after another.


VMS had the "sound the fire alarm and leave the building" attitude towards exceptional events that Windows NT has. That's why you used to see the "Blue Screen of Death" on Windows all the time, but they changed it in the early days of Win 8, when they realized you were violating the human rights of tablet users, even laptop users, and suddenly changed it so the system would reboot, reopen your windows, and pretend it didn't happen.

The flip side is UNIX, which tries to pretend things are OK even when they are strange. For instance, you might fill the disk with a log file, then 'rm' the log file and note that the disk is still full. The file is still taking up space on the disk, even though it is invisible. The space isn't released until the process that has it open closes the file or gets killed.
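
A sketch of how that plays out, and the usual ways to find and reclaim the space (the daemon, paths, PID and fd number are all illustrative, and the /proc trick is Linux-specific):

    $ some_daemon >> /var/log/big.log &     # process holds the file open
    $ rm /var/log/big.log                   # name is gone...
    $ df -h /var/log                        # ...but usage doesn't drop
    $ lsof +L1                              # list open files with zero links
    $ : > /proc/<pid>/fd/<fd>               # truncate via the still-open fd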

Contrast that with VMS, which, given an impossible situation, will do nothing (as in "not do anything"). Recovering from a "full disk" on UNIX is usually routine, but on VMS you will probably be recovering from tape or calling the factory for support.

This guy

https://www.youtube.com/watch?v=r1Esq1l0Yoo

(the year after that album was released) got mad because a midwestern state university had a "non-traditional" student spamming hundreds of USENET newsgroups. The university couldn't throttle the spammer because he'd already won a "free speech" lawsuit against the university newspaper.

The antagonist saw this as an existential threat to the net one Friday night and went Ender Wiggin on his ass.

He thought he'd fill the disk quota on the spammer's account on the VAX/VMS system so the spammer couldn't log in and delete his received email -- an excellent implementation of quotas meant you could usually get the victim running to 'mommy' (the sysadmin) for help.

The antagonist used ftp-to-email gateway servers to vastly amplify the attack and confuse people about the origin. At the last minute he found a way to get the ftp-to-email gateway servers to send email commands to each other in a cascading way.

The target campus went down in about 30 minutes, and, logged into the VT-100 terminal in his dorm room, the antagonist realized that he'd miscalculated and the attack was possibly 20,000 times larger than planned.

The target campus didn't come back until Tuesday evening, probably the disk was full for the whole VMS Cluster.

The spam stopped. They never proved anything, but a few days later they shoved an empty SUNtape into the antagonist's hand that allegedly held the files from his account at the campus central computer center -- just "being a jackass on social media" was a good enough reason.

About a decade after that incident, I had "the same thing" happen to an email server running Linux on this box

https://en.wikipedia.org/wiki/Cobalt_Qube

during the 'Love Letter' virus crisis. I had email accounts getting millions of emails a day on that badly underpowered machine, a beta-test model with a 2% slow real-time clock.

In 15 minutes I had the email server shut down, the disk-full condition cleared, and the logs working correctly again.

I brought up qmail and did not like what I saw in "top", so I shut it down. I wrote a uniq|awk script that picked out virus-sending IP addresses from the logs, and piped the resulting bash script into bash to block them at the firewall.
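
Something in the spirit of that script might look like this (the log path, log format, match string and threshold are all guesses, and a box of that vintage would have used ipchains rather than iptables):

    grep -i 'love-letter' /var/log/maillog |
      grep -Eo '[0-9]{1,3}(\.[0-9]{1,3}){3}' |
      sort | uniq -c | sort -rn |
      awk '$1 > 50 { print "iptables -A INPUT -s " $2 " -j DROP" }' | sh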

I brought qmail up and it was good, good enough to go to bed. By the next morning the viral load was serious so I automated the firewall script, installed an anti-virus scanner, etc.

In wartime, VAX/VMS gives up the fight but UNIX soldiers on.


I was getting started in the field right around the time this was written, attending college (and later being employed) at a university that was an all-DEC shop. I wound up being introduced to VMS and Unix at the same time. I greatly preferred Unix at the time, but looking back on it, it was for a completely different set of reasons than I would apply now.

The biggest was that I wanted a machine of my own to work on, not just a shell account on a big Vax with a tiny quota. The price of the lowest-end used machine with a hope of running VMS at a decent clip was astronomical, and upgrades were proprietary and expensive. I think the machine in my office at the time came with a perpetual single-user VMS license, but if I wanted to create shell accounts for my friends, buying license paks was also quite expensive.

If I wanted to, say, run IRC on the VAXstation, instead of the handful of popular clients that one could build on Ultrix, there was a single VMS port, and to build it would require licensing a compiler. Downloading the source would require buying the TCP/IP module license, as it would be years before that was bundled in.

So if I ditched the Vaxstation and got one of the MIPS based DEC pizza boxes, I could put Ultrix on it and at least get the GNU toolchain installed and be able to build and run a bunch of the software I wanted. My older colleagues who preferred working with VMS would spend weeks attempting to port some popular utility to VMS, which seemed goofy to me at the time.

Within a couple of years 386BSD and Linux came along, and my plan to save up the 5kUSD or so to buy my own DECStation quickly vaporized when I saw that pretty much everything I wanted to run on Ultrix would build on Linux, so I certainly had no more interest in VMS after that.

Now a couple of decades later, I sometimes think about the things that I desperately miss about products like VMS. The expansive, exhaustive documentation. The robustness of the systems in production, the great interoperability between the various compilers.

We also got pretty amazing support from DEC, but the amount of money that we paid for all those things was breathtaking. If you were using custom software for critical businesses or processes, it was not a terrible arrangement, but if you wanted to stick an email server in the closet of your small business and you weren't already a fan of VMS, it would have seemed crazy to choose one of those boxes. They were very much a premium product in a space where developers with DIY tendencies were suddenly being presented with heaps of cheaper choices, even if they weren't as reliable or well-documented.

I'm very curious to see what becomes of the x86_64 port; now that I'm a little older, more patient, and have a few bits of software that I'd like to stay running all the time, it might be nice to have the option. I think I even remember enough DCL to get around :)


And yet just now, several decades down the road, there’s a new, official VMS port to x86.


the 'Dying Operating System' not being DOS makes me sad.


These complaints haven't aged very well, and are not as funny as when I first saw them. He's complaining about case sensitivity?


UNIX is a product of evolution and as such is messy, with newer subsystems trying to reuse and control older, simpler subsystems.

Our bodies are the same way.


This passage at the end was the most thought-provoking for me:

> Unix and C however form a powerful deterrent to the average astronomer to write her or his own code (and the average astronomer's C is much, much worse than his Fortran used to be). The powers-that-be in the software world of course have always felt that "ordinary" users (astronomers in this case) should be using software and not writing it. The cynic might feel that since those same powers nearly all make their living by writing software, and get even more pay when they manage other programmers, then they have a vested interest in bringing about a state of affairs where the rest of us are reduced to mere supplicants, dependent on them for all our software needs. It is clear that Unix does not pose an insuperable barrier --- the ever-expanding armies of hackers out there are evidence enough that the barrier can be scaled given enough time and enthusiasm for the task. But hacking is not astronomy, and hackers are not astronomers, and it is astronomy and astronomers I worry about. We shouldn't have to scale the Unix barrier, and it is all the sadder because, since the advent of a VMS-based Starlink, ordinary astronomers have had something denied to most other scientists in this country --- readily accessible, reliable, user-friendly computing power that can be easily harnessed to a particular astronomical requirement.

I've spent the past hour or so figuring out what I think about this. It's easy to dismiss this piece of rhetoric as flawed reasoning or as irrelevant today. For example, I'm not sure if Unix was ever to blame for the self-serving user-developer dichotomy discussed in that quote above. After all, the Unix philosophy is all about small tools tied together with scripting, not the mega-packages mentioned in the article. And was there really not a good Fortran option for Unix in 1994? Or was Starlink (or its Unix-based successor) just too cheap to buy whatever good Unix Fortran implementation might have existed back then?

But the software development priesthood is still a problem; I'd say it's worse today. It's what worries me the most about the future of computing for the next generation, including my nieces and nephew. Will they be able to help shape the software that runs so much of their lives if they can't land a job at a handful of tech behemoths? Much of that software is even built on a foundation of Unix and more generally open source, but the power of that foundation isn't available to the end-users.

However, the free software movement's answers to these problems are hampered by their own elitism, which is in fact largely a continuation of the Unix elitism this article bemoans. It's true that we have much better languages than C, there are usable GUI desktop environments for free Unixes, and there are even whole open-source integrated development environments that one can run on a free Unix. But at some point one still comes in contact with the same arcane, haphazardly designed command-line environment that this article and the UNIX-HATERS Handbook rightly ridicule.

So if we actually want a system that's both free and open from top to bottom, and actually approachable to most people, then maybe we need to finally let go of Unix, or treat it as only an expedient substrate as Apple and Google do. Or as another writer put it, we need to free our technical aesthetic from the 1970s [1].

[1]: https://prog21.dadgum.com/74.html

Disclosure: I happen to currently work for Microsoft, on the Windows accessibility team. But these opinions are entirely my own, written on my own initiative. And yes, the fact that I benefit somewhat from the user-developer dichotomy enforced by the likes of my employer makes me feel uncomfortable. But I'm not sure quitting would actually help anything.


At least in the previous decade, and even at that time in other parts of UK science, I don't think there was a "software priesthood". It probably invaded astronomy and HEP earlier than elsewhere, and to some extent now resides in Research Software Engineering. The first OS on the nuclear structure interactive graphics system was written by a Birmingham physicist, for instance. I wrote analysis software as necessary (and read up on software engineering as well as the other techniques needed for the research, like electronics and high vacuum systems). The UK Collaborative Computational Projects were run by scientists who did the work, and you could get Fellow of the Royal Society-level help (though the two mainstays of CCP4 only got the title much later).

I don't know where this free software elitism is prevalent, as someone involved with it in research since before the term became necessary. I could appreciate the UHH, by the way, from a time when Unix was largely not free software.


This is REQUIEM


And to think that not long ago around 90% of the world's SMS traffic was processed by VMS...


Source?


I really thought (hoped really) this was going to be an article about Windows.


> Unix and C however form a powerful deterrent to the average astronomer to write her or his own code (and the average astronomer's C is much, much worse than his Fortran used to be).

Compared to what? If I want to quickly bang out some code and run it, installing Fedora (or another distro) is probably the least painful way to get it done.

I would say that Windows is a powerful deterrent to the average person writing their own code much more so than Linux.


This is about Unix v.s. VMS, not Windows v.s. Linux.


That's true. Still, I think cheph is right to point out that the situation has changed since then. I honestly got so caught up in the rhetoric of the OP that I had forgotten this for a moment.


In particular, we now download whatever software tools we like from this lovely internet thing, so it's less of a concern what programming environment the OS bundles by default.

Back then your choices on UNIX were to either accept whatever your vendor included (which would mostly be C), or buy a disk set of some other tools and hope they actually worked on your machine.



