Twice in the last decade I've been completely stuck on something, googled extensively, finally found the answer ... which I wrote, on Usenet, 25 years ago.
I have a notesfile on how to fix/debug things. When I started it, I would write things down after I fixed something and figured I'd need to remember it later.
Problem was, I would often forget to write things down, or naively think "that was quick to find out, don't need to write it down", which is a classic mistake of confusing post-facto mindset with pre-facto.
So now when I wonder how to do a thing, I first write down in my notesfile what my question is, so when I do resolve it I go and write down the answer to satisfy my need for closure.
I make a new markdown file for every program / VPS login command (the entire SSH command with custom ports), and really anything I know I'd otherwise have to dig out of bash history down the road to do again.
Those files are sorted into a handful of folders that let me find all these programs / commands quickly.
This is why I've made it a point these days that when I fix an issue, I immediately go back to all the forums, subreddits, and Stack Overflow sites where I posted the question and post the relevant solution. It's a lot of work, but it totally beats re-figuring out what you did six months later.
That’s just karma. I have tried not to do that to myself because so many others have done it to me. Still, sometimes the answer I left is too vague to be practical.
Oh man this happened to me last month. I was even wondering how this guy had the exact same issue that I was having and only realized after a while that it was literally me who posted it. I eventually found an answer and decided to post an answer in case future me forgets it again.
I've also had this happen, but never figured out the solution.
This is especially frustrating because your question is likely to pop up first in your search results because you probably phrase things the same way even years later!
I can't even tell you how often this happens to me. People come to me because I wrote something and I then tell them: "Let me go read the manual" (because I can't remember what I did a month later, so I'm always consulting my own docs).
I always tell people: "You think I wrote this manual for you, but I'm not that altruistic. I wrote it for future me."
To be fair, not exactly the same thing. I typically remember I wrote it, just not what it does or how it works. :-)
Still less efficient than dumping your piece of knowledge in something like a second brain, aka personal knowledge management.
The tricky part is whether to write down everything and make your notes too long to read/follow, or summarise things with the risk of forgetting an important piece of info [why you should do this or that, what you should NOT do, what additional piece of knowledge is required, etc].
The art of taking smart notes is really fascinating.
It boils down to the same issue. Whether you store the data externally or not, your brain is a bottleneck. And managing notes requires the same skill as curating your efforts.
The most depressing thing is being presented with an issue and finding my name attached to a closed ticket from years before with no explanation of how I fixed it.
Guess what, current me, you're going to learn this again from scratch because past me was in a hurry and couldn't be bothered to type out what he did.
BSG - this has all happened before, and it will all happen again.
I go through the same thing, but with a few "popular" GitHub issues. Every time I encounter those bugs I Google them, and the first result is my comment on the issue explaining the workaround I used.
I sometimes have to go back to GitHub issues I created, that I bookmarked for this specific reason.
I think that the Internet has modified how our brains retain information. I don't think this is an original idea, but I've observed that I'm real good at indexing where I saw some piece of information and very poor at storing the actual piece of information. I have to physically write things down to commit them to recallable memory.
Physically writing things has been a useful tool to increase recall for a long time. Taking notes in class may be more helpful than reviewing them later.
Certainly, quick access to information with a small search query helps tune our brains to get small search queries, but do some filing for a day or three and your brain will get tuned to quick recall there too. In the old days, you'd get real familiar with certain parts of books.
I suspect our brains have always been better at remembering meta-information than information itself. E.g. where you last saw an item of clothing vs the configuration of colours on it. In most cases this makes sense, and is essentially a space-saving trick, but there must be some point at which the meta-info becomes too inaccessible or too slow to utilise that remembering the full detail directly becomes worthwhile. Convincing your brain of that before it becomes too late can be a regular challenge with so much readily available externally stored information.
My favorite pattern is to go look for an answer in the StackExchange network, find myself nodding in agreement with one of the answers, then read the author... me, 10 years ago!
When I began studying physics I felt like a clueless novice. All those equations looked like chicken-scratch on paper. And, "What's the difference between kinetic and potential energy?" And there were so many concepts to understand - forces, masses, fields, charges, and particles. And differential equations seemed incomprehensible. Now, after a few decades I've mastered most all of that. Yet, when I try to grasp the real basics, like "What is space-time?", "How does superposition lead to a single macroscopic universe?", "What is the Higgs field?", "How did this universe 'start'?", "What exactly is mass?", then I realize I'm really back at novice again. I know almost nothing.
Reading the comments here I'm under the impression that people think OP wrote the manual for `su`, but the interesting fact here is he wrote the GNU coreutils implementation of `su` itself.
Debian (stable, at least) doesn't use his version anymore. The HISTORY section of the current manual says
> This su command was derived from coreutils' su, which was based on an implementation by David MacKenzie. The util-linux version has been refactored by Karel Zak.
I find my personal memory is very ephemeral: without repetition, things get forgotten quickly. On a long enough timeline, about 1-3 years, I start failing to recognize my own code. Since I have trained most of the junior devs I work with, they all have coding patterns similar to mine, which makes it even harder.
Granted, I have specific code tells, which helps, but like the article mentions, I have forgotten the why, or the bug I fixed that required those specific changes.
That feeling when you go on a rant about how something was done in a super confusing way, so you do a ‘git blame’ or equivalent - and it’s all your fault. Oops.
This is why there's a "from:" line in the standard commit message template I introduced for my group of teams. If there's a ticket, bug report, user story, specification change, or standards reason for the commit I want to know that later. Sometimes that makes it into the code comments, and sometimes into the docs. It's always in the commit.
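A minimal sketch of how such a "from:" convention might be wired up via git's commit template mechanism; the file name and exact field wording here are my own invention, not the commenter's actual template:

```shell
# Hypothetical commit template with a "from:" line recording the reason
# for the change (ticket, bug report, user story, spec, or standard).
cat > ~/.git-commit-template.txt <<'EOF'
# Subject: imperative summary, <= 50 chars

# Body: what changed and why

from: <ticket ID, bug report, user story, or spec/standards reference>
EOF

# Tell git to pre-fill every commit message with this template.
git config --global commit.template ~/.git-commit-template.txt
```

With this in place, `git commit` opens the editor pre-populated with the template, so the "from:" line has to be consciously filled in or deleted.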
I write this shit down so I don’t have to remember it, thanks.
Back in the age of modems, when PPP was still the new hotness, I was proud of myself for memorizing the IP addresses for a few of the services I used regularly. Two or three times I got to punk my friends when the DNS servers got messed up, and they’re sitting with me in the computer lab wondering what else we could do to pass the time when they looked over and noticed that I’m happily typing away in the very thing they couldn’t get into because The Internet Is Down. No man, it’s just DNS.
Older, sadder but wiser me knows that I still could have done that joke if I had written the numbers on a scrap of paper. I could have had twenty instead of five.
After a number of rounds of the game, “who wrote this garbage? I wrote this garbage” I began to recognize my own particular code writing style.
I still get nerdsniped though if certain people start copying my idioms and introduce bugs, because this looks like something I would write but didn’t.
I get a little offended by people who think using Stack Overflow is a character flaw. I go there because it will tell me things like: if you get the arguments to `dd` wrong, you'll zero out the drive you were trying to back up; that the function breaks if you pass null for the second parameter; or that it simply doesn't work properly.
To expand on this: people needlessly cargo-cult arcane dd commands, but on many (most?) systems `cp foo.iso /dev/sdb` will do the same thing, possibly with even better performance!
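The claim is easy to check without risking a real disk. Here is a sketch using throwaway files in a temp directory in place of a device node like `/dev/sdb` (so it is safe to run); both commands just copy the raw bytes of the image:

```shell
# Stand-in for a real installer image and a real block device.
cd "$(mktemp -d)"
head -c 1M /dev/urandom > foo.iso

cp foo.iso disk-via-cp                     # stand-in for: cp foo.iso /dev/sdb
dd if=foo.iso of=disk-via-dd bs=4M status=none

# Both copies are byte-for-byte identical to the source image.
cmp foo.iso disk-via-cp && cmp foo.iso disk-via-dd && echo identical
```

With an actual block device as the destination, the kernel's block layer handles buffering either way, which is why plain cp works for this at all.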
That's a restore, not a backup. I'm not actually defending dd. It's bonkers how long it was considered acceptable to offer that as a solution for anything except zeroing out a disk. I shed a single proverbial tear and contemplated the sad state of technology every time I used it, and felt dirty for having done so.
It is The Worst.
It's just the most egregious example. The variation in how commands process their trailing arguments is pretty broad. Is the last arg special, or the first one? grep and tar are in the majority, but cp works differently. They only make sense if you think like a power user.
What I meant was that using dd to format/back up disks is just way too dangerous for human beings. The prospect of losing your precious data even once in your lifetime is too high a cost for the questionable benefit of feeling cool about doing backups from the terminal. Just use a GUI tool for writing to physical disks. The additional visual feedback a GUI can provide is absolutely essential for human beings performing such a dangerous operation.
At this point recommending dd really is kind of telling the other person to go fuck themselves. It doesn't even follow the arg format of every other unix command. I had to double check to see if anyone fixed that in the interim. Nope, still a=b syntax, rather than -i device1 -o device2.
That really should be a giant clue that it shouldn't be used by anyone, for any reason. "It could be that the purpose of your life is only to serve as a warning to others."
What would you recommend as an alternative for disk imaging in Linux? I'm in the process of migrating a herd of PCs from HDDs to SSDs and would appreciate suggestions.
I'd say the easiest way to remember the argument order is that it's conceptually the same as for mv and cp: ln -s x y is the closest possible symlink-analogue of cp x y.
Ah, I remember it is the non-intuitive one, but then when I use it a while, I keep double-guessing which one is the intuitive one.
Kind of like when my wife tells me I'm doing something wrong and she wants it to be the other way.. I know she thinks this thing is important, but can't work out which way she wants it done.
I remember this as "ln=long -s=short $long $short", as in "ln -s $long $short" creates a $short file pointing to $long. But to your point, $short defaults to basename($long).
i've always considered man pages to be sort of part of the unix command line experience. since they're fast to pull up, it's no big deal.
but yea, it took me many years to wire down ln's target link_name semantics for some reason.
what helped for me was to reason about the single-argument form. `ln -s ~/opt/junk/bin/thinger` creates a link to thinger in the current directory. this single-argument form is easy to remember: if you want to create a link, of course you have to specify the target (from which the link name will be inferred), and since it's the only argument it has to be the first one. now if you want to give the link a different name, put that name in the obvious still-open place: the second argument.
Funny, I always reason about the 3+ argument form... For example, what does `ln -s foo bar baz` do? Well, it can't create a foo symlink that points to bar AND baz; what it actually does is treat the last argument, baz, as a directory and drop symlinks to foo and bar inside it. Either way, the targets come first and the place the links end up comes last, so in the two-argument form the first arg is the file you want to point to and the second is the link you want to create.
Edit: this line of reasoning works for common usage of a lot of other utils too, like zip/tar etc. Even grep - is it grep FILE PATTERN [PATTERN ...] or grep PATTERN FILE [FILE ...] ?
I learned to behave as if there is no second argument for ln. Instead of doing `ln -s path/to/foo bar` I do `ln -s path/to/foo` then `mv foo bar`. Of course it doesn't always work, but covers most use cases for me.
I generally only need `ln -s <src> <dst>`. I know the -s means symbolic link, but in my head I read it as "source", since that's how I remembered the order long ago.
This is confusing, because when you picture the link as an arrow (as in `ls` output), the link name is the source and the link target is the destination.
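To make the order concrete, a throwaway demo of both forms discussed above; everything happens in a temp directory, so it is safe to run:

```shell
# Target first, link name second -- same order as cp SOURCE DEST.
cd "$(mktemp -d)"
echo hello > target.txt

ln -s target.txt link.txt    # two-arg form: TARGET LINK_NAME
readlink link.txt            # prints: target.txt
cat link.txt                 # prints: hello

mkdir sub && cd sub
ln -s ../target.txt          # one-arg form: link name inferred from target
cat target.txt               # prints: hello (read through the new symlink)
```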
I think what a lot of people don’t get about the Unix command line is that learning to use the tool is part of the experience. Sure I forget the precise flags to get various tools doing what I need, but browsing the man page and rediscovering the breadth of the tool is half the beauty.
The only feeling I've gotten looking at a man page is, "well, I don't have time to dig through all of this. Guess I'll just look at Stack Overflow, which will have my exact use case and required parameters."
I remember the times before I learned Bash and the Unix userland. Those were dark times. I was stuck on Windows 98, which I really didn't like. It just felt so needlessly crippled. When I discovered Bash, Debian, the Unix way, it felt like a breath of fresh air.
These days I’m on macOS. And one of the best things about it is that it’s a great desktop operating system, with a Unix under the hood. The userland Apple ships is quite dated, but that’s easily solvable by installing GNU Coreutils etc via Homebrew.
nah. learning unix was fun. having all the documentation online and in an easily called up and consistent format made it possible. microcomputer operating systems didn't have things like "man -k" and while we can complain all day here till we're blue in the face about small quirks, it was light-years beyond the crap that was going on in microcomputer operating systems.
even when doing windows or embedded development with tools like visual studio or wrs tornado, i've always insisted on having MKS or cygwin for command line tasks.
This is why, imo, PowerShell (when done right) is more powerful than bash (no matter how correctly it's done). PowerShell cmdlet names are descriptive and generally make sense, as do the parameter names. Additionally, the guidelines for writing PowerShell commands enforce a certain style that, once you grok it, makes sense across most cmdlets.
If bash has something like that I’m still missing it.
Obviously the brevity of bash can be its own power.
I do feel like we are all a little beholden to bash, and it is tiring. While I will put simple work into simple bash scripts, I always use ShellCheck and `set -euo pipefail`. For the life of me I can't remember what that does or why I need it, I just know that I do. (I'd be comfortable looking it up if I actually cared enough to.)
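For anyone else who can never remember what `set -euo pipefail` buys you, a quick flag-by-flag sketch using `bash -c` one-liners:

```shell
# -e: exit immediately when a command fails
bash -c 'false; echo reached'                     # prints "reached"
bash -c 'set -e; false; echo reached'             # prints nothing, exits 1

# -u: treat expansion of an unset variable as an error
bash -c 'echo "[$TYPO_VAR]"'                      # silently prints "[]"
bash -c 'set -u; echo "[$TYPO_VAR]"'              # errors out instead

# -o pipefail: a pipeline fails if ANY stage fails, not just the last
bash -c 'false | true; echo $?'                   # prints 0
bash -c 'set -o pipefail; false | true; echo $?'  # prints 1
```

In short: scripts die on the first error, typo'd variable names blow up instead of expanding to empty strings, and failures in the middle of a pipeline are no longer swallowed.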
Anything more than trivial I do in a high level programming language like Go (previously I would use Ruby or Perl).
From everything I’ve seen PS is a nice direction to go in.
I wish there was a sudden upheaval and everybody used a shell that wasn’t so legacy-cruft-laden.
PowerShell’s learning curve was too steep for me (or maybe I didn’t try hard enough, idk). I used it for a couple of years, but once WSL turned up I went back to my trusty old Linux skills.
My biggest problem with regex is each language/program/whatever has their own take on it. You don't have to learn it over and over, you have to learn multiple slightly different versions of it over and over.
I don't mind the slightly different versions; that's what documentation and SO are for. I mind the language being a garbage kludge of organically grown syntax and semantics, with not the slightest hint of a single principle or a unifying idea. And I mind that the vast majority of interfaces to it from general-purpose PLs amount to fucking around with strings like a caveman, instead of a typed, IDE-assisted, first-class representation as a full-blown language construct or mix of constructs.
I've recently been learning Python in earnest. Coming from Perl, Ruby, PHP, JavaScript, etc., I'm just dumbfounded. How on earth did Python's regex library get to be so bad? It's like it was designed by people who were figuring out how regexes worked as they went along but skipped the part about modifiers, how anchoring works, etc. No wonder so many people hate regexes. The ironic part is Python's "one obvious way to do it" zen thing: it has like 5 different ways to do the same thing with a regex, whereas Perl, the "more than one way to do it" camp, has simple and obvious ways to do everything with regex. I love regex in Perl. In Python I don't want to touch it ever again. Even JavaScript is miles better.
I agree. I actually have a pretty strong understanding of regexes and use them regularly (for editing, not so much in code), but I despise how every editor/language does them subtly different.
I've never understood that one. I realize it's just a joke, but xvf/cfv (extract/create, verbose, and... OK, I did have to look up f, which is file) and the z for gzip make up one of the few commands I have no trouble remembering.
Though if you include the dash to use those as flags, the second one is wrong (it treats the "v" as the file name for "f"). Without the dash, the next positional argument is the file name.
Examples: These do the same thing:
tar cfv foo.tar a b c
tar cvf foo.tar a b c
tar -cvf foo.tar a b c
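Assuming GNU tar's getopt-style parsing of dashed options, the gotcha with the letter order can be demonstrated safely in a temp directory:

```shell
# With a leading dash, f grabs the REST OF THE BUNDLE as its argument.
cd "$(mktemp -d)"
touch a b c

tar -cfv a b c           # dash form: f's argument is the trailing "v"...
ls v                     # ...so the archive is literally named "v"
tar -tf v                # lists: a b c

tar cfv foo.tar a b c    # old style, no dash: f takes the next word
tar -tf foo.tar          # lists: a b c
```

Which is exactly why `-cvf` (f last) is the safe habit, while `-cfv` silently writes an archive called `v`.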
Reminds me of a peeve I had with the Apple docs, where they described Bash with something like the riding-a-bike metaphor: good for getting somewhere easily, but it won't take you as far as an application language.
This is one reason answering questions on StackOverflow is so rewarding. You can save future you a ton of trouble by putting a (better) answer there and then stumbling onto it later. Ditto blog posts. Power tip: this also works for personal journals in searchable format.
15 years working as a software dev. There have been many times Sergio from the past has saved my ass. I find exactly what I needed to get something done and turns out the author was me. Coworkers also sometimes find my stuff so that's a nice feeling.
If you write code do yourself a favor and write for future you. It'll help.
I've had that happen with math teaching material a couple of times. One time I was dissatisfied with how I presented the definition of implication so I googled. The top link was to a discussion where the person answering linked to me. Like DJM, I found it a little unsettling.
Once I had a question about APACS, circa 2008, when there was not a lot of information around. I found a tool using it, and after some lateral browsing - DNS records, similar usernames - I finally got to talk to the author of the tool, who gladly helped me :)
I mentioned that the only info I had found was in an obscure Wikipedia article, and even that was incomplete. The guy told me he had written that article too.
Before I began my studies in Zen, I thought a tree was a tree and a stone, a stone.
When I started to study Zen, I could see that a tree was not a tree, and a stone was not a stone.
Now that I am a Zen master, I know that a tree is a tree and a stone is a stone.
-- Source: my buddy in college
I think you come full circle to learn that you can only keep so much in your head at one time, and that you're always, in some sense, loading up what you need for the next month or three. At least this time you knew to look up `man su` and remind yourself of the work you did, which you shared with all these other people.
Of late I listen to a lot of podcasts and audiobooks while gardening. I'm tapering off because it's not the same experience. I started by listening to just nature-based books. It was different. More in some ways, less in others.
If you can’t be alone with your own thoughts then brother are you in a bad place. Everything in moderation.
Reminds me of how after learning how CPUs, memory, operating systems etc work I thought "wow, it really is all ones and zeroes". A simple phrase but it became more meaningful with deeper understanding.
Sorry, but ”computers work by ones and zeroes” is one of my pet peeves.
It is true in the sense that 1 and 0 are common representations for true and false in computer science, but really, it is false and almost certainly establishes magical thinking in the layperson.
Modern computers run on electricity, and in electrical circuits such as computers, true/false is represented by a transistor being in a conducting or non-conducting state. Current can either flow, or it can't.
In fact, one could build a computer out of almost anything that lends itself to being on and off, and to being controlled by its on/off state (or that of another equivalent assembly).
> (A) Modern computers run on electricity [not ones and zeroes]
> (B) one could build a computer out of almost anything that lends itself to being on and off
You got it right in B, which is exactly the point when people say computers are just 1s and 0s. Computers are a mathematical concept, not just some electrical device, as you seem to claim in A. The fact that you can build a computer out of water or air pressure or Minecraft Redstone is exactly the point people are making when they say they're built up from 1s and 0s (not electricity, not silicon and copper, not Redstone).
Cut me some slack, I was referring to our beloved contemporary devices when I was saying modern computers in A, so the comparison is a bit apples and oranges.
I'm not trying to nitpick you here; I'm genuinely confused why "Computers are all 1s and 0s under the covers" is a pet peeve of yours! It sounds like you mostly agree with the statement, so I'm just not sure why it would bother you.
Because I have seen so many laypeople operate off the misconstrued idea that computers literally work like that!
Granted, many of these people have been around since before typewriters and phones were a common thing, but the ”1s and 0s” explanation does not offer anything tangible for people who cannot see the trees from the forest (sic), and thus it only widens the ”digital divide”.
From what I've seen, the problem is more that people take that as the whole truth, without considering the more interesting ideas that are built on top of it and make it actually work.
But I don't think this contributes to a "digital divide", people who aren't particularly interested in computers wouldn't be more excited with different wording.
In my experience the best thing to get people interested in computers is to show them more than MS Word in school. Fortunately a substitute teacher was more competent than that and absolutely blew my mind with a for loop.
I think oversimplification is a major contributor to magical thinking, and magical thinking lends itself to continued failures to understand things, especially in the context where the computer is not operating normally.
Imagine if people generally understood that automobile combustion engines work by ”combusting fuel” without knowing anything more about their car.
That’s absolutely true in a sense, but if we left things at that, drivers would probably be inclined to think some kind of magic is happening, because that’s what science thought of combustion for a long time! (see ”Phlogiston theory”)
I wonder how those drivers would explain the functioning of the pedals and the shift knob.
Is 1 indicative of current flowing? Or not flowing? Is that consistent with a given chip, let alone an entire system? Is it always DC current, or can it be AC? In fact, is the 1 represented by current, voltage, and/or frequency?
There's lots of different answers here in different contexts. The reason 1s and 0s are good is because they represent the information in the digital domain, not the implementation in the analogue.
I recall trying to explain how hard drives work to a guy who didn't believe me, which was weird because he had aspirations of being a hacker some day. He got really mad, "math is math" style, when I explained that they're analog signals rounded to binary, and that's why disk erasers exist and take so long (and this was before they got really paranoid).
Then people kept wandering up and agreeing with me and by the end it was practically an intervention.
If only the Carmack level were attainable for us mortals. I have moments where I trick myself into thinking I've written passable code, but after a little research the same memory-ordering semantics I see in my IDE stare back at me from Carmack's Doom 3 code of 20 years ago. RIP
Kurzgesagt's short video is great too, it's (quasi-)verbatim, but the music, animation and narration makes it quite poignant.
The idea behind the story is quite fascinating to me. If simulationists are right, it's as good a "why" as "origin seeking", or better, IMO. Not that I believe one way or the other; I just think the idea's interesting to ponder.