From novice to master, and back again (2013) (djmnet.org)
406 points by mre on Aug 9, 2022 | hide | past | favorite | 143 comments



Twice in the last decade I've been completely stuck on something, googled extensively, finally found the answer ... which I wrote, on Usenet, 25 years ago.


Worse: you Google a question and realise you yourself asked it many moons ago and it never got answered.


Worst: you find your own answer that says: “Nvm, fixed the issue.”


I have a notesfile on how to fix/debug things. When I started it, I would write things down after I fixed something and figured I'd need to remember it later.

Problem was, I would often forget to write things down, or naively think "that was quick to find out, don't need to write it down", which is a classic mistake of confusing post-facto mindset with pre-facto.

So now when I wonder how to do a thing, I first write down in my notesfile what my question is, so when I do resolve it I go and write down the answer to satisfy my need for closure.


This is the way.

I make a new markdown file for every program / VPS login command (the entire SSH command with custom ports), and really anything I know I would otherwise need to dig out of bash history down the road to do again.

Those files are sorted into a handful of folders which allow me to quickly find any of these programs / commands.


this is why I have made it a point these days that when I fix an issue, I immediately go to all the forums, subreddits and stackoverflow sites where I posted the question and post the relevant solution. It's a lot of work but totally beats figuring out what you did 6 months later


That’s just karma. I have tried not to do that to myself because so many others have done it to me. Still, sometimes the answer I left is too vague to be practical.


I make a habit of turning every question I have, that I don't find an answer to in 30 min, into a question AND answer on stackoverflow/stackexchange

And many times, even years later, when I need that answer again, I google my name, stackoverflow, and the topic, and kapow - answer received :D


Oh man this happened to me last month. I was even wondering how this guy had the exact same issue that I was having and only realized after a while that it was literally me who posted it. I eventually found an answer and decided to post an answer in case future me forgets it again.


I've also had this happen, but never figured out the solution.

This is especially frustrating because your question is likely to pop up first in your search results because you probably phrase things the same way even years later!


Obligatory xkcd: https://xkcd.com/979/


From a comment I saved several days ago:

> we don't have to figure it out - we just need to remember what we did

https://news.ycombinator.com/item?id=32318185


Allow me to recommend Kim Stanley Robinson’s _Icehenge_.


I can't even tell you how often this happens to me. People come to me because I wrote something and I then tell them: "Let me go read the manual" (because I can't remember what I did a month later, so I'm always consulting my own docs).

I always tell people: "You think I wrote this manual for you, but I'm not that altruistic. I wrote it for future me."

To be fair, not exactly the same thing. I typically remember I wrote it, just not what it does or how it works. :-)


this kind of job is very memory constrained

you get into a task, you cram stuff in as you go, you finish it, and then you dump everything and go onto another task.

i find my most valuable skill is to slow the pace to ensure retention of information


Still less efficient than dumping your piece of knowledge in something like a second brain, aka personal knowledge management. The tricky part is whether to write down everything and make your notes too long to read/follow, or to summarise things with the risk of forgetting an important piece of info (why you should do this or that, what you should NOT do, what additional piece of knowledge is required, etc.). The art of taking smart notes is really fascinating.


It boils down to the same issue. Whether you store data or not, your brain is a bottleneck. And managing notes requires the same skill as curating your efforts.


Wow, epic!

The most depressing thing is being presented with an issue and finding my name attached to a closed ticket from years before with no explanation of how I fixed it.

Guess what, current me, you're going to learn this again from scratch because past me was in a hurry and couldn't be bothered to type out what he did.

BSG - this has all happened before, and it will all happen again.


I go through the same thing but with a few "popular" GitHub issues. Every time I encounter those bugs I Google them, and the first result is my comment on the issue that explains the workaround I used.

That's probably how early Alzheimer's feels.


I sometimes have to go back to GitHub issues I created, that I bookmarked for this specific reason.

I think that the Internet has modified how our brains retain information. I don't think this is an original idea, but I've observed that I'm real good at indexing where I saw some piece of information and very poor at storing the actual piece of information. I have to physically write things down to commit them to recallable memory.


Physically writing things has been a useful tool to increase recall for a long time. Taking notes in class may be more helpful than reviewing them later.

Certainly, quick access to information with a small search query helps tune our brains to get small search queries, but do some filing for a day or three and your brain will get tuned to quick recall there too. In the old days, you'd get real familiar with certain parts of books.


I suspect our brains have always been better at remembering meta-information than information itself. E.g. where you last saw an item of clothing vs the configuration of colours on it. In most cases this makes sense, and is essentially a space-saving trick, but there must be some point at which the meta-info becomes too inaccessible or too slow to utilise that remembering the full detail directly becomes worthwhile. Convincing your brain of that before it becomes too late can be a regular challenge with so much readily available externally stored information.


My favorite pattern is to go look for an answer in the StackExchange network, find myself nodding in agreement with one of the answers, then read the author... me, 10 years ago!


When I began studying physics I felt like a clueless novice. All those equations looked like chicken-scratch on paper. And, "What's the difference between kinetic and potential energy?" And there were so many concepts to understand - forces, masses, fields, charges, and particles. And differential equations seemed incomprehensible. Now, after a few decades I've mastered most all of that. Yet, when I try to grasp the real basics, like "What is space-time?", "How does superposition lead to a single macroscopic universe?", "What is the Higgs field?", "How did this universe 'start'?", "What exactly is mass?", then I realize I'm really back at novice again. I know almost nothing.


Reading the comments here I'm under the impression that people think OP wrote the manual for `su`, but the interesting fact here is he wrote the GNU coreutils implementation of `su` itself.

Debian (stable, at least) doesn't use his version anymore. The HISTORY section of the current manual says

> This su command was derived from coreutils' su, which was based on an implementation by David MacKenzie. The util-linux version has been refactored by Karel Zak.


That’s a real, “I’ve forgotten more about X than you’ll ever know moment”!


Even better when it applies to the same person in both parts! Hah.


I find my personal memory is very ephemeral; without repetition, things get quickly forgotten. On a long enough timeline, about 1-3 years, I start failing to recognize my own code. Since I have trained most of the Jr Devs I work with, they all have coding patterns similar to mine, which makes it even harder.

Granted, I have specific code tells, which helps, but like the article mentions I have forgotten the why or the bug that I fixed that required specific changes.


That feeling when you go on a rant about how something was done in a super confusing way, so you do a ‘git blame’ or equivalent - and it’s all your fault. Oops.


This is why there's a "from:" line in the standard commit message template I introduced for my group of teams. If there's a ticket, bug report, user story, specification change, or standards reason for the commit I want to know that later. Sometimes that makes it into the code comments, and sometimes into the docs. It's always in the commit.


I write this shit down so I don’t have to remember it, thanks.

Back in the age of modems, when PPP was still the new hotness, I was proud of myself for memorizing the IP addresses for a few of the services I used regularly. Two or three times I got to punk my friends when the DNS servers got messed up, and they’re sitting with me in the computer lab wondering what else we could do to pass the time when they looked over and noticed that I’m happily typing away in the very thing they couldn’t get into because The Internet Is Down. No man, it’s just DNS.

Older, sadder but wiser me knows that I still could have done that joke if I had written the numbers on a scrap of paper. I could have had twenty instead of five.


Looked like a hero when I set DNS to 4.2.2.2 or whatever when AT&T or Charter would have an outage.


It certainly wasn't always that easy.


Related:

From Novice to Master, and Back Again - https://news.ycombinator.com/item?id=9098635 - Feb 2015 (1 comment)

From Novice to Master, and Back Again - https://news.ycombinator.com/item?id=8443981 - Oct 2014 (27 comments)


I wonder if anyone here is discovering that they also posted their comment in the previous links.


yes. I think most of us who programmed in the trenches are familiar with forgetting what we've done...

the difference is that nothing I've ever worked on will ever be public or useful beyond a small private business which doesn't even exist anymore.


After a number of rounds of the game, “who wrote this garbage? I wrote this garbage” I began to recognize my own particular code writing style.

I still get nerdsniped though if certain people start copying my idioms and introduce bugs, because this looks like something I would write but didn’t.


Think of the people you've helped get their jobs done over the years. That's worth more than author credit in the colophon of a man page.


The man page has ... probably helped plenty of people get their jobs done over the years!

edit: or the program the man page is documenting!


Your first point is fair too. Commands are useless if nobody documents their usage.


This reminds me of an old joke I heard a while back, which is that the opposite of "it's like riding a bike" is "it's like the UNIX command line".

No matter how many times you do it, you will have to re-learn it every single time.


I get a little offended by people who think stackoverflow is a character flaw. I go there because it will tell me things like if you get the arguments to ‘dd’ wrong you’ll zero out the drive you were trying to back up. That the function breaks if you pass in null for the second parameter. Or that it simply doesn’t work properly.


I used the if= and of= command line parameters last time I used dd: https://man7.org/linux/man-pages/man1/dd.1.html


I think that using dd is a character flaw.


To expand on this, people needlessly cargo-cult arcane dd commands, but on many (most?) systems, "cp foo.iso /dev/sdb" will do the same thing, but with possibly even better performance!
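To see they really are equivalent, here's a quick sketch using a regular file in place of /dev/sdb, since writing to a real device is destructive:

```shell
# cp and dd produce byte-identical copies; a regular file stands in
# for the block device here.
printf 'pretend this is an ISO image' > foo.iso

cp foo.iso target-cp.img
dd if=foo.iso of=target-dd.img 2>/dev/null

cmp target-cp.img target-dd.img && echo "byte-identical"
```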


That’s a restore not a backup. I’m not actually defending dd. It’s bonkers how long it was considered acceptable to offer that as a solution for anything except zeroing out a disk. I shed a single proverbial tear and contemplated the sad state of technology every time I used it, and felt dirty for having done so.

It is The Worst.

It’s just the most egregious example. The variation in how commands handle extra arguments is a broader one. Is the last arg special or the first one? Grep and tar are in the majority, but cp works differently. They only make sense if you think of the power user.


What I meant was that using dd to format/back up disks is just way too dangerous for human beings. The prospect of losing your precious data even once in your lifetime is too expensive a cost for the questionable benefit of feeling cool about doing backups by running a command on the terminal. Just use a gui tool for writing to physical disks. The additional visual feedback that a gui can provide is absolutely essential for human beings performing such a dangerous operation.


At this point recommending dd really is kind of telling the other person to go fuck themselves. It doesn't even follow the arg format of every other unix command. I had to double check to see if anyone fixed that in the interim. Nope, still a=b syntax, rather than -i device1 -o device2.

That really should be a giant clue that it shouldn't be used by anyone, for any reason. "It could be that the purpose of your life is only to serve as a warning to others."


What would you recommend as an alternative for disk imaging in Linux? I'm in the process of migrating a herd of PCs from HDDs to SSDs and would appreciate suggestions.


I use the `ln` command a lot. I use the `man ln` command _almost_ as much.


What finally got the `ln` argument order engraved in my mind was learning that you can skip the destination argument:

    ln -s /foo/bar/baz
will create a soft link in the CWD named baz, pointing to /foo/bar/baz.

So you see, if you know that you can always skip the destination, then, logically, the source must be the first one!
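If you want to convince yourself safely, here's a throwaway demo (paths invented for the example):

```shell
# Single-argument ln in a scratch directory: the link lands in the
# CWD, named after the last path component of the target.
mkdir -p /tmp/ln-demo/foo/bar && cd /tmp/ln-demo
printf 'hi' > foo/bar/baz

ln -s "$PWD/foo/bar/baz"   # no link name given
readlink baz               # -> /tmp/ln-demo/foo/bar/baz
cat baz                    # prints: hi
```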


I'd say the easiest way to remember the argument order is that it's conceptually the same as for mv and cp: ln -s x y is the closest possible symlink-analogue of cp x y.


Nah, the closest possible symlink-analogue of `cp x y` is `cp -s x y` :-)


This a great way to think about it.


> logically, the source must be the first one!

You mean the target, but that isn’t a logically necessary consequence at all. Conceivably, `ln` could support the following two syntaxes:

  ln [options] target
  ln [options] link_name target
The way I remember the correct parameter order is that I remember it’s the non-intuitive one.


Ah, I remember it is the non-intuitive one, but then when I use it a while, I keep double-guessing which one is the intuitive one.

Kind of like when my wife tells me I'm doing something wrong and she wants it to be the other way.. I know she thinks this thing is important, but can't work out which way she wants it done.


The intuitive one is the order in which `ls` displays it, or assignment order (a := b). That’s how I remember what the intuitive order is. ;)


Surely the intuitive order for mutating assignment is value → name, though...


I remember this as "ln=long -s=short $long $short", as in "ln -s $long $short" creates a $short file pointing to $long. But to your point, $short defaults to basename($long).


The other way to think about it is that it's the same order as cp.


Finally! Somehow I've never been able to remember that either.


i've always considered manpages to be sort of part of the unix command line experience. since they're fast, it's no matter.

but yea, it took me many years to wire down ln's target link_name semantics for some reason.

what helped for me was to reason about the single argument form. ln -s ~/opt/junk/bin/thinger creates a link to thinger in the current directory. this single argument form is easy to remember: if you want to create a link, of course you have to specify the name of the target (from which the link name will be inferred), and since it's the only argument, it has to be the first one. now if you want to give it a different name, put the name in the obvious still-open place: the second argument.


Funny, I always reason about the 3+ argument form... For example, what does ln -s foo bar baz do? Well, it can't create a foo symlink that points to bar AND baz; in that form the last argument has to be a directory, and ln creates symlinks inside it pointing back at each earlier argument. Either way, the first args are the files you want to point to, and the final argument is about where the links go (of which there happens to usually only be one).

Edit: this line of reasoning works for common usage of a lot of other utils too, like zip/tar etc. Even grep - is it grep FILE PATTERN [PATTERN ...] or grep PATTERN FILE [FILE ...] ?


“Same argument order as cp” always helped me.


Mnemonic: “What you have to what you want”


i'm enjoying seeing the variety in everyone's mnemonics :)


I learned to behave as if there is no second argument for ln. Instead of doing `ln -s path/to/foo bar` I do `ln -s path/to/foo` then `mv foo bar`. Of course it doesn't always work, but covers most use cases for me.


I generally only need `ln -s <src> <dst>`. I know the -s means symbolic link, but in my head I read it as "source", since that's how I remembered the order long ago.


This is confusing, because when you picture the link as an arrow (as in `ls` output), the link name is the source and the link target is the destination.


I think what a lot of people don’t get about the Unix command line is that learning to use the tool is part of the experience. Sure I forget the precise flags to get various tools doing what I need, but browsing the man page and rediscovering the breadth of the tool is half the beauty.


The only feeling I've gotten looking at a man page is, "well, I don't have time to dig through all of this. Guess I'll just look at stack overflow, which will have my exact use case and required parameters."


In my opinion, the EXAMPLES section should be near the top of the man page and include the most common usages.



What? That’s some intense Stockholm syndrome right there.


How so?

I remember the times before I learned Bash and the Unix userland. Those were dark times. I was stuck on Windows 98, which I really didn’t like. It just felt so needlessly crippled. When I discovered Bash, Debian, the Unix way, it felt like a breath of fresh air.

These days I’m on macOS. And one of the best things about it is that it’s a great desktop operating system, with a Unix under the hood. The userland Apple ships is quite dated, but that’s easily solvable by installing GNU Coreutils etc via Homebrew.


nah. learning unix was fun. having all the documentation online and in an easily called up and consistent format made it possible. microcomputer operating systems didn't have things like "man -k" and while we can complain all day here till we're blue in the face about small quirks, it was light-years beyond the crap that was going on in microcomputer operating systems.

even when doing windows or embedded development with tools like visual studio or wrs tornado, i've always insisted on having mkl or cygwin for command line tasks.


This is why imo powershell (when done right) is more powerful than bash (no matter how correctly it’s done). Powershell cmdlet names are descriptive and generally make sense, as do the parameter names. Additionally, the guidelines for how to write powershell commands enforce a certain style that, once you grok it, makes sense for most powershell commands.

If bash has something like that I’m still missing it.

Obviously the brevity of bash can be its own power.


I do feel like we are all a little beholden to bash and it is tiring. While I will put simple work into simple bash scripts I always use shellcheck and do ‘set -euo pipefail’. For the life of me I can’t remember what that is or why I need it, I just know that I do. (I’d be comfortable looking it up if I actually cared enough to.)

Anything more than trivial I do in a high level programming language like Go (previously I would use Ruby or Perl).

From everything I’ve seen PS is a nice direction to go in.

I wish there was a sudden upheaval and everybody used a shell that wasn’t so legacy-cruft-laden.
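For the curious, a minimal sketch of what those three flags do; the pipefail part is easy to demonstrate (bash assumed):

```shell
#!/usr/bin/env bash
# set -e : exit as soon as any command fails
# set -u : expanding an unset variable is an error, not an empty string
# set -o pipefail : a pipeline fails if ANY stage fails, not just the last

set +o pipefail
false | true
echo "without pipefail: $?"   # 0 -- only the last stage counts

set -o pipefail
false | true
echo "with pipefail: $?"      # 1 -- the failing stage now counts
```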


PowerShell’s learning curve was too steep for me (or maybe I didn’t try hard enough, idk). I used it for a couple of years, but once WSL turned up I went back to my trusty old Linux skills.


The brevity is great when used interactively.

For scripting you can always use Python, which is lightyears ahead of PS.


"It's like regex" could be another punchline for the joke


My biggest problem with regex is that each language/program/whatever has its own take on it. You don't have to learn it over and over; you have to learn multiple slightly different versions of it over and over.
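A small illustration with one tool, GNU grep, which speaks three of those dialects itself (the -P flag is a GNU extension and may be absent elsewhere):

```shell
# "One or more digits", spelled three ways:
echo "abc123" | grep -o  '[0-9]\{1,\}'   # BRE: interval braces must be escaped
echo "abc123" | grep -oE '[0-9]+'        # ERE: bare + works
echo "abc123" | grep -oP '\d+'           # PCRE: \d exists at all
# All three print 123; switch to sed, awk, or vim and the dialect shifts again.
```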


I don't mind the slightly different versions; that's what documentation and SO are for. I mind the language being a garbage kludge of organically-grown syntax and semantics, with not the slightest hint of a single principle or a unifying idea. And I mind that the vast majority of interfaces to it from a general-purpose PL are fucking around with strings like a caveman, instead of a typed, IDE-assisted, first-class representation as a full-blown language construct or mix of constructs.


From this description, you may enjoy Raku.


been recently learning Python in earnest. Coming from Perl, Ruby, PHP, JavaScript, etc. I'm just dumbfounded. How on earth did Python's regex library get to be so bad? It's like it was designed by people that were figuring out how regexes worked as they went along but must have skipped the part about modifiers, how anchoring works, etc. No wonder so many people hate regexes. The ironic part is Python's "one obvious way to do it" zen thing. It has like 5 different ways to do the same thing with a regex whereas Perl, the "more than one way to do it" camp has simple and obvious ways to do everything with regex. I love regex in Perl. In Python I don't want to touch it ever again. Even JavaScript is miles better.


I agree. I actually have a pretty strong understanding of regexes and use them regularly (for editing, not so much in code), but I despise how every editor/language does them subtly different.


Truly relevant XKCD https://xkcd.com/1168/


I’ve never understood that one. I realize it’s just a joke, but xvf/cfv (extract/compress, view, uh... OK, I did have to look up f, which I think is file) plus the z for gzip make tar one of the few commands I have no trouble remembering.


Though if you include the dash to use those as flags, the second one is wrong (it treats the "v" as the file name for "f"). Without the dash, the next positional argument is the file name.

Examples: These do the same thing:

  tar cfv foo.tar a b c
  tar cvf foo.tar a b c
  tar -cvf foo.tar a b c
This errors, trying to add foo.tar to archive v:

  tar -cfv foo.tar a b c


Correct, it's -cvf.


Even easier: tar --help


Reminds me of a peeve I had with the Apple docs, where they described Bash like riding a bike (but with a different metaphor: good for getting around easily, but it won't take you as far as an application language).


This is one reason answering questions on StackOverflow is so rewarding. You can save future you a ton of trouble by putting a (better) answer there and then stumbling onto it later. Ditto blog posts. Power tip: this also works for personal journals in searchable format.


Love stories like this. Few people can truly claim to have forgotten more than us novices will ever know, but he surely can.


15 years working as a software dev. There have been many times Sergio from the past has saved my ass: I find exactly what I needed to get something done, and it turns out the author was me. Coworkers also sometimes find my stuff, so that's a nice feeling.

If you write code do yourself a favor and write for future you. It'll help.


"This incident will be reported."


Where exactly does that message come from? Su itself, PAM, some other library?


I think it's from sudo, not su.




1. Have problem.

2. Google it.

3. Find answer on stackoverflow.

4. Try to upvote.

5. "You can't vote for your own post."

This has happened to me multiple times.


I've had that happen with math teaching material a couple of times. One time I was dissatisfied with how I presented the definition of implication so I googled. The top link was to a discussion where the person answering linked to me. Like DJM, I found it a little unsettling.


We write things down so we don’t have to rely on memory. Working as intended.


He didn't write the manual, though. He wrote the GNU coreutils `su` program itself.


Once I had a question about APACS, circa 2008, and there was not a lot of information around. I found a tool using it, did some lateral browsing (DNS, similar usernames), and finally got to talk to the author of the tool, who gladly helped me :)

I was telling him how the only info I had found was in a remote Wikipedia article, yet incomplete. The guy told me he had written that article too.

good times


I live by command line history and timestamps.

I wish there was an even better way to preserve the context of what you were doing at the time (env variables, current path at the time, etc.).


Developing your own functions. A VERY useful trick: store your usual command line chains in a single [hopefully meaningful] command.

PS: be sure to always have a --help option that describes the function, just as you would do when programming
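A hypothetical example of the pattern (the function name and command chain here are made up):

```shell
# Wrap a usual command chain in a named function, and teach it --help.
biggest() {
    if [ "${1:-}" = "--help" ] || [ -z "${1:-}" ]; then
        echo "usage: biggest DIR -- show the 5 largest files under DIR"
        return 0
    fi
    du -a "$1" 2>/dev/null | sort -rn | head -n 5
}

biggest --help
```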


Zen Koan:

Before I began my studies in Zen, I thought a tree was a tree and a stone, a stone.

When I started to study Zen, I could see that a tree was not a tree, and a stone was not a stone.

Now that I am a Zen master, I know that a tree is a tree and a stone is a stone.

-- Source: my buddy in college

I think you come full circle to learn that you can only keep so much in your head at one time and that you're always in some sense loading up what you need for the next month or three. At least this time you knew to look for the man su command, and remind yourself of the work you did, that you shared with all these other people.


There's a popular meme called the "midwit meme"[0]. I guess it's a common observation.

[0] https://imgflip.com/i/6peco7


I’m more of a redwood meme guy myself.

https://i.imgflip.com/1kuv83.jpg


    Before enlightenment: chop wood, carry water
    After enlightenment: chop wood, carry water


Where does existential dread fit in there?


Somewhere in the middle. Hopefully not while sharpening the axe or staring into the well.


There’s plenty of opportunity to ponder one’s own existence while chopping wood and carrying water. Or to listen to a podcast.


I do a lot of podcasts or audiobooks while gardening of late. I’m tapering off because it’s not the same experience. I started by listening to just nature based books. It was different. More in some ways, less in others.

If you can’t be alone with your own thoughts then brother are you in a bad place. Everything in moderation.


Just notice it.


It fits between the kama used to harvest rice and the comma used to separate those thoughts.


Somewhere in the middle


Reminds me of how after learning how CPUs, memory, operating systems etc work I thought "wow, it really is all ones and zeroes". A simple phrase but it became more meaningful with deeper understanding.


Sorry, but ”computers work by ones and zeroes” is one of my pet peeves.

It is true in the sense that 1 and 0 are common representations for true and false in computer science, but really, it is false and almost certainly establishes magical thinking in the layperson.

Modern computers run on electricity, and in electrical circuits such as computers, true/false is represented by a semiconductor transistor being in a conducting or non-conducting state. Current can either flow, or it can’t.

In fact, one could build a computer out of almost anything that lends itself to both being on and off, and to being controlled by its on/off state (or that of another equivalent assembly).


You contradict yourself here.

> (A) Modern computers run on electricity [not ones and zeroes]

> (B) one could build a computer out almost anything that lends itself to both being on and off

You got it right in B, which is exactly the point when people say computers are just 1s and 0s. Computers are a mathematical concept, not just some electrical device, as you seem to claim in A. The fact that you can build a computer out of water or air pressure or Minecraft Redstone is exactly the point people are making when they say they're built up from 1s and 0s (not electricity, not silicon and copper, not Redstone).


Cut me some slack, I was referring to our beloved contemporary devices when I was saying modern computers in A, so the comparison is a bit apples and oranges.


I'm not trying to nitpick you here; I'm genuinely confused why "Computers are all 1s and 0s under the covers" is a pet peeve of yours! It sounds like you mostly agree with the statement, so I'm just not sure why it would bother you.


Because I have seen so many laypeople operate off the misconstrued idea that computers literally work like that!

Granted, many of these people have been around since before typewriters and phones were a common thing, but the ”1s and 0s” explanation does not offer anything tangible for people who cannot see the trees from the forest (sic), and thus it only widens the ”digital divide”.


From what I've seen the problem is more being that people just take that as the whole truth without considering more interesting ideas that are based on it and make it actually work. But I don't think this contributes to a "digital divide", people who aren't particularly interested in computers wouldn't be more excited with different wording.

In my experience the best thing to get people interested in computers is to show them more than MS Word in school. Fortunately a substitute teacher was more competent than that and absolutely blew my mind with a for loop.


I think oversimplification is a major contributor to magical thinking, and magical thinking lends itself to continued failures to understand things, especially in the context where the computer is not operating normally.

Imagine if people generally understood that automobile combustion engines work by ”combusting fuel” without knowing anything more about their car.

That’s absolutely true in a sense, but if we left things at that, drivers would probably be inclined to think some kind of magic is happening, because that’s what science thought of combustion for a long time! (see ”Phlogiston theory”)

I wonder how those drivers would explain the functioning of the pedals and the shift knob.


You're probably going to have trouble finding an average car driver that can explain to you how automatic transmission works.

> Imagine if people generally understood that automobile combustion engines work by ”combusting fuel” without knowing anything more about their car.

No need to imagine.


Now imagine how much safer roads would be if people understood much more about their cars!


Is 1 indicative of current flowing? Or not flowing? Is that consistent with a given chip, let alone an entire system? Is it always DC current, or can it be AC? In fact, is the 1 represented by current, voltage, and/or frequency?

There's lots of different answers here in different contexts. The reason 1s and 0s are good is because they represent the information in the digital domain, not the implementation in the analogue.


Indeed. Those are implementation details and arbitrary conventions. :)


I recall trying to explain how hard drives worked to a guy who didn’t believe me, it was weird because he had aspirations of being a hacker some day. He got really mad “math is math” style when we explained they are analog rounded to binary and that’s why disk erasers exist and take so long (and this was before they got really paranoid).

Then people kept wandering up and agreeing with me and by the end it was practically an intervention.


> 1 and 0 are common representations for true and false in computer science

Long before standing for true or false, 1 and 0 have stood for the presence or absence of something arbitrary, so I don't see any issue here.


Just 1s and 0s don’t really do anything useful before we agree on some conventions, such as

- byte and word size,

- endianness,

- semantics of what means what in a string of bits (think Two’s Complement, IEEE 754, ASCII, ISO-8859-1, Shift-JIS, Unicode),

- what it means to do certain operations on bits (Boolean algebra),

- how different binary operations can be constructed from transistors / logic gates (ALU design),

- how information can be retained in and recalled from memory (basically just flip-flops),

- how said memory is laid out with respect to internal/external devices and program regions (conventions!)

- how said memory can be addressed, and how information can be transferred between CPU and memory (bus architectures),

- how the computer architecture can be programmed to do things (processor instruction sets),

- and whatever I forgot just now…

And then some people design and build trinary computers, imagine that.
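To make the "semantics of what means what in a string of bits" point concrete, here is the same single byte read under two of those conventions (GNU od assumed):

```shell
# 0xFF (octal 377) means different things under different conventions:
printf '\377' | od -An -tu1   # unsigned 8-bit:          255
printf '\377' | od -An -td1   # two's-complement 8-bit:   -1
```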


In fact, if we consider the presence or absence of truth in a statement...

(There is no completion. The trailing off is intended.)


Turing, u here?


It still functions like an abacus.


It is pretty cool. When I was little I had no idea what it meant, I just knew that it sounded cool. But the older I get, the more I see these patterns.

junior dev: python is so cool!

senior dev: python is slow. No type checking. The syntax is garbage.

John Carmack: I write a lot of python and I use it exactly for what it is good for!


if only the Carmack level were attainable for us mortals; I've had moments where I trick myself into writing passable code, but after a little research the same memory order semantics from my IDE stare back at me from Carmack's Doom 3 code 20 years ago. RIP



Kurzgesagt's short video is great too, it's (quasi-)verbatim, but the music, animation and narration makes it quite poignant.

The idea behind the story is quite fascinating to me. If simulationists are right, it's as good or better of a "why" as "origin seeking" IMO. Not that I believe one way or the other, I just think the idea's interesting to ponder on.

https://www.youtube.com/watch?v=h6fcK_fRYaI


Interesting read, thanks for sharing


Thank You!


Sounds a lot like Bruce Lee's famous quote! https://mobile.twitter.com/brucelee/status/13618679498967040...



