Computers Can Be Understood (2020) (nelhage.com)
65 points by tbodt on April 14, 2022 | 19 comments



I tried very hard to steer an old colleague away from their "magical" view of computers, because at some point I had aggregated enough experiences and insights to agree with the article's thesis. When working closely with hardware, there's no way around breaking that magic. I would even add that how it all "actually" works physically is cool and magical in its own right (magic we can compose, rather than magic we must accept).

It was interesting to see how some of the issues they asked for help with related back to this magic-fication.


If an abstraction is good enough and fit for a particular problem, you can mostly treat it as magic, solve your problem at an upper layer, and be done with it. Only when there is a mismatch between the problem domain and the available abstractions do you need to start digging through the layers and understanding what problems each layer is trying to address.
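
A toy Python example of such a mismatch: floats are a perfectly good "magic" abstraction for most numeric work, until the problem needs exact decimal arithmetic and you have to peek one layer down at how they're actually represented.

    # Floats are fine "magic" until the problem needs exact decimals.
    print(0.1 + 0.2)         # 0.30000000000000004 -- the abstraction leaks
    print(0.1 + 0.2 == 0.3)  # False

    # Fix by dropping to an abstraction that matches the problem domain.
    from decimal import Decimal
    print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True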


I had an experience similar to their Lambda one not that long ago. Needed somewhere to host a static Hugo site, went with Netlify, who marketed some vaguely specified special support for Hugo, plus "sync-to-git-repo" functionality.

Ended up spending several hours digging through their docs and scattered code until it became clear to me that the Hugo stuff was just out-of-date, poorly stitched-together tooling around hugo build made to run in CI, and that there was no actual Git functionality, just webhook APIs specific to GitHub/GitLab/Bitbucket.

Even leaving that aside, figuring out how to properly upload the ready-made static directory (which I worked out along the way) would have taken its own fair share of detective work.

Whereas if I had treated it like magic, I could have just zipped it up, drag-and-dropped it into their web UI, and been done in 2 minutes.
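
For what it's worth, that trivial path can also be scripted. A minimal Python sketch, assuming Netlify's documented ZIP-deploy endpoint (SITE_ID and NETLIFY_TOKEN below are placeholders for your own site ID and personal access token):

    import os, shutil, requests  # `pip install requests`

    # Zip the ready-made Hugo output directory ("public/" by default).
    archive = shutil.make_archive("site", "zip", "public")

    # POST the zip to Netlify's deploy endpoint for the site.
    with open(archive, "rb") as f:
        resp = requests.post(
            f"https://api.netlify.com/api/v1/sites/{os.environ['SITE_ID']}/deploys",
            headers={
                "Authorization": f"Bearer {os.environ['NETLIFY_TOKEN']}",
                "Content-Type": "application/zip",
            },
            data=f,
        )
    resp.raise_for_status()
    print(resp.json().get("deploy_url"))  # URL of the finished deploy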

It still bothers me how they go to such great lengths to make the majority happy-path use case as appealing and smooth as possible, but end up making it extremely confusing to do what in the end turns out to be trivial. Even deliberately using technically incorrect terminology in documentation (which acts as misdirection) just to align with common misconceptions.

I can't seem to properly verbalize the eerie feeling I have, but it relates to a trend of devs only using ready-made tools and APIs precisely at their abstraction level, a general dumbing-down, total centralization of internet infrastructure, and an eventual ban on (or unavailability of) general computing for individuals.

</rant>


This quote feels oddly relevant:

    Almost anything in software can be implemented, sold, and even used given enough determination. There is nothing a mere scientist can say that will stand against the flood of a hundred million dollars. But there is one quality that cannot be purchased in this way—and that is reliability. The price of reliability is the pursuit of the utmost simplicity. It is a price which the very rich find most hard to pay.
- C.A.R. Hoare


Well, it's valid for security too. Any kind of engineering (and science and art, too) has those qualities that come from competence and simplicity. Those are always rare.


> Even deliberately using technically incorrect terminology in documentation (which acts as misdirection) just to align with common misconceptions.

I like to play with microcontrollers. Nearly everyone uses the term IoT (internet of things), despite the fact that the vast majority of microcontrollers don't have internet connectivity out of the box. AI is picking up some buzz at the moment, and I've even seen the term AIoT being used. Gotta get those buzzwords in.

What I find astonishing is that these are all things directed at technically-competent programmers who are supposed to appraise things objectively rather than being given a lot of marketing guff. Could you ever imagine trying to sell a product to a chemist, engineer or physicist in this way?

I do have one app that's connected to the internet, but that uses a Raspberry Pi. I've had other projects that used WiFi. I've come to the conclusion that it's far better to avoid anything internet-connected if at all possible; it makes devices much more unreliable.

And that's what grinds my gears.


Perhaps I'm pretentious, but I'd question the quality of code written by someone who believes computers and software are magical mysterious systems.


At a fundamental level I understand that doping silicon turns rock into transistors upon which a computer is built. I still think it's magic though.


It’s magic because it’s awesome. There’s other tech like that. Reading, for example: you look at tiny signs in succession and hallucinate what the writer intended for you to experience and understand. Extremely awesome and magical.


I think this desire for understanding ultimately drives the resistance to systemd and the Linux/BSD divide, and I think for good reason. There will always be friction between features and inherent simplicity/ability to understand the system.


Haha, started thinking of Julia Evans while reading the post and then her name just popped up.


Strive to know as much as possible, but delivery is different from achieving understanding. If your goal is to deliver, knowledge is just another tool; it is not some magic answer to everything.

You have to find a balance where you know enough to know you have delivered.


It's an important tool, however, as it will drastically influence what your solution looks like.

If you have to modify behaviour of a system without understanding it, the approach is usually to apply even more complexity as a band-aid. This will work in the immediate term, but will make your system more brittle and ironically even harder to understand in its entirety.

Also note that black boxes are rigid. If you don't understand what happens inside a black box, you can only build around it. However, for a lot of exciting features you need to modify the black box itself, which you can't do if you have no idea what it does.


Eh, sufficient complexity is indistinguishable from magic. No single human can understand the whole system of a modern computer in full detail. There is just too much stuff to learn for one lifetime. That's why we build all those abstractions in the first place.


I think the answer here is somewhere in between. You're right that there will probably never be someone with a comprehensive understanding of low level programming who is also a physics prodigy and understands every minute detail of computers, but that doesn't make it "magic". We build abstractions in layers so everyone can granularly learn what matters to them and ignore the parts that don't. The amount of magic in a computer is inversely proportional to how deep you are in the abstraction layers, but I don't think that means it can't be understood.

The word "grok" does a pretty good job of summing up how I feel on it. I think most people can grok a computer, but that doesn't mean they could tell you every detail about it's use/manufacturing/operation/architecture/lithography. But we know enough to have a working knowledge, and for the purposes of this question I think that's what matters. Computers are designed to output predictable, reproducable behavior. They are meant to be unferstood by design.


I believe that the article agrees with you, and is arguing for a narrower claim.


>Modern software and hardware systems contain almost unimaginable complexity amongst many distinct layers, each building atop each other. It is common — and substantially correct — to observe that no single human understands all of the layers in, say, a modern web application, starting from the transistors and silicon up through micro-architecture, the CPU instruction set, the OS kernel, the user libraries, the compilers, the web browser, and Javascript VM, the Javascript libraries, and the application code, not even to mention all the network services invoked in loading that code.

>In the face of this complexity, it’s easy to assume that there’s just too much to learn, and to adopt the mental shorthand that the systems we work with are best treated as black boxes, not to be understood in any detail.

>I argue against that approach. You will never understand every detail of the implementation of every level on that stack; but you can understand all of them to some level of abstraction, and any specific layer to essentially any depth necessary for any purpose.


"At core, computers are built on a set of (mostly) deterministic foundations, which follow strict rules at each tick of the clock. We built layers upon layers of abstractions upon those foundations, each of which, as well, behaves in a (mostly) reproducible and deterministic way based on the abstractions at the previous level.

There is no magic. There is no layer beyond which we leave the realm of logic and executing instructions and encounter unknowable demons making arbitrary and capricious decisions. Most behaviors in one layer are comprehensible in terms of the concepts of the next layer, and all behaviors can be understood by digging down through enough layers."

It is possible for deterministic systems to become so complex that they cannot be understood. Physics and most natural sciences encountered this very dilemma early on: determinists posited that the world was a deterministic state machine and that, for this reason, the future could be predicted with enough study.

This philosophical and physical debate was resolved with the formation of chaos theory, which showed that systems can become complex enough that this would-be benefit of determinism vanishes.

https://en.wikipedia.org/wiki/Chaos_theory
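
A concrete Python sketch of the point: the logistic map is a one-line, fully deterministic rule, yet at r = 4 two trajectories that start 1e-10 apart become completely uncorrelated within a few dozen iterations, so long-run prediction would require impossibly precise knowledge of the initial state.

    # The logistic map x -> r*x*(1-x) is fully deterministic, yet at r = 4
    # two trajectories starting 1e-10 apart diverge to O(1) within ~40 steps.
    r = 4.0
    x, y = 0.2, 0.2 + 1e-10

    for step in range(1, 61):
        x, y = r * x * (1 - x), r * y * (1 - y)
        if step % 10 == 0:
            print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")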


Very relevant: when a system built on deterministic foundations becomes complex enough, it's indistinguishable from an indeterministic one. In principle you can understand it, but in practice the cognitive overload may be massive and prohibitive.



