I disagree. If it can be built, it can be known, top to bottom.
Not everyone may want to know all the gory details, and nobody wants to get called on to explain it, but it can be, and should be, done as a matter of course.
The issue is that the industry will kick and scream to avoid being held accountable for providing customers with a way to parse that information, and no one wants to hire people to document, because documentation isn't seen as positive value creation in today's corporate climate. It won't happen short of a legal requirement, and that requirement will be fought tooth and nail in the name of protecting trade secrets.
Nothing keeps a device from having its actions fully elucidated except an unwillingness to abolish information asymmetry.
On my four year electronic engineering course, we did semiconductor physics, principles of electromagnetism, IC design, digital logic using semiconductors, abstract digital logic, processors and systems programming with ARM assembly, transmission lines, digital comms systems (stuff like network throughput, dish areas, network capacity, information theory), the Ethernet standard, packet switching algorithms, TCP/IP, network programming with C, high level languages (Python, Java), and on my own for degree projects I did stuff with PostgreSQL, FastCGI, Elixir, HTTP and Javascript. There were optional modules for stuff like nanotech, physical modelling synthesis, control theory, audio programming for iOS, etc.
Combine that four year degree with a computer science degree (so eight years total), and I think you'd have a pretty in-depth overview of the principles of every part of the stack from React down to silicon, boron and phosphorus, albeit not all the implementation details.
Aggressive EEs can cover a lot. I started out designing chips. Aggressive CS (Compilers, OS, Data Structures, Databases) can cover a lot. Some of the modern stacks' dependencies (npm, blegh) leave you with a lot to comb through. Modern compilers' ability to deal with multi-level caches and memory architectures, look-aheads, TLBs, and whatnot can boggle the mind a bit.
One can pretty easily understand all of the sub-systems in isolation (well, it takes more than a couple of years!), but complex systems in total are more than any one mind can reason about. Too much for one brain to contain the dependency graph.
I'm only at 0.5 and I do fairly well... The key is that while it's not in crystal-clear focus at the semiconductor physics or Node.js levels, it's still pretty close everywhere else.
Not only is it possible, but it is important to demand.
The article makes some good points, but I feel that the headline is inaccurate.
It may be impossible to know everything, just as one cannot claim to know "all" of history, but one can make a bloody good go of it.
The main problem I've encountered personally is simply limits on human memory. Things fall out of your brain.
I studied semiconductors, electromagnetism, quantum mechanics etc as part of my Physics degree. I've written emulators, webapps, backend code, etc.
There are gaps. Not conceptual gaps, but gaps in implementation detail.
Could I write a top to bottom tutorial? Not without research, and undoubtedly subject matter experts would pick at it, but I believe I have some understanding about every level of the stack and an idea of where to pick that knowledge back up if I need to.
Is it useful?
I mean, surely that's for an individual to decide. Financially perhaps you'll be better off learning the hot new framework of the day and hobnobbing with the cool kids.
But then you could also get rich making wanky adtech optimisation. Or just starting a run of the mill exploity corp.
Me? I just want to know how it works, in endless fractal beauty.
I don't live my life based on what others consider to be "useful".
> there's No Such Thing as Knowing Your Computer 'All the Way to the Bottom'
NAND to Tetris (www.nand2tetris.org) seems like a pretty good take on the concept.
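To give a flavour of the starting point, here's a toy C sketch (function names are mine, not the course's) of the other basic gates falling out of NAND alone:

```c
#include <stdio.h>

/* Toy illustration of the NAND-to-Tetris starting point: every other gate
 * built from NAND alone (function names are mine, not the course's). */
static int nand(int a, int b) { return !(a && b); }
static int not_(int a)        { return nand(a, a); }
static int and_(int a, int b) { return not_(nand(a, b)); }
static int or_(int a, int b)  { return nand(not_(a), not_(b)); }
static int xor_(int a, int b) { return and_(or_(a, b), nand(a, b)); }

int main(void) {
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("a=%d b=%d  AND=%d OR=%d XOR=%d\n",
                   a, b, and_(a, b), or_(a, b), xor_(a, b));
    return 0;
}
```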
Another one is Wirth's Oberon system, which runs on Wirth's TRM/RISC architecture, which is simple enough for students to implement in Verilog (or another HDL such as VHDL or Wirth's Lola) and compile to an FPGA.
MIPS is also simple enough to implement in Verilog over the course of an academic term and also has good tool support for languages like C and C++.
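As a rough feel for how small that core is, here's a hedged software sketch in C of a fetch/decode/execute loop for a three-instruction MIPS subset (a real student project would do this in Verilog with a proper datapath and control; the field layout follows the standard MIPS32 encoding, everything else is illustrative):

```c
#include <stdint.h>
#include <stdio.h>

/* Toy software model of a three-instruction MIPS subset (add, addi, beq),
 * just to show how small the fetch/decode/execute core is. */
static uint32_t reg[32];

static void step(uint32_t *pc, const uint32_t *imem) {
    uint32_t instr = imem[*pc / 4];
    uint32_t op  = instr >> 26;
    uint32_t rs  = (instr >> 21) & 0x1f;
    uint32_t rt  = (instr >> 16) & 0x1f;
    uint32_t rd  = (instr >> 11) & 0x1f;
    int32_t  imm = (int16_t)(instr & 0xffff);      /* sign-extended */
    uint32_t next = *pc + 4;

    switch (op) {
    case 0x00:                                      /* R-type */
        if ((instr & 0x3f) == 0x20)                 /* funct 0x20 = add */
            reg[rd] = reg[rs] + reg[rt];
        break;
    case 0x08:                                      /* addi */
        reg[rt] = reg[rs] + (uint32_t)imm;
        break;
    case 0x04:                                      /* beq */
        if (reg[rs] == reg[rt])
            next = *pc + 4 + ((uint32_t)imm << 2);
        break;
    }
    reg[0] = 0;                                     /* $zero is hard-wired */
    *pc = next;
}

int main(void) {
    /* addi $1,$0,5 ; addi $2,$0,7 ; add $3,$1,$2 */
    uint32_t prog[] = { 0x20010005, 0x20020007, 0x00221820 };
    uint32_t pc = 0;
    for (int i = 0; i < 3; i++)
        step(&pc, prog);
    printf("$3 = %u\n", reg[3]);                    /* prints 12 */
    return 0;
}
```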
Personally I find understanding low-level system behavior to be interesting, fun, and helpful for understanding many irritating application behaviors (often errors and slowdowns) which are caused by interactions with the OS and hardware.
I was recently doing some language evaluation for rewriting an existing system in Go to improve its cross-platform compatibility. One of the things I realized in this exercise was that "compiles to C" was a powerful feature - not because of C itself, since C often gets in the way of the source language's intent - but because it means that the result fits into and leverages all the C-based tooling, facilitating the desired level of platform independence.
And for a lot of projects today, "the bottom" ends with C for precisely this reason - while the fact that C is the chosen language for this task is completely arbitrary, as arbitrary as the status quo hardware paradigm of "x86 on desktops, ARM on mobile". I could imagine a universe where it was an extended form of Basic or Pascal that ruled the world instead.
Likewise, I was also looking into extending Lua with a small DSL interpreter that could accelerate certain byte-level tasks: copy around blocks of bytes, run some common algorithms quickly, define spaces for variables, apply some parameters. At first I thought of this language as a "VLIW" assembly language with a particular focus on having a big bag of tricks, but when I compared my semantics with actual hardware assemblers, I found that having no registers or stack manipulation and focusing only on direct memory addresses changed the character of it so much as to make it a different beast, one more like the "autocoders" of the 1950's: not quite ALGOL, and yet clearly headed in that direction.
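To make that concrete, here's a rough C sketch of the kind of register-less, memory-address-only interpreter I mean; the opcodes and instruction layout are invented for illustration, not the actual DSL:

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Hypothetical register-less byte-level DSL: every operand is a direct
 * address into one flat memory block. Opcodes are made up for the sketch. */
enum { OP_HALT, OP_COPY, OP_ADD32, OP_FILL };

typedef struct { uint8_t op; uint32_t dst, src, len; } Insn;

static void run(const Insn *prog, uint8_t *mem) {
    for (const Insn *i = prog; ; i++) {
        switch (i->op) {
        case OP_HALT:
            return;
        case OP_COPY:                       /* mem[dst..dst+len) = mem[src..src+len) */
            memmove(mem + i->dst, mem + i->src, i->len);
            break;
        case OP_ADD32: {                    /* 32-bit add of two memory words */
            uint32_t a, b;
            memcpy(&a, mem + i->src, 4);
            memcpy(&b, mem + i->dst, 4);
            b += a;
            memcpy(mem + i->dst, &b, 4);
            break;
        }
        case OP_FILL:                       /* fill a block with a constant byte */
            memset(mem + i->dst, (int)i->src, i->len);
            break;
        }
    }
}

int main(void) {
    uint8_t mem[64] = {0};
    uint32_t five = 5, seven = 7;
    memcpy(mem + 0, &five, 4);
    memcpy(mem + 4, &seven, 4);

    Insn prog[] = {
        { OP_ADD32, 0, 4, 0 },              /* mem[0] += mem[4]  -> 12 */
        { OP_COPY,  8, 0, 4 },              /* copy the result to offset 8 */
        { OP_HALT,  0, 0, 0 },
    };
    run(prog, mem);

    uint32_t out;
    memcpy(&out, mem + 8, 4);
    printf("%u\n", out);                    /* prints 12 */
    return 0;
}
```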
All of which is to say - the more you are comfortable with compilers, the more the bottom just becomes the thing you happen to be leveraging to ship product. It isn't "the real stuff" until it becomes the specification language. Some things, like how numeric types are defined in hardware, force the terms of specification way up the stack so that the hardware may be used at its capacity. But that in itself is an "essential complexity" in computing, where the physical realities coincide with the desired programming model. When it's software defining the spec, as in our libc-and-Javascript-driven universe, that's more akin to social complexity, but it produces the same kinds of effects.
Maybe it's not worth learning C as a programming language, but it's worth learning the C standard library, as every language links it in and most of the functionality is available. I'd also say to throw in a cursory reading of the Linux kernel's syscalls as exposed through glibc and a tutorial on how to go from glibc to the kernel implementation of the syscall. It's not strictly necessary for game development or frontend website design, but in games you'll end up learning C++ anyway and with websites everyone seems to migrate into the backend servers eventually or else quit programming and do graphics design.
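For example, the gap between the glibc wrapper and the raw syscall is small enough to see in a few lines (Linux-specific sketch):

```c
/* Linux-only sketch: the same system call reached two ways, once through the
 * glibc wrapper and once through the generic syscall(2) entry point. */
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void) {
    pid_t a = getpid();                     /* glibc wrapper */
    long  b = syscall(SYS_getpid);          /* raw syscall number, no wrapper */
    printf("wrapper: %d  raw: %ld\n", (int)a, b);
    return 0;
}
```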
As far as operating systems, Rust does seem to be a better entry point, with the Redox OS and general community.
The main reason to dig down is to uncover faults and leaks in abstractions, which is something you absolutely will have to do in the fullness of time.
A corollary reason is to improve your "mechanical sympathy" when writing high-performance code.
Many of the people we once called "application devs" -- now more commonly "full-stack devs" -- will never have to develop real mechanical sympathy, which is fine. But there is a glass floor for such people.
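For a toy sense of what mechanical sympathy buys, compare the same summation done in cache-friendly versus cache-hostile order (a sketch; exact numbers depend on your hardware):

```c
#include <stdio.h>
#include <time.h>

/* Same arithmetic, very different memory access patterns: the row-major loop
 * walks memory sequentially, the column-major loop strides across it and
 * misses cache far more often on typical hardware. */
#define N 4096

static double now(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void) {
    static int a[N][N];
    long sum = 0;
    double t;

    t = now();
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += a[i][j];                  /* sequential: cache-friendly */
    printf("row-major:    %.3fs\n", now() - t);

    t = now();
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += a[i][j];                  /* strided: cache-hostile */
    printf("column-major: %.3fs\n", now() - t);

    return (int)(sum & 1);                   /* keep the loops from being elided */
}
```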
There is one main reason to learn C: everything can interface with C, and you may need to extend your language's capabilities.
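That interface is usually nothing more exotic than exporting plain C functions from a shared library; a hypothetical example that Python (ctypes), Lua (via the C API), Java (via JNI wrappers), and most other languages could all call:

```c
/* mylib.c -- hypothetical example of the "everything talks to C" interface.
 * Build as a shared library, e.g.:  cc -shared -fPIC -o libmylib.so mylib.c
 * Plain C ABI: no name mangling, no runtime, just a symbol and a calling
 * convention that practically every language can speak. */
#include <stddef.h>

long sum_bytes(const unsigned char *buf, size_t len) {
    long total = 0;
    for (size_t i = 0; i < len; i++)
        total += buf[i];
    return total;
}
```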
There is one main reason to learn assembly: things WILL eventually go wrong, to the point that you need to debug at the machine level in order to understand what's going on.
Also, many debug tools such as ptrace are primarily accessed through C, and require a basic understanding of the assembler view of the machine.
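A minimal sketch of that machine-level view (Linux/x86-64; register names differ per architecture):

```c
/* Minimal Linux/x86-64 sketch: trace a child and read a register through
 * ptrace, which is the same assembler-level view gdb and strace build on. */
#include <stdio.h>
#include <unistd.h>
#include <sys/ptrace.h>
#include <sys/user.h>
#include <sys/wait.h>

int main(void) {
    pid_t child = fork();
    if (child == 0) {
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);   /* let the parent trace us */
        execlp("ls", "ls", (char *)NULL);        /* stops with SIGTRAP at exec */
        return 1;
    }

    int status;
    waitpid(child, &status, 0);                  /* child is stopped after exec */

    struct user_regs_struct regs;
    ptrace(PTRACE_GETREGS, child, NULL, &regs);
    printf("child rip = %llx\n", (unsigned long long)regs.rip);

    ptrace(PTRACE_CONT, child, NULL, NULL);      /* let it run to completion */
    waitpid(child, &status, 0);
    return 0;
}
```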
I realize that it's a bit like saying that COBOL programmers will never have trouble finding work (probably mostly true) but until Linux is rewritten in Rust and macOS/Xnu is rewritten in Swift, C is going to be essential for anyone who has to work with UNIX-like OS kernels and associated drivers.
I would also argue that C is pretty essential for anyone who wants to use EBPF/IOvisor.
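Even the "hello world" of BCC is a little restricted-C program; a sketch in the style of the BCC examples (the compile/load/attach step lives in the Python front end, which I've left out):

```c
/* Sketch of the restricted-C half of a BCC/IOVisor tool. The front end
 * (e.g. BCC's Python bindings) compiles this with clang/LLVM to eBPF bytecode
 * and attaches it to a kprobe; illustrative only, not a complete tool. */
#include <uapi/linux/ptrace.h>

int trace_entry(struct pt_regs *ctx) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;   /* upper half is the tgid */
    bpf_trace_printk("probe hit, pid %d\n", pid);
    return 0;
}
```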
This is an excellent article and I rarely see people giving this advice. At the end of the day there is no bottom in general. And it's not clear why that's the case unless you have a reasonable amount of experience in electrical engineering / manufacturing and computer design. An interpreter or a compiler is a computer. Anything lower than that is just implementation details.
I don't agree. At all. Forget C/C++. Build a microprocessor in VHDL or Verilog. This is a common task in many universities with better EE or CSE programs. Do these folks understand Java? Or modern C/C++? Well, they do much better than the folks who learn at the Java level and work up.
This rests on the assumption (one I often see repeated) that everyone is writing code to the same requirements and implementation details as you. If I'm in a washing machine, I'm in total control (or bloody well should be).
I would point out that the vast bulk of software in the entire world is vertical market business software. Vertical market meaning for a limited kind of user, versus horizontal software such as Word or Excel that everyone might use. Vertical market software is literally everywhere and largely invisible. Your public library has custom software. Your hospital. Ever go to get a blood draw and notice the operator is using specialized software? The place that changes the oil in your car. A lawyer's office. Cabinetry makers have custom specialized software. Cities use utility billing software that keeps track of which gas meters (models, serial numbers) are installed and where, tracks meter usage, and generates the bills. Rental car companies and airlines have custom software. Hotels have custom software. Restaurant software may have a map of the tables and what "state" they are in, with a list of customers in line. Some of these categories, like city, healthcare, and school district software, are entire industries unto themselves, with huge software companies that build software just for those specific industries.
Much of this software is now web based. Because web based is centrally controlled. Zero install at the workstation and zero maintenance at all the workstations. All you need is an OS and a browser. (hint: chromebooks in some cases, and iPads, etc)
Is it any surprise that languages like Java have been the top languages year after year? It's where the jobs are.
It's not the same thing as microcontrollers. And microcontrollers are in lots of things around us. But they get built once and replicated millions of times. Vertical market business software has thousands of specialized categories, all different, with ever-changing regulatory requirements (federal and state), reporting requirements, etc. An accounting system with a large, complex payroll module probably pays your paycheck, and it's integrated with a human resources system and a benefits system.
All this software is written thinking at a higher level of abstraction some distance from the hardware. Transactions. Currency amounts and conversion. Databases. Yes, it may sound boring, but it's what makes the world go around and is mostly invisible.
It's also very stable. Like decades of employment stable.
Probably misrepresented myself. For my sins, 30-odd years of the MS stack, doing exactly the money market stuff you're talking about (a $50bn PnL system on my last job). I know you know, but how many people understand the transaction isolation model in SQL Server (or how to write your own transaction compensator)? WTF when a supposed DB admin won't allow MARS. Deeper knowledge, deeper understanding. You can't argue - I kinda know this is you.