Is this using "microcode" in the same sense we'd use it today to refer to e.g. "Intel CPU microcode"?



In the sense that you have microcode presenting a high-level instruction set that internally is executed as a low-level instruction set, yes. (There are clearly implementation differences, so if you want to be pedantic about finding a distinction, there are plenty.)


It would be interesting to see a history of CPUs in terms of their layers of abstraction. When was something like "MOV AX, BX" invented, and how far is that from physical ones and zeros (ons and offs) in the silicon? Obviously I'm not a CS graduate.


A CPU's registers are where all of the "bit transformation" work that computers do (adding, subtracting, multiplying, dividing, branching) is done. Registers are like very small scratchpads, typically 32 or 64 bits (4 or 8 bytes) wide. Computers with as little as one main register for calculation, plus another register holding which instruction in memory is executing (the instruction pointer), are possible. CPUs as narrow as 4 bits have existed (the Intel 4004), and there's no electronic reason a CPU couldn't use even fewer bits than that (although to execute a program you need to address memory, and larger memories need more address bits). Every electronic calculator from the 1970s would be a computer if it had memory and conditional instructions ("if the result of the last calculation was greater than zero, go to (change the instruction pointer to) this new address; otherwise continue with the next instruction").

MOV AX, BX is an abstraction for programmer consumption only; a CPU's basic instructions are something like:

1) Move a value into an accumulator (AX, EAX, RAX, R1, Reg1...) from somewhere in memory.

2) Perform some sort of calculation on it (add, subtract, multiply, divide, bit shift, and, or, etc.).

3) Store it back to memory.

4) Execute a conditional branch instruction which says, based on the result of the last computation, either a) jump to a new address (bypassing a section of code), or b) continue with the next instruction.
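
To make that concrete, here's a minimal sketch of such an accumulator machine in Python. The four-instruction set (LOAD, ADD, STORE, JGZ) is made up for illustration and doesn't correspond to any real CPU:

    # One accumulator, one instruction pointer, a flat memory, and a made-up
    # four-instruction set. Purely illustrative, not a real ISA.
    def run(program, memory):
        acc = 0   # the single "main register" (accumulator)
        ip = 0    # instruction pointer: index of the next instruction
        while ip < len(program):
            op, arg = program[ip]
            ip += 1
            if op == "LOAD":      # 1) move a value from memory into the accumulator
                acc = memory[arg]
            elif op == "ADD":     # 2) perform a calculation on it
                acc += memory[arg]
            elif op == "STORE":   # 3) store it back to memory
                memory[arg] = acc
            elif op == "JGZ":     # 4) conditional branch: change ip if result > 0
                if acc > 0:
                    ip = arg
        return memory

    # memory[0] + memory[1] -> memory[2]
    print(run([("LOAD", 0), ("ADD", 1), ("STORE", 2)], [3, 4, 0]))  # [3, 4, 7]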

The Zuse Z3 is probably the simplest Turing-Complete computer that was ever invented (in 1941 no less!); I'd start there:

https://en.wikipedia.org/wiki/Z3_(computer)


The Z3 is a floating point machine... I'd wager that it is more complex than the 4004. While the 4004 has quite a few more transistors than the Z3 had relays (about 600 in the "CPU"), the telephone relays used by the Z3 switched multiple circuits each, and thus a single relay could do more complex things than a single transistor in a logic circuit.


The basic elements of a CPU:

An ALU (Arithmetic Logic Unit) that does simple addition and subtraction, binary negation, and integer comparisons.[1]

A collection of registers that store binary bit patterns.

A set of data path switches that connect the elements together in various ways - e.g. so you can connect a register to an ALU and do some math on it, or copy the output of one register to another.

There's also an instruction decoder which converts MOV AX, BX into a set of control signals for all the other parts. For example, it sets up the data path switches to route BX's output to AX, and then triggers a write on AX (in Intel syntax the first operand is the destination).
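
As a toy sketch (in Python, with invented register and signal names), the decoder's job is roughly "opcode in, bundle of control signals out":

    # Illustrative only: what a decoder "outputs" for a register-to-register move.
    def decode(instruction):
        op, operands = instruction.split(maxsplit=1)
        dst, src = [r.strip() for r in operands.split(",")]
        if op == "MOV":
            return {
                "bus_source": src,     # which register drives the data path
                "alu_op": "PASS",      # ALU just passes the value through
                "write_enable": dst,   # which register latches the result
            }
        raise NotImplementedError(op)

    print(decode("MOV AX, BX"))
    # {'bus_source': 'BX', 'alu_op': 'PASS', 'write_enable': 'AX'}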

The first instruction decoders were made from hardwired logic. They shipped with the computer, and they were impossible to change. [2]

Then it was realised that the logic could be replaced by a kind of micro-program for each machine instruction, which set up all the elements dynamically.

This could be baked into ROM, or it could be loaded on boot. The latter meant instruction sets could be updated to add new features to the CPU. This also meant the same hardware could run two different instruction sets. (A nice trick, but often less useful in practice than it sounds.)
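
A toy sketch of the idea (opcode names and control fields are invented): each opcode just indexes a table of control-signal steps, and "loading microcode" amounts to swapping that table's contents:

    # The "microcode ROM" as a lookup table: opcode -> sequence of micro-steps.
    MICROCODE_ROM = {
        "ADD_MEM": [
            {"mem_read": True, "latch": "TMP"},                           # fetch operand
            {"alu_op": "ADD", "alu_in": ("ACC", "TMP"), "latch": "ACC"},  # compute
        ],
        "STORE": [
            {"bus_source": "ACC", "mem_write": True},                     # write back
        ],
    }

    def execute(opcode):
        # A real microsequencer would drive hardware; here each step is just printed.
        for step in MICROCODE_ROM[opcode]:
            print(opcode, "->", step)

    execute("ADD_MEM")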

The real advantage was a cut in development time. Instead of having to iterate on board designs with baked-in instruction decoding, the hardware could be (more or less...) finished and the instruction set could evolve after completion. Bugs could be fixed at much lower cost.

It also meant the instruction set could be extended almost indefinitely with no extra hardware cost. (DEC's VAX was the poster child for this, with linked list manipulation and polynomial math available as CPU instructions.)

And it meant that cheaper CPUs in a range could emulate some instructions in compiled software, while more expensive CPUs could run it at full speed in microcoded hardware - all while keeping code compatibility across a CPU family.

The modern situation is complicated. Modern CPUs are fully modelled in software before being taped out and manufactured, so boot-loadable microcode isn't as useful as it once was.

ARM is fully hardwired (so far as I know) but x86 has a complex hybrid architecture with some microcoded elements - the base microcode sits in ROM on the die, with a small patch area that can be loaded at boot (which is what "microcode updates" fill in).

[1] More complex CPUs have floating point support, but the principle is the same.

[2] In fact the earliest decoders were diode arrays, which could be swapped out and replaced. So the idea of microsequencing has been around almost since the first CPUs were built.


Intel asm opcodes are translated to processor-specific microcode instructions. The "microcode updates" actually rewrite the microprograms for each opcode. If users had access to the microcode (not likely anytime soon), very powerful optimizations could be done to create optimal instruction sets for each task.

Another approach is https://en.wikipedia.org/wiki/No_instruction_set_computing which allows programming the CPU directly without relying on static instruction sets (essentially programming at or below the microcode level).


Today's x86 CPUs usually contain some kind of microcode, but it is a distinct concept from the uOPs (micro-operations) that are usually referenced in high-level microarchitecture descriptions. Incoming instructions are decoded, by some combination of hardwired logic and microcode, into uOPs that then get executed by RISC-like execution units.
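
Roughly (the uOP names and the exact split are invented; real decoders differ between microarchitectures), a read-modify-write instruction gets "cracked" like this:

    # Illustration of cracking one x86 instruction into simpler uOPs.
    def crack(instruction):
        if instruction == "add [rdi], rax":        # read-modify-write on memory
            return [
                "load  tmp <- [rdi]",              # 1. load the memory operand
                "add   tmp <- tmp + rax",          # 2. do the arithmetic
                "store [rdi] <- tmp",              # 3. write the result back
            ]
        return [instruction]                       # simple instructions map 1:1

    for uop in crack("add [rdi], rax"):
        print(uop)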

Then there is a slight terminological problem with the tendency of both Intel and AMD to label essentially any binary blob they don't feel like documenting as "microcode", ranging from a few bytes of configuration data, through actual CPU microcode, to complete RTOS images for some embedded CPU.



