Raymond Chen is such a joy to read. I’m old and jaded enough to have very mixed feelings about Microsoft but it’s obvious they’ve had some very talented programmers over the years.
If I didn’t get as much pleasure from using Microsoft software over the years as I did from other OSes, I certainly made up for it reading stories from MS folk like Raymond.
The 80386 is unusual in that it supports multiple calling conventions
It's unusual to speak of a processor as "supporting" any calling convention, given that calling conventions are simply conventions compilers may follow. The CPU doesn't care about such things as functions or procedures either (and on the 386, which has no return address prediction or special stack handling, it really doesn't matter), as you'll quickly realise if you read good optimised handwritten asm.
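To make that concrete, here is a minimal sketch using MSVC-style annotations (__cdecl and __stdcall are compiler extensions, and the generated-code comments are illustrative, assuming a typical 32-bit x86 compiler):

    /* Two declarations of the same function under different x86 calling
       conventions. The CPU executes ordinary pushes, calls and returns
       either way; the convention only dictates who pops the arguments. */

    int __cdecl   add_c(int a, int b);  /* caller pops the arguments  */
    int __stdcall add_s(int a, int b);  /* callee pops them ("ret 8") */

    /* A call to add_c compiles to roughly:
           push b
           push a
           call add_c
           add  esp, 8      ; caller cleans up
       while add_s instead ends in "ret 8" and the caller does no
       cleanup. Nothing in the 386 itself knows about either choice. */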
For x86 information, I really enjoyed Xeno Kovah's OpenSecurityTraining courses for Intro and Intermediate x86. Recording quality is a bit spotty, but their whole YouTube channel is filled with great content.
I wonder whether, when these processors were developed, the engineering team contemplated Turing completeness and the theory of computation from a hardware standpoint, or just tried to satisfy a set of perceived customer needs.
They got Turing completeness in just the MMU. This was surely unintended, just like Turing completeness in C++ template expansion and Turing completeness in sendmail configuration files.
Here you go, computation via continuous MMU faults without actually completing any instructions:
Since the amount of addressable memory on any CPU is finite, while a Turing machine needs an infinite tape of memory, such claims of Turing-completeness of processors, or in this case the MMU, are clearly fraudulent.
Only an abstract mathematical model can be Turing-complete: no physical realization of a Turing-complete system/Turing machine exists, and none ever will.
This is completely pointless pedantry. Everybody knows that, and nobody bothers to type "Turing-complete (up to memory constraints)" every time. Mathematicians take such shortcuts all the time, with the understanding that they could phrase it completely rigorously if necessary.
Turing completeness is easier to achieve than to avoid. If you have conditional branching and an unbounded store (memory is finite of course, but it's arbitrarily finite) it is Turing complete.
So they may not have cared, but the analysis step would have been really simple. And if you don't want to bother with any formalities you can just implement Brainfuck (or, less anachronistically, go with FRACTRAN); if it works, you have something Turing complete.
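For instance, a complete Brainfuck interpreter fits in a few dozen lines of C; the [ and ] conditional jumps plus the (arbitrarily finite) tape are all the machinery the argument above needs. A minimal sketch, not a hardened implementation:

    #include <stdio.h>

    /* Minimal Brainfuck interpreter: conditional branching ([ and ])
       plus an arbitrarily large store is all Turing completeness asks
       for. The tape is "arbitrarily finite" at 30000 cells; no bounds
       checks, since this is only a sketch. */
    #define TAPE 30000

    static void run(const char *prog) {
        unsigned char tape[TAPE] = {0}, *p = tape;
        for (const char *ip = prog; *ip; ip++) {
            switch (*ip) {
            case '>': p++;         break;
            case '<': p--;         break;
            case '+': (*p)++;      break;
            case '-': (*p)--;      break;
            case '.': putchar(*p); break;
            case ',': *p = (unsigned char)getchar(); break;
            case '[': /* if cell is zero, jump past the matching ] */
                if (!*p) for (int d = 1; d; ) {
                    ip++;
                    if (*ip == '[') d++; else if (*ip == ']') d--;
                }
                break;
            case ']': /* if cell is nonzero, jump back to the matching [ */
                if (*p) for (int d = 1; d; ) {
                    ip--;
                    if (*ip == ']') d++; else if (*ip == '[') d--;
                }
                break;
            }
        }
    }

    int main(void) {
        run("++++++++[>+++++++++<-]>.+.");  /* 8*9=72='H', 73='I': prints "HI" */
        return 0;
    }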
It's difficult (though not quite impossible) to make a non-Turing-complete processor and still have it do useful things. So by making a processor that can do useful things, you have made it Turing complete.
If "tiring completeness" is supposed to read "Turing completeness", then including a conditional jump instruction of any sort completes the hardware requirement. The rest is about the number of conditions upon which that may be based and the access semantics, and that's where the CISC, RISC and hybrid approaches differ.
It’s a direct evolution from 4004, 8008, 8080, to 8086. There was no binary compatibility, but you can see the inspiration. IIRC the 8086 was assembly-source compatible with the 8080 but not binary compatible.
At the time, superscalar, register-renaming, vector-capable, and out-of-order designs were distant dreams, and RAM was faster than the CPU. Everyone wrote assembly, so the CPU designs accumulated instructions to make life easier for assembly programmers.
8086 was actually a stop-gap. The architecture astronauts at Intel who thought real hard about design were working on the iAPX 432, which was a stack machine that supported garbage collection in hardware(!)
As always a great and informative read!