Performance is the point. 8-bit CPUs are slow enough that assembler could be, and often had to be, hand-optimised for speed.
You can't do that on modern CPUs, because the nominal ISA has a very distant relationship to what happens inside the hardware.
The code you write gets optimised for you dynamically, and the machine is better at it than you are.
You may as well write in a high-level language, because the combination of optimising compilers with optimising hardware is probably going to be faster than any hand-written assembler. It's certainly going to be quicker to write.
The 68000 is a very nice, clean machine to code for, but the main point of doing it today is historical curiosity. It's quite distant from modern computing.
The original 68K instruction set is distant from modern computing only on these points:
- it's 32-bit, not 64-bit
- it lacks vectorization and similar SIMD extensions.
It's still perfect for most embedded stuff, by my estimation.
Well... there are certain points. Wasn't there some issue with branch displacements on the MC68K being confined to 16-bit ranges? With large functions, that can be a problem.
I dimly remember a project I was on circa 2001 to port a protocol stack (that we were developing on x86) to a system with some Freescale processor with a 68K instruction set.
I remember having to chop the source file into several translation units, because when it was all in one file, the inlining or static function calls or whatever were generating PC-relative branches with displacements too large for the opcode.
With today's hugeware, you'd be running into that left and right.
> I remember having to chop the source file into several translation units, because when it was all in one file, the inlining or static function calls or whatever were generating PC-relative branches with displacements too large for the opcode.
That's just inadequate tools.
With GNU as for RISC-V, if I write `beq a1,a2,target` and target is more than 4 KiB away, then the assembler just silently emits `bne a1,a2,.+8; j target` instead.
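Roughly, that relaxation looks like this (just a sketch; `far_target` is a made-up label and the exact output depends on the assembler version):

```asm
    # what you write:
    beq   a1, a2, far_target    # conditional branch, only ±4 KiB of reach

    # what the assembler emits when far_target is out of range:
    bne   a1, a2, 1f            # inverted condition, skip over the jump
    j     far_target            # plain jump, ±1 MiB of reach
1:
```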
The MC68K has a BRA instruction whose opcode byte is 0110 0000 (0x60). The low byte of the 16-bit opcode word is an 8-bit displacement. If it is 0x00, the next 16-bit word holds a 16-bit displacement. If it is 0xFF, the next two 16-bit words hold a 32-bit displacement (that form was added with the 68020). The displacement is PC-relative in every case.
This unconditional 0x60 opcode is just a special case of Bcc, which is 0x6N, where N selects the condition to test for a conditional branch. Bcc has the same 8-, 16- or 32-bit displacement forms.
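To illustrate, a sketch of the encodings in Motorola syntax (label names are made up, `dddd` stands for the displacement words, the 32-bit form assumes a 68020-class target, and comment syntax varies by assembler):

```asm
    bra.s  near       ; 0x60dd             - 8-bit displacement in the low opcode byte
    bra.w  farther    ; 0x6000 dddd        - low byte 0x00, one 16-bit displacement word follows
    bra.l  far_away   ; 0x60ff dddd dddd   - low byte 0xFF, two words = 32-bit displacement (68020+)
    beq.w  target     ; 0x6700 dddd        - same scheme for conditional Bcc (EQ condition = 0x67)
```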
So, yeah; that's not a problem. Not sure what the issue was with that GCC target; it somehow just didn't generate the bigger displacements.