> This chip was called the AMU (Address Management Unit), and was designed in late ’84 and early ’85. Since this was before the days of the Verilog Hardware Description Language, and there existed no other logical description (other than PAL equations) of the Mac to drive any EDA design or simulation tools, I assisted Bob by writing an event driven cycle accurate simulator in C that took as input the equations that described the behavior of the Mac motherboard signals. These equations (or rules) were parsed by a YACC (Yet Another Compiler Compiler) generated program as part of the simulator. There were “rules” defining internal and output signal behavior. System stimulus was captured using a Tektronix DAS logic analyzer on actual Macs, and then the results of the AMU simulator were compared against captured behavior of real Macs.
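To give a feel for that style of tooling, here is a much-simplified, cycle-stepped sketch in C. Everything in it is invented for illustration (the signal names, rules, and captured data are not from the AMU work), and the real simulator was event driven and built its rules from YACC-parsed equations rather than hard-coding them as functions:

```c
/* Hypothetical sketch (not the original AMU simulator): evaluate boolean
 * "rules" for each signal once per cycle and compare the result against a
 * trace captured from real hardware. */
#include <stdio.h>
#include <stdbool.h>

#define NUM_SIGNALS 4
#define NUM_CYCLES  8

typedef bool (*rule_fn)(const bool *state);  /* rule: next value from current state */

/* Example rules for made-up signals; a real tool would generate these
 * from parsed PAL-style equations. */
static bool rule_ras(const bool *s)   { return !s[0]; }          /* toggles   */
static bool rule_cas(const bool *s)   { return s[0] && !s[1]; }  /* from RAS  */
static bool rule_dtack(const bool *s) { return s[1]; }
static bool rule_vid(const bool *s)   { return s[0] ^ s[2]; }

int main(void) {
    rule_fn rules[NUM_SIGNALS] = { rule_ras, rule_cas, rule_dtack, rule_vid };
    bool state[NUM_SIGNALS] = { false, false, false, false };

    /* Reference trace, e.g. captured with a logic analyzer (made-up data here). */
    bool captured[NUM_CYCLES][NUM_SIGNALS] = { { false } };

    for (int cycle = 0; cycle < NUM_CYCLES; cycle++) {
        bool next[NUM_SIGNALS];
        for (int i = 0; i < NUM_SIGNALS; i++)       /* evaluate every rule   */
            next[i] = rules[i](state);
        for (int i = 0; i < NUM_SIGNALS; i++) {     /* commit, then compare  */
            state[i] = next[i];
            if (state[i] != captured[cycle][i])
                printf("cycle %d: signal %d mismatch\n", cycle, i);
        }
    }
    return 0;
}
```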
Wild! For some reason I just assumed everyone was making the world's largest Karnaugh maps before VHDL.
Ha!
Karnaugh maps don't scale very well, so when I did this I switched to Quine-McCluskey when the glue logic got big. And often it was easiest to implement glue logic with ROM (treating address lines as inputs and data lines as outputs) rather than with explicit gates.
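To make the ROM trick concrete, here is a small hand-wavy C model; the select signals and equations are made up and not from any actual Apple board. The idea is that you "burn" the truth table once, and at run time the gates collapse into a single table lookup indexed by the address lines:

```c
/* Illustrative only: glue logic as a ROM lookup. Four "address" inputs
 * select one byte whose bits are the output signals; the table is built
 * from made-up equations, not from any real design. */
#include <stdio.h>
#include <stdint.h>

#define OUT_SEL_RAM  0x01   /* bit 0: hypothetical RAM chip select */
#define OUT_SEL_IO   0x02   /* bit 1: hypothetical IO chip select  */

int main(void) {
    uint8_t rom[16];        /* 4 input lines -> 16 entries */

    /* "Burn" the ROM from the desired equations, e.g.
     *   SEL_RAM = !a3          (inputs 0x0-0x7)
     *   SEL_IO  =  a3 & a2     (inputs 0xC-0xF)              */
    for (unsigned in = 0; in < 16; in++) {
        uint8_t out = 0;
        if (!(in & 0x8))              out |= OUT_SEL_RAM;
        if ((in & 0x8) && (in & 0x4)) out |= OUT_SEL_IO;
        rom[in] = out;
    }

    /* At "run time" the explicit gates are gone: just index the ROM. */
    for (unsigned in = 0; in < 16; in++)
        printf("inputs=%X  SEL_RAM=%d  SEL_IO=%d\n",
               in, !!(rom[in] & OUT_SEL_RAM), !!(rom[in] & OUT_SEL_IO));
    return 0;
}
```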
What I miss with VHDL is the ability to design the full logic as a schematic. Schematics were our bread and butter back then.
"RISC based implementation of the Apple II 6502 Processor:
In mid ’85 I performed an analysis that showed a simple RISC style implementation of a 16‐bit binary compatible superset of the 8‐bit microprocessor used in the Apple II 6502, along with some judicious use of on‐chip caching, could substantially improve performance – to the point of potentially outperforming the 68000 used in the Mac, and given the simplicity of the 6502 the implementation was “doable” by a small team. "
The Apple /// used a 6502 clocked at 2 MHz. The 65816 was an evolutionary dead end - it wouldn't have allowed the computer to grow the way the 32-bit 68K allowed the Mac to evolve.
IIRC, there was another computer idea based on the 65816 floated around inside Apple that was killed before the IIgs project was started.
Making a faster Apple II would eat away at Mac sales without providing a path to evolve. That’s also why the IIgs was never allowed to run at the higher clock speeds the 65816 could handle.
The path to evolve for the 65816 while keeping backwards compatibility would probably end up as a convoluted mess of layers not unlike the x86.
The path for the 68000 was much clearer - it already started from a clean 32-bit architecture with a 16-bit data bus and a 24-bit address bus. The 68000 and its offspring also powered multiple Unix workstations in the same period, creating a critical mass of development tools that didn't exist for the 65816. The family went on to full 32-bit data and address buses, an FPU, an MMU and extra instructions, and survived for some time after Apple, IBM and Motorola mostly gave up on it and moved on to the PowerPC. All in all, the 68K architecture carried the Mac line for more than 10 years.
That would be very unlikely to be possible with Apple II software compatibility. In this case, the PDS Apple II board was probably the best option for Apple. It could have been designed to have a version of GS/OS using the Mac as an IO device in a software-friendly way, but the GS/OS software base was not there to justify the extra effort.
The 65CE02 was a very slight improvement over the 65C02. It's a bit faster at the same clock and has some tricks that appeared on the 65816, but, apart from that, is not that impressive.
The C65 would have been a much nicer computer than the C128.
Unfortunately for Apple II users, the fact that it doesn't match 6502 timings perfectly makes it difficult to use as a drop-in replacement, since time-critical operations in joystick sensing and floppy IO may need some adjustment.
The Commodore 64 has a 65816-based SuperCPU add-on that some games can take advantage of. It is needed for that Super Mario Brothers clone to get proper scrolling.
> But an ethic of extreme sacrifice was cultivated – which was captured and reinforced with paraphernalia like the “90 hrs a week and loving it” T‐shirts.
This is the actual reality when job adverts ask for passion.
But for some people, there is also a time in their life for irrational overcommitment. I would argue that it's even -- not necessarily healthy, but -- productive, valuable, worthwhile. You can learn a lot about yourself by finding the limits of your abilities, tolerances, and willingness.
GP is oversimplifying. Most "passion" ads do not mean "irrationally overcommit for my profit". They mean "don't bother if you think our project is boring".
There are probably a couple dozen "OMG so worth it" projects that spring to my mind from the annals of computer history. If the project arrived at the right time in your life, and you had the skills and desire to pursue it to the point of irrational overcommitment, the rewards (non-financial) would be lifelong.
Developing the Macintosh is definitely one of those.
I agree that overcommitting can be a lot of fun, and feels good. However, it's easy to overdo it to the point where you really hurt yourself, at least in the medium term.
Back in 2011 or so I went all-in on a job. It was fun and exciting. Then one time I pulled a few all-nighters in a row, AND management started to get abusive ("if this code doesn't work in 48 hours you're all fired") and something snapped in me. I felt physically ill for a week. But even more astoundingly, I couldn't program for a solid 3 months. The feeling of revulsion to the computer was so powerful, I just couldn't, wouldn't program anything (obviously I quit that job so programming that project was not what I was attempting). Interacting with computers for any reason, like email, or journaling, was dicey for a while.
On the bright side, I got out more during that time, went to the beach, played music at open mic nights, and enjoyed real life. And I recovered. But it was slow. I wasn't really "normal" for a solid 18 months. And now I simply will not do that to myself again. I still push hard, but I push smart - I know what my breaking point is, and I'm not interested in experiencing that again.
I'd also add that, technically speaking, that kind of effort is terribly inefficient. "Slow down to speed up" and "if you don't have time to do it right, you better have time to do it again" are true statements. Often that effort amounts to a frantic exploration of a vast combinatorial space looking for a specific outcome - a kind of intellectual gambling. Far better to master techniques individually and then combine them cleanly and with confidence for the desired effect.