Some of it is in the article.

Like Atari with the ST, Commodore basically failed to capitalize on the original Amiga, and by the early 90s PCs had mostly caught up.

While the Amiga 1000 was revolutionary in 1985, AGA[1] was not that special in 1992 (especially as it wasn't particularly beneficial to 3D games like Doom, which were becoming the new hot stuff).

The Amiga of the 90s was the 3dfx Voodoo.

[1] http://en.wikipedia.org/wiki/Amiga_Advanced_Graphics_Archite...




And frankly, by the late 80s the 68000 architecture itself had hit a dead end performance-wise. Maybe Motorola could have pulled off what Intel did with Pentium, paper over the aging CISC with RISC internals, but instead we got PowerPC.

I'm not sure if you ever used a PowerPC Mac back when the system was a mix of emulated 68k and native PowerPC code, but they were notoriously unstable. The combination of emulated CPU instructions with no memory protection would have played out just as badly on the Amiga and Atari ST, whose OSs also lacked the safety features of more modern operating systems.

I have an Atari Falcon 030, Atari's last and best machine. It is a really really nice machine. But it was hobbled by poor software support -- it's only now that hackers are discovering what they can do with the combination of the 68030 and the Motorola 56k DSP in it (Example: Quake 3 has been ported/rewritten recently for it, using the 56k for 3d acceleration.)

I used to wonder what the world would have been like if the 68000 systems won out. But now we're seeing a world where the ARM belatedly wins out, which is kind of neat, tho ARM is arguably now as "evil" as Intel :-)

What is an interesting mental exercise is imagining what would have happened if the 6502 or 6809 architectures had expanded and done well. Those architectures had insanely fast interrupt processing and very fast (single CPU cycle) memory access. Some really neat machines could have been made if they'd continued to advance them, gotten past the 64k memory address limit and into high clock rates. The Western Design Center stopped at the 65C816, a 16-bit variant of the 6502. Something faster and funner than the Amiga could have been built with a 32-bit 6502 descendant and chipsets similar to what was in the Amiga. That would have been really neat.


> Maybe Motorola could have pulled off what Intel did with Pentium, paper over the aging CISC with RISC internals, but instead we got PowerPC.

I think one of the factors could be that 68k is a bit harder to decode than x86 - while the instruction set is more orthogonal, the encoding has less structure; compare http://goldencrystal.free.fr/M68kOpcodes.pdf (68k) with http://i.stack.imgur.com/07zKL.png (x86).

> very fast (single CPU cycle) memory access

That was possible only because at the time, memory was faster than the core and could keep up. Modern CPUs run the core at several times memory speeds, and there is latency involved due to physical constraints.


One difference between Motorola and Intel is that Motorola was less concerned about breaking backwards compatibility.

If the 68k family had continued to evolve past 1994's 68060, I'm sure they could have just dropped backwards compatibility with some of the more complex addressing modes, or devoted less silicon to them, making what remained faster. Kind of like what happened with the ColdFire version of the 68k family.


The unimplemented-instruction trapping in the 68K could easily have been extended to cover less frequently used instructions, making room for more optimized implementations of the frequently used ones.
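
Roughly, that's the trap-and-emulate idea. A toy C sketch of the mechanism (the opcode layout is simplified and the 0xF001 "instruction" is invented purely for illustration, not a real 68k encoding):

    /* Toy sketch of trap-and-emulate, not real 68k semantics: opcodes whose
       top nibble is 0xF (the 68k's "F-line") aren't wired into the "hardware"
       switch and fall through to a software emulator instead. */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t d0;                      /* one pretend data register */

    static void emulate_fline(uint16_t op)   /* would live in the trap handler */
    {
        if (op == 0xF001)
            d0 *= 2;                         /* made-up "double d0" instruction */
        else
            printf("unhandled F-line opcode %04x\n", op);
    }

    int main(void)
    {
        uint16_t program[] = { 0x7005, 0xF001, 0xF001 };  /* moveq-ish, then two trapped ops */
        int n = (int)(sizeof program / sizeof program[0]);

        for (int i = 0; i < n; i++) {
            uint16_t op = program[i];
            switch (op >> 12) {                  /* top nibble picks the "line" */
            case 0x7: d0 = op & 0xFF; break;     /* cheap, common: keep in silicon */
            case 0xA:                            /* A-line: historically trapped */
            case 0xF: emulate_fline(op); break;  /* F-line: trap to software */
            default:  break;                     /* everything else ignored here */
            }
        }
        printf("d0 = %u\n", (unsigned)d0);       /* 5 -> 10 -> 20 */
        return 0;
    }

Which is more or less what the 68060's software support package did for the integer and FPU instructions Motorola dropped from silicon.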

What really killed the 68000 was the move to RISC, in particular Apple's move to PowerPC. That took away any hope of future evolution (they even managed to release the 68060 after that, but that was it) and collapsed the high-end 68K business.


Note that Motorola also did the m88k, which was their RISC approach. It generated a bit of interest, but was never successful. https://en.wikipedia.org/wiki/Motorola_88000

The Motorola Series 900 machines were interesting - I had one under my desk at work for quite a while. They had stackable units, including one that contained a SCSI 3.5" floppy drive that was way faster than regular ones. We also had a DG unit with the m88k.


A lot of what went into the 88K ended up in the PowerPC.


Yep. The one thing Intel frets over these days is cache misses. Hyperthreading is all about keeping that pipeline busy even if the original thread encounters a cache miss.


> very fast (single CPU cycle) memory access

> That was possible only because at the time, memory was faster than the core and could keep up. Modern CPUs run the core at several times memory speeds, and there is latency involved due to physical constraints.

That begs the question: Would we be better off if CPU clock speeds were set such that the memory could keep up again, and we software developers learned to work within real constraints again, rather than expecting the CPU makers to keep working miracles to deliver ever more performance? I have no wish to go back to programming in Applesoft BASIC or 6502 assembler as I did in my childhood and early teenage years. But programming a 32-bit processor clocked to match the speed of memory, in C++ or Rust, wouldn't be so bad.


> Would we be better off if CPU clock speeds were set such that the memory could keep up again

Absolutely not, because of the locality principle. As Terje Mathisen used to say, "All programming is an exercise in caching."

Locality isn't a property of a specific coding style or methodology, it's just the way programs work. No matter what kind of architecture we end up using 50 years from now, it will have a fast cache of some kind, backed up by slower memory of some kind. We'll have a different set of problems to confront in day-to-day development work, but hobbling the CPU won't be the answer to any of them.
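
A minimal C sketch of that point (array size arbitrary, exact timings will vary by machine): the same summation done twice, once walking memory sequentially and once with a large stride. On any cached machine the strided version is several times slower even though it does identical arithmetic.

    /* Two traversals of the same array: row order is unit-stride and stays in
       cache, column order strides N doubles at a time and misses constantly.
       N is arbitrary (128 MB here); shrink it if your machine objects. */
    #include <stdio.h>
    #include <time.h>

    #define N 4096

    static double a[N][N];

    static double sum_rows(void)            /* cache-friendly */
    {
        double s = 0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                s += a[i][j];
        return s;
    }

    static double sum_cols(void)            /* cache-hostile */
    {
        double s = 0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                s += a[i][j];
        return s;
    }

    int main(void)
    {
        clock_t t0 = clock();
        double r = sum_rows();
        clock_t t1 = clock();
        double c = sum_cols();
        clock_t t2 = clock();

        printf("rows: %.3fs  cols: %.3fs  (sums %g %g)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC, r, c);
        return 0;
    }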


Sure, caching is important. But today, we have multiple layers of cache: registers, L1, L2, sometimes L3, and RAM, all of which are caches for nonvolatile (increasingly flash) storage. All of that layering surely has a cost. So what would we get if a processor with no caches between registers and RAM were manufactured using a current process (say, 14 nm), clocked such that DRAM could keep up (so, 100 MHz if another comment on this thread is accurate), and placed on a board with enough RAM for a general-purpose OS as opposed to an RTOS for a single embedded application? Would the net result be any more power efficient than the processors that smartphones use now?


L1/L2 cache levels are transparent optimizations layered on top of the register-to-RAM path, so eliminating them in a RAM-bound application would save you transistors (and power) without losing performance. But although a few RAM-bound applications might perform equivalently, you've destroyed every other class of application in the process.

Power efficiency is more complex: often it's better to burst briefly and get back to sleep sooner rather than drag things out at 100 MHz, but a specific answer would depend on many factors.


> That begs the question: Would we be better off if CPU clock speeds were set such that the memory could keep up again

Memory latency is at best about 10 ns. I don't think a 100 MHz CPU would be better in any way than what we have now. Well, except power requirements would sure be very low.
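
The back-of-the-envelope version (taking the 10 ns figure as a given, not a measurement):

    /* How many core cycles one uncached DRAM access costs at various clock
       speeds, assuming the ~10 ns latency mentioned above. */
    #include <stdio.h>

    int main(void)
    {
        const double dram_ns = 10.0;
        const double clock_mhz[] = { 100, 1000, 3000, 5000 };

        for (int i = 0; i < 4; i++) {
            double cycle_ns = 1000.0 / clock_mhz[i];
            printf("%5.0f MHz: cycle = %5.2f ns, memory access ~%4.0f cycles\n",
                   clock_mhz[i], cycle_ns, dram_ns / cycle_ns);
        }
        return 0;
    }

At 100 MHz a DRAM access costs about one cycle, so memory "keeps up"; at 3 GHz it costs ~30 cycles, which is exactly the gap caches exist to hide.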


> What is an interesting mental exercise is imagining what would have happened if the 6502 or 6809 architectures had expanded and done well.

It was (the 6809, at least), but for some reason Hitachi didn't publicize it, and word only got out years too late: http://en.wikipedia.org/wiki/Hitachi_6309


Yep, I have some here, and there's even a board for putting it and the 6809 into Atari 8-bit computers (!). It's a fine processor, but still limited to a 64k address space. But fast, and fun to play with.


Your last paragraph describes the first ARM chip :) ARM was all that (fast interrupts, fast RAM) and more (30k hand-laid transistors). It took a while, but ARM is taking over from the bottom up, making Intel ignore the high end and concentrate all of its efforts on power efficiency (raw performance has all but stopped improving in the last 5 years).


When you say ARM is now as "evil" as Intel, are you referring to processor architecture, business practices, or both?


> (Example: Quake 3 has been ported/rewritten recently for it, using the 56k for 3d acceleration.)

Wow, what is that like? I'd love to see a demo video... know of one?


Seems to be this (Quake 2, not 3, and still in development): https://www.youtube.com/watch?v=hDXSMgW-r5M&index=1&list=PLN...


Yes, sorry, Quake 2.

Long thread here: http://www.atari-forum.com/viewtopic.php?f=68&t=26775


> Maybe Motorola could have pulled off what Intel did with Pentium, paper over the aging CISC with RISC internals

They did; that's what the MC68060 was. Too little, too late, and (as you say) corporate attention was already directed at PPC.


If anyone is interested, Stuart Brown's "Doomed: The Embers of Amiga FPS" is a nice little documentary of the history mentioned above.

https://www.youtube.com/watch?v=Tv6aJRGpz_A


It's really hard to give people from this generation a sense of how fast things advanced and how big the leaps were in the 80s and early 90s. We went from 8-bit to multimedia 32-bit in less than 10 years. For devs it was a new box every year or two. Sometimes two new computers in a year.

My first 386 dev box was purchased in 1987. Within a year of that, I was using a Compaq Portable 386 (http://en.wikipedia.org/wiki/Compaq_Portable_386). The 386 was a big deal because it finally got us Intel devs a flat address space... so no more trying to fit data into tiny 64k segments. The 386 killed the chief advantage of the 68k architecture, and for whatever reason Motorola just couldn't get the clock speed up fast enough.
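
For anyone who never had to deal with it, a rough sketch of the 16-bit addressing the 386's flat model did away with (real-mode 8086 rules, simplified):

    /* Real-mode 8086 addressing: physical = segment*16 + offset, a 20-bit
       space seen through 64 KB windows.  On a flat 32-bit 386 (or a 68k),
       a plain pointer just keeps going. */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t real_mode_addr(uint16_t segment, uint16_t offset)
    {
        return ((uint32_t)segment << 4) + offset;
    }

    int main(void)
    {
        printf("1234:0010 -> %05X\n", (unsigned)real_mode_addr(0x1234, 0x0010)); /* 12350 */
        /* crossing a 64 KB boundary means reloading segment registers by hand */
        return 0;
    }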

There were two things that were interesting about the Amiga in '87: the Video Toaster for doing cheap video effects (think the intro sequence to Better Call Saul, not awesome demos) and gaming.

But gaming on the PC made a huge leap in 1987 when IBM shipped ALL of their new PS/2 computers with a 256-color video adapter called the VGA (I seem to remember the lowest-end models only doing 256 colors in 320x200 mode... but that was good enough)... Eventually Truevision and even ATI had video cards that could do the same sorts of things as (or better than) an Amiga.

So many great computer ideas died in the 80s and early 90s... but it was really evolution... most of them died because a generalized solution (e.g. VGA with video out + software) eclipsed a specialized solution (e.g. the Amiga with a Video Toaster).


AFAIK the lowest end was called MCGA.



