Permit me to disagree with the distinguished Mr. Thompson. There wasn't quite "no reason".
First off, the notion of a personal workstation was just getting started back then. It was entirely reasonable for the MIT AI lab to want to build some workstations for its researchers, who had previously been sharing a PDP-10. There wasn't, in 1979, any off-the-shelf machine with virtual memory and bitmapped graphics that you could just buy. The original PDP-11 had a completely inadequate 16-bit address space. The first VAX model, the 11/780, was too expensive to be a single-user workstation. The 11/750, released in October 1980, was cheaper, but I think still too expensive for the purpose (though a lot of them were sold as timesharing systems, of course).
In any case, workstations started to be big business in the 1980s, and through the 1990s. Apollo, Silicon Graphics (SGI), and of course Sun Microsystems all enjoyed substantial success. The fact that DEC didn't own this market speaks clearly to the unsuitability of the 11/750 for this purpose.
Also, the extreme standardization of CPU architectures that we now observe -- with x86 and ARM being practically the only significant players left -- hadn't occurred yet at that time. It was much more common then than now for someone building a computer to design their own CPU and instruction set.
None of that has to do with Lisp specifically, but it does put some context around the AI Lab's decision to design and build their own workstation. If they wanted workstations in 1980, they really had no choice.
And the tagged architecture did have some interesting and useful properties. One of them was incremental garbage collection, made possible by hardware and microcode support. We still don't have a true equivalent on conventional architectures, though machines are so fast nowadays that GC pauses are rarely onerous for interactive applications.
Another consequence was a remarkable level of system robustness. It soon became routine for Lisp machines to stay up for weeks on end, despite the fact that they ran entirely in a single address space -- like running in a single process on a conventional machine -- and were being used for software, even OS software, development. The tagging essentially made it impossible for an incorrect piece of code to scribble over regions of memory it wasn't supposed to have access to.
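To make the tagging point a bit more concrete, here's a rough software analog in C. The word layout, tag values, and names below are invented purely for illustration -- this is not the actual CADR or Symbolics word format -- and on the real machines the check ran in hardware/microcode in parallel with the data path rather than as explicit code:

    /* Illustrative sketch only: a hypothetical 32-bit tagged word.
     * Tag layout and names are invented for this example. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    enum tag { TAG_FIXNUM = 0, TAG_CONS = 1, TAG_SYMBOL = 2 };  /* low 2 bits */

    typedef uint32_t word;

    static enum tag word_tag(word w)  { return (enum tag)(w & 0x3); }
    static uint32_t word_data(word w) { return w >> 2; }

    /* The hardware equivalent of this check meant an operation applied to
     * a value of the wrong type trapped before any memory was touched. */
    static uint32_t car_of(word w) {
        if (word_tag(w) != TAG_CONS) {
            fprintf(stderr, "wrong-type-argument: CAR of non-cons\n");
            exit(1);              /* the real machine dropped into the debugger */
        }
        return word_data(w);      /* address of the cons cell */
    }

    int main(void) {
        word n = (42u << 2) | TAG_FIXNUM;   /* the fixnum 42 */
        car_of(n);                /* caught before anything gets scribbled on */
        return 0;
    }

The same tag bits are what let the garbage collector tell pointers from immediate data on the fly, which is what made the incremental GC mentioned above practical.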
Obviously Lisp Machines didn't take over the world, but it wasn't really until the introduction of the Sun 4/110 in 1987 (IIRC) that there was something overwhelmingly superior available.
If Thompson had said simply that the trends in VLSI made it clear that conventional machines -- sold in much greater volume -- would eventually be so much faster and cheaper than any Lisp Machine could be that Lisp Machines would be unviable, I would be forced to agree with him. But that had not yet happened in 1979.
EDITED to add:
One more point. The first CPU that was cheap enough to use in a workstation and powerful enough that you would have wanted to run Lisp on it was the 68020; and that didn't come out until 1984.
I think it's actually a matter of having better CAD tools. When we get to the point that an entire billion-transistor CPU can be laid out automatically given only the VHDL for the instruction set (or something -- I'm not enough of a chip designer to know exactly what's involved), then it will be much easier to experiment with new architectures.
This is very interesting. I always thought that making an ASIC was prohibitively expensive except for the largest companies. How much does it really cost?
I would really enjoy playing with a Lisp chip. It might not be good for performance computing, but it would be great for writing GUIs. The paper suggests having a chip with a Lisp part for control and an APL part for array processing - I think the modern equivalent would be a typed-dispatching part for control and some CUDA or OpenCL cores for speed.
> I always thought that making an ASIC was prohibitively expensive except for the largest companies. How much does it really cost?
Full custom is still quite expensive.
But you can go the route I'm talking about (prototype on an FPGA, then get in on one of the standard runs at a chip fab via MOSIS or CMP or a similar service) and get a handful of chips for ~10,000 USD.
I'm sensing some kind of universal price point for bleeding edge fabrication.
Adjusting for time, etc., that's pretty much what it cost in 1991 to have a handful of custom boards and firmware built around the TI DSP chips of the day, in order to build a dedicated multichannel parallel seismic signal processing array for marine survey work.