
Actually, I was reading the interview with Ken Thompson in Coders at Work. He said about Lisp machines:

All along I said, “You’re crazy.” The PDP-11’s a great Lisp machine. The PDP-10’s a great Lisp machine. There’s no need to build a Lisp machine that’s not faster. There was just no reason to ever build a Lisp machine. It was kind of stupid.

I don't know much about the topic, but I thought Lisp machines were about enabling a programmer to code in a language that's high-level enough to do powerful things, but at the same time can access all the low-level components of the machine. Can somebody explain to me what I am missing?




Permit me to disagree with the distinguished Mr. Thompson. There wasn't quite "no reason".

First off, the notion of a personal workstation was just getting started back then. It was entirely reasonable for the MIT AI lab to want to build some workstations for its researchers, who had previously been sharing a PDP-10. There wasn't, in 1979, any off-the-shelf machine with virtual memory and bitmapped graphics that you could just buy. The original PDP-11 had a completely inadequate 16-bit address space. The first VAX model, the 11/780, was too expensive to be a single-user workstation. The 11/750, released in October 1980, was cheaper, but I think still too expensive for the purpose (though a lot of them were sold as timesharing systems, of course).

In any case, workstations started to be big business in the 1980s, and through the 1990s. Apollo, Silicon Graphics (SGI), and of course Sun Microsystems all enjoyed substantial success. The fact that DEC didn't own this market speaks clearly to the unsuitability of the 11/750 for this purpose.

Also, the extreme standardization of CPU architectures that we now observe -- with x86 and ARM being practically the only significant players left -- hadn't occurred yet at that time. It was much more common then than now for someone building a computer to design their own CPU and instruction set.

None of that has to do with Lisp specifically, but it does put some context around the AI Lab's decision to design and build their own workstation. If they wanted workstations in 1980, they really had no choice.

And the tagged architecture did have some interesting and useful properties. One of them was incremental garbage collection, made possible by hardware and microcode support. We still don't have a true equivalent on conventional architectures, though machines are so fast nowadays that GC pauses are rarely onerous for interactive applications.
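(To make that concrete, here's a minimal sketch, in C, of the read barrier a Baker-style incremental copying collector needs in software. All the names are invented for illustration; the point is that the Lisp machines did this check in microcode, in parallel with the memory access, so it was essentially free there.)

    /* Baker-style read barrier, software version. Every pointer load
       by the mutator checks whether the object still lives in the
       condemned from-space and evacuates it on the fly. */
    #include <stddef.h>

    typedef struct object {
        struct object *forward;          /* set once evacuated */
        /* ... object fields ... */
    } object;

    static char *from_lo, *from_hi;      /* bounds of from-space */

    static int in_from_space(object *p) {
        return (char *)p >= from_lo && (char *)p < from_hi;
    }

    object *evacuate(object *p);         /* copy to to-space (elided) */

    /* Conventional CPUs pay these instructions on every load; the
       tagged hardware performed the equivalent check for free. */
    static inline object *read_barrier(object *p) {
        if (p && in_from_space(p))
            p = p->forward ? p->forward : evacuate(p);
        return p;
    }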

Another consequence was a remarkable level of system robustness. It soon became routine for Lisp machines to stay up for weeks on end, despite the fact that they ran entirely in a single address space -- like running in a single process on a conventional machine -- and were being used for software, even OS software, development. The tagging essentially made it impossible for an incorrect piece of code to scribble over regions of memory it wasn't supposed to have access to.
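(Again just a sketch, not any particular runtime's scheme: a hypothetical low-tag encoding in C, with tag values invented for illustration. On a Lisp machine the tag check happened in hardware on every operation, so code with the wrong idea about a value trapped cleanly instead of scribbling over memory.)

    /* Hypothetical low-tag word encoding, checked in software. */
    #include <assert.h>
    #include <stdint.h>

    typedef uintptr_t lispval;

    enum { TAG_FIXNUM = 0, TAG_CONS = 1, TAG_MASK = 3 };

    typedef struct cons { lispval car, cdr; } cons;

    /* Cells are word-aligned, so the low bits are free for a tag. */
    static lispval tag_cons(cons *c) { return (lispval)c | TAG_CONS; }

    static cons *untag_cons(lispval v) {
        /* The check the hardware did in parallel with the access:
           a non-cons value traps here instead of being dereferenced. */
        assert((v & TAG_MASK) == TAG_CONS);
        return (cons *)(v & ~(lispval)TAG_MASK);
    }

    static lispval car(lispval v) { return untag_cons(v)->car; }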

Obviously Lisp Machines didn't take over the world, but it wasn't really until the introduction of the Sun 4/110 in 1987 (IIRC) that there was something overwhelmingly superior available.

If Thompson had said simply that the trends in VLSI made it clear that there would eventually be conventional machines so much faster and cheaper than any possible Lisp Machine -- simply because they would be sold in much greater volume -- that Lisp Machines would be unviable, I would be forced to agree with him. But that had not yet happened in 1979.

EDITED to add:

One more point. The first CPU that was cheap enough to use in a workstation and powerful enough that you would have wanted to run Lisp on it was the 68020; and that didn't come out until 1984.


When Moore's law eventually runs out of steam, it might once again be practical to have that sort of specialization.


I think it's actually a matter of having better CAD tools. When we get to the point that an entire billion-transistor CPU can be laid out automatically given only the VHDL for the instruction set (or something -- I'm not enough of a chip designer to know exactly what's involved), then it will be much easier to experiment with new architectures.


The barrier to entry is low, now, and getting lower.

Prototyping your custom instruction sets on FPGAs and then commissioning a run to stamp them to ASICs isn't prohibitively expensive, or hard.

In part, it's lack of imagination that has led us so far down the complicated, twisty path into x86 hell.

Just because your chip can do it doesn't mean it's good at it.


This is very interesting. I always thought that making an ASIC was prohibitively expensive except for the largest companies. How much does it really cost?

I would really enjoy playing with a Lisp chip. It might not be good for performance computing, but it would be great for writing GUIs. The paper suggests having a chip with a Lisp part for control and an APL part for array processing; I think the modern equivalent would be a type-dispatching part for control and some CUDA or OpenCL cores for speed.


> I always thought that making an ASIC was prohibitively expensive except for the largest companies. How much does it really cost?

Full custom is still quite expensive.

But you can go the route I'm talking about (prototype on an FPGA, then get in on one of the standard runs at a chip fab via MOSIS or CMP or a similar service) for ~10,000 USD for a handful of chips.


I'm sensing some kind of universal price point for bleeding edge fabrication.

Adjusting for time, etc., that's pretty much what it cost in 1991 to have a handful of custom boards and firmware built around the TI DSP chips of the day, in order to build a dedicated multichannel parallel seismic signal processing array for marine survey work.


The only things those machines had were direct execution of what you can consider Lisp bytecode and direct support for data type tagging, which could speed up code execution.

But general-purpose processors proved to be fast enough at executing code generated by Lisp compilers, which is one of the reasons such machines did not succeed.

The same was tried with Pascal and Java, but it was not worth it.

You don't need a Lisp machine to do system programming in Lisp. All you need is compilation to native code and a set of primitives for low-level hardware access.
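In C terms those primitives are tiny; something like the following sketch (names invented) is the moral equivalent of what a native-code Lisp has to expose as its lowest layer:

    /* Hypothetical low-level access primitives: raw reads and writes
       of memory-mapped device registers. */
    #include <stdint.h>

    static inline uint32_t mmio_read32(uintptr_t addr) {
        return *(volatile uint32_t *)addr;    /* read a device register */
    }

    static inline void mmio_write32(uintptr_t addr, uint32_t v) {
        *(volatile uint32_t *)addr = v;       /* write a device register */
    }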


It's worth noting that this paper describes a different approach from what the commercial Lisp machines did: implementing a Lisp interpreter directly as a hardware state machine. It was done on a single smallish NMOS ASIC that was actually manufactured and tested. By the way, essentially half of the examples and exercises in SICP are directly related to this endeavor.


Lisp Machines enabled researchers to have their own machines (not time-shared computers) that could run large applications, like Macsyma, ICAD, and many others, alongside competent development environments. Other machines of that era were too small.


> I thought Lisp machines were about enabling a programmer to code in a language that's high-level enough to do powerful things, but at the same time can access all the low-level components of the machine

You can do this already without specialized hardware. People have been using the RPython trick to enable 'low level' programming in something like a high-level language (Squeak Smalltalk, Rubinius, PyPy). There was recently a post about using Lua for device drivers. If you make the VM model quite simple, many high-level languages are flexible enough to serve as low-level ones.



