There's a person devoted to (as I take it) re-designing the Lisp Machine (http://en.wikipedia.org/wiki/Lisp_machine) from the bottom up on modern hardware, in the "let's design, say, a microcontroller using modern tools and knowledge" sense. The blog is at http://www.loper-os.org ; from the blog:
- posts tagged under 'Hardware': http://www.loper-os.org/?cat=7
- posts tagged under 'LoperOS': http://www.loper-os.org/?cat=11
- the 'About' page (an interesting read, though maybe not the most concise or on-topic introduction to the project; it does convey the author's frame of mind and angle of approach): http://www.loper-os.org/?p=8
That might be (though if you browse under 'Hardware' or 'LoperOS', you'll find relatively recent posts; his updates are generally scarce in any case). I'd say this is one of those really long-term (indeed, vaporware as of now) projects that might not make it, but hey. In any event, I'm quite sure the author himself sees this as a really long-term affair and treats it as a free-time-after-work kind of thing. [0] For what he's set out to accomplish, it's a huge undertaking, perhaps an over-ambitious overkill, but I for one try to follow his (frustratingly infrequent) updates because (1) it's a really interesting idea and (2) I like his perspective, and I think some of the people here might share a subset of his views (they aren't hard to spot, as he rants relatively often ;) ).
Author of linked project speaking. The latter is not dead, but closed-source (for the time being.) And moving very slowly, given that I am involved in several commercial projects which eat most of my time.
This reminds me of the Reduceron[1], a processor designed for Haskell. You can do a lot of optimizations in hardware if you have certain guarantees about the software running on it.
Similarly, you can do a lot of optimizations in hardware if you can make the assumption that it is used correctly.
I hate to sound like I'm arguing against this. I believe it's entirely plausible that more progress is made where more work is done. I don't have the knowledge to say whether more progress is possible on either front, nor do I want that to limit the ideas pursued.
Very interesting; I was thinking about how to implement Lisp in hardware just this week. Will read in detail.
The idea that this single document gives you the capability of fully understanding a computing system is insane. If you're patient enough I imagine you could even try building it.
Actually, I was reading the interview with Ken Thompson in 'Coders at Work'. He said this about Lisp machines:
All along I said, “You’re crazy.” The PDP-11’s a great Lisp machine. The PDP-10’s a great Lisp machine. There’s no need to build a Lisp machine that’s not faster. There was just no reason to ever build a Lisp machine. It was kind of stupid.
I don't know much about the topic, but I thought Lisp machines were about enabling a programmer to code in a language that's high-level enough to do powerful things, yet at the same time can access all the low-level components of the machine. Can somebody explain to me what I'm missing?
Permit me to disagree with the distinguished Mr. Thompson. There wasn't quite "no reason".
First off, the notion of a personal workstation was just getting started back then. It was entirely reasonable for the MIT AI lab to want to build some workstations for its researchers, who had previously been sharing a PDP-10. There wasn't, in 1979, any off-the-shelf machine with virtual memory and bitmapped graphics that you could just buy. The original PDP-11 had a completely inadequate 16-bit address space. The first VAX model, the 11/780, was too expensive to be a single-user workstation. The 11/750, released in October 1980, was cheaper, but I think still too expensive for the purpose (though a lot of them were sold as timesharing systems, of course).
In any case, workstations started to be big business in the 1980s, and through the 1990s. Apollo, Silicon Graphics (SGI), and of course Sun Microsystems all enjoyed substantial success. The fact that DEC didn't own this market speaks clearly to the unsuitability of the 11/750 for this purpose.
Also, the extreme standardization of CPU architectures that we now observe -- with x86 and ARM being practically the only significant players left -- hadn't occurred yet at that time. It was much more common then than now for someone building a computer to design their own CPU and instruction set.
None of that has to do with Lisp specifically, but it does put some context around the AI Lab's decision to design and build their own workstation. If they wanted workstations in 1980, they really had no choice.
And the tagged architecture did have some interesting and useful properties. One of them was incremental garbage collection, made possible by hardware and microcode support. We still don't have a true equivalent on conventional architectures, though machines are so fast nowadays that GC pauses are rarely onerous for interactive applications.
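For concreteness, the nearest thing on stock hardware is done purely in software: the compiler emits a small check (a write barrier) around pointer stores so an incremental collector can keep up with the running program. A rough C sketch of one such barrier, Dijkstra-style incremental marking, with all names invented for illustration rather than taken from any real collector:

    #include <stdbool.h>
    #include <stddef.h>

    /* Toy heap object: a mark flag plus a few pointer slots. */
    typedef struct Obj {
        bool marked;                 /* already seen by the collector? */
        struct Obj *fields[4];       /* pointer slots, fixed size to keep it short */
    } Obj;

    static bool   gc_marking_in_progress = false;   /* set while the collector runs */
    static Obj   *gray_stack[1024];                  /* objects still to be scanned  */
    static size_t gray_top = 0;

    static void gc_push_gray(Obj *o) {
        if (gray_top < sizeof gray_stack / sizeof gray_stack[0])
            gray_stack[gray_top++] = o;
    }

    /* Every pointer store the compiled code performs goes through this check,
     * so the mutator can't hide a live object from a collector that is only
     * partway through marking. */
    static inline void write_field(Obj *owner, size_t slot, Obj *value) {
        if (gc_marking_in_progress && value != NULL && !value->marked) {
            value->marked = true;
            gc_push_gray(value);     /* let the collector scan it later */
        }
        owner->fields[slot] = value;
    }

On the Lisp machine the equivalent check rode along with the store itself, in the tags and microcode, which is why it cost essentially nothing.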
Another consequence was a remarkable level of system robustness. It soon became routine for Lisp machines to stay up for weeks on end, despite the fact that they ran entirely in a single address space -- like running in a single process on a conventional machine -- and were being used for software, even OS software, development. The tagging essentially made it impossible for an incorrect piece of code to scribble over regions of memory it wasn't supposed to have access to.
Obviously Lisp Machines didn't take over the world, but it wasn't really until the introduction of the Sun 4/110 in 1987 (IIRC) that there was something overwhelmingly superior available.
If Thompson had simply said that the trends in VLSI made it clear that conventional machines, being sold in much greater volume, would eventually become so much faster and cheaper than any possible Lisp Machine that Lisp Machines would be unviable, I would be forced to agree with him. But that had not yet happened in 1979.
EDITED to add:
One more point. The first CPU that was cheap enough to use in a workstation and powerful enough that you would have wanted to run Lisp on it was the 68020; and that didn't come out until 1984.
I think it's actually a matter of having better CAD tools. When we get to the point that an entire billion-transistor CPU can be laid out automatically given only the VHDL for the instruction set (or something -- I'm not enough of a chip designer to know exactly what's involved), then it will be much easier to experiment with new architectures.
This is very interesting. I always thought that making an ASIC was prohibitively expensive except for the largest companies. How much does it really cost?
I would really enjoy playing with a Lisp chip. It might not be good for performance computing, but it would be great for writing GUIs. The paper suggests having a chip with a Lisp part for control and an APL part for array processing - I think the modern equivalent would be a typed-dispatching part for control and some CUDA or OpenCL cores for speed.
> I always thought that making an ASIC was prohibitively expensive except for the largest companies. How much does it really cost?
Full custom is still quite expensive.
But you can go the route I'm talking about (prototype on an FPGA, then get in on one of the standard runs at a chip fab via MOSIS or CMP or a similar service) for ~10,000 USD for a handful of chips.
I'm sensing some kind of universal price point for bleeding edge fabrication.
Adjusting for time, etc., that's pretty much what it cost in 1991 to have a handful of custom boards and firmware built around the TI DSP chips of the day, in order to build a dedicated multichannel parallel seismic signal processing array for marine survey work.
The only things those machines had were direct execution of what you can consider Lisp bytecode, and direct support for data type tagging, which could optimize code execution.
But general-purpose processors proved to be fast enough at executing code generated by Lisp compilers, which is one of the reasons such machines did not succeed.
The same was tried with Pascal and Java, but it is not worth it.
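To make the tagging point concrete: with the common low-bit fixnum scheme, a Lisp compiled for a stock CPU has to emit a tag test like the one below around every generic arithmetic operation, whereas the Lisp machine performed the tag check in hardware alongside the addition. This is only an illustrative layout in C, not any particular implementation's actual representation:

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative low-tag scheme: fixnums have a 0 in the low bit and keep the
     * integer in the upper bits; anything with the low bit set would be some
     * boxed type (cons, float, symbol, ...). Overflow handling is omitted. */
    typedef intptr_t lispval;

    #define IS_FIXNUM(v)    (((v) & 1) == 0)
    #define MAKE_FIXNUM(n)  ((lispval)(n) << 1)
    #define FIXNUM_VALUE(v) ((v) >> 1)

    /* Generic '+': on a conventional CPU the type dispatch is explicit code. */
    static lispval generic_add(lispval a, lispval b) {
        if (IS_FIXNUM(a) && IS_FIXNUM(b))
            return a + b;            /* both tags are 0, so the sum stays tagged */
        /* otherwise fall back to boxed arithmetic (bignums, floats, ...) */
        fprintf(stderr, "generic_add: boxed arithmetic not implemented here\n");
        return MAKE_FIXNUM(0);
    }

    int main(void) {
        lispval x = MAKE_FIXNUM(20), y = MAKE_FIXNUM(22);
        printf("%ld\n", (long)FIXNUM_VALUE(generic_add(x, y)));   /* prints 42 */
        return 0;
    }

As noted, compilers got good enough at eliminating or hiding most of these checks that the hardware assist stopped mattering.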
You don't need a Lisp machine to do system programming in Lisp. All you need is compilation to native code and a set of primitives for low level hardware access.
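To put the "primitives" part in concrete terms: compiled to native code, such primitives boil down to little more than volatile accesses to memory-mapped device registers. Here's a C sketch of what a hypothetical pair of io-read-32 / io-write-32 primitives would amount to; the device, base address, and register offsets are all made up for illustration:

    #include <stdint.h>

    /* Hypothetical device: base address and register offsets are made up. */
    #define EXAMPLE_UART_BASE   ((uintptr_t)0x10000000u)
    #define EXAMPLE_UART_DATA   0x00u    /* write a byte here to transmit   */
    #define EXAMPLE_UART_STATUS 0x04u    /* bit 0 set means "ready to send" */

    /* What primitives like (io-read-32 addr) / (io-write-32 addr val) would
     * compile down to: a single volatile load or store. */
    static inline uint32_t io_read_32(uintptr_t addr) {
        return *(volatile uint32_t *)addr;
    }

    static inline void io_write_32(uintptr_t addr, uint32_t val) {
        *(volatile uint32_t *)addr = val;
    }

    /* The sort of loop a driver written in Lisp would translate into. */
    void uart_putc(char c) {
        while ((io_read_32(EXAMPLE_UART_BASE + EXAMPLE_UART_STATUS) & 0x1u) == 0)
            ;                            /* busy-wait until the device is ready */
        io_write_32(EXAMPLE_UART_BASE + EXAMPLE_UART_DATA, (uint32_t)(unsigned char)c);
    }

A native-code Lisp can expose the same two operations as compiler intrinsics and write the rest of the driver in Lisp proper.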
It's worth noting that this paper describes a different approach from what was done by commercial Lisp machines: implementing a Lisp interpreter directly as a hardware state machine. It was done on a single, smallish NMOS ASIC that was actually manufactured and tested. By the way, essentially half of the examples and exercises in SICP are directly related to this endeavor.
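For a rough feel of what "interpreter as a state machine" means structurally, here is a toy C sketch, nowhere near the paper's actual controller and with all cell and state names invented: evaluation is a loop over an explicit state register rather than a nest of recursive calls, which is exactly the shape that maps onto hardware (and onto SICP's register-machine chapters).

    #include <stdio.h>

    /* Toy tagged cells: enough to evaluate a number or a (quote <datum>) form.
     * The real chip's data paths and state set were far richer. */
    typedef enum { T_NUM, T_SYM, T_PAIR } Tag;
    typedef struct Cell {
        Tag tag;
        long num;                  /* valid when tag == T_NUM  */
        const char *sym;           /* valid when tag == T_SYM  */
        struct Cell *car, *cdr;    /* valid when tag == T_PAIR */
    } Cell;

    /* The controller's states: one "state register", advanced in a loop. */
    typedef enum { EV_DISPATCH, EV_SELF_EVAL, EV_QUOTED, MACHINE_DONE } State;

    static Cell *eval_machine(Cell *exp) {
        Cell *val = NULL;
        State state = EV_DISPATCH;
        while (state != MACHINE_DONE) {
            switch (state) {
            case EV_DISPATCH:                      /* decode the expression's tag */
                if (exp->tag == T_NUM)
                    state = EV_SELF_EVAL;
                else if (exp->tag == T_PAIR && exp->car->tag == T_SYM)
                    state = EV_QUOTED;             /* treat any (sym x) as a quote */
                else
                    state = MACHINE_DONE;          /* form not handled by this toy */
                break;
            case EV_SELF_EVAL:
                val = exp;            state = MACHINE_DONE; break;
            case EV_QUOTED:
                val = exp->cdr->car;  state = MACHINE_DONE; break;
            default:
                state = MACHINE_DONE; break;
            }
        }
        return val;
    }

    int main(void) {
        Cell n = { T_NUM, 42, NULL, NULL, NULL };
        printf("%ld\n", eval_machine(&n)->num);    /* prints 42 */
        return 0;
    }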
Lisp Machines enabled researchers to have their own machines (not time-shared computers) which were able to run large applications, competent development environments, ... Like Macsyma. Icad. And many others. Other machines during that time were too small.
> I thought Lisp machines were about enabling a programmer to code in a language that's high level enough to do powerful things, but in the same time that can access to all the low levels components of the machine
You can do this already without specialized hardware. People have been using the RPython trick to enable 'low level' programming in something like a high-level language (Squeak Smalltalk, Rubinius, PyPy). There was recently a post about using Lua for device drivers. If you make the VM model quite simple, many high-level languages are flexible enough to serve as low-level ones.
Why don't we have tagged memory in today's architectures? Is nobody getting inspiration from the LISP machine or Burroughs (https://en.wikipedia.org/wiki/Burroughs_large_systems) architectures? Is it because of the failed Intel iAPX 432?
Languages that need tagged memory can implement it in software, and on current architectures they will still run as fast as they would on a CPU that had that trick implemented in hardware. More likely faster, because the software solution allows much greater flexibility (e.g. look at JavaScript VMs: they have various ways of representing values, and that's for a single language), and because compilers can often optimize away the value-representation overhead entirely (by unboxing, etc.). Something implemented in the CPU decoder (or even in microcode) can never afford that level of optimization.
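To illustrate the flexibility point, here is a bare-bones C sketch of NaN-boxing, one of the value representations JavaScript engines are known to use: every value fits in 64 bits, real doubles are stored as-is, and other types live in the payload bits of a quiet NaN. The tag layout below is invented for illustration and glosses over details (real engines canonicalize incoming NaNs, pack pointers, and so on):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* NaN-boxing sketch: a Value is 64 bits. Real doubles are stored verbatim;
     * other types are packed into the payload of a quiet NaN. The tag bits are
     * an invented layout for illustration only. */
    typedef uint64_t Value;

    #define QNAN    UINT64_C(0x7ff8000000000000)
    #define TAG_INT UINT64_C(0x0001000000000000)   /* hypothetical "integer" tag */

    static Value  box_double(double d)  { Value v; memcpy(&v, &d, sizeof v); return v; }
    static double unbox_double(Value v) { double d; memcpy(&d, &v, sizeof d); return d; }
    static int    is_double(Value v)    { return (v & QNAN) != QNAN; }

    static Value   box_int(int32_t i)   { return QNAN | TAG_INT | (uint32_t)i; }
    static int     is_int(Value v)      { return (v & (QNAN | TAG_INT)) == (QNAN | TAG_INT); }
    static int32_t unbox_int(Value v)   { return (int32_t)(v & 0xffffffffu); }

    int main(void) {
        Value a = box_double(3.5), b = box_int(42);
        printf("%d %d\n", is_double(a), is_int(b));            /* 1 1    */
        printf("%g %d\n", unbox_double(a), (int)unbox_int(b)); /* 3.5 42 */
        return 0;
    }

A Lisp or Scheme compiler is free to pick a completely different representation (low-bit tags, separate type words, per-type heaps) depending on what it wants to optimize, which is exactly the freedom a fixed hardware tag format takes away.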
PowerPC AS has the 65th bit but you can't use it. I suspect people would be unable to agree on the details such as the size and semantics of the tags. And of course your RAM would cost 10% more.
A colleague of mine worked in the fab (the South building on the Richardson campus) that developed TI's Lisp chip. Years later we found a complete Explorer system, including documentation, in a local surplus warehouse, bought it, and sold it to a collector the next week. Yet another indirect benefit of the whole Lisp system era...
Regarding other kinds of evaluation, like evaluating graphs: this is a bit like the idea behind XLR (http://xlr.sf.net). In that case, evaluation works by rewriting parse trees.
Example: a factorial is defined as
0! -> 1
N! -> N*(N-1)!
An if-then-else statement as:
if true then X else Y -> X
if false then X else Y -> Y
The core operator is the rewrite ->, which means: rewrite the parse tree on the left into the parse tree on the right. There are of course rules about binding and stuff, and the type system is a bit of a challenge. But the idea may be interesting to whoever is still hanging around this thread :-)