
> Developers from Fuji Xerox wrote a portable VM in C to run the environment on different host platforms, called Maiko.

I'm always confused by the relationship of C and Lisp(s). Here the VM is written in C. Yet elsewhere there seems to be at least one good example of a Lisp compiler written in Lisp [0]. What was the reason for writing Maiko in C, versus Lisp "all the way"?

[0] "The first complete Lisp compiler, written in Lisp, was implemented in 1962 by Tim Hart and Mike Levin at MIT, and could be compiled by simply having an existing LISP interpreter interpret the compiler code, producing machine code output able to be executed at a 40-fold improvement in speed over that of the interpreter.[19] This compiler introduced the Lisp model of incremental compilation, in which compiled and interpreted functions can intermix freely. The language used in Hart and Levin's memo is much closer to modern Lisp style than McCarthy's earlier code. " https://en.wikipedia.org/wiki/Lisp_(programming_language)#Hi...




A Lisp and a Lisp compiler are two different things: the Lisp system is the whole environment, and the compiler is just one component of it.

Now, one could write a virtual machine in Lisp, but usually one would write it in C or assembler, since that's what squeezes more performance out of the hardware and makes interfacing to the operating system 'easier' (threads, calls into the OS, memory management, interrupts, error handling, etc.).
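
To make that concrete, here's a minimal sketch in C of the kind of bytecode dispatch loop that sits at the heart of a VM like Maiko. The opcodes are invented for illustration, not Maiko's actual instruction set:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical opcodes -- not Maiko's real instruction set. */
    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    void run(const uint8_t *code) {
        int64_t stack[256];
        int sp = 0;
        size_t pc = 0;
        for (;;) {
            switch (code[pc++]) {
            case OP_PUSH:  stack[sp++] = code[pc++]; break;       /* push immediate byte */
            case OP_ADD:   sp--; stack[sp-1] += stack[sp]; break; /* pop two, push sum */
            case OP_PRINT: printf("%lld\n", (long long)stack[--sp]); break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void) {
        const uint8_t prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
        run(prog);  /* prints 5 */
        return 0;
    }

A loop like this can compile down to a tight jump table over bytes, and every host OS ships a C toolchain that can build it, which is much of the appeal.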

There are examples where (parts of) a virtual machine are written in Lisp. For example, the virtual-CPU emulator may be generated as C or assembler code from a Lisp program. There are also special versions of, say, the JVM written in Lisp to prove its correctness. Ideally one may want the core VM emulator written in assembler, to reduce it to the minimum number of hardware instructions per virtual machine instruction.

The example you cited from 1962 is a Lisp compiler written in Lisp, compiled by itself, running in a Lisp whose runtime was written in assembler (IIRC).


That helps, thank you. Based on what I've seen with Smalltalk, and playing around with the online Interlisp environment [0], I was under the impression an underlying OS was not a necessity [1].

[0] https://online.interlisp.org/main

[1] Part of my confusion may stem from ideas like presented by Chuck Moore in Masterminds of Programming by Biancuzzi: "Operating systems are dauntingly complex and totally unnecessary. It’s a brilliant thing that Bill Gates has done in selling the world on the notion of operating systems. It’s probably the greatest con game the world has ever seen.

An operating system does absolutely nothing for you. As long as you had something—a subroutine called disk driver, a subroutine called some kind of communication support, in the modern world, it doesn’t do anything else. In fact, Windows spends a lot of time with overlays and disk management all [sic] stuff like that which are irrelevant. You’ve got gigabyte disks; you’ve got megabyte RAMs. The world has changed in a way that renders the operating system unnecessary.

What about device support?

Chuck: You have a subroutine for each device. That’s a library, not an operating system. Call the ones you need or load the ones you need."


Interlisp was developed at BBN on the PDP-10 under the TENEX operating system. Danny Bobrow had previously been at MIT and worked on MACLISP on PDP-10s running the ITS OS. So that mode was and is perfectly normal.

When Danny went to PARC they started working on a port of Interlisp to the D machine hardware. The model adopted was that of Smalltalk: a hermetic environment running on the bare iron, with everything (drivers, network stack, etc.) written in Lisp. One big difference from the MIT world was that Smalltalk and Interlisp were built around working in a world and checkpointing the entire machine state, rather than loading files into a base image.

PARC also had a lot of network-only RPC services (mail, filesystem, printing, etc.), so each environment had its own implementation for talking to these services, and all its own UX. We’re talking late '70s here, and some of it was more sophisticated than what you can get today.


I should add that the D machines had writable control stores, so these language environments (Interlisp-D, Smalltalk, and Cedar/Mesa) each had custom microcode. I wrote some microcode for Interlisp-D back when I was 20.



Fascinating.

> Genera is your whole environment; it encompasses what you normally think of as an operating system as well as everything else - system commands and all other activities. From where you look at it, there is no "top-level" controller or exec.


But it has a sophisticated process scheduler, several garbage collectors, complex memory management, various network stacks, implementations of file systems (local and remote), virtual memory paging, software installer, printer scheduler, namespace server for configuration of networked resources (users, networks, printers, hosts, ...), mail server and client, ...

It's not that it has "a subroutine" for a disk; it actually has very extensive support for disks and file systems on disk.

It's just that everything runs in one shared memory space, including all applications. Probably not a winning way to design a networked OS in today's Internet environment.


> Probably not a winning way to design a networked OS in today's Internet environment.

I wonder.

I know this is safely in imagination world, but if I were to have a system like this connected to the Internet, and there were others with similar systems also connected to the Internet, I would guess requirements above and beyond what is already provided would be:

1) security / sandboxing

2) ease of sharing code

I’m pretty fuzzy on how #2 would happen. I spend so much time with Git these days that it’s hard to imagine anything else. #1 is actually less hard for me to imagine: “The key to Genera's intelligence is the sharing of knowledge and information among all activities.” I can imagine the routines responsible for this being extended to handle sandboxing. But then again I have a pretty good imagination, so maybe in reality this is actually the more difficult part to implement.


At that time one put stuff on a remote machine acting as a file server. One would centrally configure which stuff lived where. The Lisp Machine could also act as a file server. It then knew about users, files, directories, servers, access control lists, file versions, etc. To talk to a non-Lisp Machine one used NFS; the Lisp Machine had its own remote file protocol, called NFILE. One could also share software via tar files or via its own distribution format. Networked object stores were also being developed.

But that was all before encrypted network connections were in use... we are talking about the '80s, when TCP/IP had just become a thing.

Today one would need to upgrade the network stack of a Lisp Machine to support something like TLS or use it only over VPNs...


> From where you look at it, there is no "top-level" controller or exec.

"Real systems have no top."

Read that quote somewhere quite a while ago. :)

Searched just now and found:

https://softwarequotes.com/author/bertrand-meyer

which has the quote.


They have several.


Yes, I did understand the quote. It was not by me, but I could have said "no single top", if I had thought of it earlier :)


It'd be entirely possible to write the VM in Lisp (there are plenty of good optimising Lisp compilers, e.g. SBCL).

However, typically you'd want the VM to be easily portable and buildable in new environments, and that's much easier to achieve with C.

And it is likely just easier to write this kind of low-level code in C (though I know plenty of people who will gladly demonstrate otherwise).

Consider that the JVM et al. are also written in C(++), for much the same reasons of practicality.
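
As a concrete illustration of the portability point: in practice, the platform-specific parts of a C VM end up isolated behind small #ifdef'd shims like the following (a hypothetical example, not taken from Maiko's actual sources):

    #include <stddef.h>

    /* One portable entry point for allocating the VM's heap;
       only this shim changes per platform. */
    #if defined(_WIN32)
    #include <windows.h>
    static void *vm_alloc_heap(size_t nbytes) {
        return VirtualAlloc(NULL, nbytes, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    }
    #else
    #include <sys/mman.h>
    static void *vm_alloc_heap(size_t nbytes) {
        return mmap(NULL, nbytes, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    }
    #endif

Porting to a new host is then largely a matter of filling in such shims, since every target ships a C compiler and specifies its OS interfaces in C.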


The C abstract machine is basically an overgrown PDP-11 at this point, and most modern hardware is designed with that in mind; GPU and vector hardware are notable exceptions, and notably not especially amenable to programming in C.

It’s actually been an unfortunate and pernicious codependence IMHO.


It's more that it's easier to use C for two reasons. The first is that C is really popular and therefore pretty portable. It's a lingua franca. The other is that because the hosts are largely defined in C, it's easier to interact with them. Of course, the host doesn't actually "speak C"; it follows some form of ABI. But the reality is that implementing each ABI is non-trivial, and you can avoid a lot of pain by just using the host's C compiler/linker/etc. that implements it for you.
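
A small POSIX-only sketch of what "using the host's C toolchain" buys you: even when a function is looked up dynamically, as a foreign-function interface would, the call itself (register usage, stack layout) is generated by the C compiler from the function-pointer type. A non-C runtime has to reproduce that calling convention by hand:

    #include <dlfcn.h>   /* link with -ldl on older glibc */
    #include <stddef.h>
    #include <stdio.h>

    int main(void) {
        /* Handle covering the running process and its loaded libraries. */
        void *self = dlopen(NULL, RTLD_LAZY);
        /* Look up libc's strlen by name, the way an FFI would. */
        size_t (*len)(const char *) =
            (size_t (*)(const char *))dlsym(self, "strlen");
        if (len)
            printf("%zu\n", len("hello"));  /* prints 5; the ABI details are
                                               handled by the C compiler */
        dlclose(self);
        return 0;
    }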


Like, SBCL compiles and assembles directly to machine code; that's very much the Lisp way. But SBCL has a lot of C that's involved in getting the SBCL image running and interacting with the host.
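
For flavor, here's a loose sketch of the kind of job that C side does: map a saved Lisp image into memory and hand control to it. This is purely illustrative, not SBCL's actual runtime, which also relocates the core to known addresses, installs signal handlers, initializes the GC, and so on; the "lisp.core" file and the offset-0 entry point here are made up.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>

    int main(int argc, char **argv) {
        if (argc < 2) { fprintf(stderr, "usage: %s image\n", argv[0]); return 1; }
        int fd = open(argv[1], O_RDONLY);   /* e.g. a hypothetical "lisp.core" */
        struct stat st;
        if (fd < 0 || fstat(fd, &st) < 0) { perror(argv[1]); return 1; }
        /* Map the image executable; pretend its entry point is at offset 0. */
        void *core = mmap(NULL, (size_t)st.st_size,
                          PROT_READ | PROT_WRITE | PROT_EXEC, MAP_PRIVATE, fd, 0);
        if (core == MAP_FAILED) { perror("mmap"); return 1; }
        void (*entry)(void) = (void (*)(void))core;
        entry();  /* hand control to the Lisp world */
        return 0;
    }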



