The Revival of Medley/Interlisp (theregister.com)
154 points by samizdis on Nov 23, 2023 | 85 comments



My company bought me a Xerox 1108 Lisp Machine running Interlisp-D in 1982. I wrote a commercial product for that environment that we sold for $5K, and it was a lot of fun. I run the latest Medley releases occasionally just for nostalgia. For present-day hacking enjoyment I go with SBCL Common Lisp plus Emacs, or Racket, or Python when I need the ecosystem.

Xerox really did a great job creating their Lisp Machines, a joy to develop on.


I often fantasize about a world where lisp or smalltalk machines took off instead of the Windows/Linux we have now. I know things weren't perfect, but it just seems like such a cool system and that we've evolved in a much less powerful direction.


We almost had it in OS/2; I guess many aren't aware that Smalltalk was kind of the ".NET for OS/2" during its heyday.

Hence why it was a first-party citizen on SOM (OS/2's version of COM), and SOM does support metaclasses.

But then OS/2 went as we all know, and with IBM's backing of Java, Visual Age products turned into Eclipse.

http://www.edm2.com/index.php/VisualAge_Smalltalk

https://en.wikipedia.org/wiki/IBM_System_Object_Model


In 2000, I worked for a company that had been acquired by IBM. When I discovered that I had access to things like VisualAge, APL, OS/2 I had a blast downloading and exploring these.

You're right; there was a pretty good vision for the future within IBM back then. It just didn't catch on.


When I worked at Kaleida (a joint venture of IBM and Apple), I had the wonderful opportunity to play around with SK8, which was amazing! It was kind of like Dylan and ScriptX, in that it was an object-oriented dialect of Lisp/Scheme with a traditional infix expression syntax. But it also had wonderful graphics and multimedia support, and cool weird-shaped windows, and you could point at and explore and edit anything on the screen, a lot like HyperCard.

Q: What do you get when you cross Apple and IBM?

A: IBM!

https://en.wikipedia.org/wiki/SK8_(programming_language)

>SK8 (pronounced "skate") was a multimedia authoring environment developed in Apple's Advanced Technology Group from 1988 until 1997. It was described as "HyperCard on steroids",[1] combining a version of HyperCard's HyperTalk programming language with a modern object-oriented application platform. The project's goal was to allow creative designers to create complex, stand-alone applications. The main components of SK8 included the object system, the programming language, the graphics and components libraries, and the Project Builder, an integrated development environment.

[...]

The SK8 Multimedia Authoring Environment:

https://sk8.dreamhosters.com/sk8site/sk8.html

What is SK8?

SK8 (pronounced "skate") is a multimedia authoring environment developed in Apple's Research Laboratories. Since 1990, SK8 has been a testbed for advanced research into authoring tools and their use, as well as a tool to prototype new ideas and products. The goal of SK8 has been to enable productivity gains for software developers by reducing implementation time, facilitating rapid prototyping, supporting cross platform development and providing output to multiple runtime environments including Java. SK8 can be used to create rich media tools and titles simply and quickly. It features a fully dynamic prototype-based object system, an English-like scripting language, a general containment- and renderer-based graphic system, and a full-featured development interface. SK8 was developed using Digitool's Macintosh Common Lisp.

[...]

Sk8 Users Guide:

https://macintoshgarden.org/sites/macintoshgarden.org/files/...

https://news.ycombinator.com/item?id=21846706

mikelevins on Dec 20, 2019 | on: Interface Builder's Alternative Lisp Timeline (201...

Dylan (originally called Ralph) was basically Scheme plus a subset of CLOS. It also had some features meant to make it easier to generate small, fast artifacts--for example, it had a module system, and separately-compiled libraries, and a concept of "sealing" by which you could promise the compiler that certain things in the library would not change at runtime, so that certain kinds of optimizations could safely be performed.

Lisp and Smalltalk were indeed used by a bunch of people at Apple at that time, mostly in the Advanced Technology Group. In fact, the reason Dylan existed was that ATG was looking for a Lisp-like or Smalltalk-like language they could use for prototyping. There was a perception that anything produced by ATG would probably have to be rewritten from scratch in C, and that created a barrier to adoption. ATG wanted to be able to produce artifacts that the rest of the company would be comfortable shipping in products, without giving up the advantages of Lisp and Smalltalk. Dylan was designed to those requirements.

It was designed by Apple Cambridge, which was populated by programmers from Coral Software. Coral had created Coral Common Lisp, which later became Macintosh Common Lisp, and, still later, evolved into Clozure Common Lisp. Coral Lisp was very small for a Common Lisp implementation and fast. It had great support for the Mac Toolbox, all of which undoubtedly influenced Apple's decision to buy Coral.

The Newton team used the new language to write the initial OS for its novel mobile computer platform, but John Sculley told them to knock it off and rewrite it in C++. There's all sorts of gossipy stuff about that sequence of events, but I don't know enough facts to tell those stories. The switch to C++ wasn't because Dylan software couldn't run in 640K, though; it ran fine. I had it running on Newton hardware every day for a couple of years.

Alan Kay was around Apple then, and seemed to be interested in pretty much everything.

Larry Tesler was in charge of the Newton group when I joined. After Sculley told Larry to make the Newton team rewrite their OS in C++, Larry asked me and a couple of other Lisp hackers to "see what we could do" with Dylan on the Newton. We wrote an OS. It worked pretty well, but Apple was always going to ship the C++ OS that Sculley ordered.

Larry joined our team as a programmer for the first six weeks. I found him great to work with. He had a six-week sabbatical coming when Sculley ordered the rewrite, so Larry took his sabbatical with us, writing code for our experimental Lisp OS.

Apple built a bunch of other interesting stuff in Lisp, including SK8. SK8 was a radical application builder that has been described as "HyperCard on Steroids". It was much more flexible and powerful than either HyperCard or Interface Builder, but Apple never figured out what to do with it. Heck, Apple couldn't figure out what to do with HyperCard, either.


Download link's broken on the sk8 site.


I'm curious, how do you feel about PowerShell?

I think it's quite amazing in that it's basically the scripting language for dotnet. I love that it allows for interactive usage of dotnet libraries.

To me it has the feel of a lot of the dynamic languages from the past, but in a framework that acknowledges types.

I've been using it for finance/economic data stuff lately. For example:

https://github.com/dharmatech/net-liquidity.ps1

The only downside is a lack of a Pandas-like library for it, so I occasionally reach for Python for larger datasets.


PowerShell is great, but it'd really benefit from a GUI DSL for building UIs and charting data (dotnet is a lot of effort), increased speed (it's pretty slow), and libraries like pandas for data analysis, plus something like the GNU Scientific Library. If those things were all built into PowerShell/Windows, we'd have something pretty cool and unique: the ability to quickly and easily build little apps that don't require installs or anything like that. Just copy your script over for your buddy.

As is the theme in this thread, there's just so much I'd expect Windows to do that it simply can't. PowerShell is basically designed for DevOps, not for normal business users, unfortunately. It could be so much more.


Always a pleasure to see your comments in hn threads, pjmlp.

Happy Thanksgiving.


Thanks, likewise.


> Visual Age products turned into Eclipse.

(!) Well, TIL.


And boy was Sun pissed off at that name!


Ha! Now that you come to mention it, yes, ISWYM and I bet...


I feel the exact same way. I’m grateful for modern computers and what they can do, but I think the substrates of Lisp and Smalltalk machines make building flexible component-based software easier than the Linux, Windows, and Web ecosystems we have today. If I had the spare time, I’d work on a modern-day OS inspired by the Lisp and Smalltalk environments of old.

If I had the time and the money, I’d like to pick up where Xerox PARC left off when they stopped working on Smalltalk, Cedar, Mesa, and similar projects. I’m also very fascinated by Apple projects of the 1990s such as SK8, Dylan, and the original proposal for a Lisp-based Newton. During the “interregnum” years at Apple many people with interesting ideas on system design and usability worked at Apple, such as Don Norman and Larry Tesler. I’m grateful for Steve Jobs’ return and for NeXT-based macOS, but unfortunately as time passed by, the Smalltalk, Lisp, and even NeXT influences at Apple faded away. It would be cool if somebody continued this vision. I’d do it in a heartbeat if I had the time and the financial resources.


You might want to check out Urbit: urbit.org. The whitepaper[1] is a bit outdated but in section 12 "Inadequate summary of related work" you can see some of its influences:

"Many historical OSes and interpreters have approached the SSI[2] ideal, but fail on persistence, determinism, or both. In the OS department, the classic single-level store is the IBM AS/400 [18]. NewtonOS [19] was a shipping product with language-level persistence. Many image oriented interpreters (e.g., Lisps [20] and Smalltalks) are also SSI-ish, but usually not transactional or deterministic. And of course, many databases are transactional and deterministic, but their lifecycle function is not a general-purpose interpreter."

- [1] https://media.urbit.org/whitepaper.pdf

- [2] "Solid-State Interpreter"


Urbit is mostly marketing fluff for extracting money from investors and potential users. It is based on a for-profit commodity and ecosystem (their stars and ships and planets and all that stuff) that you have to buy into to use it. You cannot just "run your own urbit".


Sadly, Urbit is forever tarred by one of its contributors. I'm not saying it's not worth investigating. It is. But why put your energies into a software project which already has a cultural strike against it? It's sort of like maintaining ReiserFS: technically interesting, but upsetting to the social nature of programming humans.


Well. Reiser tried to keep his professional and criminal lives separate. ReiserFS wasn't an expression of his revolutionary worldview.

And I was going to point out that Java survived Patrick Naughton, FOSS survived ESR, etc.

From the whitepaper, Urbit looks bitchin'. I love its audacity. In that way, it reminds me of Linda (tuplespaces), Xanadu, Jef Raskin's Humane Interface (Canon Cat), and others.

Alas. It appears Urbit and its creator's worldview are inseparable.

I'll wait for the reboot. Or maybe just glean some of its ideas.

Thanks for the heads-up.


> one of its contributors

I think "original creator and sole original developer" would be a more accurate summary.


You're not alone. When someone asks "What OS do you prefer, Windows, Linux, or Mac?" my answer is: "none". Anyhow, back to dreaming. Someday maybe...


That's a good way to put it. I don't want an OS in the common way of thinking of one. I want something a lot closer to the Xerox Alto, but maybe more with a command oriented language.


Did you program the Alto? It didn’t really have an OS — each app took over, like on a PC or Apple II.

You might be thinking of the Alto's descendants, the D-machines, like the Dandelion (Star) or the research machines, the Dolphin and the ECL Dorado. Those machines ran complete environments on the bare iron (Smalltalk, Interlisp-D, and Cedar/Mesa), but those environments included full OSes.

The Smalltalk environment did initially run on the Alto, but Interlisp never did — too demanding.


Rumors of my Death, LLMification, and Enshittification are greatly exaggerated, Gumby!

PS: about your comment below: The Novix NC4016 was FORTH coming.

https://en.wikichip.org/wiki/novix/nc4016

DonHopkins on Jan 20, 2022 | on: Xerox PARC Mesa Programming Language 5.0 (1979) [p...

Previous HN discussion about that video:

https://news.ycombinator.com/item?id=22375449

Eric Bier Demonstrates Cedar:

https://www.youtube.com/watch?v=z_dt7NG38V4

>This interpretive production was created from archival footage of Eric Bier, PARC research scientist, demonstrating the Cedar integrated environment and programming language on January 24, 2019. Cedar was an evolution of the Mesa environment/language, developed at PARC’s Computer Science Laboratory originally for the Xerox Alto. Mesa was modular and strongly-typed, and influenced the later Modula family of languages. Cedar/Mesa ran on the D-machine successors to the Alto (such as the Dorado) and added features including garbage collection, and was later ported to Sun workstations. Cedar/Mesa’s integrated environment featured a graphical window system and a text editor, Tioga, which could be used for both programming and document preparation, allowing for fonts, styles, and graphics to be embedded in code files. The editor and all its commands were also available everywhere, including on the command console and in text fields. The demo itself is running through a Mac laptop remotely logged into Bier’s Sun workstation at PARC using X Windows. Bier demonstrates the Cedar development environment, Tioga editor, editing commands using three mouse buttons, sophisticated text search features, the command line, and the Gargoyle graphics editor, which was developed as part of Bier’s UC Berkeley Ph.D. dissertation. Bier is joined by Nick Briggs, Chris Jacobi, and Paul McJones.

[...]

https://news.ycombinator.com/item?id=34056973

DonHopkins 11 months ago | on: Ten influential programming languages (2020)

You know what's a lot like Ada in a good way is Mesa, which evolved into Cedar, from Xerox PARC. I know people who really loved programming in it. They'd call it "Industrial Strength Pascal". It was a successful experiment in code reuse: a strongly typed language with strong separation between interfaces and implementations, which encouraged creating robust, hardened code.

https://en.wikipedia.org/wiki/Mesa_(programming_language)

>Mesa and Cedar had a major influence on the design of other important languages, such as Modula-2 and Java, and was an important vehicle for the development and dissemination of the fundamentals of GUIs, networked environments, and the other advances Xerox contributed to the field of computer science.

Demonstration of the Xerox PARC Cedar integrated environment (2019) [video] (youtube.com)

https://news.ycombinator.com/item?id=22375449

Computer History Museum: Eric Bier Demonstrates Cedar

https://www.youtube.com/watch?v=z_dt7NG38V4

Mark Weiser and others at Xerox PARC ported the Cedar environment to Unix, which resulted in the development of the still-widely-used Boehm–Demers–Weiser conservative garbage collector.

https://news.ycombinator.com/item?id=22378457

I believe that stuff is the port of Cedar to the Sun. Xerox PARC developed "Portable Common Runtime", which was basically the Cedar operating system runtime, on top of SunOS (1987 era SunOS, not Solaris, so no shared libraries or threads, which PCR had to provide). He demonstrates compiling a "Hello World" Cedar shell command, and (magically behind the scenes) dynamically linking it into the running shell and invoking it.

Experiences Creating a Portable Cedar.

Russ Atkinson, Alan Demers, Carl Hauser, Christian Jacobi, Peter Kessler, and Mark Weiser.

CSL-89-8 June 1989 [P89-00DD6]

http://www.bitsavers.org/pdf/xerox/parc/techReports/CSL-89-8...

>Abstract: Cedar is the name for both a language and an environment in use in the Computer Science Laboratory at Xerox PARC since 1980. The Cedar language is a superset of Mesa, the major additions being garbage collection and runtime types. Neither the language nor the environment was originally intended to be portable, and for many years ran only on D-machines at PARC and a few other locations in Xerox. We recently re-implemented the language to make it portable across many different architectures. Our strategy was, first, to use machine dependent C code as an intermediate language, second, to create a language-independent layer known as the Portable Common Runtime, and third, to write a relatively large amount of Cedar-specific runtime code in a subset of Cedar itself. By treating C as an intermediate code we are able to achieve reasonably fast compilation, very good eventual machine code, and all with relatively small programmer effort. Because Cedar is a much richer language than C, there were numerous issues to resolve in performing an efficient translation and in providing reasonable debugging. These strategies will be of use to many other porters of high-level languages who may wish to use C as an assembler language without giving up either ease of debugging or high performance. We present a brief description of the Cedar language, our portability strategy for the compiler and runtime, our manner of making connections to other languages and the Unix operating system, and some measures of the performance of our "Portable Cedar".

PCR implemented threads in user space as virtual lightweight processes on SunOS by running several heavyweight Unix processes memory-mapping the same main memory. It also supported garbage collection. Mark Weiser worked on both PCR and the Boehm–Demers–Weiser garbage collector.

https://en.wikipedia.org/wiki/Boehm_garbage_collector

This is the 1988 "Garbage Collection in an Uncooperative Environment" paper by Hans-Juergen Boehm and Mark Weiser:

https://hboehm.info/spe_gc_paper/preprint.pdf

>Similarly, we treat any data inside the objects as potential pointers, to be followed if they, in turn, point to valid data objects. A similar approach, but restricted to procedure frames, was used in the Xerox Cedar programming environment [19].

[19] Rovner, Paul, ‘‘On Adding Garbage Collection and Runtime Types to a Strongly-Typed, Statically Checked, Concurrent Language’’, Report CSL-84-7, Xerox Palo Alto Research Center.

http://www.bitsavers.org/pdf/xerox/parc/techReports/CSL-84-7...
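The conservative approach described in that quote can be sketched in a few lines. This is a toy model in Python (my own illustration, not the actual BDW implementation, which scans raw machine words): every word found inside a live object is treated as a potential pointer and followed if it happens to match a known heap address.

```python
# Toy model of conservative marking: `heap` maps "addresses" to the word
# values stored in each object. No type information is available, so any
# word that happens to equal a heap address is treated as a pointer.
heap = {
    0x1000: [42, 0x2000],   # an int and a "pointer" -- indistinguishable
    0x2000: [7],
    0x3000: [99],           # unreachable from the roots -> garbage
}

def conservative_mark(roots):
    """Mark everything transitively reachable from the root words."""
    marked, worklist = set(), list(roots)
    while worklist:
        word = worklist.pop()
        if word in heap and word not in marked:  # "looks like" a pointer
            marked.add(word)
            worklist.extend(heap[word])          # scan the object's words
    return marked

live = conservative_mark([0x1000])
# live == {0x1000, 0x2000}; 0x3000 can be reclaimed.
```

Note the cost of conservatism: if an integer's value collided with a heap address, the object would be retained spuriously, which is exactly the trade-off the paper discusses.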

My guess is that the BDW garbage collector had its roots in PCR (pun intended, in fact this entire message was just an elaborate setup ;), but I don't know for sure the exact relationship between Cedar's garbage collector, PCR's garbage collector (which is specifically for Cedar code), and the Boehm–Demers–Weiser garbage collector (which is for general C code). Does anybody know how they influenced each other, shared code, or are otherwise related? Maybe there's a circular dependency!

https://news.ycombinator.com/item?id=24450970

Xerox Cedar “Viewers Window Package” (2018) (toastytech.com)

http://toastytech.com/guis/cedar.html

gumby on Sept 13, 2020

This says “developed after the Star” but imho the Dandelion (marketed as the Star) was too slow for this environment, and you needed one of the bigger machines (Dolphin or Dorado). Actually it’s kind of amazing to realize that two years later you could get a small Mac for about a fifth the price that sat on your desk (not rolled next to it on casters) and was much more responsive. It did less, but what it did it did well, and it was all that most people needed.

In addition to the Smalltalk and Mesa environments mentioned in the post, there was the Interlisp-D environment too, which got much more use thanks to being deployed outside PARC.

pjmlp on Sept 13, 2020

The Computer History Museum organized a session with Eric Bier, and several other folks demoing the Mesa/Cedar environment.

https://youtu.be/z_dt7NG38V4

The only modern environments that seem to have kept alive several of these ideas are Windows/.NET/COM, the ones designed by Apple/NeXT, and to a certain extent Android (although with a messed-up execution).

Even Linux could embrace many of these ideas, if D-Bus were properly taken advantage of and the ecosystem settled on a specific development experience.

Somehow it looks like we are still missing so much from Xerox PARC ideas.

----

The Cedar Programming Environment: A Midterm Report and Examination

http://www.bitsavers.org/pdf/xerox/parc/techReports/CSL-83-1...

C - Cedar/Mesa Interoperability

http://www.bitsavers.org/pdf/xerox/parc/cedar/C_-_Cedar_Mesa...

Describes Portable Common Runtime (PCR), and the PostScript and Interpress decomposers implemented in Cedar, and includes many other interesting document about Cedar.


Two things come to mind:

- good ideas that arrive too early don't grow, but they reemerge as genes for subsequent generations (closures, pattern matching, immutability, etc. are all back in fashion)

- there's a paradoxical idea of great pioneering ideas that die before becoming mainstream on their own, but need future lesser vessels to shine anonymously. Somehow the past was better but couldn't be... how many other nice things were partially lost?


At an emotional level I get this and can sympathize. But at a rational level... what would be the shortcomings of a Lisp programming environment or Smalltalk image, versus Win/Mac/*nix?

Perhaps I had better specify: apart from software ecosystem support.

(I see one great thread on a related post discussing the downsides of image-based development (as in, a Smalltalk image) at https://news.ycombinator.com/item?id=34300806#34302095)


Beckman Instruments got our Xerox 1108 in 1983 (and an 1186 a couple of years later). We developed Expert System commercial products in Interlisp-D but ported them to run on the PC (DOS) using the Gold Hill Common Lisp.

That was a wonderful environment to develop on. So, I'm now working on the Medley Interlisp Project!

SpinPro™ designs optimal ultracentrifugation procedures for biology research (Beckman manufactures and sells ultracentrifuge instruments). SpinPro suffered from marketing problems and had few customers.

PepPro™ designs chemical procedures to synthesize custom peptides (small proteins). PepPro was essentially completed when Beckman Instruments dropped their entire Peptide Synthesis product line. (The PepPro user manual was in final review.)

SpinPro https://pubs.acs.org/doi/abs/10.1021/bk-1986-0306.ch023

PepPro https://cdn.aaai.org/IAAI/1989/IAAI89-010.pdf


Hey Mark

Can you tell us more about that commercial project you developed?


I ported Charlie Forgy’s OPS5 to InterLisp-D and added a nice UI and a few utilities.

Most of my development work, however, was doing demos for specific potential clients. It was easy enough to put something tailored together; then a senior person would bring potential clients into my office and I would mostly talk with them while showing them their demo. So, not much practical work. That was 40 years ago.
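For readers who haven't met OPS5: it is a forward-chaining production-system language. As a rough illustration (a naive recognize-act loop in Python, not Forgy's Rete matching algorithm; the rule and fact names here are invented), the interpreter repeatedly fires any rule whose conditions all match working memory, until nothing new can fire:

```python
# Working memory: a set of facts, each a (attribute, value) tuple.
working_memory = {("goal", "boil-water"), ("have", "kettle")}

rules = [
    # (name, condition facts, fact to add when fired)
    ("fill", {("goal", "boil-water"), ("have", "kettle")}, ("kettle", "full")),
    ("heat", {("kettle", "full")},                         ("water", "boiling")),
]

def run(wm, rules):
    """Recognize-act cycle: fire any rule whose conditions all hold,
    adding its fact, until working memory reaches quiescence."""
    fired = True
    while fired:
        fired = False
        for name, conds, action in rules:
            if conds <= wm and action not in wm:
                wm = wm | {action}
                fired = True
    return wm

result = run(working_memory, rules)
# ("water", "boiling") is derived: "fill" enables "heat".
```

Real OPS5 rules pattern-match with variables and choose among competing matches via conflict resolution; this sketch only shows the basic fire-until-quiescent shape.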


Crafting custom demos in InterLisp-D... What a cool gig!


What product did you write, if you can discuss it?


Related:

My encounter with Medley Interlisp - https://news.ycombinator.com/item?id=34300806 - Jan 2023 (49 comments)

2022 Medley Interlisp Annual Report - https://news.ycombinator.com/item?id=34100600 - Dec 2022 (11 comments)

Interlisp Online - https://news.ycombinator.com/item?id=32621183 - Aug 2022 (9 comments)

Larry Masinter, the Medley Interlisp Project: Status and Plans - https://news.ycombinator.com/item?id=25379238 - Dec 2020 (2 comments)

Interlisp project: Restore Interlisp-D to usability on modern OSes - https://news.ycombinator.com/item?id=24075216 - Aug 2020 (24 comments)


>Scheme is an exotic sports car. Fast. Manual transmission. No radio.

>Emacs Lisp is a 1984 Subaru GL 4WD: "the car that's always in front of you."

>Common Lisp is Howl's Moving Castle.

Scheme is the intersection of all Lisps.

Common Lisp is the union of all Lisps.

https://wiki.c2.com/?NetworkExtensibleWindowSystem

https://www.donhopkins.com/home/catalog/lang/NeWS.html

>Basically, X and NeWS seem to form the right and left brain halves of windowing systems. X is basic, fast (or should be) and analytical, NeWS seems to be what you should be using if you want something more creative than boxes with chars and/or line drawings in them of a fairly fixed nature. Right now people seem to be responding to each on that atavistic level.

>X is a jeep wagoneer with all options including a tow ball if you can't fit it inside the cab, NeWS is a DeLorean turning magnificently on a stand in the main lobby of the Museum of Modern Art, the engine comes in kit form, diesel, gasoline, ethanol, any number of cylinders all available, actually the kit is just a big cube of steel, very high grade, and a textbook on modern engine design.

>The X11/NeWS merge might very well end up to be the "long-awaited" station wagon version of the DeLorean, with the jeep hanging off the back on a newly attached brushed stainless steel tow ball, just in case.

>-Barry Shein, Boston University, 6 Feb 1988, NeWS-makers@brillig.umd.edu


> Common Lisp is the union of all Lisps

Except it wasn't. Common Lisp was different from most Lisps in that it was a standard and not an implementation. Implementations were different, from small to large scale. The initial CLtL1 language definition was a small part of Lisp Machine Lisp with some stuff added in (type declarations, lexical binding, ...).

CLtL1 lacked

  * a way to start or quit Lisp
  * command line arguments
  * memory management (like garbage collection, finalization, memory areas)
  * virtual machine
  * threads
  * stack groups
  * continuations
  * interrupts
  * fexprs
  * error handling
  * object system or any way to define extensible operations
  * user defined stream types
  * networking
  * internationalization features (like international character sets, multilanguage support, ...)
  * a higher level iteration construct
  * tail call elimination
  * stack introspection (backtraces, ...)
  * pretty printer interface
  * MLISP / RLISP syntax
  * library & source code management
  * 'weak' data structures
  * extensible hash tables
  * terminal interface
  * assembler
  * advising
  * source retrieval
  * pattern matcher
  * calling external programs
  * locatives
  * big floats  
  * single namespace for functions and variables
and more. None of that was in CLtL1. Much of that also is not in ANSI CL.

Read through Lisp manuals (MIT Scheme, MacLisp, MDL, Lisp Machine Lisp, Interlisp, ...) from that time (when CLtL1 was published) and much of that existed.

Implementations provided more. Just like, say, Interlisp-D (which also was an operating system with applications) provided much more than most Common Lisp implementations.


Scheme also isn't the intersection of all Lisps. I take the union/intersection comment as a quip, not to be read literally.


It can be seen as a small core language for a lexically scoped, tail-call-optimizing Lisp variant. R1RS was defined in just 35 pages (and they were not as densely written as later reports).


That's right, it's just a throw-away quip, but if you want the deep nuanced story and inside history of Common Lisp and comparison with Scheme, Kent Pitman is the one to read:

https://en.wikipedia.org/wiki/Kent_Pitman

Index of Kent Pitman's Papers:

https://www.nhplace.com/kent/Papers/

Scheme or Lisp? Kent M Pitman explains the deep philosophical differences.

https://www.reddit.com/r/programming/comments/6fa5r/scheme_o...

Kent Pitman on Scheme or Lisp?:

https://groups.google.com/g/comp.lang.lisp/c/TEk4O4-zsA8/m/H...

Common Lisp: The Untold Story:

https://www.nhplace.com/kent/Papers/cl-untold-story.html

Why Wolfram Mathematica did not use Lisp (2002) (ymeme.com):

https://news.ycombinator.com/item?id=9797936

https://web.archive.org/web/20110122140154/http://www.ymeme....

Kent Pitman's essay on why lisp doesn't have copying of lists.

https://groups.google.com/g/comp.lang.lisp/c/MmtQreo3PCM

Parenthetically Speaking with Kent Pitman: The Best Intentions: EQUAL Rights -- And Wrongs -- In Lisp:

https://www.nhplace.com/kent/PS/EQUAL.html

Kent M. Pitman Answers On Lisp And Much More:

https://developers.slashdot.org/story/01/11/03/1726251/kent-...

Kent M. Pitman's Second Wind:

https://developers.slashdot.org/story/01/11/13/0420226/kent-...

Tutorial on Good Lisp Programming Style: Peter Norvig, Sun Microsystems Labs Inc; Kent Pitman, Harlequin Inc.:

https://www.cs.umd.edu/~nau/cmsc421/norvig-lisp-style.pdf

Notes from the ANSI standardisation process:

https://stackoverflow.com/questions/72414053/notes-from-the-...

Issue CLOS-CONDITIONS Writeup:

https://www.lispworks.com/documentation/lw50/CLHS/Issues/iss...

On Pitman's “Special forms in Lisp” (2011) (kazimirmajorinc.com):

https://news.ycombinator.com/item?id=29947329

https://news.ycombinator.com/item?id=29954993

DonHopkins on Jan 16, 2022

Kent Pitman also wrote the "Revised Maclisp Manual (Saturday Evening Edition)" aka the "Pitmanual".

https://en.wikipedia.org/wiki/David_A._Moon

http://www.nhplace.com/kent/publications.html

>In 1983, I finished the multi-year task of writing The Revised Maclisp Manual (Saturday Evening Edition), sometimes known as The Pitmanual, and published it as a Technical Report at MIT's Lab for Computer Science. In 2007, I finished dusting that document off and published it to the web as the Sunday Morning Edition.

http://www.maclisp.info/pitmanual/

Not to be confused with David Moon, who wrote the "MacLISP Reference Manual", aka the "Moonual", and who co-authored the "Lisp Machine Manual" with Richard Stallman and Daniel Weinreb, which had big bold lettering that ran around the spine and back of the cover, so it was known as the "LISP CHINE NUAL" (reading only the letters on the front).

https://news.ycombinator.com/item?id=15185827

https://hanshuebner.github.io/lmman/title.xml

https://news.ycombinator.com/item?id=15186998

DonHopkins on Sept 6, 2017

The cover of the Lisp Machine Manual had the title printed in all caps diagonally wrapped around the spine, so on the front you could only read "LISP CHINE NUAL". So the title was phonetically pronounced: "Lisp Sheen Nual".

My friend Nick made a run of custom silkscreened orange LISP CHINE NUAL t-shirts (most places won't print around the side like that).

https://www.facebook.com/photo.php?fbid=74206161754&l=54ec4e...

I was wearing mine in Amsterdam at Dappermarkt on Queen's Day (when everyone's supposed to wear orange, so I didn't stand out), and some random hacker (who turned out to be a university grad student) came up to me at random and said he recognized my t-shirt!

http://www.textfiles.com/hacking/hakdic.txt

CHINE NUAL (sheen'yu-:l) noun.

The reference manual for the Lisp Machine, a computer designed at MIT especially for running the LISP language. It is called this because the title, LISP MACHINE MANUAL, appears in big block letters -- wrapped around the cover in such a way that you have to open the cover out flat to see the whole thing. If you look at just the front cover, you see only part of the title, and it reads "LISP CHINE NUAL"

toomanybeersies on Sept 7, 2017

Link to an image of the manual, for the lazy:

https://c1.staticflickr.com/1/101/264672507_307376d26c_z.jpg

https://news.ycombinator.com/item?id=27332340

DonHopkins on May 30, 2021

Here's the source code for Kent Pitman's "DOCTOR" in MACLISP, which was of course inspired by ELIZA. (Joseph Weizenbaum taught Kent Pitman LISP!)

https://github.com/PDP-10/its/blob/master/src/games/doc.102

And here's what happened when he (manually, by typing) hooked it up with Kenneth Colby's "PARRY" (the paranoid patient):

https://www.maclisp.info/pitmanual/funnies.html

>Parrying Programs

>I didn't write the original ELIZA program, although my Lisp class was taught by Joseph Weizenbaum, who did. I later wrote a very elaborate program of similar kind, which I just called DOCTOR, in order to play with some of the ideas.

>At some point, I noticed there was a program at Stanford called PARRY (the paranoid patient), by Kenneth Colby. I understand from Wikipedia's PARRY entry that Weizenbaum's ELIZA and PARRY were connected at one point, although I never saw that. I never linked PARRY with my DOCTOR directly, but I did once do it indirectly through a manual typist. Part of my record of this exchange was garbled, but this is a partial transcript, picking up in the middle. Mostly it just shows PARRY was a better patient than my DOCTOR program was a doctor.

>I have done light editing to remove the typos we made (rubbed out characters were echoed back in square brackets).

>Also, I couldn't find documentation to confirm this, but my belief has always been that the numeric values after each line are PARRY's level of Shame (SH), Anger (AN), Fear (FR), Disgust (DS), Insecurity (IN), and Joy (J).—KMP

[...]

;;; Notes about CLI interrupts and eval-in-other-lisp:

https://news.ycombinator.com/item?id=20267415

https://news.ycombinator.com/item?id=38061207

>Here's Kent Pitman's :TEACH;LISP from ITS, which is a MACLISP program that teaches you how to program in MACLISP. (That's "Man And Computer Lisp" from "Project MAC", not "Macintosh Lisp".)


Interlisp-D is quite fascinating even without the full environment.

The spaghetti stack implementation makes continuation type things comfortably implementable and also runtime introspection of program state.

It also has NLAMBDA as well as LAMBDA - the former is operative rather than applicative, i.e. it passes the argument expressions rather than evaluating them and passing the resulting values.

This means you can do macro-like things at runtime if you have the need, though unlike e.g. Kernel and other fexpr-based Lisps you don't get an environment object passed in - because Interlisp is dynamically rather than lexically scoped, a plain EVAL will DTRT so long as you haven't shadowed any of the relevant symbols.
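The operative/applicative distinction is easy to sketch with a toy evaluator in Python - this is an illustration of the idea only, with made-up names, not how Interlisp actually implements NLAMBDA; note how dynamic scoping (a single stack of bindings searched newest-first) is what lets a plain eval inside the operative see the caller's variables:

```python
# Toy dynamically-scoped evaluator sketching LAMBDA vs NLAMBDA.
# All names here are invented for illustration; real Interlisp differs.

DYNAMIC = [{}]  # stack of dynamic binding frames, searched newest-first

def lookup(name):
    for frame in reversed(DYNAMIC):
        if name in frame:
            return frame[name]
    raise NameError(name)

def evaluate(expr):
    if isinstance(expr, (int, float)):
        return expr                        # numbers are self-evaluating
    if isinstance(expr, str):
        return lookup(expr)                # variable reference
    op, *args = expr                       # (op arg1 arg2 ...)
    kind, params, body = lookup(op)
    if kind == 'lambda':                   # applicative: evaluate args first
        actuals = [evaluate(a) for a in args]
    else:                                  # 'nlambda', operative: pass the expressions
        actuals = args
    DYNAMIC.append(dict(zip(params, actuals)))
    try:
        return body()
    finally:
        DYNAMIC.pop()

# LAMBDA-style: receives evaluated values.
DYNAMIC[0]['double'] = ('lambda', ['x'], lambda: lookup('x') * 2)

# NLAMBDA-style: receives the unevaluated expression; a plain eval later
# still DTRT because the caller's bindings are dynamically visible.
DYNAMIC[0]['later'] = ('nlambda', ['e'], lambda: evaluate(lookup('e')))

DYNAMIC[0]['y'] = 10
print(evaluate(['later', 'y']))              # → 10
print(evaluate(['double', ['later', 'y']]))  # → 20
```

The point of the sketch: `later` gets the bare expression `'y'`, not its value, yet evaluating it later still finds `y` because lookup walks the dynamic stack rather than a captured lexical environment.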

(I have a couple of Interlisp docs, plus a couple Kernel docs, plus some other stuff archived under https://trout.me.uk/lisp/ if anybody's interested - there's probably better links elsewhere in the thread for the Interlisp things but I have a habit of keeping copies of stuff I know I'll want to refer back to)


> The spaghetti stack implementation makes continuation type things comfortably implementable and also runtime introspection of program state.

For a long time that spaghetti stack implementation was buggy because I don’t think it was really used. When I was at PARC working on 3-Lisp I wrote an emacs (being no fan of structure editing or the mouse) in Interlisp-D. I chose to make each command binding a closure (so you had all the proper mode context etc). That spaghetti light on the mouse cursor was basically perpetually on and the machine became unusable.

I showed Masinter. It seems like nobody had ever thought of creating thousands of closures before. The bug was eventually fixed but by then I’d chosen a different strategy.


I bet the bug was caused by a copy-and-pasta error!


I’m surprised given that the spaghetti stack is divinely inspired


Maybe the problem was that you can't tail call modulo cons a noodly append()age?


> Developers from Fuji Xerox wrote a portable VM in C to run the environment on different host platforms, called Maiko.

I'm always confused by the relationship of C and Lisp(s). Here the VM is written in C. Yet elsewhere there seem to be at least one good example of a Lisp compiler written in Lisp [0]. What was the reason for writing Maiko in C, versus Lisp "all the way"?

[0] "The first complete Lisp compiler, written in Lisp, was implemented in 1962 by Tim Hart and Mike Levin at MIT, and could be compiled by simply having an existing LISP interpreter interpret the compiler code, producing machine code output able to be executed at a 40-fold improvement in speed over that of the interpreter.[19] This compiler introduced the Lisp model of incremental compilation, in which compiled and interpreted functions can intermix freely. The language used in Hart and Levin's memo is much closer to modern Lisp style than McCarthy's earlier code. " https://en.wikipedia.org/wiki/Lisp_(programming_language)#Hi...


A Lisp and a compiler are two different things: Lisp is the whole thing and the Lisp compiler is just a component of a Lisp system.

Now, one could write a virtual machine in Lisp, but usually one would write it in C or assembler, since that's what squeezes more performance out of the hardware and makes interfacing to the operating system 'easier' (threads, calls into the OS, memory management, interrupts, error handling, etc.).

There are examples where (parts of) a virtual machine are written in Lisp. For example, the virtual-CPU emulator may be generated as C or assembler code from a Lisp program. There are also special versions of, say, the JVM written in Lisp to prove its correctness. Ideally one may want the core VM emulator written in assembler, to reduce it to the minimum number of hardware instructions per virtual machine instruction.

The example you cited from 1962 is a Lisp compiler written in Lisp, compiled by itself, running in a Lisp whose runtime was written in assembler (IIRC).
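The interpreter-vs-compiler relationship from that 1962 story can be seen in miniature: the same tiny expression language can be walked by an interpreter on every call, or "compiled" once into a host closure so the tree-dispatch cost is paid only at compile time. This is a toy sketch of the general idea, nothing to do with Hart and Levin's actual compiler:

```python
# Toy expression language: ('+', a, b), ('*', a, b), numbers, or the variable 'x'.
import operator

OPS = {'+': operator.add, '*': operator.mul}

def interpret(ast, x):
    # Walks the tree on every evaluation -- dispatch cost paid each time.
    if ast == 'x':
        return x
    if isinstance(ast, (int, float)):
        return ast
    op, lhs, rhs = ast
    return OPS[op](interpret(lhs, x), interpret(rhs, x))

def compile_expr(ast):
    # Walks the tree once, returning a host closure -- dispatch cost paid
    # at compile time, like compiling Lisp to machine code (very loosely).
    if ast == 'x':
        return lambda x: x
    if isinstance(ast, (int, float)):
        return lambda x: ast
    op, lhs, rhs = ast
    f, lf, rf = OPS[op], compile_expr(lhs), compile_expr(rhs)
    return lambda x: f(lf(x), rf(x))

expr = ('+', ('*', 'x', 'x'), 1)        # x*x + 1
compiled = compile_expr(expr)
print(interpret(expr, 3), compiled(3))  # → 10 10
```

Interpreted and compiled functions can intermix freely here, which is loosely the "incremental compilation" model the Wikipedia excerpt describes.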


That helps, thank you. Based on what I've seen with Smalltalk, and playing around with the online Interlisp environment [0], I was under the impression an underlying OS was not a necessity [1].

[0] https://online.interlisp.org/main

[1] Part of my confusion may stem from ideas like presented by Chuck Moore in Masterminds of Programming by Biancuzzi: "Operating systems are dauntingly complex and totally unnecessary. It’s a brilliant thing that Bill Gates has done in selling the world on the notion of operating systems. It’s probably the greatest con game the world has ever seen.

An operating system does absolutely nothing for you. As long as you had something—a subroutine called disk driver, a subroutine called some kind of communication support, in the modern world, it doesn’t do anything else. In fact, Windows spends a lot of time with overlays and disk management all [sic] stuff like that which are irrelevant. You’ve got gigabyte disks; you’ve got megabyte RAMs. The world has changed in a way that renders the operating system unnecessary.

What about device support?

Chuck: You have a subroutine for each device. That’s a library, not an operating system. Call the ones you need or load the ones you need."


Interlisp was developed at BBN on the PDP-10 under the TENEX operating system. Danny Bobrow had previously been at MIT and worked on MACLISP running on PDP-10s running the ITS OS. So that mode was and is perfectly normal.

When Danny went to PARC they started working on a port of Interlisp to the D machine hardware. The model adopted was that of smalltalk: a hermetic environment running on the bare iron, with everything (drivers, network stack etc) written in lisp. One big difference from the MIT world was that smalltalk and Interlisp were built around working in a world and checkpointing the entire machine state, rather than loading files into a base image.

PARC also had a lot of network-only RPC services (mail, filesystem, printing, etc) so each environment had its own implementation of talking to these services, and all its own UX. We’re talking late 70s here, some of it more sophisticated than what you can get today.


I should add that the D machines had writable control stores so these language environments (Interlisp-D, SmallTalk, and Cedar/Mesa) each had custom microcode. I wrote some microcode for InterLisp-D back when I was 20.



Fascinating.

> Genera is your whole environment; it encompasses what you normally think of as an operating system as well as everything else - system commands and all other activities. From where you look at it, there is no "top-level" controller or exec.


But it has a sophisticated process scheduler, several garbage collectors, complex memory management, various network stacks, implementations of file systems (local and remote), virtual memory paging, software installer, printer scheduler, namespace server for configuration of networked resources (users, networks, printers, hosts, ...), mail server and client, ...

It's not that it has "a subroutine" for a disk, but actually very extensive support for disks and file systems on disk.

It's just that everything runs in one shared memory space, incl. all applications. Probably not a winning way to design a networked OS in today's Internet environment.


> Probably not a winning way to design a networked OS in today's Internet environment.

I wonder.

I know this is safely in imagination world, but if I were to have a system like this connected to the Internet, and there were others with similar systems also connected to the Internet, I would guess requirements above and beyond what is already provided would be:

1) security / sandboxing

2) ease of sharing code

I’m pretty fuzzy on how #2 would happen. I spend so much time with Git these days that it’s hard to imagine anything else. #1 is actually less hard for me to imagine: “The key to Genera's intelligence is the sharing of knowledge and information among all activities.” I can imagine the routines responsible for this being extended to handle sandboxing. But then again I have a pretty good imagination, so maybe in reality this is actually the more difficult part to implement.


At that time one put stuff on a remote machine acting as a file server. One would centrally configure which stuff is where. The Lisp Machine could also act as a file server. It then knew users, files, directories, servers, access control lists, file versions, etc. To a non-Lisp Machine one used NFS; the Lisp Machine had its own remote file protocol called NFILE. One could share software also via tar files or via its own distribution format. Networked object stores were also being developed.

But that was all before encrypted network connections were used... we are talking about the 80s when TCP/IP just became a thing.

Today one would need to upgrade the network stack of a Lisp Machine to support something like TLS or use it only over VPNs...


>From where you look at it, there is no "top-level" controller or exec.

“Real systems have no top.”

Read that quote somewhere quite a while ago. :)

Searched just now and found:

https://softwarequotes.com/author/bertrand-meyer

which has the quote.


They have several.


Yes, I did understand the quote. It was not by me, but I could have said "no single top", if I had thought of it earlier :)


it'd be entirely possible to write the vm in lisp (there are plenty of good optimising lisp compilers, e.g. sbcl).

however, typically you'd want the vm to be easily portable / buildable in new environments, and that's much easier to achieve with c.

and it is likely just easier to write this kind of low-level code in c (however i know plenty of people who will gladly demonstrate otherwise).

consider that the jvm et al are also written in c(++), for much the same reasons of practicality.


The C abstract machine is basically an overgrown PDP-11 at this point and most modern hardware is designed with that in mind with GPU and vector hardware being notable exceptions — and notably not being especially amenable to programming in C.

It’s actually been an unfortunate and pernicious codependence IMHO.


It's more that it's easier to use C for two reasons. The first is that C is really popular and therefore pretty portable. It's a lingua franca. The other is that because the hosts are largely defined in C, it's easier to interact with. Of course, the host doesn't actually "speak C", it follows some form of ABI. But the reality is that implementing each ABI is non-trivial and you can avoid a lot of pain by just using the host's C compiler/linker/etc. that implements it for you.


Like, SBCL compiles and assembles directly to machine code, that's very much the Lisp way. But SBCL has a lot of C that's involved in getting the SBCL image running and interacting with the host.
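The "host speaks a C ABI, not C" point is visible from any high-level language: Python's ctypes implements the platform calling convention and reaches libc symbols directly, with no C compiler involved at call time. A minimal sketch (POSIX-specific - `CDLL(None)` dlopens the running process, where libc's symbols are visible):

```python
# Calling a C-ABI function without writing or compiling any C.
# ctypes does the argument marshalling per the platform ABI.
import ctypes

libc = ctypes.CDLL(None)  # POSIX trick: the process's own symbol table

# Declare the C signature of labs(long) so values marshal correctly.
libc.labs.argtypes = [ctypes.c_long]
libc.labs.restype = ctypes.c_long

print(libc.labs(-42))  # → 42
```

Which is exactly the parent's point: implementing each ABI by hand is non-trivial, so most language runtimes lean on the host's C toolchain and libraries instead.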


Here’s a short video showing the environment.

https://youtu.be/QDrhdsrmtAQ



I think people came to the conclusion in the mid 1980s that you couldn’t really get ahead with a specialized “LISP machine” compared to an advanced general purpose processor, particularly when you put caching, pipelining and superscalarity into the mix.

The genius of Common LISP was it had mechanical sympathy for the forthcoming ‘32-bit’ computers. Java was very much inspired by the CL spec: defining a rich, efficient, and implementable memory-managed system you could build over general purpose hardware with a lot of room for optimization, and documenting that system very well.


I read in the article that it was possible to virtualize Medley/Interlisp on MS-DOS. If getting ahead means moving from Lisp Machines to MS-DOS then it's questionable what direction things were moving in.

I get annoyed when I read things like how the Plan 9 system would seamlessly work across several heterogeneous machines, making it feel like one system, yet in 2023 I have to jump through hoops to access files on my computer from my phone and vice versa, even though both are on the same network.


> in 2023 I have jump through hoops to access files on my computer from my phone and vice versa yet both are in the same network

Syncthing is pretty good unless you use iOS. Apple products tend to only have interoperability and user freedom because they're forced to.


> The genius of Common LISP was it had mechanical sympathy for the forthcoming ‘32-bit’ computers

“Forthcoming” is a bit anachronistic.

By the time CL was being standardized in the early 80s 32 bit machines like the Vax were quite common, and there was plenty of experience from NIL and franzlisp. Also there was a decade of experience from the CADR which was a 32-bit machine.

And CL itself was based mainly on the CADR’s Lisp, a descendant of MACLISP, which was developed on 36-bit machines like the PDP-6/PDP-10/PDP-20 which was the original Lisp machine, and also the progenitor of Interlisp, that also fed CommonLisp. MACLISP also ran on Honeywell’s 36-bit hardware under Multics.


also the ibm 360 was 32-bit and shipped in 01965. it's a lot more similar to a vax or 68000 or 80386 than a cadr or pdp-10 is


The 360 had 32-bit registers but a 24-bit address space; the 370 (from 1971) made that a virtual address space. Both of those I would call a "24-bit" computer. In 1983 IBM came out with the 370-XA

https://en.wikipedia.org/wiki/IBM_System/370-XA

which I would describe as "32-bit". I think the VAX is a good example of a "32-bit" machine because they are similar in most respects to ARM/x86/RISC-V, particularly the virtual memory facility and how that relates to the OS, such that you can boot Linux on it

https://www.linux.com/news/linuxvax-porting-project-maybe-la...

The VAX was really common; our high school had one. But when the 386 hit the market we were all shocked that you could have your very own PC that would perform 5x faster than a certain VAX on certain benchmarks, and before long it was 30x faster.

The strange thing about the PC was that it took a very long time for a 32-bit OS to become mainstream: the 386 was out in 1986, but it wasn't until Windows 95 that most people were running a 32-bit OS. I had a Linux computer in 1993, though, and dragged home a free VT-100 from the math department and a free Commodore 128 from the undergraduate physics lab, both of which I used to log into it.


This is why I didn’t include the D machines in that list either: they were really 16-bit architectures with some 32-bit data paths in the CPU.


i don't think it really matters that you couldn't address more than 16 mebibytes, since you couldn't afford anywhere close to that much memory at the time anyway

the 68000 (used in the macintosh, jackintosh, apollo, sun-1, and amiga) and 68010 (used in the sun-2 and the unix pc) also used 24-bit addressing, and every arm before the arm6 had only 26-bit addressing†, so in those ways I think the 24-bit nature of the 360 was actually closer to the usual suspects in the attack of the killer micros than the vax was

the 360 model 67 had 4-kibibyte pages and segments, almost exactly like the 80386, but it shipped in 01965

a much bigger difference is that the 360, 68000, vax, 80386, arm, and risc-v all used byte-oriented addressing, while the pdp-10 and cadr were word-addressed machines, like mix, the cdc 6600, the crays, the pdp-8, or the greenarrays ga144. you couldn't take the address of a byte in memory on a pdp-10 or cadr. to me this is a much bigger issue for 'mechanical sympathy' than whether you can address four times as much memory as you can afford, 16 times as much, or 1024 times as much

(data general, predictably, found a way to combine the worst of both worlds; on the nova you could access memory in a byte-addressed way, but only if it was in the first half of the 16-bit address space. the upper half could only be addressed as 16-bit words)
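The word-addressing point is easy to make concrete: on a 36-bit word-addressed machine a "byte address" has to be a (word, position-within-word) pair, because there is no flat byte address the hardware understands. A rough Python sketch of 9-bit bytes packed into 36-bit words (PDP-10 byte pointers were really a special instruction format with position/size fields; this only illustrates the addressing arithmetic):

```python
# Sketch: addressing 9-bit bytes inside 36-bit words, PDP-10 style.
WORD_BITS, BYTE_BITS = 36, 9
BYTES_PER_WORD = WORD_BITS // BYTE_BITS  # 4

memory = [0] * 16  # sixteen 36-bit words

def store_byte(word_idx, byte_idx, value):
    # Bytes are packed left-to-right within the word.
    shift = WORD_BITS - BYTE_BITS * (byte_idx + 1)
    mask = ((1 << BYTE_BITS) - 1) << shift
    memory[word_idx] = (memory[word_idx] & ~mask) | ((value & 0o777) << shift)

def load_byte(word_idx, byte_idx):
    shift = WORD_BITS - BYTE_BITS * (byte_idx + 1)
    return (memory[word_idx] >> shift) & 0o777

store_byte(2, 1, 0o177)
print(load_byte(2, 1))  # → 127
```

On a byte-addressed machine the shift-and-mask dance disappears: the address of a byte is just a number, which is the 'mechanical sympathy' being argued for above.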

the timeline of common lisp standardization and relevant 32-bit machines is maybe something like

- ibm 360 (01965)

- vax-11/780 (01977)

- cadr lispm (01978)

- 68000 (01979)

- common lisp effort starts at arpa (01981)

- apollo dn100 (01981? using two 68000s)

- berkeley risc-i (01981, fabbed by mosis and published but not sold)

- stanford mips (01981, likewise)

- sun-1 (01982)

- gls presents common lisp at acm lisp symposium (01982)

- sun-2 (01983)

- 68020 (01984, removal of 24-bit addressing limit)

- ti explorer (01984, descended from the cadr, using a 32-bit nubus; https://en.wikipedia.org/wiki/Transistor_count says the cpu wasn't 32-bit until 01987, but i think that's maybe wrong)

- cltl1 (01984)

- macintosh (01984, including the more expensive model with 512 kibibytes of ram)

- sun-3 (01985, using a 68020)

- jackintosh (01985)

- 80386 (01986)

- sparc (01986)

- mips r2000 (01986)

- arm2 (01986)

- compaq deskpro 386 (01986, the first mass-market 80386)

- ibm rt pc (01986, the first product using the romp processor theoretically available in 01981)

- zilog z80000 (01986)

- 68030 (01987)

- amd 29000 (01988)

- sun-4 (01988, first sun using sparc)

- arm3 (01989, still with the 26-bit limit)

- 80486 (01989)

- cltl2 published (01990)

- ansi standardizes common lisp (01994)

† even the arm6 and arm7 supported 26-bit addressing for backward compatibility, but as far as i can tell that's not really any different from the 80386 and vax supporting 16-bit addressing; i think it was a different processor mode

— ⁂ —

i think the 80386 took a long time to be mainstream because it was expensive as fuck, so people kept buying 80286 and even 8088 systems well into the 01990s. this meant that if you shipped software that required a 386, you were drastically limiting your market, so, for a long time, most 386es got used as just faster 8086es. its horrific boot sequence and bletcherous virtual memory design probably played a significant role in slowing the advent of 32-bit operating systems, too


I think you mean:

- ibm 0360 (01965)

- vax-011/0780 (01977)

- 068000 (01979)


i 0admit i was 0tempted


Looks like you got burned by the y2k problem and now you're digging in for the long run.


well, the slightly less short run, anyway


Just that the CL spec says literally nothing about a VM. It's a language spec, not an implementation spec.


> Common Lisp is Howl's Moving Castle.

Common Lisp isn't that big of a moving target; it's also a standard, which means the standard isn't moving. In the vehicular comparison, I'd say it's more like the Filipino jeepney (big, heavy, shiny, and bullies its way through heavy traffic).


The Conrad Barski lisp book used pictures of sheep and wolves to describe Scheme, Lisp, and Haskell iirc. Also cavemen to describe Fortran I think lol. I think he was trying to show scheme as being more elegant than common lisp, but also significantly less practical.


I would be able to speak to the allegedly “impractical” nature of Scheme if only I knew why Cisco hired Kent Dybvig, the principal developer of Chez Scheme. I would also like to know why Beckman Coulter Life Sciences supported the development of Swish, an extension of Chez Scheme that provides Erlang-inspired message passing.

Gambit and Scheme have been used for health applications. Here's one paper on that subject, from 2013: https://ecem.ece.ubc.ca/~cpetersen/lambdanative_icfp13.pdf

I would mention the use of GNU Guile to build Guix, which has had considerable uptake. Guile has been used to build other Linux programs.

Admittedly, Scheme isn't widely used. But impractical? No!


The impression of being 'impractical' came from the Scheme reports, which for a long time, with the exception of the controversial R6RS, only standardized a relatively simple/limited language (example: no error handling).


…if only I knew why Cisco hired Kent Dybvig…

Nobody really knows, but my suspicion is that it's not for moving the Cisco codebase to Scheme.


why Beckman Coulter Life Sciences supported the development of Swish, an extension of Chez Scheme

Having worked at Beckman¹ from 1978-2022, I strongly suspect that the support of a Scheme-based system was due to the educational background of several of the senior developers in that group of the Life Sciences software development team.

¹ Beckman Instruments -> SmithKline Beckman -> Beckman Instruments -> Beckman Coulter -> Beckman Coulter acquired by Danaher 2011 -> Beckman Coulter Capillary Electrophoresis business moved (2013) to AB Sciex (also a Danaher company)


Probably impracticality via ecosystem. Chez Scheme is fast, but more limited library-wise than Common Lisp and much more limited than Python in that respect. Note that I like Chez Scheme and think it's really cool.


I feel like Racket sorta changed the "practicality" story in many respects for Scheme. Not in the sense that Racket itself is practical, but the macro system is really good and inspired other Schemes like Gerbil that are more practical.


The Notecards application is interesting.


[Article author here]

It is. The whole thing is.



Yikes! Merged hither now. Thanks!



