
To what extent is the CL condition system inherently tied to Lisp(s)? Is there anything about it that makes it a natural fit for Lisp but not for other languages?

Macros, for example, are a natural fit for Lisp because of the parentheses. It would be difficult to add Lisp-style macros to a language like Python because Python doesn't have Lisp parentheses. In contrast, there's nothing about multiple namespaces that is particularly tied to Lisp. Common Lisp and Emacs Lisp have multiple namespaces, but Scheme doesn't. Python doesn't have them, but it just as easily could.

So is the condition system more like macros or more like multiple namespaces?




From Appendix E:

> It is also noteworthy that this aspect of the condition system is fully independent from Lisp’s homoiconicity; rather, it is a consequence of the way other programming languages are designed. For instance, when one divides by zero in a Java program, then there is nothing carved in stone which would prevent the language from winding the stack further and executing some code that will analyze the dynamic environment in which the error happened, calling all error handlers found in the dynamic environment, and then—if no handler transferred control outside the error site—either entering an interactive debugger of some sort or giving up and crashing. However, Java goes a different way: it immediately destroys the stack by throwing an exception. That is a design choice which was made by the Java creators; Lisp’s homoiconicity has nothing to do with this fact, as can be demonstrated by the multiple independent implementations of condition systems in non-homoiconic languages that we have mentioned earlier.


Oh golly. To think that people would quote my own words to answer Hacker News questions...

(Thanks for the assist!)


Why did Java's creators decide to have it do such a thing? Performance? Or just an "immediately fail and let the author know" philosophy?


I actually don't know this. The Java creators were well aware of Common Lisp when they were working on Java (the famous quote from Guy Steele: "We were after the C++ programmers. We managed to drag a lot of them about halfway to Lisp."), but for whatever reason they adopted the stack-destroying approach instead of the Lisp one.


The original language was designed to run on small devices and I don't think the designers thought of it as a useful feature for their kind of application domain. Steele wasn't even there at that time. Gosling also had only implemented a weak variant of Lisp (Mocklisp) before.


I'm not very knowledgeable, but Java was designed as a compromise. Anonymous inner classes, for instance, were put in as faux lambdas, AFAICR, but never really emphasized. Not surprised they left out a condition system... the exceptions were already a step up?


What phoe-krk mentions is indeed the main thing that blocks “real” integration into existing languages with exceptions: they always unwind before executing the handler. (JWZ even complained about this while writing about Java.)

To expand on their comment: if `throw` in Java or C++ is similar to `error` in CL (or to CL's `throw` in some special cases), a `catch` clause in Java or C++ is equivalent to a label, where the `try` binds a handler that exits to it immediately. There's no equivalent of putting code in the handler other than a single unwind-and-jump, and there's no equivalent of restarts.

In CL, a condition handler gets called on top of the existing stack and can inspect what's going on before choosing where to exit to. Other functions in the call stack can provide alternative exits (restarts), like “continue processing anyway” or “substitute a placeholder value”; these are dynamically named, rather than lexically bound like a Java/C++ `catch` clause. So there's a lot more possible decoupling, at least in theory. The equivalent of `finally`/destructors is `unwind-protect`, which has to interoperate with the condition mechanism but doesn't deal with conditions itself.
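
To make that concrete, here is a small sketch using standard CL operators; the condition, restart, and function names are made up for illustration. The handler runs on top of FETCH-RECORD's frame and only then picks a restart to transfer control to.

    ;; A hypothetical "record is missing" error with two restarts.
    (define-condition record-missing (error)
      ((key :initarg :key :reader record-missing-key)))

    (defun fetch-record (key)
      (restart-case (error 'record-missing :key key)
        ;; Alternative exits offered by the code near the error site:
        (use-placeholder ()
          :report "Substitute a placeholder value."
          (list :placeholder key))
        (skip-record ()
          :report "Skip this record."
          nil)))

    (defun process ()
      (handler-bind ((record-missing
                       (lambda (c)
                         ;; Called on top of the existing stack, before any
                         ;; unwinding; inspect the condition, then choose a
                         ;; dynamically named restart.
                         (format t "~&Missing ~S; using a placeholder.~%"
                                 (record-missing-key c))
                         (invoke-restart 'use-placeholder))))
        (fetch-record :foo)))

    ;; (process) prints the message and returns (:PLACEHOLDER :FOO).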

In C++ or Java, you could implement the restarts with a (thread-local) stack of restart descriptions plus try/finally or constructor/destructor, and the same for handlers, and then do your nonlocal exits with specialized throwables. I did something similar in Lua, in fact, while trying to extend it into a fancier language. But a “normal” `throw` will bypass all of that. That's not dangerous if you do the unwind-protects properly, but none of your existing libraries will be built for it, and the results will be kind of anemic.

In the Java-style objects+throw/catch world, similar things can be achieved by toggling “what to do if X happens” state or plumbing callback pointers through the object graph beforehand, which is similar but more ad-hoc, and possibly harder to add to existing systems. That said, the CL style proper is very tied to the call stack, which can also make things tricky.


In C++, you can override the __cxa_throw() function to implement a fully-fledged condition system that calls handlers on top of the stack instead of unwinding first. Call the real __cxa_throw if there's no dynamic handler.

To top it off, you can provide a "restart" class that this new __cxa_throw treats like ordinary C++ exceptions, and throw an instance of it to perform the "exit".

I have no idea if this hack would comply with the standard, but it works with GCC.

You'd be missing COMPUTE-RESTARTS, however, so there'd be no asking the user where to jump.


The Itanium C++ ABI mentions something similar:

> A two-phase exception-handling model is not strictly necessary to implement C++ language semantics, but it does provide some benefits. For example, the first phase allows an exception-handling mechanism to dismiss an exception before stack unwinding begins, which allows resumptive exception handling (correcting the exceptional condition and resuming execution at the point where it was raised). While C++ does not support resumptive exception handling, other languages do, and the two-phase model allows C++ to coexist with those languages on the stack.

http://itanium-cxx-abi.github.io/cxx-abi/abi-eh.html


I am aware that there have historically been other languages besides Lisp that allow resumptive exception handling (PL/I is a historical example), but I'm unaware of any modern language besides Lisp that does it.

What would C++ be coexisting with on non-mainframe hardware?


Could you link me to any sources for that GCC behavior?

Also, if we can have dynamically established handlers, then we can also have dynamically established restarts (even if by means of a dynamic variable implemented via a lexical variable plus a destructor). That would give us a COMPUTE-RESTARTS of our own, which we could call arbitrarily, as well as the ability to invoke individual restarts.
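
For illustration, here is a minimal sketch of that idea in Lisp terms, using only a dynamic variable and CATCH/THROW; *MY-RESTARTS*, WITH-MY-RESTART, and MY-COMPUTE-RESTARTS are made-up names, and a destructor-based C++ version would push and pop the entries in the same way.

    (defvar *my-restarts* '()
      "Dynamically scoped list of (name . function) restart entries.")

    (defmacro with-my-restart ((name lambda-list &body restart-body) &body body)
      "Establish a named restart around BODY; invoking it unwinds to this point."
      (let ((tag (gensym "RESTART-EXIT")))
        `(catch ',tag
           (let ((*my-restarts*
                   (cons (cons ',name
                               (lambda ,lambda-list
                                 ;; Unwind to the establishing frame with the result.
                                 (throw ',tag (progn ,@restart-body))))
                         *my-restarts*)))
             ,@body))))

    (defun my-compute-restarts ()
      "Return the restarts visible in the current dynamic environment."
      *my-restarts*)

    (defun my-invoke-restart (name &rest args)
      (let ((entry (assoc name (my-compute-restarts))))
        (if entry
            (apply (cdr entry) args)
            (error "No restart named ~S." name))))

    ;; (with-my-restart (use-default () 42)
    ;;   (+ 1 (my-invoke-restart 'use-default)))
    ;; => 42   ; the (+ 1 ...) is abandoned; control unwinds to the restart's frame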


This isn't about GCC, but GCC's behavior seems to be compatible:

https://libcxxabi.llvm.org/spec.html


Thanks for the assist! I have one comment:

> The equivalent of `finally`/destructors is `unwind-protect`, which has to interoperate with the condition mechanism but doesn't deal with conditions itself.

It actually doesn't need to interoperate with the condition system; it has to interoperate with the stack-unwinding primitives - `go`/`tagbody`, `return-from`/`block`, and `throw`/`catch` - that are one of the foundations of the condition system.

If it works with those, then it works with a condition system, since all control flow that happens inside the condition system is a derivative of those primitive operators.
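
A tiny illustration of that point: UNWIND-PROTECT only needs to cooperate with the primitive unwinding operators, here THROW/CATCH, and it then automatically composes with anything built on top of them.

    (defun cleanup-demo ()
      (catch 'out
        (unwind-protect
             (throw 'out :done)             ; any non-local transfer of control...
          (format t "~&Cleanup runs.~%")))) ; ...still triggers the cleanup form

    ;; (cleanup-demo) prints "Cleanup runs." and returns :DONE.
    ;; Handlers and restarts ultimately transfer control via these same
    ;; primitives, which is why UNWIND-PROTECT composes with the condition system.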


Ah, yes! I was treating those as effectively “part of” the condition mechanism for the comparative explanation, but you're right that that's misleading in CL. Thanks for the correction, or, shall I say, good catch. :-)


> good catch

Ouch. Kudos for the pun, that was truly awful. :D


It is the latter.

It is possible to implement a condition system on top of any language that has dynamic variables, a `finally`-style construct and some mechanism for unwinding the stack. Since dynamic variables are implementable on top of lexical variables and `finally`, it's basically just about unwinding the stack and `finally`.

The main issue is how a condition system would fit with an existing exception system, which likely works by immediately unwinding the stack rather than allowing it to be wound further; that's the case e.g. in Java or C++.
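
As a minimal sketch of the first claim, here is a toy handler mechanism built only from those ingredients - a dynamic variable, UNWIND-PROTECT-style cleanup, and CATCH/THROW for unwinding - without touching CL's built-in condition system; MY-HANDLER-BIND and MY-SIGNAL are made-up names.

    (defvar *handlers* '()
      "Dynamically scoped stack of (predicate . function) handler entries.")

    (defmacro my-handler-bind (bindings &body body)
      "Run BODY with the handlers in BINDINGS pushed onto *HANDLERS*.
    Each binding is (PREDICATE FUNCTION)."
      `(let ((*handlers* (list* ,@(loop for (pred fn) in bindings
                                        collect `(cons ,pred ,fn))
                                *handlers*)))
         ,@body))

    (defun my-signal (condition)
      "Call matching handlers on top of the current stack, most recent first.
    A handler may decline (return normally) or transfer control, e.g. via THROW."
      (loop for (pred . fn) in *handlers*
            when (funcall pred condition)
              do (funcall fn condition))
      ;; No handler transferred control: give up (a real system might debug here).
      (format t "~&Unhandled condition: ~S~%" condition))

    (defun demo ()
      (catch 'use-default            ; a CATCH tag plays the role of an exit point
        (my-handler-bind (((lambda (c) (eq c :missing-value))
                           (lambda (c)
                             (declare (ignore c))
                             ;; Decide, on top of the stack, to unwind.
                             (throw 'use-default 42))))
          (unwind-protect
               (my-signal :missing-value)
            (format t "~&Cleaning up.~%"))))) ; FINALLY-style cleanup still runs

    ;; (demo) prints "Cleaning up." and returns 42.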


Isn't it possible to do in any system with CPS as well? I've not tried it but I would expect to be able to have a "conditions system" using the CPS monad in Haskell, for example. And Haskell doesn't even have variables the way most programmers think of them.


I haven't explored the CPS topic deeply, so I cannot really answer. As far as I understand, CPS preserves the state of the program or stack information by storing it in closures that are ready to be called at any time, whereas a condition system simply never unwinds the stack in the first place: handlers execute on top of the already existing stack, ready both to return control to the signaling code and to transfer it somewhere up the stack.

By its nature, a condition system is simply a means of executing dynamically provided code, including transfers of control. I think a closer term would be algebraic effects, which seem to be the equivalent of a condition system in a strictly typed, strictly functional world.


CPS can be kind of thought of like a normal programming language with the modification that every function takes an additional parameter "the rest of the program" and calls this instead of "return". Of course once this mechanism is in your language you might have multiple "rest of the program" parameters which the function can pick between.
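
A toy Lisp rendering of that idea for concreteness (the names are made up for illustration): a function written in this style takes its "rest of the program" continuations explicitly and can choose between a normal one and a handler-like one.

    (defun safe-divide (x y on-ok on-division-by-zero)
      (if (zerop y)
          (funcall on-division-by-zero x y)   ; "handler" continuation
          (funcall on-ok (/ x y))))           ; normal continuation

    ;; (safe-divide 10 0
    ;;              (lambda (r) (list :result r))
    ;;              (lambda (x y) (declare (ignore x y)) (list :result :infinity)))
    ;; => (:RESULT :INFINITY)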

I found this old thread talking a bit about the lisp condition system [1] which also mentions implementing it in a typed manner.

I used Lisp for a while myself as my favourite language (the condition system was part of the reason), but I ended up switching to Haskell because it has much of the power macros provide, coupled with a very strong type system. That's why it occurred to me that CPS can probably implement something effectively equivalent to a condition system.

[1] http://lambda-the-ultimate.org/node/1544


I see. I know that it is possible to transform primitive CL control flow operators into CPS, as shown at https://gitlab.common-lisp.net/cl-cont/cl-cont/-/blob/master..., so I assume that it is also possible to translate a condition system into CPS as a derivative of those operators, and then possibly optimize it further to take CPS-specific code traits into account.


In principle most other compiled languages could have macros that manipulate the abstract syntax tree. Lisp makes it easy since there is little separation between the source code and AST, but Scala now has macros despite using totally different syntax.


Macros for Python have been recently proposed [1], and oh boy, aren't they ugly!

[1] https://www.python.org/dev/peps/pep-0638/


I'm glad they've finally been proposed, as ugly as they need to be in Python. It's not even about full homoiconicity; a language which is capable of easily understanding its own symbolic representation along with quote/unquote, as opposed to the direct AST representation, is capable of having somewhat bearable macros. See Elixir for a good example, e.g. at https://elixir-lang.org/getting-started/meta/macros.html.




