Chez Scheme as the Racket VM (groups.google.com)
218 points by nickmain on Feb 15, 2017 | 91 comments



I'm curious why people don't prefer Chez Scheme to Racket if it is so fast and can make real binaries (I think, anyway). Maybe because until recently it was proprietary. Racket is cool, but I already have Python, which is similar from a performance perspective. Chez with the Racket ecosystem might be worth a switch.


Is Python really that similar from a performance perspective? The highly scientific benchmarks game (*cough*) seems to suggest Racket is usually an order of magnitude faster:

http://benchmarksgame.alioth.debian.org/u64q/compare.php?lan...


Racket is a lot faster. Python is generally slower than Guile, which is a lot slower than Racket.

According to my unscientific benchmarks, Racket is in the same ballpark as, but still slower than, C# on Mono, unless the problem lends itself well to an imperative style.

Edit: PyPy, on the other hand, is on par with Racket, if the Project Euler forums are a good source.


Sooth that nasty cough. Show us your preferred alternative to the benchmarks game.


Speed is probably not your most pressing concern if you're using Scheme, I think. Moreover, Racket has the best ecosystem out of all the schemes, with lots of high quality and maintained libraries and a massive standard library, and it's not only a language, but also a platform for creating DSLs fully integrated with its IDE, DrRacket. Also, Racket can create standalone binaries (though they don't make programs any faster).
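For the curious, the standalone-binary workflow goes through the `raco` command-line tool; a minimal sketch, assuming a module named `hello.rkt` that you want to ship:

```shell
# Compile hello.rkt into a standalone executable named `hello`.
raco exe -o hello hello.rkt

# Bundle the executable with the runtime libraries it needs into a
# directory that can be copied to machines without Racket installed.
raco distribute hello-dist hello
```

The result is a launcher plus embedded bytecode and runtime, which is why it doesn't run any faster than the same module under `racket`.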


And in terms of speed, Racket is generally quite a lot faster than Python. I remember the days when people almost assumed, rhetorically, that poor execution speed meant higher-level abstractions that translated to greater developer productivity. Slow execution practically became a fitness function (I'm looking at the commentary during the early days of Ruby and Rails here).

But with Racket you've got high-level, comparable-to-Python abstraction combined with way better performance today, and even better performance tomorrow. Plus as a bonus, the typed story is good.

EDIT: In summary, even if performance isn't your objective, you're still punching well above the weight of comparably high-level languages like Python.


> But with Racket you've got high-level, comparable-to-Python abstraction

Racket's abstraction is indeed high level, but in no way comparable to Python. Racket's homoiconicity and syntactic abstraction put it leagues above Python in that regard.
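A small sketch of that gap (illustrative names only): in Racket, code is literally a data structure you can take apart, and a new syntactic form is a few lines away.

```racket
#lang racket

;; Homoiconicity: a quoted expression is an ordinary list.
(define expr '(+ 1 (* 2 3)))
(first expr)                      ; => '+
(eval expr (make-base-namespace)) ; => 7

;; Syntactic abstraction: a new conditional form, `unless-zero`,
;; something Python cannot express without changing the interpreter.
(define-syntax-rule (unless-zero n body ...)
  (if (zero? n) (void) (begin body ...)))

(unless-zero 5 (displayln "nonzero")) ; prints "nonzero"
```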


> Plus as a bonus, the typed story is good.

Seriously considering Typed Racket at this point. I like Python, but lately I've been bumping up against the limitations of its type system. (There's only so much you can say about a type in Python.)
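For a flavor of what Typed Racket can say that Python's annotations (circa this thread) cannot, here is a minimal sketch using occurrence typing, where the type of `x` narrows inside each branch:

```racket
#lang typed/racket

;; The union type (U String Symbol) is narrowed by the string? test:
;; in the then-branch x : String, in the else-branch x : Symbol.
(: describe (-> (U String Symbol) String))
(define (describe x)
  (if (string? x)
      (string-append "string: " x)
      (string-append "symbol: " (symbol->string x))))

(describe 'foo) ; => "symbol: foo"
```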


How will it get even better besides occasional tweaks here and there?


The original article describes moving from Racket's present VM to the Chez Scheme VM. As linked elsewhere in this thread, the performance benchmarks for Chez Scheme are remarkable. My other comment muses about the possibility that Racket will be able to capitalize on the performance gains afforded by moving to a much faster VM.


> and can make real binaries (I think anyway)

No, it cannot currently. It makes boot files which still need to be launched with scheme / petite.

The build + library/module system with Chez is also pretty unusable. If you want to write Scheme for a system where you deploy an entire OS, it's not too much to work around. If you're trying to write a one-off binary, you should go with Chicken.


There's a pull request that could effectively fix this by finally exposing the Chez Scheme C library. Then you could finally write custom boot kernels. I pinged it recently but got no update. I might try resubmitting it.

I actually have a full demonstration of using Chez to build a full "Unix application" that works just as you would expect. It comes with complete Chez support: profile-guided rebuilds, coverage support, C extensions, some basic test stuff, a custom C boot kernel, and a utility for portable .tar.gz builds. It even correctly copies out the boot files for a redistributable build.

I could have extended this to "unpack the boot file out of my own ELF executable" for a truly reusable binary, but that seemed like overkill. Also, if you keep the boot files, it remains possible to drop into a Chez prompt and load your boot file from there for early debugging, which can be convenient, and I didn't want to spend more time on that.

But, uhhhh, you're right if you want a one off binary, Chicken or something is way better. I spent like a week working on that skeleton so I could start working on my application itself. And my Makefile to support all this is extremely intense. But it works pretty well now and is very reusable and generic.

I'll get around to releasing it now I suppose!


Please let me know if you do get around to releasing it!

I think scheme is woefully underrated as something that can be used for "real work", especially with something like chez. You can find my email or other contact info through my bio, I'm dead serious!


I have had ok luck with using Gambit for standalone executables


How big is "hello world"?


    (display "Hello, World!\n")
Comes in at 400 KB for me under Gambit.

Gambit makes fast, small binaries. Unfortunately you pay for that with a lack of libraries.


Now that is pretty nice. Something like that should always be in the KB range, I think (preferred, anyway).


In my own restricted view (I haven't played with many Scheme implementations), Racket has some serious non-strictly-technical winning points that make it feasible for non-toy applications:

* Well-written, comprehensive language and package documentation

* Tooling

* Included libraries (example: I didn't expect Racket to have libraries for IMAP... but it's there, in the standard package)

A nice development would be same-process parallelism. AFAIK, at the moment you can have concurrent "threads" within the same process, but you need to spawn multiple Racket VM processes to get parallelism (this is done "under the hood" by the Racket VM; they call these things "places").
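A minimal sketch of the places API (each place is, under the hood, a separate instance of the Racket runtime, communicating over channels rather than shared memory):

```racket
#lang racket

;; Spawn a place that squares whatever number it receives.
(define p
  (place ch
    (define n (place-channel-get ch))
    (place-channel-put ch (* n n))))

(place-channel-put p 7)
(place-channel-get p) ; => 49
```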


racket is just a pure pleasure to use as a language, more so than either scheme or python.


I thought Racket was a scheme implementation?


Scheme (as in the R5RS spec and earlier) is a very minimal language specification. This means that all practical Scheme implementations add features not present in the specification. In some areas the Scheme implementors agree - and in others, not so much. Racket has many features not covered by R6RS/R7RS. However, due to the module and macro system that enables you to create languages (which compile to Racket modules), it is possible for Racket to provide a language compatible with Scheme. That is: Racket includes an implementation of Scheme.


It was, but then it diverged from R6RS and became its own language. One big difference is that the cons cell is immutable by default (which means immutable lists), whereas Scheme has set-car! and set-cdr!.

The racket team wrote a bit about it here: http://racket-lang.org/new-name.html
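Concretely, mutable pairs live in a separate type with their own operations, so ordinary lists stay immutable:

```racket
#lang racket

;; Ordinary pairs are immutable; there is no set-car! on them.
(define xs (list 1 2 3))

;; Mutable pairs are a distinct type: mcons, set-mcar!, set-mcdr!.
(define m (mcons 1 (mcons 2 '())))
(set-mcar! m 0)
(mcar m) ; => 0
```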


The Racket language does not follow any of the Reports, so it's technically not a standards-compliant Scheme. On the other hand, the Racket platform supports both R5RS and R6RS out of the box, and there's a library for R7RS support.
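So a single Racket installation can host report Scheme directly; for example, a module written in the built-in R5RS language gets mutable pairs back:

```racket
#lang r5rs

; Inside an r5rs module, pairs are mutable and set-car! exists,
; as the report requires.
(define p (cons 1 2))
(set-car! p 0)
(display (car p)) ; prints 0
(newline)
```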


They're doing a lot of research, so their personal views on how to leverage the Lisp/Scheme heritage show. For instance, they go deep into eDSLs.


Raw performance isn't all there is to a programming language. Racket is so much more flexible and beautifully designed than Python.


Agreed, but if Python is your main weapon of choice, one will usually want to invest next in something really fast to bridge the gap. I wasn't aware Racket is that much faster....I'd like to see some benchmarks.


I tend to think of Racket as an alternative to Python, rather than a complement to it. C and Rust better fit the latter description.

Racket is faster than CPython, but that isn't anywhere near the top of my list of reasons to use Racket instead of Python. When speed is a priority, Rust and MLton (Standard ML) are both much faster than Racket. But:

(0) Racket is much more malleable than either Rust or SML. Rather than bash your head trying to model your problem domain in the existing language, you can redefine the language until it's the ideal tool for your problem domain.

(1) Unlike other languages that also advertise malleability (Common Lisp, Smalltalk, etc.), Racket is malleable in principled ways, so you can define abstractions without figuratively stomping on abstractions defined by other people (which you might also be interested in using).


> Unlike other languages that also advertise malleability (Common Lisp, Smalltalk, etc.), Racket is malleable in principled ways, so you can define abstractions without figuratively stomping on abstractions defined by other people (which you might also be interested in using).

Just like the Common Lisp Object System Meta-object protocol, readtables, symbol packages, ...


I prefer defining things to tweaking them, so I don't find dynamic metaprogramming (CLOS) to be a good enough substitute for static metaprogramming (macros), and Racket's macro system is light years ahead of Common Lisp's. YMMV, of course.


I don't think macros are a good way to implement or even extend object-oriented languages. So I don't think static macroprogramming is a good substitute for a meta-object system.


Give me a compelling reason to design programs in an object-oriented way. Just to be clear, I don't find the ability to subvert existing definitions particularly attractive, because it undermines my ultimate goal to actually prove things about my programs.

---

@lispm, I have to reply here because I'm temporarily unable to make new posts.

> A large share of contemporary software is written in an object-oriented way.

Sure, but how does that contradict my original assertion that Racket's mechanisms for extending and redefining the language are more principled than either Common Lisp's or Smalltalk's?

---

> Could it be that you just don't like/use/need/want the object-oriented ways to extend those languages and thus these mechanisms are not 'principled'?

You're right that there's an aesthetic component: I don't have much taste for object-orientation. But this isn't just me being close-minded: I would take object-orientation far more seriously if someone could provide an interpretation of object-oriented programs as mathematical objects with nice properties.

> How are Racket's mechanisms for extending and redefining the language more principled

Racket's macro system is based on a theory of how to macro-expand in a capture-avoiding way, ruling out by construction errors that are very difficult to debug when doing similar things in Common Lisp. And macro systems are more principled than MOPs in that macros are purely static devices: you (re)define the language first, and you use it later.
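A standard illustration of that capture avoidance: the temporary introduced by this macro cannot collide with a user variable of the same name, whereas a naive Common Lisp defmacro needs explicit gensym discipline to get the same guarantee.

```racket
#lang racket

;; swap! expands into a let that binds tmp, yet hygiene keeps that
;; tmp distinct from any tmp at the use site.
(define-syntax-rule (swap! a b)
  (let ([tmp a])
    (set! a b)
    (set! b tmp)))

(define tmp 1)
(define other 2)
(swap! tmp other)
(list tmp other) ; => '(2 1)
```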

> Especially since Racket provides some of that, too -> Swindle.

Object-orientation in Racket is completely opt-in. You don't have to use it if you don't want to, and, in fact, I just don't use it...

> Is the CLOS MOP not principled? Why?

... OTOH, in Common Lisp it's far more pervasive. Everything is an object and has a class, and classes can be tampered with in arbitrary ways, precisely using the MOP. How can you then prove anything interesting about how instances of a class will behave under all circumstances?

The essence of abstraction is to ignore implementation details, and concentrate on, well, abstract properties. A MOP achieves the opposite thing: it makes all implementation details available everywhere, complicating the problem of abstraction enforcement.


> Sure, but how does that contradict my original assertion that Racket's mechanisms for extending and redefining the language are more principled than either Common Lisp's or Smalltalk's?

Your original claim was "Unlike other languages that also advertise malleability (Common Lisp, Smalltalk, etc.), Racket is malleable in principled ways". Now you added 'more'. We can discuss which you or me like more, but you claimed that Smalltalk and Common Lisp can't be extended in 'principled' ways. Could it be that you just don't like/use/need/want the object-oriented ways to extend those languages and thus these mechanisms are not 'principled'?

How are Racket's mechanisms for extending and redefining the language more principled than for example Common Lisp's CLOS and the Meta-object protocol for CLOS? Especially since Racket provides some of that, too -> Swindle.

I'm using for example LispWorks, a Common Lisp, which uses CLOS throughout to make the language flexible and extensible.

Is the CLOS MOP not principled? Why?


> Everything is an object and has a class, and classes can be tampered with in arbitrary ways, precisely using the MOP.

CLOS provides specific and well-designed mechanisms to extend the language via the MOP providing protocols for:

  * metaobject initialization
  * class finalization
  * instance structure
  * funcallable instances
  * generic function invocation
  * dependent maintenance
> The essence of abstraction is to ignore implementation details, and concentrate on, well, abstract properties.

That's why CLOS has meta-classes, meta-objects and generic functions. They provide the facilities for abstraction.


> That's why CLOS has meta-classes, meta-objects and generic functions. They provide the facilities for abstraction.

You must have different standards from mine for what kind of implementation details can be safely ignored. If my code depends on an abstract property of Foo, and the abstract property is observed not to hold, I must lodge a complaint with Foo's implementor, who will tell me either:

(0) It's my fault, I will fix my code. Thanks for telling me.

(1) It's your fault, I never promised to uphold that abstract property.

However, if arbitrary implementation details of arbitrary abstractions can be subverted by anyone, then another answer becomes possible:

(2) It's someone else's fault, and there's nothing either of us can do about it.

I object to the very existence of the last possibility. In what you call a “dynamic object-oriented” program, I prefer to regard genuine abstractions as completely nonexistent, and all the implementation details of everything become the collective responsibility of all programmers.


Just check the properties at runtime. CLOS provides the mechanisms for that.


> Just check the properties at runtime.

I'm not gonna be there to fix anything that went wrong at that point.

---

I really like this analogy: assertions that are meant to always succeed are like crutches. Just like able-bodied people don't need to use crutches to walk, correctly designed programs don't need to test their own invariants to run.

Just in case: I don't mean any disrespect to people who do need crutches to walk. Nobody chooses to be disabled, but some programmers choose not to prove that their invariants hold.


> Just like able-bodied people don't need to use crutches to walk, correctly designed programs don't need to test their own invariants to run.

Actually they do. When they have an accident and break something, an operation will fix it and after some healing period they can walk again.

That's why we have X-ray to inspect the body and various types of operations to fix broken bones.

The human body can be repaired in case of broken legs.

Inspect, repair, heal.

No need to start over.


> When they have an accident and break something, an operation will fix it and after some healing period they can walk again.

People who have an accident may become temporarily disabled, i.e., not able-bodied.

And correctly-designed programs don't have “accidents”.

---

@lispm: Argh, again I'm temporarily unable to make new posts, so here goes my reply.

> Temporarily -> no need to start over.

What's an incorrect program going to do about its own incorrectness? Rewrite itself?

> Much mission-critical software has bugs.

Yeah, well, that's in itself precisely what's so terrible.


> People who have an accident may become temporarily disabled, i.e., not able-bodied.

Temporarily -> no need to start over.

> And correctly-designed programs don't have “accidents”.

That's dangerously naive. Much mission-critical software has bugs. That's why airplanes from Airbus, for example, use 'diversity' in both hardware and software. The same functionality is implemented with different sets of hardware and implemented by different teams using different programming languages. The systems are additionally designed for graceful degradation, dynamic reconfiguration, switching to alternative control software, ...

Still: Lufthansa Flight 2904 -> 'Computer logic prevented the activation of both ground spoilers and thrust reversers until a minimum compression load of at least 6.3 tons was sensed on each main landing gear strut, thus preventing the crew from achieving any braking action by the two systems before this condition was met.'

The software was surely not written in Lisp and I also would doubt they would allow Racket 'principled' macros anywhere near Flight Control Software.


> Argh, again I'm temporarily unable to make new posts, so here goes my reply.

Please don't.

You don't understand Hacker News. That's a feature of this website to slow down rambling discussions. In deep discussions, take your time to answer. After a certain amount of time you can reply.

It's all in the Lisp code for this website.

> What's an incorrect program going to do about its own incorrectness? Rewrite itself?

There are a lot of options:

  * inform the next system to take over some functions
  * remove some features, while they are faulty, until patches are loaded in
  * use alternative implementations
Look at actual Flight Control Software. That's what it does and what it is designed to do.

Similar for other control systems, for example in power plants. They also need independent implementations controlling each other.

> Yeah, well, that's in itself precisely what's so terrible.

It's the reality. That's why mission critical systems don't believe that even verified software has no bugs.


> In deep discussions take your time to answer.

There's nothing “deep” about nonsensical justifications for sloppy programming and buggy software.

> It's the reality.

Only because we make it that way. It's not driven by some law of nature.


> There's nothing “deep”

Deep in the sense of the depth of the reply graph.

The website is designed to slow down 'deep' discussions.


Anyway, I'm not terribly interested in meta-discussion, so I'll bow out.


> Give me a compelling reason to design programs in an object-oriented way. Just to be clear, I don't find the ability to subvert existing definitions particularly attractive, because it undermines my ultimate goal to actually prove things about my programs.

Probably there are other software developers with other requirements. A large share of contemporary software is written in an object-oriented way.

Or is OOP a big swindle???

http://docs.racket-lang.org/swindle/


> And macro systems are more principled than MOPs in that macros are purely static devices: you (re)define the language first, and you use it later.

I would just define the classes/objects/functions first and use them later. One can also develop static MOPs.

Nobody forces me to modify a running system, though it usually is a huge time saver - that's why for example most web browsers provides an implementation of Javascript - a dynamic object-oriented language.


> Nobody forces me to modify a running system, though it usually is a huge time saver

I won't dispute this fact, but it's not a principled way to work. The principled thing is to have two separate phases, one for designing abstractions, another for using them. You can't prove anything definitively about things that are eternally open for modification.


How often are you doing something that needs to be turned into a DSL, or is this something that you do all the time once you grok it?


More often than I'd like to. In an ideal world, language designs would be perfect out of the box, and the right abstraction for expressing what I want to express would always be available without me having to do anything. Alas, in the real world, languages have limitations, and you can either metaprogram them away or put up with them.

For example, a while back I was investigating a general framework for expressing various types of self-balancing search trees without tediously reimplementing the same ideas over and over:

(0) A search tree is either a leaf or a node. A node is an alternating sequence of subtrees and individual elements, beginning and ending with a subtree.

(1) Rebalancing a tree preserves the sequence obtained from traversing it in-order.

(2) In the specific case of binary search trees, there exist two kinds of rotations: 3-rotations (rebalancing the sequence “a,x,b,y,c”, where “a,b,c” range over subtrees and “x,y” range over individual elements) and 4-rotations (rebalancing the sequence “a,x,b,y,c,z,d”, where “a,b,c,d” range over subtrees and “x,y,z” range over individual elements).

(3) It is very convenient to manipulate purely functional search trees using zippers, for more or less the same reasons it is very convenient to manipulate imperative search trees using iterators. Zipper types can be obtained from tree types using a generic procedure (given the recursive type “T = μR.F(R)”, find the derivative of “F(R)” with respect to “R”, then instantiate it at “R = T”)... if only you could express this procedure in the first place.
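Point (2) is easy to sketch concretely for plain binary nodes (the representation and names here are made up for the example); a right 3-rotation rearranges the tree while preserving the in-order sequence a, x, b, y, c:

```racket
#lang racket

(struct node (l x r) #:transparent)

;; (node (node a x b) y c) => (node a x (node b y c)):
;; same in-order traversal, left subtree one level shallower.
(define (rotate-right t)
  (match t
    [(node (node a x b) y c) (node a x (node b y c))]
    [_ t]))
```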


Thanks for the reply!


If you redefine your whole language, how is your experience with understanding your code after say 6 months?


I've generally found, both with my own code and others' code, that `some-undocumented-special-form' is about as understandable as `some-undocumented-function' (and a similar equivalence for things that are documented).


That greatly depends on how principled (hence convenient) the mechanisms for redefining and/or extending the language are. Metaprogramming in Python has the annoying tendency to create a mess where the implementation details of different parts are tightly coupled to each other. But metaprogramming in Racket, especially using syntax/parse, is very clean.
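A small sketch of why syntax/parse helps (the form name is invented for the example): syntax classes like `id` and `expr` document the grammar, and misuse produces a precise error message instead of a mysterious failure deep in the expansion.

```racket
#lang racket
(require syntax/parse/define)

;; define-constant insists its first argument is an identifier;
;; (define-constant "x" 42) reports "expected identifier" at the
;; offending subform rather than failing inside the expansion.
(define-syntax-parser define-constant
  [(_ name:id value:expr)
   #'(define name value)])

(define-constant answer 42)
answer ; => 42
```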


Interesting. I wish there was a Manning "Racket in Action" book. The problem with a lot of Lisp materials is that they show you the basic constructs and not how to best use them (like you talk about above).


On Lisp. PAIP, Common Lisp Recipes, Practical Common Lisp, Lisp Style and Design, AMOP, Object-Oriented Programming in Common Lisp: A Programmer's Guide to CLOS, ...

There are various Lisp books which go beyond the basic constructs.


Maybe "Beautiful Racket" ?

http://beautifulracket.com/


> I'm curious why people don't prefer Chez Scheme to Racket if it is so fast and can make real binaries (I think anyway).

Just an FYI for those following along at home: Racket comes with built-in support for bundling programs as executables, but as I understand it those are really archives with a VM and (byte?)code.

I recently tried on Windows, and a "hello world" graphical app was ~11 MB - not bad IMHO - but bigger than the equivalent Lazarus / Free Pascal app.


Try it again against Chicken and Gambit, which actually produce native executables. There's a decent difference in size, and often in speed.


I will, if I find the time. Do they support graphical applications across Mac, Linux and Windows?


Chicken uses iup, which you can find in Racket as well.

Gambit is... Spartan. You'd have to write some bindings yourself.


Yeah, I'd expect it to be big, but 10 MB is still easily distributable, so not bad. How is the licensing, though? Do you have to include the source (GPL), or is that only if you modify the source?


Racket is LGPL - but I'm not crystal clear on the implications for executables. According to the Racket team (in this random Reddit thread), it looks like they see standalone executables as being fine under their LGPL:

https://www.reddit.com/r/scheme/comments/2bu321/not_that_i_d...

Shouldn't be too far off from gcc/glibc really.


Not for long! The Racket community is in the process of relicensing under a more permissive license.

See https://github.com/racket/racket/issues/1570


Some of us do. (Racket is neat too.)


I'm glad I've stuck with Racket. The work on it continues to surprise me in many ways. I have Chez and Racket on my box, and Wasp Lisp. Now I can look forward to a fast Racket with scheme code I can dig into vs. C, which I am only ok at.


The very same delight, over here as well :)


The benchmarks for Chez Scheme are pretty impressive. Provided Racket-on-Chez is able to capitalize on that, this is pretty exciting.

Also, that makes it hypothetically possible that HN could end up running on Chez's VM, because HN is written in Arc, which I believe is written in Racket, which may in the next year be written for Chez Scheme's VM.


If anyone is looking for benchmarks, the most comprehensive comparison of scheme implementations can be found here: http://ecraven.github.io/r7rs-benchmarks/benchmark.html


Wow, Chez mostly beats out Stalin.


If we look at the benchmarks where one can expect Stalin to be fast (numeric, and more specifically floating-point, benchmarks), Stalin is really in its own league.

The puzzle benchmark is nice, since it benchmarks many common compiler optimizations - with code written to be easily optimized. Stalin of course wins.

The other benchmarks are written in a more general style, which I have found does not always produce the best output with stalin. If I would spend some time optimizing these benchmarks, I could probably make Stalin come out on top a lot more often.

That, however, takes you back to the old problem: A heavily optimized C program looks like C. A heavily optimized [insert functional programming language here. Most often haskell] program looks like shit.

What impresses me the most about Chez is that it takes idiomatic Scheme code and produces neat, fast machine code.


> A heavily optimized C program looks like C.

Really? I haven't found that to be the case very often.

> A heavily optimized [insert functional programming language here. Most often haskell] program looks like shit.

Here we definitely agree :). It's also worth mentioning that it's not often worth it to optimize very much of your code. (The 80-20, or perhaps 95-5, rule applies in full force.)

Obviously, it's still worth it to have compilers optimize idiomatic code.


> A heavily optimized C program looks like C.

Not in the 80's and early 90's.

What C has is 40 years of development effort invested by several multinationals with deep pockets and researchers improving the optimizer algorithms of their compilers.


Well, fast C doesn't look quite like regular C, but seeing C optimized for speed isn't rare.

Looking at older C code is still better than seeing the hoops people jump through to get [insert programming language] to run at regular C speeds.

Fast C usually doesn't include weird workarounds for things like implementation-specific GC quirks, or bending language semantics to force it to do what you want.


No, fast C in the 80's and early 90's meant writing code that was actually like this:

    naked void my_C_func()
    {
      asm {
       ...
      }
    }
Which basically meant using C as a poor man's macro assembler and had nothing to do with what ANSI C is.

There were also other tricks related to unions and bit fiddling.

In those days, on 8- and 16-bit home computers, C was seen the way managed languages are seen nowadays.


I see what you mean. I never did those gigs, so I wouldn't know. In hindsight I don't regret it :)


> That, however, takes you back to the old problem: A heavily optimized C program looks like C. A heavily optimized [insert functional programming language here. Most often haskell] program looks like shit.

Couldn't that be explained by observing that C looks like shit, and we've just gotten so used to the look (and the smell!) that we fail to notice?


Let me rephrase: for some types of problems it is easier to use an imperative language, because it is easier to express the fastest way of doing it imperatively.


The last time I checked, arc required a pretty old version of racket that supported mutable cons cells.

Not sure if that's still true, but AFAIK all work on arc has been stalled for quite a while.


There's a community version of the language being developed here: https://github.com/arclanguage/anarki

It works with the latest Racket versions and includes a HN clone. You can see it running here: http://arclanguage.org/forum

It's true that pg isn't working on it though. I wonder if he's ever getting back to it.


I'm not sure it's super important that PG gets back to it. To be honest, I'd love it if someone made a summary of what, in 2017, makes Arc unique and worth looking into.


FWIW Racket has two types of cons cells: mutable and immutable. And if you know what you are doing - then you can abuse the FFI to mutate the immutable ones (but don't tell anyone) using unsafe-set-mcar! and unsafe-set-mcdr!.


I think it may be faster to just port it, though - are there larger libraries that Arc depends on in Racket that you'd also have to significantly port?


I honestly have no idea. I think I remember reading that Arc may also be pinned to a somewhat old version of Racket / PLT Scheme. I could be misremembering though.

Depending on how portable Arc is to new versions of Racket, I'd guess they could move to a new Chez-backed Racket without any need to port.


Arc works fine on newer versions of Racket using the MzScheme legacy language bindings.


Is Chez "scheme all the way down"?


The only "scheme all the way down" implementation I know of is T. It's just too much of a pain to fully self-host the garbage collection code, and it's not exactly clear what the gain of doing so is.


The satisfaction of not depending on anything else other than some Assembly.

The moment another language gets used, like C, there is this misunderstanding among compiler-design illiterates that, without the use of that programming language, writing the compiler wouldn't be possible at all.


PreScheme:

https://en.m.wikipedia.org/wiki/PreScheme

The only C in one of the implementations was an I/O shim for the C-based OS. That could be removed for purity, but they were about pragmatism.


Well, that deleted comment was one of the most respectable I've seen in a while. You were accurate about its features but maybe missed the justification: it was a replacement for C in terms of low-level use and (ideally) performance. That made typical Scheme features impossible. Additionally, it was a critical part of the VLISP project to mathematically verify a Scheme (Scheme48). They had to balance power and complexity carefully. So, it was efficient, gave you some Scheme power, and was mathematically verified for correctness at the algorithm level for the VLISP version.

So, not as nice as a full Scheme but great in other ways and used to write a Scheme.


I deleted it because I realized it was nonsensical. The T garbage collector was written in a subset of T, which is not a strict-subset of Scheme, as it uses system-level features.

It differs from T in that the VM is written in a different Scheme dialect than the one the VM hosts, but it is still as much Scheme-all-the-way-down as T was. It's also considerably safer than T, as it will prevent you from doing some unsafe things and warn you about others (e.g. run-time closures).


Yeah, that's about it. T was interesting too. I discovered it looking for PreScheme stuff, actually. Jonathan Rees was involved in both, with pg writing a T essay, so Google led me to it. The info on it looks too scattered for me to study it easily, like I did with PreScheme's all-in-one-place material.

I thought about a new incarnation of PreScheme. One idea was to add memory safety like Rust's borrow checker; Carp Lisp is already doing that. Another was to embed a version of C in it, amenable to static analysis and KLEE-like tools, with the actual coding in Scheme with macros. The last was reviving VLISP using Magnus Myreen's LISP 1.5 or CakeML tools to verify it from Lisp form to machine code. Then we'd have a semi-verified Scheme48 where you essentially just trust the high-level code.


This is like a secret wish come true!



