Peter tech-reviewed the second edition of my Java AI book and commented that Java was half as good as Common Lisp for AI, and that was probably good enough (we had both written Common Lisp books). He then went to Google; when I had lunch with him there, I was surprised that he was using Python.
I like his poem in the article!
A little off topic, but I retired (that is a bit of a joke), and this year, at the age of 69, I decided that for maximum programming enjoyment I would only use Lisp languages (linking in Python and TensorFlow on occasion). I am approaching 40 years of using Common Lisp, and the language is still so much fun. I bought a license for LispWorks and am using it to develop a semantic web app.
I wrote a short book on Hy last year. A very cool language and just about perfect for using TensorFlow (or PyTorch) in a Lisp language. I am working on a second edition, adding examples converted from my recent work using Common Lisp.
For a few years, most of my consulting gigs used Clojure and I donated financial support in the early days. Great community around Clojure. That said, I like Common Lisp and Scheme better.
If I may ask, what references would you recommend to someone interested in something like Lisp, but who has never touched a functional language before?
Not OP, but Lisp isn't really a functional language; it's a general-purpose language that also has some functional ideas in it. Mutation is common, so don't expect Lisp to be a language like Haskell.
For resources, I would recommend Practical Common Lisp[0] and PAIP[1].
For some modern development practices, the Common Lisp Cookbook[2] is great.
> Lisp isn’t really a functional language — it’s a usual language that also has some functional ideas inside.
The idea that "functional language" means "purity" is something that was retconned onto functional programming in the 90s decades after Lisp had defined functional programming to mean programming in terms of expressions that produce values.
I like the ML/Miranda/Haskell lineage of languages a lot, but it really bugs me when people in that camp lay claim to some notion of being a "better" functional language than the older Lisp family. (Lispers, while smug about many other things, are generally less smug about how "pure" their languages' approach to functional programming is.)
This is like arguing that Lagavulin isn't "really" whiskey because real whiskeys are made in the US from corn mash.
Perl 5 has truthy values, and it came out a year (1994) before JS (1995). The idea probably appeared in Perl 1 (1987), and probably was borrowed from shell scripting, etc. It wouldn’t surprise me if “Truthy” predated unix.
Sigh. Immutability is a relatively recent addition to the long list of features that defines a language as "functional."
In the beginning, "functional" meant functions were first-class objects and there was little--if any--global state. Lisp passes this test. Lisp invented this test (especially Scheme which introduced lexical scoping and closures).
If you want to write code with immutable data, you can certainly do so with Lisp; it just doesn't force you to. For the most part Lisp creates a brand new data structure when you "modify" an old one. You have to work a bit harder to actually mutate things.
Where e.g. Haskell is "more" functional than Lisp is more about automatic currying than immutability. Plus the static type system that allows monadic programming, which is much more difficult in Lisp because of Lisp's dynamic typing.
Common Lisp can be used as a functional language, or imperative, or extremely advanced OO (see CLOS).
There are lots of different Lisps, though. Clojure is the most popular one that runs on the JVM, and idiomatic Clojure is generally pretty functional, although it calls out to lots of Java, which is OO.
I would like to recommend that after reading PG's On Lisp, you also consider Doug Hoyte's Let Over Lambda. Both are really great books for advanced study.
I second this; I discovered Scheme through Racket and got a lot of enjoyment out of it. DrRacket is easy to install and a great platform. I'd give it a week to get used to it.
I've never used LispWorks, but I think they have their own GUI, support, built-in logic programming (Prolog) for starters.
They also have IDE-like tools that are probably ready to go out of the box. Emacs with SLIME isn't for everyone (I find it to be a big initial hurdle). Although I'm sure this guy is an uber Emacs expert.
Perhaps we can summon u/lispm for an answer here. I think he possibly works for LispWorks and is usually one of the lisp users on HN & Reddit that can answer questions pretty quick.
Having worked for a long time with Emacs and SLIME, I'd have the opposite fear: having to develop anything in Lisp without them, as would be the case with LispWorks.
No, you may still want to use the IDE to access functionality that is not available through slime. But you have the choice. According to the slime documentation: "Most features work uniformly across implementations, but some are prone to variation. These include the precision of placing compiler-note annotations, XREF support, and fancy debugger commands (like 'restart frame')."
I'm not working for LispWorks, but I have been using it for many years.
LispWorks is a relatively expensive commercial implementation of Common Lisp. There might be several reasons to buy it: cross platform GUI support (Windows, Linux/Unix, Mac), integrated IDE, various delivery options for applications, support for various hardware / operating systems, ... I personally also find the runtime system quite robust.
Lispworks has a capable IDE. It looks like a 1995 Geocities page but it works well enough. Their editor is Hemlock which is a Common Lisp clone of emacs, so most of the key bindings are the same as emacs and you can rebind them if you wish. Or you can use Lispworks without the IDE and drive it with (real) emacs and SLIME.
Personally I prefer the CCL IDE but it only works on the Mac.
Cons cells are the traditional Lisp data structure making up the nodes of a linked list. It comes from the cons function which is short for "construct".
Ironically, Clojure doesn't use cons cells, although it does have a cons function.
cons n.v. 1. n. a compound data object having two components called the car and the cdr. 2. v. to create such an object. 3. v. Idiom. to create any object, or to allocate storage.
Peter Norvig is the most inspiring genius in my coding world.
I met him (virtually) via the AI course on Udacity, and since then I've enjoyed everything of his I've read or watched.
His AI programming book (the Lisp version) is a gem that I enjoy reading. I've finished the book a few times already, but every time I read it I find something new that I missed the previous time.
Maybe he's not as mellifluous as that, but here's a recording of Mitch Bradley singing the Open Firmware theme song! But at least he's more mellifluous than Richard Stallman singing the Free Software Song.
That same Open Firmware Forth system [1], which was developed by Mitch Bradley [2], was not only in the PowerPC Mac bios, but it was originally used for the SparcStation boot roms, and eventually in the OLPC, and it was even an IEEE Standard 1275-1994!
In fact: the Open Firmware boot loader and plug-in card firmware interface technology, commonly used by both Sun and Apple, is the only firmware standard in existence to have its own theme song [3] !!!
: OpenFirmwareSong ( - )
\ By Mitch Bradley.
\ Sung to the tune of "The Flintstones".
𝄞
." Firmware" cr
." Open Firmware" cr
." It's the appropriate technology," cr
." Features" cr
." FCode booting" cr
." Hierarchical DevInfo tree." cr
." Hack Forth" cr
." Using Emacs on the keys," cr
." Save in" cr
." NVRAM if you please." cr
𝄒 cr
." With your" cr
." Open Firmware" cr
." You can fix the bugs in no time" cr
." Bring the kernel up in no time" cr
." We'll have an FCode time!" cr
𝄒 cr
\ Thank you and good night!
reboot
;
Arc is underrated as an information management tool. There's something to be said for having a web framework that works out of the box. Rails is probably the only other framework that makes it as easy to "just make some forms that pass data around and run some code on that data." But not quite -- I haven't seen arc's closure-storing technique used in any other web framework.
The main issue that arc solves is that it gives you a full pipeline for managing "objects with properties" via the web. It's so flexible. I wrote a thread on how we're using it in production to manage our TPUs: https://twitter.com/theshawwn/status/1247570883306119170
You haven't been around long enough, at least under the same name, to remember when there was an implicit time limit on the comment reply page, inflicted by those stored closures silently timing out. Having long comments so often eaten that way was actually the specific thing that annoyed me into first installing It's All Text.
It's an interesting approach, as attempts to force statefulness on a stateless-by-design protocol go, but I don't know that I like how it scales.
My account was reset. I've been around since day two. :)
You're right, it has some downsides. But a lot of the time it simply doesn't matter. All the links at https://www.tensorfork.com/tpus are dynamic, and the speed you gain by being able to whip up a feature in 10 minutes is worth the pain of an occasional dead link.
On the other hand, I did some work for "deduplicating fnids": http://arclanguage.com/item?id=20996 which the site is using, so the links possibly last much longer than the early-HN links.
(Basically, we calculate the fnid key based on the lexical environment, rather than using a random ID each time the page is loaded. So each link gets a unique ID based on the code structure rather than a random ID. Meaning, instead of millions of random links to store, you end up with a few tens of thousands.)
It's short for function ID. If you want to route requests to specific closures, you have to have some sort of ID that you can send down to the user. Arc stores closures in a hash table keyed by random ID, but we use lexical structure plus lexical values (like username) to make the key deterministic. It greatly cut down on the number of closures that needed to be stored.
Basically, if you have a closure that does a certain action – e.g. editing a comment – Arc generates a new closure on every page refresh. That was the root cause of the "dead link problem" during the early days of HN. I reworked it to make the closure IDs deterministic.
If you walk the source code from the point of the closure up to the root, and then you stick all the local variables in a list and use that as a key, then hash that, along with the actual source code, you end up with something that (a) has a very low probability of collision, and (b) is deterministic each time the page refreshes, unless the variables' values change. (E.g. if you store the current time in a local variable, the value changes, so the fnid will change as well, since that value is captured by the closure and therefore the closure has to do something different by definition.)
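To make that concrete, here's a minimal hypothetical sketch of the idea in Node-style JavaScript (makeFnid and the sample source string are made up for illustration; this is not Arc's actual implementation):

const crypto = require('crypto');

// Derive a deterministic fnid from the closure's source text plus the values
// it captures, instead of minting a fresh random ID on every page load.
function makeFnid(sourceText, capturedValues) {
  const key = JSON.stringify([sourceText, capturedValues]);
  return crypto.createHash('sha256').update(key).digest('hex');
}

// Two page loads that capture the same values produce the same fnid, so the
// stored closure can be reused instead of piling up new entries.
const src = '(req, res) => editComment(user, req)';
makeFnid(src, { user: 'bob' }) === makeFnid(src, { user: 'bob' });  // true
makeFnid(src, { user: 'alice' });                                   // different key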
That sounds like a complicated fix! How many tens of minutes did it take? And how many tens of minutes did the bug cost, in effort spent on comments that got eaten instead of posted?
(I know there's no way to answer that last question, but that doesn't mean its answer is equal to zero.)
It was complicated, but at one point I was so in love with Arc that I wanted to give it a real shot at taking over the world. It seemed like a necessary change to make, since the moment someone brought up dead links as a slight against arc, I could point to the change and say, "Already fixed!"
the speed you gain by being able to whip up a feature in 10 minutes is worth the pain of an occasional dead link
This is everything that is wrong with the software industry, summarized in one sentence. Speed gain enjoyed by developers is paid for by the users in pain.
It used to be that developers would go through tremendous amounts of pain just to squeeze out a few instructions from a UI drawing routine in order to make it just a little bit smoother for the users. Now those developers are derided as "greybeards."
I'm sorry, I probably could have found a less harsh and cynical way of writing all that, but I feel like the Internet is getting worse everyday and there's not enough urgency among tech people.
Closures on a server are a powerful way of representing data flows. But they come at a cost: the links expire after some time. How do you strike a balance?
The simplest way is to put in the extra time to make everything into a persistent link. But, that's equivalent to removing all the benefits of lexical scoping. If you've ever created an inner function before, you know how powerful that technique can be. You can encode the entire state of the program into closures -- no need to store things in databases. Want a password reset link? Spin up a closure, email the link, done. Literally identical to storing a reset token in a database, except there's no database.
Another solution is to fix the root problem. Does the closure really need a random ID every time the page refreshes? The closure links die because they have to be GC'd periodically, to keep memory usage down. Even if you cache the page for 90 seconds, that's still 960 refreshes per day for logged-out users. Then if you have a few hundred regular users, that's at least another factor of two. And certain pages might create hundreds of closure links each refresh, so it quickly gets out of hand.
Ironically, the solution was emacs -- in emacs, they store closures in a printable way. A closure is literally a list of variable values plus the function's source code. That got me thinking -- why not use that as the key, instead of making a random key each time the page refreshes? After all, if the function's lexical variables are identical, then it should produce identical results each time it's run. No need to create another one.
That's what I did. It took a week or so, which is a week I'll never get back for building new features. But at least users won't have to deal with dead links anymore.
Clever readers will note a theoretical security flaw: an attacker might be able to guess your function IDs if they knew the entire state of the closure + the closure's source code (which is the default case for an open source project). That might give them access to e.g. your admin links. But that's not an indictment of the technique; it's easily solved by concatenating the key with a random ID generated at startup, and hashing that. I'm just making a note of it here in case some reader wants to try implementing this idea in their own framework.
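For what it's worth, a rough sketch of that mitigation (the secret and function names here are mine, not Arc's):

const crypto = require('crypto');

// Mix a per-process secret into the deterministic key so an attacker who can
// read the open-source code and guess the captured values still can't
// precompute fnids.
const STARTUP_SECRET = crypto.randomBytes(32);

function makeSaltedFnid(sourceText, capturedValues) {
  return crypto.createHash('sha256')
    .update(STARTUP_SECRET)
    .update(JSON.stringify([sourceText, capturedValues]))
    .digest('hex');
}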
The closure technique has nontrivial productivity speedups (that I think someone will rediscover some years from now). I hope the idea becomes more popular over time.
How about a keepalive from the client side: little bit of JavaScript that somehow tells the server that the session is still alive, so don't blow away the closure or continuation.
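For example, a rough sketch of what I mean, assuming an Express-style app object; the /keepalive route and currentFnid variable are made-up names:

// Client: ping the server every minute while the page is open.
setInterval(() => fetch('/keepalive?fnid=' + encodeURIComponent(currentFnid)), 60 * 1000);

// Server: remember when each closure was last seen, so a periodic sweep can
// skip closures that are still in use and discard only the idle ones.
const g_lastSeen = {};
app.get('/keepalive', (req, res) => {
  g_lastSeen[req.query.fnid] = Date.now();
  res.end();
});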
Since this is getting a surprising amount of interest, let me sum up the technique here. It's really not hard to implement it in Javascript using Express.
1. inside of your express endpoint, create a closure that captures some state. For example, the user's IP address.
EDIT: I updated this to capture the date + time the original page was loaded, which is slightly more compelling than a mere IP address.
app.get('/', function (req, res) {
let ip = req.headers['x-forwarded-for'] || req.connection.remoteAddress;
let now = new Date();
let date = now.getFullYear()+'-'+(now.getMonth()+1)+'-'+now.getDate();
let time = now.getHours() + ":" + now.getMinutes() + ":" + now.getSeconds();
let fn = (req, res) => {
res.send(`hello ${req.query.name}. On ${date} at ${time}, your IP address was ${ip}`)
}
// ... see below ...
})
2. insert that closure into a global hash table, keyed by a random ID.
g_fnids = {};
app.get('/', function (req, res) {
let fn = ...
let id = require('crypto').randomBytes(16).toString('hex');  // generate a random ID
g_fnids[id] = fn;
res.send(`<a href="/x?fnid=${id}&name=bob">Say hello</a>`);
})
3. create an endpoint called /x which works like `/x?fnid=<function id>&foo=1&bar=2`. Use <function id> to look up the closure. Call the closure, passing the request to it:
app.get('/x', function (req, res) {
let id = req.query.fnid;
let fn = g_fnids[id];
if (!fn) return res.status(404).send('Unknown or expired link.');  // closure was GC'd or never existed
fn(req, res)
})
Done.
Congratulations, your closure is now an express endpoint. Except you didn't have to name it. You can link users to it like `<a href="/x?fnid=<function id>&name=bob">Say hello</a>`.
The reason this is a powerful technique is that you can use it with forms. The form target can be /x, and the query params are whatever the user types into the form fields.
I bet you already see a few interesting use cases. And you might notice that this makes scaling the server a little more difficult, since incoming requests have to be routed to the server containing the actual closure. But in the meantime, you now have "inner functions" in your web framework. It makes implementing password reset functionality completely trivial, and no database required.
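To illustrate, here's a hedged sketch of that password-reset idea, reusing g_fnids and the /x endpoint from the steps above (setPassword and sendEmail are stand-ins for whatever user store and mailer you have):

const crypto = require('crypto');

app.post('/forgot', (req, res) => {
  const email = req.body.email;                  // assumes body-parsing middleware
  // The closure *is* the reset token: it already remembers whose password to reset.
  const fn = (req2, res2) => {
    setPassword(email, req2.query.newPassword);  // stand-in for your user store
    res2.send('Password updated.');
  };
  const id = crypto.randomBytes(16).toString('hex');
  g_fnids[id] = fn;
  sendEmail(email, `Reset link: https://example.com/x?fnid=${id}&newPassword=<pick one>`);  // stand-in mailer
  res.send('Check your email.');
});

A real flow would have the link render a form rather than carry the new password in the URL, but the point stands: the pending reset lives entirely in a closure, with no token table.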
If it seems slightly annoying to use – "I thought you said this was a productivity boost. But it's annoying to type all of that!" – lisp macros hide all of this boilerplate code, so there's zero extra typing. You can get lisp macros for Javascript using Lumen lisp: https://github.com/sctb/lumen
Even without macros, though, I bet this technique is shorter. Suppose you had to store the date + time + IP address somewhere. Where would you put it? I assume some sort of nosql database like firebase. But wouldn’t that code be much longer and more annoying to write? So this technique has tremendous value, and I’m amazed no one is using it circa 2020.
This is really funny; back in the Warcraft 3 days, we used to do the same thing inside its scripting language: to attach some data to a timer, we would exploit the fact that a timer is just a 'void *' underneath, so the pointer address gave us a unique ID. We would stash the data associated with the timer in a global hash table. Then, in the timer's callback, we would read the data back from the global hash table!
Your exposition took me a trip down memory lane to middle/high school. Thank you for this :)
Well, this is certainly the coolest thing I've read today. I've been trying to grok closures and this helps a bit. Is there a reason not to use the global hash table itself to store the state instead of a closure? This seems to be trading a database for memory. It also seems harder to interrogate: if I want to go in and see what's currently outstanding, instead of going to Firebase I'll need to go through the hash table and check the contents of each function.
I think I may be missing something that someone who's actually worked with Lisp can see. To me, closures, recursion, and functional programming are cool, but I can do everything it's showing off using the standard fare of loops and databases.
The biggest difference is the "..." assignment to fn. The idea is similar to AWS Lambda: write functions, store those functions, and then call them later when you need them.
I have minimal Lisp experience, but from my perspective, a closure is a function you can store in a variable, defined together with a scope: the set of arguments and variables used in the function, which often (but not always) includes variables from the parent scope it was defined in, usually only those the function actually references. Because closures need independent scopes, mutable values they capture generally have to be copied; alternatively, you can use immutable data structures, which are cheaper to copy.
The big difference between closures and other styles of code then comes down to how much immutability is used, and whether you call functions that assume or share state (more OO, or non-FP) or hand functions along with their state to other functions (FP, though composability and other properties matter too when defining FP; this is a simplification).
This is a bit of a vague answer; perhaps others can chime in with a better one. And if you're not careful, FP can introduce problems too, though that happens more often with distributed, multi-threaded, or recursive programs, which can themselves be hard to write without FP as well.
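A tiny example of the capture part, in plain JavaScript with nothing framework-specific:

// makeCounter returns a function that closes over the local variable count.
function makeCounter() {
  let count = 0;
  return () => ++count;   // the inner function keeps count alive after makeCounter returns
}

const a = makeCounter();
const b = makeCounter();
a(); a();   // 1, then 2
b();        // 1 -- each closure has its own captured count

The fnid trick upthread is the same thing, except the captured variables are request data and the closure is called from a later HTTP request instead of a later line of code.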
It's not all that similar to AWS Lambdas in concept or in execution. Those are stateless; to a very good first approximation, they're just a single-route web server with all the boilerplate abstracted away, and that starts up a fresh instance to handle each request and is shut down again immediately after.
What 'sillysaurusx describes is much more similar to what, in Scheme and elsewhere but these days mainly there, is called "continuation-passing style". It's a way of pausing a partially completed computation indefinitely by wrapping it up in a function that closes over the state of the computation when you create it, and calling that function later to pick up from where you left off when you're ready to proceed again.
I suppose you could maybe do that with an AWS Lambda, but because the technique relies strongly on the runtime instance staying around until the computation finishes, it would probably get expensive. Lambdas aren't priced to stay running, after all.
As a side note, it's worth mentioning that the "AWS Lambda" product, which whatever its virtues isn't actually a lambda, derives its name from the lambda calculus, where I believe the concept of anonymous first-class functions originates. I don't recommend reading about the lambda calculus itself unless you're up for a lot of very heavy theory, but it's worth knowing that, especially in the Lisp world and realms adjacent, you'll often see the term 'lambda' used in a sense which has nothing to do with the AWS product, but rather refers to a form of abstraction that relies on defining functions which retain access to the variable ("lexical") scopes in which they were created, even when called from outside those scopes. Javascript functions have this property, which is why they're capable of expressing the technique 'sillysaurusx describes, and it gives them a lot of other useful capabilities as well.
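A toy illustration of that pausing idea in plain JavaScript (the order/shipping details are invented just to show the shape):

// Stop halfway through a computation and hand back a continuation that
// finishes it later, closing over the work done so far.
function startOrder(items) {
  const subtotal = items.reduce((sum, item) => sum + item.price, 0);
  // Everything computed so far (subtotal) is captured by this function:
  return function resume(address) {
    const shipping = address.country === 'US' ? 5 : 20;
    return subtotal + shipping;
  };
}

const resume = startOrder([{ price: 10 }, { price: 15 }]);
// ...minutes or days later, when the user finally supplies an address...
resume({ country: 'US' });   // 30

Storing that resume function in a hash table and mailing out a link keyed to it is essentially what the fnid scheme does.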
True. Good distinctions. To re-iterate the above, the approximation to AWS Lambda would require dynamic AWS Lambda functions -- as in code that creates a Lambda with specific state embedded in it -- then tracks each of those by their unique Lambda identifier and ... yeah, that's where this breaks down because it's not all that similar to Lambda if the best use for a Lambda is repeated invocations of the same code. And Lambda IDs presumably aren't based on a hash of their contents and variables the way this is. But dynamic AWS Lambda functions are possible, so there's that. You could write this in Lambda, it just might be expensive if API calls to create and destroy one-time Lambdas are expensive enough. It's a lot cheaper and faster to build functions and store references to them in a hash table in memory.
Another similarity to this use of hashing the scope of a function would be in memoization of a function, to cache the output based on the input, such that you hash a function's inputs and assign to that hash a copy of the output of the function when run with those inputs. Then you can hash the inputs and skip re-running the function. You have to be sure the function has no side-effects nor any changes in behaviour or inputs not specified in the memoization hash, though. "Pure" functions are best for this use case.
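For comparison, a minimal sketch of that kind of memoization (hash the inputs, cache the output), which as noted is only safe for pure functions:

// Key the cache on the JSON-serialized arguments; only valid when fn has no
// side effects and depends on nothing but its arguments.
function memoize(fn) {
  const cache = new Map();
  return (...args) => {
    const key = JSON.stringify(args);
    if (!cache.has(key)) cache.set(key, fn(...args));
    return cache.get(key);
  };
}

const slowSquare = (n) => n * n;          // stand-in for something expensive
const fastSquare = memoize(slowSquare);
fastSquare(12);   // computed
fastSquare(12);   // served from the cache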
Memoization is usually preferable if you can do it, sure. But you can't memoize a continuation, because what it expresses is a computation that has yet to complete and produce the result you'd need in order to memoize. And the use of the g_fnid hash table doesn't qualify as memoization, either, because the keys aren't arguments to the function that produced the values; what it actually is is a jump table, cf. https://en.m.wikipedia.org/wiki/Branch_table#Jump_table_exam...
Thanks for your reply. I ended up looking for a bit more on continuations from the perspective of JS Promises and found https://dev.to/homam/composability-from-callbacks-to-categor... which was a pretty easy to follow read on this if you take the time to understand the JS, though there might be better references to continuations elsewhere, this was just one of the first I found.
It works a lot better in a proper Lisp, where the REPL and debugger are first-class citizens. In Javascript, you can do it, but it's a dancing bear at best; as you note, the observability is poor to nil without heroic effort, and scalability's a problem too.
I mean, I can tell you right now why I'm not using it circa 2020, nor do I expect I shall in future. For sure, it's clever and it's elegant, a brilliant hack - but it's not durable, and in my line of work that counts for more.
On the one hand, as you note, this can't scale horizontally without the load balancer knowing where to route a request based on the fnid, which means my load balancer now has to know things it shouldn't - and that knowledge has to be persisted somewhere, or every session dies with the load balancer.
On the other hand, even if I teach nginx to do that and hang a database or something off it so that it can, with all the headaches that entails - this still can't scale horizontally, because when one of my containers dies for any reason - evicted, reaped, crashed, oomkilled because somebody who doesn't like me figured out how to construct a request that allocates pathologically before I figured out how to prevent it, any number of other causes - every session it had dies with it, because all that state is internal to the runtime instance and can't be offloaded anywhere else.
So now my cattle are pets again, which I don't want, because from a reliability standpoint shooting a sick cow and replacing it with a fresh one turns out to be very much preferable to having to do surgery on a sick or dying pet. Which I will have to do, because, again, all the persisted state is wrapped up tight inside a given pod's JS runtime, so I can't find out anything I didn't know ahead of time to log without figuring out how to attach a debugger and inspect the guts of state. Which, yes, is doable - but it's far from trivial, the way Lisps make it, and if the pod dies before I can find out what's wrong or before I'm done in the debugger, I've got a lot less to autopsy than a conventional approach would give me. And that's no less a problem than the rest of it.
Yes, granted, the sort of software you describe is incredibly elegant, a beautifully faceted gem. It's the sort of thing to which as a child I aspired. But as it turns out, here thirty years on, I'm not a jeweler, and the sort of machine my team and I build has precious little need for that sort of beauty - and less still for the brittleness that comes with it. Durability counts for much more, because if our machines break and stay broken long enough, the cost is measured in thousands or millions of dollars.
That's not hyperbole, either! Early one morning last November, I ran two SQL queries, off the top of my head, in the space of two thirds of a minute. When all was eventually said and done, the real value of each of those forty seconds, in terms of revenue saved, worked out to about $35,000 - about $1.4 million, all told, or seven hundred thousand dollars per line of SQL. And not one of the people who gave us all that money ever even knew anything had been wrong.
Granted that a couple of unprecedented SQL queries like the ones I describe, written on nothing but raw reflex and years of being elbow deep in the grease and guts of that machine and others like it, constitute a large and blunt hammer indeed. But - because we built that machine, as well as we knew how, to be durable and maintainable above all else - in a moment which demanded a hammer and where to swing it, both were instantly to hand. In a system built as you describe, all gleaming impenetrable surfaces between me and the problem that needed solving right then, how could I have hoped to do so well?
Only through genius, I think. And don't get me wrong! Genius is a wonderful thing. I wish I had any of it, but I don't. All I know how to be is an engineer. It's taken me a long time to see the beauty in that, but I think I'm finally getting a handle on it, these days. It's a rougher sort of beauty than that to which I once aspired, that I freely concede, and the art that's in it is very much akin to something my grandfathers, both machinists and one a damned fine engineer in his own right, would have recognized and I hope might have respected, had they lived to see it.
Do you know, one of those grandfathers developed a part that went on to be used in every Space Shuttle orbiter that ever flew? It wasn't a large part or a terribly critical one. You wouldn't think much of it, to look at it. But he was the man who designed it, drew it out, and drew it forth from a sheet metal brake and a Bridgeport mill. He was the man who taught other men how to make more of them. And he was a man who knew how to pick up a hammer and swing it, when the moment called for one. He was possessed of no more genius than am I, and his work had no more place in it for the beauty of perfectly cut gemstones than does mine. But he was a smart man, and a knowledgeable man, and not least he was a dogged man. And because he was all those things, my legacy includes a very small, but very real, part in one of the most tangible expressions of aspiration to greater, grander things that our species has ever yet produced. Sure, the Space Shuttle was in every sense a dog, a hangar queen's hangar queen. But, by God, it flew anyway. It 'slipped the surly bonds of Earth, and touched the face of God' - and next time, we'll do better, however long it takes us. And, thanks to my grandfather's skill and effort, that's part of who and what I am - and there's a part of me in that, as well.
No gemstone that, for sure! It has its own kind of beauty, nonetheless - the kind that leaves me feeling no lack in my paucity of genius, so long as I have an engineer's skill to know when and how to swing a hammer, and an engineer's good sense to leave myself a place to land it. If that was ever in doubt, I think it can only have been so until that morning last November, when I saved ten years' worth of my own pay in the space of forty seconds and two perfect swings of exactly the right hammer.
There's a place for the beauty of gemstones, no doubt - for one thing, in seeing to it this very long comment of mine isn't lost to the vagaries of a closure cache. And I appreciate that, for sure! It'd be a shame to have wasted the effort, to say nothing of any small value that may cling to these words.
But there's a place for the beauty of hammers, too.
The vast majority of websites don't need to scale beyond what a single computer can do, especially with an efficient runtime. You're right that if you're building Wikipedia or Amazon you need to scale horizontally. But most sites aren't Wikipedia or Amazon.
It's true that JS systems like Node aren't really designed for this kind of thing, although they could have been. Arc is.
Yup. I somehow became a graybeard. I really didn't fit in at my last three gigs doing "backend" work.
I always play to win, so try to understand why & how I failed.
My current theory:
I had good successes doing product development. Shipping software that had to be pretty close to correct.
Today's "product development" is really IT, data processing. Way more forgiving of stuff that's not quite right. Often not even close to right. (Guessing that about 1/3rd of the stuff I supported didn't actually do what the original author thought it did, and no one was the wiser, until something didn't seem quite right.)
One insightful coworker said it best: "I learned to do everything to 80% completion."
My observation is that most teammates created more bugs than they closed. Maybe incentivized by the "agile" notion of "velocity". And they were praised for their poor results.
Whereas my tortoise strategies nominally took longer. So I had fewer, larger commits. Way fewer "points" on the kanban board. Created far fewer lines of code.
(When fixing [rewriting] other people's code, mine was often 50% to 80% smaller. Mostly by removing dead code and deduplication.)
I was able to bang out new stuff and beat deadlines when I was working solo.
I think the difference between solo and team play is mostly due to style mismatches. It's very hard for me to collaborate with teammates who are committing smaller, more frequent, often broken, code changes.
Anyway. That's my current best guess at what's happening to this graybeard.
More optimistically...
I'm very interested in the "Test Into Prod" strategies advocated by the CTO from Gilt (?). It's the first QA/Test strategy (for an "agile" world) that makes any kind of sense to me. So I think I could adapt to that work style.
(I served as SQA Manager for a while. It's hard to let go of those expectations. It's been maybe 20 years since I've seen anyone doing actual QA/Test. I feel bad for today's business analysts (BAs) who get stuck doing requirements and monkey style button pushing. Like how most orgs functioned in the 80s.)
I see more gray in my beard every morning. And the thing about "...to 80% completion" is that the first 80% of value is captured in the first 80% of effort, and the last 20% of value in the other 80% of effort. It's important to know when to follow that ROI graph past the knee, for sure. But it's just as important to know when not to.
(I mind me of a time a few years back when I was surprised to learn that the right method of exception handling, for a specific case in some work I was doing on a distributed system, was none - just letting it crash and try again when the orchestrator stood up a fresh container. It felt wrong at first; ever before I'd have instead gone to a lot of painstaking effort to reset the relevant state by hand, and my first instinct was to do the same here. But crashing turned out to be the right thing to do, because it worked just as well and took no time at all to implement.)
HN still uses it extensively for its more obscure operations—the ones that don't need to scale. We switched all the most common handlers to regular links years ago. It's hard to remember now that the most common complaint here (by far) used to be about those "Unknown or expired link" messages. In fact, you can tell from the histogram of https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que... when it was that we fixed this—six years ago—because that's when the complaints slow down to a trickle.
As long as you don't use it for things that you need a lot of, it's a great approach that holds up well. The primary downside is that they all get discarded when we restart the server process. Another downside is that they don't work well with the back button.
Edit: I just remembered another issue with them. Sometimes browsers pre-visit these links, which 'uses' them so that by the time the user goes to click on it, it has already expired. (Yes, this is an issue with using GET for these.)
It's way easier in elisp than in racket, but, the idea is to write out the function + the closure variables to disk.
The hard part is that you'd have to fix up object references. (If two closures both capture x, and x is a hash table, then the values of x for both closures should be the same hash table after a reboot.) But it's doable.
And of course, if it's possible to write the closures to disk, that means you can write them out to a database shared by multiple Arc webservers. As long as the state is also shared (perhaps the values of the variables can also be stored in the database?) then this means the technique can horizontally scale, just like any other.
I spent like a year trying to brainstorm ways of showing that Arc can go toe-to-toe with any of the popular frameworks, with no downsides. "Reboots wipe the closures" implies "Arc can't scale horizontally," which would be a serious limitation in a corporate setting. But in principle I think you could write closures to disk.
That would also result in a funny situation: if closures persist forever, it means that a closure could potentially be activated years after it was first stored. So it'll run with years-old code, rather than the latest version. :) But if people are using global names for functions, then it'll just call the latest versions of those functions, which will probably work fine in most cases.
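A very rough sketch of what "persist the recipe instead of the closure" could look like on the JavaScript side of this discussion (templates, persistClosure, and the db argument are all invented, and this sidesteps the shared-object problem mentioned above):

// Persist which template function a closure came from plus the values it
// captured, so any server (or a rebooted one) can rebuild it on demand.
const templates = {
  sayHello: ({ ip, date, time }) => (req, res) =>
    res.send(`hello ${req.query.name}. On ${date} at ${time}, your IP address was ${ip}`),
};

function persistClosure(db, id, templateName, captured) {
  db.set(id, JSON.stringify({ templateName, captured }));   // db: any shared KV store (a Map works for testing)
}

function rebuildClosure(db, id) {
  const { templateName, captured } = JSON.parse(db.get(id));
  return templates[templateName](captured);
}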
I know I've seen other work on serializing closures, but it was years ago and it just left me with the impression "hard". Maybe one could get a subset of it working nicely for the cases that an application like HN needs.
Some arc code I wrote back in the day, for generating textual descriptions from structured data using a web frontend, is still in production at a previous company of mine (as far as I know). It was indeed a useful tool.
It's a shame arc didn't have persistent data structures (besides alists :-) and a native hashmap type though.
I'm sure somebody has written an ADVENTURE front-end for a LISP debugger.
You are in a CONS cell. There is a CAR to the left, leading off to another CONS, and a CDR to the right, containing NIL. The garbage collector briefly enters the CONS, marks it, quickly glances at the NIL in the CDR, smiles quietly to itself, and, looking relieved, hurriedly sweeps away through the CAR.
>GIVE CONS TO CDR
You give the CONS cell to its CDR, creating a circular list.
I am ready to be corrected, but I'm fairly sure there have been a few other continuation-based web frameworks; Seaside in Smalltalk was the one that made the idea popular, if I recall correctly. "Href considered harmful" comes from that.
The problem with those frameworks that "just work" is that they get old pretty darn fast.
It handles all your SQL and auth cookies? Too bad, now there's an easy way for every script kiddie to log in as admin or guess cookies without the security header du jour.
It's all nice when it's being actively updated. But Arc, Rails, Phoenix/Ecto, Node/React, Drupal, Spring, etc. all get old pretty soon once the core maintainers lose focus, and then instead of just keeping an eye on the latest best practices and implementing them yourself, you have to dive deep into years of feature creep and bad coding practices to do the little thing you need to keep things going.
I think Seaside (a Smalltalk web framework) pioneered the continuation-based technique, though it could have been around earlier. http://www.seaside.st/
I'll skip comments here about online AI courses and questions related to the AIMA and PAIP books. This isn't quite the right opportunity to ask about finer points of JScheme, or even to ask for clarifications on last September's interview with Lex Fridman. The art of staying approachable for beginners while remaining useful as a reference for the advanced is not easy, but he demonstrates it, and here it is flipped to show another, human side. Peter, thank you for your works, for all of us.
I feel like there were other pieces like this that people like Guy Steele wrote back in the heyday of this sort of thing. More serious people with better memories will be able to say.
The most important thing I ever learned from Professor Norvig was how to not get fat at Google. He said, "Never take a tray. If it won't fit on one plate, it's too much".
A wise man.
I mean I suppose some of the stuff I learned from his textbook was pretty useful too.
What advice would you give to those of us who struggle with our weight, regardless of the size of our plate, who came here to discuss lisp and not what the author of the blog post said to you one time about food portions?
As a person who recalls every bite and gains weight at one meal a day with no snacking at all, this is not helpful. I'm still looking for advice on why I'm being told to count my food intake in a lisp discussion.
This is no more a lisp discussion than a Johnny Cash discussion. You can ignore the top post if it doesn’t interest you.
As an aside, obesity is a disease that hormonally and mentally encourages self deception. The point of writing everything down and (most importantly) translating this list into objective calorie counts (usually looked up from a third party reference) is to remove this self deception. It is one of the most effective ways to leverage logic and willpower over habits and hormones. There are other ways of course.
The post is about tech. The commenter offered eating advice on how to "avoid getting fat" that is plainly unbelievable to people who gain weight even while following that advice, and who don't benefit from being told to starve themselves more or to be more hyper-aware of the fact that they gain weight no matter how little they eat. The post had nothing to do with getting fat.
Of course I’m welcome to ignore everything that makes me feel bad for gaining weight but it’s shitty to be gaining weight and told it’s my fault for eating too much when I barely eat.
The topic was an amusing anecdote from Peter Norvig with a tech slant. Other anecdotes are likely on topic.
Regarding Obesity, I am not assigning blame or trying to make you feel bad. I’m calling the situation (that I too suffer from) a disease. There are various practices, with varying effectiveness for individuals, to approach the management of this disease. There is no fault in disease, there are only victims of it.
Portion control is one practice, as is writing down consumption. They might not be effective for you. I have been most successful with protein sparing modified fasts such as Lyle McDonald’s rapid fat loss and/or flexible dieting. He takes a very scientific approach to body composition and realistic food intake without shame.
Have you tried lifting heavy weights? Follow a program like Starting Strength and make that calorie surplus work for you. Being skinnyfat isn’t really a compelling goal. Lift and gaining weight becomes a good thing, because it means you’re putting on muscle.
Starting Strength is great for smaller men and women but can be hard on bigger people, especially with Rippetoe's eating philosophy. I gained 15 kg when I started SS, so be careful. It was worth it and my numbers went up a ton, but it's always good to weigh your options. If you're very large, it is worthwhile to hire a trainer. They will teach you how to get started easily without jumping off the deep end.
I’ve done a diet in which I used packaged and prepared food for ease of calorie counting and ate about 1400 calories a day for couple months. I also did a lot more aerobic exercise than I usually did (running, bicycling, elliptical machines).
Result? Not what any of the “calories in calories out” calculators claimed. About a pound down. Should have been more like seven to ten.
What works for me: weight lifting, swimming. Both wildly more effective than they “should” be. Dieting and aerobic exercise do about dick-all to take fat off my frame. Try various things in various combinations.
This reminds me of a friend who got on the 1400 calories diet with reduced carbs. He lost 50 pounds in about 3 months but the losses started to slow down after 3 months. He is unrecognizable now. Not sure if this counts but he first put on this extra weight during the last 3 years and mostly from stress eating.
His cover of NIИ's Hurt is so powerful. June Carter (his wife) died a few months after filming the video, and that just broke his heart. Johnny followed her less than six months later.
He was humping the pretty young girl singer, and murdered the mean old lady bible thumper in an airplane crash, then when Columbo finally (but barely) caught him, he was so remorseful he said he wanted to get caught all along.
I mostly like metal, industrial, heh, even ska... but mostly death metal. But then there is Johnny Cash, and I hate country (though not Johnny; there is something about it), also Willie Nelson (and a few others; there are probably more, it's just that I don't like the mainstream stuff...)
Nah, most country on the radio just doesn't have the same kinda honesty that Johnny Cash or Willie Nelson had. Nor do they have the kind of intensity that a man like Cash had. It just isn't the same. Fellow metalhead who likes ska, btw.
Hey thanks, just googled and found this concert - and yes, it's my thing - https://www.youtube.com/watch?v=y9_xBIuV9nE - wonder why country is not like this... then again, even modern metal is not to my liking, so maybe I'm just old :)
That being said, I recently discovered Ho99o9 and love them.
I'll admit I don't think I've really noticed the presence of lisp online other than when people want to talk about lisp. Can someone share some practical examples of where lisp is being used? Maybe a popular open source project I never realised was written in a lisp family language?
Some companies still pick and use CL: Rigetti Computing (quantum computing), 3E (realtime aggregation and alerting engine for sustainable energy systems), OpusModus (an award-winning music composition software), ScoreCloud (an impressive speech-to-text music notation software), RavenPack (big data analytics provider for financial services), SISCOG (underground systems of many European capitals), Genworks (knowledge-based engineering),…
I have also just deployed a website to a client last week: it reads an existing DB and shows products to the user. Simple, effective. I can hot-reload it if I want, it's built-in (I just use the REPL, I can even install new dependencies without a restart).
I'd recommend giving Emacs a deep dive. That alone should be impressive enough, but if it doesn't satisfy you I'm not sure there are other projects that would.
More than one, if you look at it. Emacs is best viewed as a Emacs Lisp runtime that ships text editor as a default application ;). That's how it ends up with extensive outliner/productivity suite, e-mail/news client, file browser and a bunch of other applications within it, and a lot more of third-party one available in the built-in package manager.
For sure. But most people in technology, and even a large number of people adjacent, know of GDB and GIMP so I always use those as my examples of "lisp in the real world".
It's a little sad that it's declined so far. In a way, it's a victim of its own success--it's so easy to write a lisp system that literally dozens, if not hundreds, of variants sprang up. The community was divided, and it never quite came through.
IIRC, our fearless leader PG made his zillions using Lisp.
Maybe it declined for other reasons, but my impression was that the Lisp world was very splintered. I certainly can't think of any other example of a set of quite similar languages that's anywhere near as large.
The Xerox Dandelion and friends was a thing to behold. If you ever got to touch one, you never forgot it.
In my school, implementing a basic Lisp interpreter was part of a required class in the CS curriculum. (If it's not still, everywhere, well, then for shame.)
Yeah, production is something else entirely. By "easy", I mean that dozens if not hundreds of people were tempted to write their own slightly better Lisp. Many of those did reach a production-ish level. But then what? There was no flag to rally around--just lots and lots of niche systems.
I'm not sure. I think if the community had rallied around Common Lisp or Scheme, or maybe just those two, it might have ruled the world. It just didn't happen.
And I'm miserably sad about that. What do we have now? Elisp is great, but I can't build production on emacs. I'll look at Clojure, but I'm dubious. Python is kind of lispy, but has had its own destructive schism. And C++ marches on--somewhere buried in there is a mildly functional lisp, using the worst of all possible syntaxes.
I'd think a refreshed Common Lisp would still have a chance. It's a great language with some really solid open-source implementations, but it got standardized at a transition point in our industry, so the spec doesn't even consider (now-commonplace) things like threading or networking, while exposing you to abstractions over systems that have since died out.
The problems of CL wouldn't be insurmountable if the community were larger, though. All the important stuff that wasn't standardized gets added by each implementation anyway, and then portability libraries get created that ensure a consistent interface. But the state of most libraries is... rough around the edges. I contrast that with Clojure, with which I spent some months over the last two years. The library ecosystem (not the Java side, but Clojure-specific) is great, and a lot of care goes into it. I particularly remember being in awe of just how thorough Liberator is[0].
If you haven't heard of Clojure, you probably haven't listened to a Rich Hickey talk. They're worth it even if you don't use Clojure. "The Value of Values" and "Simple Made Easy" in particular.
cons lists are kind of slow because they don't play well with CPU caches. Are there any ideas how to adapt Lisp (or LISP) so that it plays well with current CPU architectures?
Clojure uses persistent vectors which are essentially trees of array chunks (I think 32 elements per array chunk) that support structure sharing and a version of cons called conj that runs in order log32(n) time. But the language isn't really designed for high performance in practice despite some of the early marketing.
Anyway, caches are only part of the problem with linked lists. The root problem is that they inhibit out of order execution. Work out the data dependencies and scheduling of a simple summation loop for an array compared to a linked list. Assume everything fits in L1. The out-of-order core goes to town with the array code and overlaps the fetches for subsequent iterations of the loop. But the linked list code is serialized with almost no instruction-level parallelism; you can overlap the summation of an element into the accumulator with the start of the deref of the next pointer, but that only saves you one cycle per iteration compared to what an in-order core would do with the same code. Now suppose the data is in L2. In that case the out-of-order core can overlap the loads of the subsequent array elements and the throughput is only diminished by a little if at all compared to the L1 case. The linked list code, on the other hand, works the same as before but because it cannot overlap the fetches for sequential elements due to the dependent loads, you go from say 5 cycles per iteration to 15 cycles per iteration. If you have to go out to L3 or DRAM the chasm dramatically widens even further.
More obviously, linked structures also increase pressure on cache capacity since they have to store their links explicitly. Yet another factor is that modern caches will expend bandwidth on speculative prefetches to reduce latency. This can help for both flat and linked data structures. For example, if you have an AST you should linearly allocate the nodes in the anticipated traversal order to get help from the prefetcher. (And please slim down those fat AST nodes.) If you did that for our linked-list summation example, you'd pay the 15 cycles for the first iteration but then you'd pay 5 cycles for the remaining entries in the same cacheline and also 5 cycles for all remaining entries in other cachelines since the prefetcher will have kicked in. So aside from the startup latency, you're back to running at the same speed as when the list started out in L1. But you're still losing out on the out-of-order execution for the L1 fetches: 4 cycles on a modern core is an opportunity cost of 16 instructions. You're reducing your core to a souped up 486.
By the way, for this particular toy example you should go beast mode with SIMD instructions for the array case which will net you another factor of 4x to 32x depending on the element size and the width of your vector unit and then you parallelize across your cores to get another factor of 4x. While those particulars might not generalize to less simple problems, it illustrates that the major issue with linked structures is that they force serial processing.
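For readers who want to see the two loops being compared, here's a minimal JavaScript sketch; it won't show you cycle counts, but it makes the dependent-load chain visible:

// Array: the address of element i+1 is known without touching memory, so the
// hardware can overlap the loads for upcoming elements.
function sumArray(a) {
  let total = 0;
  for (let i = 0; i < a.length; i++) total += a[i];
  return total;
}

// Cons-style list: the address of the next node must itself be loaded from
// memory, so each iteration waits on the previous pointer fetch.
function sumList(node) {            // node = { car: value, cdr: nextNode } or null
  let total = 0;
  while (node !== null) {
    total += node.car;
    node = node.cdr;                // the dependent load that serializes the loop
  }
  return total;
}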
> cons lists are kind of slow because they don't play well with CPU caches.
Cons lists were slow on the IBM 704 too. Good thing that didn't stop anyone, right?
> Are there any ideas how to adapt Lisp (or LISP) so that it plays well with current CPU architectures?
An ARRAY feature was described in the 1960 Lisp manual. That's probably because it was recognized that linked lists weren't the be-all data structure even on the IBM 704; sometimes it's nice to have compact storage and fast random access.
Can we move past this in 2020?
Let's leave this to the people who have a single aggregate structure in their programming language that is ambiguous between list and array.
If I can elaborate just a bit on kazinator's comment: While it's still possible to use cons cells as one's sole data structure in Lisp, no Lisp programmer with any experience does that. Making cons cells faster would be like forcing horses to drink Red Bull so they could pull chariots as fast as cars. It's a solved problem -- just use a car (npi) -- and there's no need to abuse any horses.
> Cons lists were slow on the IBM 704 too. Good thing that didn't stop anyone, right?
The IBM 704 didn't have a cache hierarchy. The problem that the person you're replying to is talking about didn't exist on that architecture.
Accessing any memory location on an IBM 704 took the same time, so it doesn't matter if your values are consecutive in memory as in an array, or somewhere far away as possibly in a linked list.
But on a modern architecture reading a value far away can cause a cache miss.
What do you think the time penalty is for a full cache miss? Something small? Maybe a couple of times slower? No, the difference between an L1 hit (the common case when walking an array) and a cache miss (the common case when chasing a linked list) is around two orders of magnitude.
It's literally multiple orders of magnitude worse relatively than it was on the 704.
Ideally modern Lisp implementations would use some kind of variant of the storage strategies pattern, as used for languages like JavaScript, to give the same semantics as a cons cell but actually using a cache-friendly implementation, but I think this is an unsolved research problem.
I remember reading something someplace that was something like ... the problem with being old and full of wisdom is it's hard to share that wisdom with the people that need it most without sounding like an old condescending jerk. Plus, they won't listen anyways. The best you can hope for is that when they are old and full of wisdom they'll remember you and think "hey that old guy was right" . I know when I was 25 I was damn sure I knew better than anyone twice my age.
> I know when I was 25 I was damn sure I knew better than anyone twice my age.
This is why I've long believed voting systems should restrict the voting age to 16-28. After all, whom would you rather have voting: people wracked with doubt and who fear their own ignorance, or people who know everything?
But actually, "prepared" is often literally the case.
Before critical depositions, experts spend a day or two, with attorneys and consultants, practicing replies. And experts are very cautious about answering unexpected questions.
You are starting too late. In my experience 10 years old is when the all-knowing really kicks in. At least that is that age when each of my children gained omniscience. Apparently.
A while ago I was arguing with someone online about the best way to structure a matrix library. Then around four years later I realized he was completely right and I just didn't know enough to realize why. I'm sure this is a universal experience.
One of the founders at my work was really good at designing production software. After working with him for a long while, I picked up some of his insights. After a design meeting where I saw him shaking his head (and I knew why), I asked why he didn’t speak up. He said that you can’t always tell someone the solution; sometimes they have to figure it out on their own. Not sure yet if I fully agree, but there is something to be said for that.
It was so long ago. If I were to haphazardly venture to guess, likely something around assuming the network is reliable, scheduling retries, caching or cache invalidation.
One thing a number of people told me when I was younger was “you should travel when you’re young and don’t have kids”.
... and I’m happy to have listened and acted on that advice.
Not everyone will listen, but sometimes there’ll be someone that does, or having heard the same advice numerous times from different people it’ll start to sink in at least a little.
For engineering/creative endeavours at least, the trick is to keep making things that support what you say. It's pretty easy to get people to listen to you if they respect your skill. On the other hand, words are cheap.
If somebody is trying to school you, and there is reason to believe you work in a genius factory, then it's ironic that the school-er doesn't do their homework on the school-ee to understand what they might be doing.
Sounds like someone mentioned how awesome a Lisp is. Ironically, he said it to someone who has done a bit of Lisp before. There ain't much to the OP story, or at least he hasn't divulged that much, but the song he wrote is pretty cool.
1) he knew he did LISP-like languages and wanted to share
2) he didn't know he did LISP-like languages and wanted to share
3) he was showing off.
4) the other choices.
I went with 3), based on my 40 years' experience in computer science.
Maybe he just didn't recognize his face? Or might have recognized the name but lacked knowledge about his career? Remember Hanlon's razor: "Never attribute to malice that which is adequately explained by stupidity." Or, more politely and from a different angle, don't assume that the collection of facts that you happen to be aware of is obvious to everyone.
Imagine telling Brian Kernighan about a cool language you found called C, not realizing Brian wrote a lot of C.
Imagine telling Dave Thomas about a cool language you found called Ruby, not realizing Dave has written a lot of Ruby.
Imagine telling Douglas Crockford about a cool language you found called JavaScript, not realizing Douglas has written a lot of JavaScript (much of it full of bad ideas, but that's irrelevant in this case).
Imagine telling Harold Abelson about a cool language you found called Scheme, not realizing Harold has written a lot of Scheme.
Imagine telling Randal Schwartz about a cool language you found called Perl, not realizing Randal has written a lot of Perl.
Like all of the above are for their respective languages, Peter Norvig is a well-known master of LISP who has written a seminal work about the language.
q.v. Paradigms of AI Programming: Case Studies in Common Lisp
He also wrote Teach Yourself Programming In Ten Years, which many people who know basically nothing about LISPs have encountered and consider important.
If I had any objection to the brief story's use of the term "ironic", it would be that many LISPers might not be as enthused about Clojure, so it's possible Peter hasn't really looked into it. Your question implied that Clojure is just "Lisp", though, and if we're talking about "Lisp" in a generic sense (i.e. the LISP family), I think Peter Norvig is one of those luminaries that everyone who pays attention to the social context and the most important written works about LISP would know. I don't even pay that much attention to those aspects of LISP knowledge, and I know about him in the context of LISP.
That's not to say everyone who has an interest in LISP should be thought deficient for not knowing about Peter. It's fine if you haven't stumbled across him before. Sometimes, people just miss things that, in retrospect, might seem like something they should have known.
That doesn't mean you have to even have an interest in LISP.
This should, hopefully, give you a sense of why Peter might think it was "ironic", even if "ironic" feels like a slightly questionable choice of term there. I'm sure to him it felt a little ironic, and that makes sense to me as an emotional response, at least.
Does that help?
(Imagine telling William Shakespeare about the Renaissance era techniques of being a playwright, or telling James Cooke Brown about a conlang called Lojban. Of course, I suppose James might be dismissive of Lojban, given he tried to control Loglan by proprietary means after creating it, thus prompting others to reimplement it as Lojban, undermining his efforts at such control.)
edit: Note that I'm not saying there's anything wrong with someone not realizing one is speaking with a Big Name in LISP when meeting Peter Norvig. I'm just saying that Peter is perhaps justified in finding it a little "ironic", and jumping to conclusions about why he would feel that way (if that's what you're doing) might not be fair.
I would probably feel awkward in Peter's position. I've been in a similar position before, including once being told about a cool article that I actually wrote as if it would change my life, and I never know what to do in a case like that. How do you point it out without possibly making someone feel offended? Humans get offended at the silliest things sometimes, and I occasionally find it quite difficult to predict.
> While that is a sentiment with which I can wholeheartedly agree, ...
... (since, ya know, I rewrote a Lisp book to make it about Python, so if I said what I really think, there would be no end to the crow I'd have to eat), but anyway ...
>norvig on Oct 18, 2010 | on: Ask PG: Lisp vs Python (2010)
>Peter Norvig here. I came to Python not because I thought it was a better/acceptable/pragmatic Lisp, but because it was better pseudocode. Several students claimed that they had a hard time mapping from the pseudocode in my AI textbook to the Lisp code that Russell and I had online. So I looked for the language that was most like our pseudocode, and found that Python was the best match. Then I had to teach myself enough Python to implement the examples from the textbook. I found that Python was very nice for certain types of small problems, and had the libraries I needed to integrate with lots of other stuff, at Google and elsewhere on the net.
>I think Lisp still has an edge for larger projects and for applications where the speed of the compiled code is important. But Python has the edge (with a large number of students) when the main goal is communication, not programming per se.
>In terms of programming-in-the-large, at Google and elsewhere, I think that language choice is not as important as all the other choices: if you have the right overall architecture, the right team of programmers, the right development process that allows for rapid development with continuous improvement, then many languages will work for you; if you don't have those things you're in trouble regardless of your language choice.
Kenny Tilton (smuglispweeny) tells this story, which ced posted a link to in the same discussion:
>ced on Oct 18, 2010 | on: Ask PG: Lisp vs Python (2010)
>That reminds me of a cool story, in Norvig's talk about Python...
>When he finished Peter [Norvig] took questions and to my surprise called first on the rumpled old guy who had wandered in just before the talk began and eased himself into a chair just across the aisle from me and a few rows up.
>This guy had wild white hair and a scraggly white beard and looked hopelessly lost as if he had gotten separated from the tour group and wandered in mostly to rest his feet and just a little to see what we were all up to. My first thought was that he would be terribly disappointed by our bizarre topic and my second thought was that he would be about the right age, Stanford is just down the road, I think he is still at Stanford -- could it be?
>"Yes, John?" Peter said.
>I won't pretend to remember Lisp inventor John McCarthy's exact words, which is odd because there were only about ten, but he simply asked if Python could gracefully manipulate Python code as data.
>"No, John, it can't," said Peter and nothing more, graciously assenting to the professor's critique, and McCarthy said no more though Peter waited a moment to see if he would and in the silence a thousand words were said.