If I had to put my finger on why Forth is hard for most programmers, it's like this: the structured program theorem suggests using sequence, selection, and iteration to control program logic. Assembly code on hardware architectures assigns all meaning sequence-relative, late-binding the selection and iteration. Forth doubles down on this sequence-first approach through a more versatile assignment mode (the stack) while only minimally accommodating the other two. That makes it easy to implement, while not explicitly addressing expressibility.
Forth written "flat" with shallow data stack usage is assembly by another name, a wrapper for load-and-store; Forth that does everything with intricate stack manipulation is a source of endless puzzles since it quickly goes write-only.
But, like any good language with meta expression capabilities, you can work your way up towards complex constructs in Forth, and write words that enforce whatever conceptual boundaries are needed and thus check your work and use the system as a compiler. That's what breaks Forth away from being a simple macro-assembler. But you have to write it in that direction, and design the system that makes sense for the task from the very beginning. That falls into an unacceptable trade-off really easily in our current world, where the majority of developers are relatively inexperienced consumers of innumerable dependencies.
For me, the main problem with Forth is a lack of names -- most languages (functional, imperative, or even declarative) assign names to things: function parameters, temporary values, and so on.
Even Prolog, which is as far from traditional structured programming as one can go, usually has descriptive names for unbound variables.
Compared to this, Forth is very name-terse. You get "function names" at best, and nothing else. This really makes programs much harder to understand, as it requires one to keep much more in mind while reading the code.
The ANS Forth standard supports named local variables. Unfortunately, Forth dialects vary quite a bit in their implementation and preferred use of local variables, and the semantics of local variables are a bit hairy.
Note that Forth also supports locals. These consume and remove N items from the stack and bind them to names, so you can refer to them as needed within the word's definition.
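For illustration, here's a sketch of the difference, using the Forth-2012 `{: ... :}` locals syntax (supported by Gforth among others; older systems spell it `LOCALS|` or `{ }`, sometimes with reversed binding order):

```forth
\ Stack-juggling version: the ( x y -- ... ) comment is the
\ only documentation of what lives where on the stack.
: NORM2 ( x y -- x*x+y*y )  DUP *  SWAP DUP *  + ;

\ Same word with named locals: x and y are bound from the
\ stack on entry and can then be referenced freely by name.
: NORM2-L {: x y -- n :}  x x *  y y *  + ;

3 4 NORM2 .    \ prints 25
3 4 NORM2-L .  \ prints 25
```

The named version trades a little tradition for a lot of readability, which is roughly the debate in this subthread.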
But as someone who really got into Forth this year, I can assure you that it does get better once you get more familiar with the language. I used to comment the stack effects on each line, and now I barely need them; I write words in one sitting without that much effort.
Other developers are doing tacit programming in J or Haskell, pipe-forward operators in F# or OCaml, and threading macros in Lisp dialects, and they seem to do fine.
It's been a long time since I did anything substantive with Forth, but, as I recall, the big problem here is that most Forth guides get so excited to show you how much rope the language gives you that they forget to teach you how not to hang yourself with it. So you're kind of left to figure out all the little idioms and best practices for stack management all on your own.
>> So you're kind of left to figure out all the little idioms and best practices for stack management all on your own.
That was my experience learning Forth in the 80's. Initially I tried to add compositional features I was familiar with from APL and Lisp. I raged against the limitations of the cell and schemed to use a typed stack. Eventually I became one with PAD and was able to ALLOT peace to my code. At some point I ranted on how the idiom of counted strings could be generalized to all sorts of useful sin and had an epiphany. 2ROT on!
> But, like any good language with meta expression capabilities, you can work your way up towards complex constructs in Forth, and write words that enforce whatever conceptual boundaries are needed and thus check your work and use the system as a compiler. That's what breaks Forth away from being a simple macro-assembler. But you have to write it in that direction, and design the system that makes sense for the task from the very beginning. That falls into an unacceptable trade-off really easily in our current world, where the majority of developers are relatively inexperienced consumers of innumerable dependencies.
Well expressed. I often think of the stack as the syntax of the language, not the runtime implementation. It is also quite difficult to hire for this skill without spending a lot of money.
> That falls into an unacceptable trade-off really easily in our current world, where the majority of developers are relatively inexperienced consumers of innumerable dependencies.
aka, this is hard to do, and a difficult mental model to learn. It's the same kind of thing with LISP'y languages and functional languages (not the same model, but the same level of difficulty).
If this gets you interested in z80 hardware, I would recommend building an RC2014: https://rc2014.co.uk/
I built an RC2014 after CollapseOS was posted last year, and thoroughly enjoyed it.
I ended up adding a front panel, complete with switches and lights, to allow toggling in and executing code without a ROM, and also wrote a HTTP/1.0 server for CP/M (which was an enormous headache for lots of different reasons). Never did get around to running CollapseOS on it though.
I love this project, and check in on it every few months. I particularly like the idea of scavenging hardware:
> With a copy of this project, a capable and creative person should be able to manage to build and install Collapse OS without external resources (i.e. internet) on a machine of her design, built from scavenged parts with low-tech tools.
I'd be interested in trying to build my own proof-of-concept, but my hardware experience pretty much starts and stops with kids' electronics kits from 20 years ago.
Assuming I can scavenge a Z80, what would be a logical next step? What books/other resources should I be reading to learn more?
It gets a bit dry when it gets to the bit that documents the individual instructions, but the overview before that point is well worth reading.
Once you've done that and you're trying to write code, this page is a good reference for the instructions: http://clrhome.org/table/
To build a scavenged machine, you'll need a Z80 CPU, a clock source (I think an RC oscillator would be the simplest working setup?), probably some sort of ROM, an SRAM, some sort of IO device (probably a Z80 SIO/2 for serial), and a 5V power supply. I think that should be all that is required. Then you need to load your code into the ROM at address 0.
Then you can connect the SIO to a USB serial cable (e.g. FTDI) and communicate with your Z80 using a modern PC.
I can't remember if the SIO/2 needs external address decoding logic, maybe you could cope without it if you are happy for it to use up every address?
But building an RC2014 would be easier and more likely to result in a working machine, and would give you almost all the knowledge required to build a scavenged one later if you still want to.
And I'd also happily receive RC2014 or Z80-related questions by email (available in profile), although I'm not much of an expert compared to many of the people who are active on the mailing list.
It is my belief that after civilizational collapse, a working MacBook will still be much easier to scavenge than assorted parts such as a Z80, a memory controller, compatible memory, a writable EPROM for program storage, peripherals such as a keyboard, video circuitry and a screen, and a power supply -- never mind being much more useful.
Not quite so, and the COS author addresses this on his website. (https://collapseos.org/why.html, section "There are two stages of collapse")
To be clear I don't necessarily share the view that this will likely happen, but for the sake of discussion I will assume it will.
A working MacBook will indeed be much easier to find in working order at first, but not only is it much less durable (it isn't even particularly durable by current laptop standards), its parts are also much more numerous, specialized, and hard to find. MacBooks in particular have a very tightly controlled supply chain, but this applies to other laptops too.
Z80-style processors and peripherals are still in use in various industrial and home-appliance products, so the platform remains rather widespread anyway.
This means that while a laptop might be way more useful at the beginning of a collapse, it will probably stop being maintainable much earlier than a simpler computer will. In fact, if push comes to shove, it is plausible to actually build a Z80-compatible processor from discrete transistors.
I think I've had my own moment of clarity that spans both Forth(s) and Lisp(s) and explains why neither is as common as other languages.
In most common languages, there is a complicated base spec that covers many cases and defines a broad range of affordances, plus libraries upon libraries that expand on an already fleshed-out collection of tools.
Forths and Lisps give you the core of an environment, and let/expect you to build on the foundation to create your own implementation. Like someone else in this thread said, N programmers, N dialects. Or, more accurately, every Forth program is its own DSL for accomplishing its work.
I think you can draw this ‘core of an environment’ parallel between Forth and Scheme: both are small languages and they emphasize growing the language to the problem domain [1]. Common Lisp, on the other hand, is a large language: implementations provide much more than a foundational core, and a fairly comprehensive list of libraries exists. I think RPG’s Worse Is Better highlights some of the reasons why CL isn’t as popular as other languages [2].
Off topic, but this (from RPG's Worse is Better) sounds very familiar:
> Part of the problem stems from our very dear friends in the artificial intelligence (AI) business. AI has a number of good approaches to formalizing human knowledge and problem solving behavior. However, AI does not provide a panacea in any area of its applicability. Some early promoters of AI to the commercial world raised expectation levels too high. These expectations had to do with the effectiveness and deliverability of expert-system-based applications.
You are giving technical merits way too much credit.
People will put up with whatever bullshit as long as there is demand and helps them get a job.
The thing is, UNIX was a massive success, and it happened to be written in C. Since then, all successful languages have had to have a syntax familiar from the host system's language.
It was UNIX that killed the Lisp Machine (by being given away for free). Programming languages never got to play a role.
Unix wasn't given away for free; it was encumbered by AT&T licensing, and you needed hardware that certainly wasn't free, and still not affordable to individual consumers. But Unix was a resource-efficient system that scaled down to cheaper hardware with less RAM.
Even the dyed-in-the-wool Lisp enthusiasts headed by Richard Stallman were compelled to reproduce Unix, even though their stated goal was to have a system running Lisp.
This is not the complete picture. Stallman decided to reimplement Unix because it had a combination of popularity and technical merit.
For instance, MS-DOS had a larger installed base than Unix at the time, but ... enough said about that, right?
Imitation is the sincerest form of flattery, as the saying goes.
A hacker like RMS isn't going to pour years of coding into making a C compiler, and Unix utilities, in his spare time, if he thinks those technologies do not have merit.
Lisp peaked very early, and grew at a voracious pace compared to progress in hardware. The result was that it required "big iron". Lisp was a victim of the same process that killed the mainframes: a rebooting of the computer industry with cheap, but (initially) under-powered microcomputers, to which legacy systems were not able to migrate.
There also arose a new generation of hackers brought up on the new microcomputers who didn't care for, know or else even have access to legacy systems. As microcomputers showed signs of advancement, old hackers who had learned how to make things fit into small memories 15 years prior brandished their skills, which popularized tools like Pascal and C. Turbo Pascal for MS-DOS PC's fit a compiler and IDE into under forty kilobytes.
In the 1980's, people who wanted to use their Lisp techniques to deploy into the microcomputer market were faced with rewrites. A blatant example of this is CLIPS: an expert system written in C which retains the Lisp syntax of its predecessor. https://en.wikipedia.org/wiki/CLIPS . CLIPS was inspired by a Lisp-based system called OPS5. But that itself had also been rewritten in Bliss for speed: https://en.wikipedia.org/wiki/OPS5 .
Why not say "Every ~~Forth~~ program is its own DSL for accomplishing its work."? For moderately complicated programs in any language you choose, it can take a long time to grok how the literal code relates to solving the conceptual problem. No language can build in every abstraction, and no programmer has time to learn them all.
I think you are close to part of an answer, but it isn't because Forth and Lisp expect one to do more work than other languages. If anything, they expect one to do less. The problem is programmers feel lost because there is no way to differentiate the bedrock of the language from higher abstractions. C has operators and statements and keywords that tell you there is nothing "underneath" what you are looking at. With Forth, everything is words. With Lisp, everything is lists.
To be fair, it is very common for Forth programmers to redefine the interpreter as they go. You literally change the language in your program. That's a very different expectation for other kinds of languages.
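A minimal sketch of what "changing the language" looks like in standard Forth: `CREATE ... DOES>` lets you write defining words -- words whose job is to define new words -- which is how the compiler itself gets extended. (The names `CONST` and `ANSWER` below are made up for the example.)

```forth
\ A word that mints new words: each minted word, when executed,
\ fetches the value that was compiled into its own data field.
: CONST ( n "name" -- )  CREATE ,  DOES> @ ;

42 CONST ANSWER   \ defines a brand-new word, ANSWER
ANSWER .          \ prints 42
```

From the programmer's point of view, `CONST` is now just as much part of the language as `:` or `IF` -- there is no visible seam.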
There are similar examples in just about any language out there. People use whatever tools the language ecosystem provides to change the language to fit some problems better. Some languages are easier to change and extend, some are harder, but that doesn't stop people from trying to do this anyway.
I think there's a level of familiarity with the language above which changing it is a natural thing to do. It can take years before you learn a "normal" language well enough to be able to do this, but with Forth, Scheme, Prolog, and the like, you're basically required to do this from the get-go. My intuition is that these languages simply target advanced, already experienced programmers, while completely ignoring the beginners. So it's more of the optimization for a different user-base, IMO. That would also explain how these languages are still alive, despite their communities being very small for the last 50 years.
I hate this headline but the project is very cool. People like me who were ignorant should start here instead of the Github to get a better sense of the design goals (and why they are using Forth): https://collapseos.org/forth.html
I've read that page and I'm still wondering why they're using FORTH.
C has excellent portability and performance. The article agrees with the general consensus that C is also generally a better language for the programmer. So why use FORTH? What does it matter that it can do cute things with self-hosting? What does 'compactness' matter?
If the goal is to build a portable means of writing programs for Z80 and AVR, why not develop a C-like language, or an IR, or put work into developing a serious optimising C compiler targeting the Z80? I get the impression that's a relatively unexplored area for (rather niche) compiler research.
The rest of the storyline of this page covers what I think is the core concept that Chuck Moore started from, and defines Forth as its own environment: the core of the language that needs to be defined in order to write and execute Forth words is surprisingly small. Once you have the core words in place, you can layer on exactly the constructs you need.
I won't claim to be a proficient Forth author, but I've used it to accomplish a couple of rather odd one-off projects, and it is amazing how much you can do, as long as you're not expecting graphics or networking or huge storage needs.
Compactness matters because when you're trying to bootstrap into a tiny (or hacked together custom build) environment, the tiny bootstrap footprint means you can be up and rolling that much faster.
C is all good, I've been writing it for a long time, but I'd much rather get a Forth core going in raw assembly than even a stripped down to brass tacks C compiler.
Seems like an IR would make sense here. Perhaps a stack-based one. If you need a compact program representation, that doesn't mean you have to use FORTH as your source language.
A Forth word is simple enough that defining any other level of handling just isn't necessary. Trying to put an IR in would just be another layer and an unnecessary complication.
Using Forth gets pretty close to being the most compact representation all by itself, there's literally no extra tooling needed. No compiler or other translations, it's all just there in the words.
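To sketch what "no extra tooling" means in practice: a new definition is nothing but a sequence of already-defined words, so the representation stays compact by construction (word names here are made up for the example):

```forth
\ Each definition is just a list of prior words; the "IR" and
\ the source language are the same thing.
: SQUARE ( n -- n*n )  DUP * ;
: SUM-SQUARES ( a b -- a*a+b*b )  SQUARE SWAP SQUARE + ;

3 4 SUM-SQUARES .   \ prints 25
```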
I think the argument against a custom IR is that once you start optimising your IR for compactness you'll probably end up near Forth in the design space anyway. As you say perhaps making the IR stack-based.
Once you're at that point you may as well just use Forth, especially since it's got a proven ability to work in these kinds of resource-constrained, self-hosting/self-bootstrapping environments.
> you'll probably end up near Forth in the design space anyway
Good point. Related to this, FORTH can be treated as a target platform for compilers, although I don't think there are many mature compilers that do this.
I wonder if CollapseOS will ever seriously target heavyweight platforms like x64 (and not just through emulation with libz80). I suppose that's out of scope, but it would open the door to JIT.
> you may as well just use Forth
Presumably it could be a little more compact if a less human-readable variation were used, no? FORTH uses DROP and THEN, which could be shortened at the cost of readability.
Forth basically is an IR between high level and whatever your concept of "low level" is. The nature of Forth is that Forth words can be written as compositions of other Forth code, as syscalls to native code, or as chunks of native code.
I say "concept" because you can run Forth code on an interpreter, or via any number of semi-compiled or compiled approaches that get closer and closer to assembly.
For a machine with more registers, a more optimizing compiler might be worthwhile, to be able to keep the stack in registers. The Z80 has few enough that I can't see it being an improvement.
You're describing a hypothetical piece of software that would be easily two or three orders of magnitude more complex than the current Forth, and potentially be entirely unfeasible to run on a resource-constrained 8-bit system. Self-hosting isn't a cute trick, either- it's a functional requirement.
Forth makes bootstrapping and cross-compilation a straightforward exercise. C provides no help whatsoever until you've climbed to the top of a mountain of abstraction.
Because after the end of civilization, you may be inputting your first programs on punch cards or something similar, and your system's memory capacity might be measured in a few kilobytes, not gigabytes. Compactness is a huge deal in this context.
Having spent a couple of hours as a kid typing in a https://en.wikipedia.org/wiki/SpeedScript binary printed in a magazine, one byte at a time, I can confirm that compactness is valuable.
This is the first time I've heard of this project, and I love it.
I often talk with a friend who's a historian, and that makes me realize how extremely short-sighted our relationship with time is, as an industry.
The Internet Archive is an immensely valuable project, as are all the websites archiving old documentation, etc. But I don't think a lot of people realize the value of the things they're destroying every time they execute a delete statement in a DB or a filesystem.
This Collapse OS ambition to be able to "bootstrap" something useful on any kind of primitive hardware and sustain the passage of time (or a catastrophic event) may have immense value in the future.
DELETE statement... That's far too obvious.
With every UPDATE statement you are forgetting the past, yet it's still the go-to DB paradigm!
It's okay to constantly lose information apparently...
And it's not like we don't have alternatives. There is Datomic of course, but also juxt/crux and DataHike.
> The Z80 asm version of Collapse OS self-hosts on a RC2014 with a 5K shell on ROM, a 5K assembler binary loaded in RAM from SD card (but that could be in ROM, that's why I count it as ROM in my project's feature highlights) and 8K of RAM. That is, it can assemble itself from source within those resources.
Meanwhile, you can't compile rustc on a 32-bit system because it runs out of address space...
There seems to be a common conception that Forth is always interpreted by some kind of virtual machine. But it's quite possible to compile Forth all the way down to native machine instructions with no loss of interactivity and without losing the rapid bootstrap capability the author describes. Such a Forth runs almost as fast as C (almost, because modern out-of-order and other processor optimizations probably don't work well on compiled Forth code).
I used this approach when I bootstrapped a Forth compiler on the TI 34010 graphics chip in a similar fashion as the author. It even had local variables so you didn't drive yourself mad thinking about the stack all the time.
My favorite commercial example of such a Forth was Mach 2 Forth on the early (pre-OSX) Macs. I don't know if any modern Forths do down-to-the-metal compiling or local variables, but I'd be interested to find out.
I believe most modern Forths will have these; Gforth does, for example.
> down-to-the-metal compiling
I think there are some that do this, but back in the day, the opinion of lots of forth programmers was that threaded code was good for the 90% of the program where performance didn't matter, and added an inline assembler for areas where it did.
And of course, you can always add support for local variables yourself. For an example, see http://turboforth.net/resources/locals.html, which takes 808 bytes to implement. That's hefty for some systems where one would use Forth, but, FTA, "Placing key values in locals will actually both increase performance, and make your code smaller", so you can earn that back if you use the facility enough.
I thought this was a pretty fun project and the idea is just cool right away, but then I went on reading and was surprised that the author seems to be serious. I would be really curious to hear his opinion on why the supply chains should collapse before 2030. Obviously, there would be no shortage of people in the comments who could start speculating on why it might happen, and of course I myself can also provide a couple of feasible scenarios, but none of us were serious enough to actually start this project, so his opinion on the matter is quite a bit more interesting to me than it would normally be.
If you had told me last year that we would have a pandemic, lockdown, massive economic recession and job loss and now riots and protests etc and all at the same time I probably wouldn't have believed it. I'm starting to think the world is a lot more fragile than we all thought.
Not “we all” thought — plenty of people have been saying this would happen for years. Specifically, epidemiologists and black people. Maybe we should listen more!
> I would be really curious to hear his opinion on why the supply chains should collapse before 2030
Think about seat belts. Do you ask the driver:
> "I'm really curious to hear why you think you'll crash the car?"
when s/he puts on the seat belt? S/he'd likely say "No I don't think that at all".
And it's not impossible that the Collapse OS author might have a similar reply. Still, the project can be time well spent, like a seat belt, in case of the unexpected.
Think about: 1) Likelihood-of-Global-Supply-Chain-Collapse x How-Bad. And 2) How-much-does-Collapse-OS-mitigate-the-bad-things. And compare that, with 3) time spent building C.OS.
He said in the Why? article linked above somewhere that he really does believe, even if he has no evidence, that we'll experience a collapse sometime around 2030. That's not really comprehensible to me, though your description of a possible rationale makes more sense.
In it, he writes: "... two important stages of collapse ... the second one is when, in a particular community, the last modern computer dies ... decades between the two ... Collapse OS won't be actually useful before you and I are both long dead"
making me wonder if one scenario he has in mind is the biggest countries in the world stopping trading with each other, so it won't be possible to get more rare-earth metals (needed for today's computers, right?). And then, maybe, downhill from there over the following 50 or 100 years? -- But not _necessarily_ a nuclear winter or something that dramatic and sudden.
And ... He also writes:
> What made me turn to the "yup, we're fucked" camp was "Comment tout peut s'effondrer" [a book]
I think you'd find the answers in that book then? Seems the book got translated: "How Everything Can Collapse" [in our lifetime] by Pablo Servigne.
There's apparently an English translation released very recently: "How everything can collapse: A manual for our times."
I haven't read either version, but piecing together the thesis from reviews, it seems to be a somewhat more evolved form of Malthusian catastrophe and Peak Oil(/Energy), with a dash of climate change alarmism [1] and Piketty-style concern over inequality. And technology won't save us because... well, I haven't found anyone who can elucidate that concern. It seems to me that the commenter who wrote "this book seems to be only for those who were convinced beforehand" has it right.
[1] I don't like using "alarmism" here because it suggests that I don't think it's a problem (I do), but I can't think of a better succinct description of "if we don't fix this literally tomorrow, we're totally screwed."
Technology isn't a complete panacea, but humans in general are highly adaptable. History seems to indicate that innovation is the most common outcome of Malthusian catastrophes, so it seems more than reasonable to me to ask that anyone arguing for a Malthusian catastrophe needs to also argue why innovation is not going to again be the outcome.
> Technology isn't a complete panacea, but humans in general are highly adaptable. History seems to indicate...
The most critical process for human success -- population growth -- is an exponential one. Pretty much all of it happened in the age of fossil fuels. We have 200 years (out of 200,000 of human history, according to Wikipedia) of experience with global populations >1 billion, and we are currently cruising at about 8 billion souls on the planet. All of those 200 years are in the context of freely available and rapidly growing utilisation of fossil fuels to power the logistics networks enabling the growth.
History doesn't show us being adaptable; history shows that if something happens to the solid/liquid carbon supply, around 7/8 of us can be expected to die. And we can statistically all but guarantee that something will happen sooner or later over a long enough period.
Our major reason to be hopeful is that our history isn't a guide, and that something other than oil really makes strides. Maybe nuclear, maybe renewables.
> History doesn't show us being adaptable, history shows if something happens to the solid/liquid carbon supply around 7/8 of us are expected to die.
I disagree. Look at the pandemic response here vs. the Spanish flu, or the Bubonic plague.
We also have plenty of alternate energy sources to diversify our power infrastructure, and many nations are taking these steps. The past few decades have been a series of lessons on the importance of resilience over efficiency, and we're slowly learning this lesson.
If you're not terrified at the Western response to the Coronavirus, you're not paying attention.
The low-systemic-risk events we are living through now prove that the Emperor is naked. An event that produces moderate to serious systemic shocks would be our DOOM.
Regardless of how insufficient the response was, it was still far swifter and more effective than the response to the Spanish Flu. Also, the knock-on effects will drive significant changes for years to come. We learn slowly, but we learn.
Who is we? The divided population that's ready to ignite? The corporations that are ruining the planet? The politicians that are as effective as puppets? The billionaires that are stocking their bunker mansions? The one-percenters that are doing the same (minus the extravagant luxury)?
The system is on a hair trigger and the control room is empty.
But looking back at our history, one should also keep in mind that it did take hundreds of years for Europe to recover from the fall of the Roman Empire.
'Innovation will fix things eventually' is cold comfort for the generations of people living in the interim.
Yes. But why? Multiple episodes of 'the Plague' and climate swings (probably caused by volcanic eruptions somewhere else).
edit: I mean, look at what is happening now. All sorts of disruptions because of some sneezery (regardless of real or imagined danger, it's the policy that matters). Now imagine further disruptions by volcanic ash particles and gases in the atmosphere. So F-ed!
There is no intriguing backstory for it, like there is for CollapseOS, but it's a ~6-kiloword, practical Forth environment for Microchip PIC microcontrollers, which are a lot simpler than the Z80, btw...
The source code is trivial to understand too.
My father is still using it daily to replace/substitute Caterpillar machine electronics or build custom instruments for biological research projects.
We started with Mary(Forth) back then, when the first, very constrained PIC models came out, with an 8-deep stack and ~200 bytes of RAM. Later we used the https://rfc1149.net/devel/picforth.html compiler for those, which doesn't provide an interactive environment.
I made a MIDI "flute" with that, for example, which was fabricated by sawing a row of keys out of a keyboard; it used a pen housing as a blowpipe and a bent razor with a photo-gate as the blow-pressure detector...
There are more minimal Forth OSes, which might be more accessible than a Z80-based one.
I would think those are more convenient for learning: how can you have video, keyboard and disk IO, an interactive REPL, and a compiler in less than 10KB?
But if you really want to see something mind-bending, then you should study Moore's ColorForth!
I found it completely unusable, BUT I've learnt an immense amount of stuff from it:
https://colorforth.github.io/
The weird thing about history is that it only makes sense backwards. Someone 500 years in the future might think of this as the Gutenberg printing press. It is often difficult to appreciate the magnitude of actions in the present.
The Gutenberg press was used in the real world. There are lots of OSes being used in the real world for real work.
So why should someone in 500 years think this is more relevant than, for example, Linux?
And if there is a real collapse, then I also do not really believe everyone will make a run for CollapseOS. There are other options: all the ones tinkerers and hackers already use today.
We don't know how history will pan out. Just wanted to share the weird feeling I sometimes have about current events. For all we know, post-collapse technology could be based off QNX reverse-engineered from luxury vehicles. It's just fun to think about how events can have that unexpected butterfly effect. I sometimes catch myself with the implicit belief that we are "late" in the story of history. But given the possible expanse of time, we might be the "hunter-gatherers" of a civilization we cannot even begin to comprehend. This might just be the second of the dark ages.
I love this project. Thank you for posting it. The idea of a post collapse operating system reminds me of slackware/subgenius’s old slogan: “...the world ends tomorrow and you may die!”
I’ll be trying to compile collapseos, write forth and load slackware on floppies in a few years then. (Will systemd survive civilisation’s collapse, especially when it caused it, that is the question....)
Thanks. I'd never heard of that law before. I think he's serious. I'll appreciate his work both pre and post civilisation collapse. I like his software aesthetic too.
Maybe I'm getting old, maybe I'm seeking more control, maybe the world has become needlessly complex, but there's a certain appeal about returning to more manageable days, where it was possible to fit a software and hardware system in your brain, and your brain was swimming with ideas on how to use a limited system rather than drowning in layers of complexity.
I liked playing the Fallout games, I grew up with 8 bit machines, so I'm all in at using a post-apocalyptic scenario as a setup for a thought-experiment tech stack. In a "ha ha only serious" manner.
But on the other hand, real preppers scare the effing s out of me.
> Some Forth enthusiasts like the language for itself. They think that its easier to express much of the logic of a program in this language. Many books and tutorials are touting this. [...] That's not my opinion.
In my opinion, it's not that FORTH code is hard to read, but that FORTH gives the programmer so much freedom that every program becomes its own microcosm of DSLs.
"That being said, I don't consider it unreasonable to not believe that collapse is likely to happen by 2030, so please, don't feel attacked by my beliefs."
Triple negative? Quadruple negative? (If we count the second "don't".)
Compare something like:
"I consider it reasonable to believe that collapse is unlikely to happen by 2030, so please, don't feel attacked by my beliefs."
It is almost as if the grammatical structure reflects the life perspective of the author.
I don't really want to participate in developing this silly argument, but people making such comments should always consider the possibility that there is a reason for somebody to say things the way they do, and that your "simplification" in fact loses the point of what the author is trying to say.
So let me translate for you.
He doesn't "consider it reasonable to believe that collapse is unlikely to happen by 2030". In fact, given the importance of the matter, he thinks it better to assume the scenario in which his project will turn out to be life-saving. But if a collapse before 2030 doesn't seem likely to you, and you don't believe the evidence supporting that claim, he wouldn't call you silly (unreasonable) for that, so he and you can work on the project together even if your forecasts differ. Don't worry about it too much.
This is the perfect comment - pedantic, snarky, and more focused on the superficial features of the article than the content, mixed in with some grammatical pseudo-psychology. Check, check, check, and check! This is peak tediousness. Well done!
There's a difference between "x is not negative" and "x is positive" (¬(x < 0) ⇏ (x > 0)). Why shouldn't a similar subtlety (or a larger one) exist in prosaic language?
Yes, and 2 is not negative, and 3 is not negative, and 4 is not negative, etc.
That is the point. This style of communication is indirect and ambiguous. This is just negative followed by negative followed by negative, etc.
Just say what you mean. In the affirmative. Overuse of negatives is the functional equivalent of "spaghetti code" in written communication. Not easy to follow.
Anyway, some readers missed the point of the comment. It is not every day that one sees so many negatives in one sentence. Most, however, got the point, and I thought the commenter who crafted a version of the sentence with even more negatives was hilarious.
“not unreasonable” expands to “You could provide reasons that seem valid, even if I don't agree that they support your conclusion.”
“reasonable” expands to something more like “The reasons you have provided support your conclusion.”
“reasonable” can work in this case, but it doesn't state as clearly that the speaker disagrees with your conclusions.
In a more general sense the “not un-” pattern is a marker for something that is qualitatively similar to the corresponding simple positive attribute (e.g. “reasonable” or “popular”) but not to the extent of the category of things fitting that simple positive attribute. That is, category “reasonable” is a strict subset of category “not unreasonable”.
The point of the comment was not questioning the meaning of the sentence. The phrase "not unreasonable" is quite common. With some effort, we can decipher the meaning. The point was that there are other ways to express that meaning, using fewer negatives.
For example, something like this:
"I don't consider it unreasonable to believe that supply chains will survive to 2030, so please, don't feel attacked by my beliefs."
- The concept of pushing/popping things onto/off the stack is a relatively straightforward programming model when done consistently
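To make that model concrete, here is a toy illustration in Python (my own sketch, not any particular Forth): each word takes its inputs from the top of the stack and leaves its results there, so programs are just sequences of words.

```python
# Toy model of Forth's data stack: words pop their arguments off the
# top and push their results back. No named parameters anywhere.
stack = []

def push(n):        # a literal pushes itself
    stack.append(n)

def add():          # + : ( a b -- a+b )
    b, a = stack.pop(), stack.pop()
    stack.append(a + b)

def mul():          # * : ( a b -- a*b )
    b, a = stack.pop(), stack.pop()
    stack.append(a * b)

def dup():          # dup : ( a -- a a )
    stack.append(stack[-1])

# the Forth phrase "2 3 + dup *" computes (2+3)^2:
push(2); push(3); add(); dup(); mul()
print(stack)   # [25]
```

The `( a b -- a+b )` comments are the conventional Forth "stack effect" notation: what a word consumes before the `--` and what it leaves after.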
One of the old competitors to the likes of UEFI and uBoot is OpenFirmware (also known as OpenBoot), for which the primary UI is a Forth shell; OpenFirmware was the BIOS equivalent for Sun's SPARC workstations/servers (and still is for Oracle's/Fujitsu's SPARC servers, last I checked) and most POWER hardware (including "New World" PowerPC Macs), among others. About the most delightful pre-boot environment I've used; it's a shame it didn't catch on in the x86 or ARM space.
In some ways it is even more abstraction-friendly than C because of how easy it is to manipulate functions or redefine its own interpreter/compiler. There was a nice example of a Tiny C compiler written in Forth by Marcel Hendrix posted to comp.lang.forth, with a Forth-defined VM as target. I posted a story about it at https://news.ycombinator.com/item?id=23455548
The big problem with Forth is software reliability: ad-hoc Forth code is hard to reason about in any generality. Languages like Factor show that dialects of Forth can be much better in this regard.
I got into Forth recently - using Gnu Forth - but got bogged down at my inability to do graphics. There are hardly any Forth programming videos of any kind online, besides the "101" kind, but I found one by a guy who managed to get graphics/windows going, I think with GForth. (Can't find it on youtube now.) Looked mega-complicated. So lately back into Tcl/Tk, which also satisfies my "bizarre, powerful and very cool language" itch but actually makes graphics/windows super-easy. I do think Forth is awesome though!
After getting into Forth I got into PostScript, which does have most of the Forth fun/freedom taken out of it. It's not usable for GUI-type programming is it?
This is (the now pedagogically famous) JonesForth. It's ~2000 lines of HEAVILY commented assembly, and at the end you have everything you need to start writing a fully functional Forth+standard library (which is done in jonesforth.f in ~1800 lines of heavily commented Forth). That's not even as tiny as a Forth can be, and it gives you a lowish-level language that is as modifiable and extensible as any Lisp, if not more so.
There is basically no faster way to get from bare metal to a comfortable humane and interactive (so you can write more stuff) computing environment than writing a forth.
This is the kind of malarkey I'd get up to on a personal project of mine. Like, I wonder if I'd get farther if I'd written it in Forth/TinyScheme/Erlang? May as well completely overhaul it and find out!
Love this project. I wonder why not make it run on smartphones? I'd imagine that's the vast majority of computers out there today. Easier to find a working smartphone than retro consoles.
He explained that a bit here[1]. The project targets a world wherein existing computers have degraded to a point beyond repair and we have to scavenge parts that can be retrofitted. His reasoning is that under such conditions, parts that can be thru-hole soldered, such as 8-bit Z80 CPUs, will be much more robust than surface-mounted ARM chips.
CollapseOS? Meh. What we need is ApocalypsOS. Something to play Tetris on after having hauled a bucket of well water (and some wood) to the 23rd-floor apartment where I try to survive.
This. Pforth works really well and the ".S" word is straightforward, while pfe is not.
GForth is too difficult; although you can code for the Game Boy with it, pforth is more than enough in order to learn.
Forth is Lisp, but with composition as the basic operator instead of application. The result, of course, is a wildly different language but one which inspired the same degree of fanaticism among its adherents in their search for simplicity and elegance.
I rather disagree with your first sentence. Forth is not Lisp, nor is Prolog or APL Lisp, and none of these languages are one short conceptual leap away from the others. Even a very rudimentary Forth can express concepts which have only hazy correspondences in a Lisp, like words which twiddle the return stack or yield a variable number of results on the parameter stack.
I find it misleading at best to casually intimate that Lisp is some kind of ur-language which exemplifies simplicity and thus lies at the root of any design space. Fans of Lisp are overly eager to stake claim upon ideas which do not belong to their language.
I don't mean to bite your head off about it; this is just a trope I find tremendously frustrating.
The core of Lisp is a small set of rules about abstraction and application using lists. Forth, at its core, can be formalized with the exact same set of rules (one to one correspondence) but with stacks and composition as the operator. The languages are very VERY different, but there is a similar bare bone formal framework at the bottom.
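The parent's claim can be sketched directly (a toy model in Python, not any real Forth): read each word as a function from stacks to stacks, and then writing two words next to each other denotes composing those functions.

```python
# "Concatenation is composition": every word denotes a stack -> stack
# function, and a program is the composition of its words in order.

def lit(n):                       # a literal denotes "push n"
    return lambda s: s + [n]

def add(s): return s[:-2] + [s[-2] + s[-1]]   # ( a b -- a+b )
def dup(s): return s + [s[-1]]                # ( a -- a a )

def concat(*words):               # juxtaposition = left-to-right composition
    def run(s):
        for w in words:
            s = w(s)
        return s
    return run

# the phrase "2 3 + dup +" is the composition of its five words:
program = concat(lit(2), lit(3), add, dup, add)
print(program([]))   # [10]
```

Compare Lisp, where the basic combining form is application of a function to named arguments; here nothing is ever named, and programs factor by splitting the word sequence at any point.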
It’s only the newer “concatenative” languages that have explored this formal correspondence. See this essay by Jon Purdy (author of Kitten), “Why Concatenative Programming Matters”:
This article talks about “linear Lisp” with a stack, and you basically end up with something Forth-like: http://home.pipeline.com/~hbaker1/ForthStack.html (“Linear Logic and Permutation Stacks--The Forth Shall Be First”).
What Forth and Lisp have in common is that they're interactive language construction sets. You don't just write domain code the way you do in C. You invent a language that matches the domain and use that to solve the problem. Don't like the syntax? Change it. Don't like the conditional statement? Invent a new one. Don't like the compiler? Improve it.
These kinds of operations are behind the curtain in C but they're accessible to everybody in Forth and Lisp.
Collapse OS was initially written in assembly for the Z80, a microprocessor first launched in 1976. The fact that Forth is 6 years older than the Z80 seems an interesting enough step further back in time to be highlighted.
There has never been one Forth. In a world with N Forth programmers, one can expect roughly N implementations of Forth. Perhaps closer to 2*N. An ANS spec exists, but most Forth programmers would say it misses the point.
Forth is a collection of ideas and philosophies toward programming as much as it is a language. Two stacks and a dictionary. Threaded code (direct, indirect, token, subroutine, etc...). Collapsed abstractions. No sealed black boxes. Tight factoring.
Well, shit... I've been a lil frustrated by the lack of a clear resource and consequently haven't gotten far on a couple half-hearted attempts to learn it. I can just make it up myself? A little late in the weekend for a new project, but I'm suddenly interested again.
This is a slightly off topic question for you, do you think the attributes you named are required for having a “Forth” or are they just the most common implementation characteristics? My current pet language project certainly looks Forth-like, with different semantics and a modal dependent type system, but is not threaded nor does it have the traditional stacks and dictionary. I don’t know if I’d call it a Forth, but maybe it is?
"The dictionary" can be implemented in many ways; simple or clever, namespaced or not. A language without any form of local or global name binding would presumably need quotations ala Joy/Factor, which means your stacks probably have to store objects rather than raw numbers. This tips the scale in the direction of a higher-level language.
The stacks are likewise a description of semantics rather than implementation. Some Forths keep the top few stack elements in registers to reduce overhead. If the return stack isn't user-accessible (r> >r etc.) then, again, you're straying closer to a higher-level functional language than a Forth.
There are many kinds of threaded code. For example, subroutine-threaded code uses native subroutine call instructions in word bodies and does away with the inner interpreter. Threaded code is a natural consequence of disentangling conventional "activation records" into two stacks which grow and shrink independently. You could have a Forth without threaded code, I suppose.
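The "two stacks growing and shrinking independently" point can be made concrete with a toy indirect-threaded inner interpreter (my own simplified sketch in Python, with lists standing in for memory; real Forths do this in a few machine instructions): a compiled ("colon") word's body is just a list of other words, and entering one saves the caller's position on the return stack, separate from the data stack.

```python
# Toy threaded-code inner interpreter. Primitives carry real code;
# colon words carry only a list of other words (the "thread").
data, rstack = [], []

def prim(f):
    f.body = None          # marker: this word is machine code
    return f

@prim
def LIT3(): data.append(3)
@prim
def DUP():  data.append(data[-1])
@prim
def MUL():  data.append(data.pop() * data.pop())
@prim
def ADD():  data.append(data.pop() + data.pop())

def colon(*body):          # a colon definition is pure data
    def w(): pass
    w.body = list(body)
    return w

SQUARE = colon(DUP, MUL)
MAIN   = colon(LIT3, SQUARE, DUP, ADD)   # 3 squared, then doubled

def run(word):             # the inner interpreter ("next" loop)
    thread, ip = [word], 0
    while True:
        w, ip = thread[ip], ip + 1
        if w.body is None:
            w()                            # primitive: execute directly
        else:
            rstack.append((thread, ip))    # "docol": save caller, enter body
            thread, ip = w.body, 0
        while ip == len(thread):           # "exit": resume the caller
            if not rstack:
                return
            thread, ip = rstack.pop()

run(MAIN)
print(data)   # [18]
```

Note the data stack and return stack never move in lockstep: SQUARE nests a call while the data stack keeps growing, which is exactly the disentangled-activation-record structure described above. Subroutine threading replaces the `run` loop with native call/return instructions.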
Some Forths attempt to "seal off" or otherwise obscure some parts of their own internal workings from "user programs". This is more common among dialects which try to adhere to ANS specs, like GForth. This isn't totally anathema to Forthiness, but it tends to introduce additional complexity. If a word is useful for implementing a Forth kernel, why couldn't it be useful in implementing other functionality, too?
If you're building something higher-level which vaguely resembles Forth, it's probably better to describe it as a concatenative language. A dependent type system doesn't sound very Forthy, imo.
I don’t think you’re wrong in your assessment. But just for fun I will point out that I probably assumed a bit too far when I stated no dictionary; there is a name-binding mechanism for ‘words’, but I have a quote form as well. Also, while I would say the language is functional, it is also intended to be low-level: ideally, the base of the language is a dependently typed pseudo-assembly language, but the modal type system and homoiconic syntax enable building words that are semi-macros. It is, at this point, basically my attempt to blend Forth’s philosophical leanings with category/type theory. But thanks for the reply; outside perspectives are always great for me to hear.
"You could have a Forth without threaded code, I suppose."
VFX Forth from MPE UK is a native code generator. It evaluates source code and emits inline code or calls depending on what you tell the compiler to do. It can expand everything to inline code if you tell it to, but your code would get much bigger.
This is the state of the art for Forth compilers today.
Homemade systems and older systems use threaded code.
Not really. Function definitions look different, and we can declare variables in the middle of function bodies, but those are surface-level changes. The language itself hasn't changed appreciably.
Forth is one of the last languages on my list. Whenever I read something like this article, I want to learn it. The advice at the end of the article seems interesting. Does anyone have any resources that might help with this?
A Forth shibboleth is that you have to implement your own in order to understand it. Try reading the assembly, then forth source, for jonesforth: https://github.com/AlexandreAbreu/jonesforth
For anyone stumbling over this and having trouble building it, I had to figure out two things:
1. When building on x86_64 Debian, I had to install linux-libc-dev:i386 to get /usr/include/i386-linux-gnu/asm/unistd.h. I also had to adapt the include path to have it actually found.
2. If you see a segfault when running the executable, you likely need to remove -Wl,-Ttext,0 from the build command. At least on my system, it fixes the segfaults and now jonesforth runs as expected.
Not the developer, just a forth enthusiast. But I'd imagine it's the ability to rapidly bootstrap on other architectures. Readability is just a bonus on top of that.
The focus of CollapseOS is, well, an OS that can be used after society has collapsed, something running on scavenged chips on hand-soldered boards (in the worst case scenario).
The original logic was the Z80 is still pretty prevalent, so it was thought to be a good choice to base the OS on.
Turns out that a Forth interpreter/compiler is incredibly easy to write (just a few hundred bytes of assembler, a few thousand at the upper end), so by using Forth they hugely expand the range of scavengeable chips.
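To give a feel for how little machinery that takes, here is a deliberately tiny sketch of the Forth outer interpreter loop (in Python for legibility; a real one is a few hundred bytes of assembly, as above): split input on whitespace, look each token up in a dictionary, execute it if found, otherwise treat it as a number. `:` ... `;` adds a new dictionary entry.

```python
# Minimal sketch of a Forth outer interpreter: a stack, a dictionary,
# and a token loop. Colon definitions are stored as source and
# re-interpreted when called (real Forths compile them instead).
stack = []
words = {
    "+":   lambda: stack.append(stack.pop() + stack.pop()),
    "*":   lambda: stack.append(stack.pop() * stack.pop()),
    "dup": lambda: stack.append(stack[-1]),
    ".":   lambda: print(stack.pop()),
}

def interpret(src):
    tokens = iter(src.split())
    for tok in tokens:
        if tok == ":":                   # compile a colon definition
            name, body = next(tokens), []
            for t in tokens:
                if t == ";":
                    break
                body.append(t)
            words[name] = lambda b=" ".join(body): interpret(b)
        elif tok in words:
            words[tok]()                 # known word: execute it
        else:
            stack.append(int(tok))       # everything else is a number

interpret(": square dup * ;  3 square 4 square + .")   # prints 25
```

Everything else a Forth system offers, including the compiler itself, is reachable by adding entries to that one dictionary, which is why the bootstrap is so cheap on scavenged hardware.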
If the author wants their code to survive the apocalypse, they should generate Turing Machines that have a simple shim interpreter for low-power architectures. The benefit of this is you can build an interpreter for a TM in wood or other simple materials. Semi-Thue systems also fit.
Definitely. Or mechanical logic gates. But the surface area would very quickly become an issue for such a project, and you'd need to assume only a certain few tools.
> Forth doesn't elegantly describe complex algorithms… However, this mental pain does makes you question your need for complexity… This is something that I think few developers are familiar with and is hard to describe with words. It needs to be experienced.
Nice framing: it’s not an intellectual argument you can make to justify the benefits of the drawbacks (justifying drawbacks being an infamous red flag of Stockholm syndrome), so you have to point to the experience of the thing itself (“you had to be there”).
Or there are experiences with intangible qualities that are hard to describe or capture logically. Not sure why you have to reach for Stockholm syndrome here.
my comment was way off the mark, didn’t realize it sounded sarcastic. i was instead trying to appreciate his approach to the intangible—pointing to direct experience to avoid the stockholm criticism.
I'll agree with the article's author - there is a quality to a Forth-style solution that just doesn't compare to other languages' styles or overall structure.
When I first heard of Forth and started trying out some of my own code, I was surprised by the initial effort it took me to adjust to writing small words vs C style functions.
I then started building on the pieces I first wrote, and it took very little code to cover my needs.
So yes, there's a very different experience, and it does take adjustment for anyone only familiar with function-style code. And it is not just what I've described, there's a whole different thought pattern involved.