One interesting thing about this is that it serves as a simplified demonstration of the kind of computation that must be occurring, or at least could occur, in actual cellular biology. DNA really does seem somewhat like a tape in a Turing machine (although technically a read-only one). But the entire system does compute a result, which is an organism.
You might be interested to learn that DNA is not actually read-only. Besides viruses like HIV and transposable elements which permanently splice themselves into the host's DNA, antibodies gain specificity through irreversible self-editing of the lymphocyte's genome via a similar (and perhaps evolutionarily related) mechanism.
Biology uses only global variables. All processes can write to all others. We wouldn't do this in software because the human mind can't handle it (leading to bugs), but evolution has no such constraint.
As a metaphor that really doesn’t work. Biology is incredibly big on modularization and separation of concerns. The human body rather goes out of its way to prevent arbitrary changes, both exogenous and endogenous.
No. It doesn’t change the DNA. It only uses the existing RNA messaging system to tell the body to produce antibodies. What you are suggesting is like expecting Ethernet to change the source code.
Then you’ll be pleased to know that some crazy people worked to build an entire computer architecture using metapixels just to play Tetris: https://codegolf.stackexchange.com/a/142673
I’ve got to imagine there’s some sort of GoL compiler that takes code in a more writable form and compiles it to a board state, given that we know about a whole set of machines that do specific things (gates and whatnot)
There is a rather roundabout route to compiling software to game of life cells that is pretty interesting. It is of course completely impractical but the idea that it could be done is intriguing.
This makes me wonder. If cells were occasionally activated in random positions, what sorts of modifications to the architecture would be necessary to make the system robust and still able to carry out its computations with a reasonable probability of success? Would that even be possible? In other words, what kinds of modifications to this system would make it more like a natural biological system that has to cope with noise and interference from its environment?
Cells activating in random positions seems like a high bar. How robust are you to a rock suddenly appearing inside your body? (e.g. inside your heart?)
A lower bar might be being robust to individual spaceships/gliders coming in from outside. But even there a collection of gliders is probably going to break through.
Fascinating question. I suppose it depends on the size of the rock? Anything big "enough" would cause real problems, but a single molecule of rock would probably not matter too much. Presumably cosmic rays are triggering single molecules in neurons all the time, but because there is a lot of redundancy this does not propagate into real activity.
I think it would be cool to have a structure with a line where, no matter where vertically (within a large range of vertical positions) a single glider crossed the line to the right, and no matter which of the 4(?) phases the glider has, signals would be sent somewhere indicating generally where the glider crossed, with the structure eventually returning to how it was (other than the signals it sent out).
Is such a structure possible? If so, how short can its recovery time be?
How good can the resolution of how specifically it detects the locations of the gliders be?
We know that GoL is Turing complete, and if the field is initialized with random noise, then for any fixed finite pattern, as the board size goes to infinity, the probability that that pattern appears somewhere in the noise approaches 1. (Of course, I'm talking about enormous board sizes, possibly with many many more cells than there are protons in the visible universe, not anything resembling practicality.) If intelligence and agency are computable (and it seems like they should be), "any fixed finite pattern" would include structures which, for some amount of time, before they are destroyed by the surrounding noise, would simulate an intelligent agent.
If there are structures which, when surrounded by stuff initialized with noise, have a high probability of being able to withstand this noise, and then proceed to clear out the noise in order to e.g. make copies of themselves, or just to grow, then we would expect the fraction of an infinite board containing such patterns (or things derived from them) to grow over time.
But whether such structures can exist in GoL depends in part, I think, on whether any large structures can withstand noise (or at least have a high chance of withstanding it).
(I am defining "structures" in a way where a structure is allowed to include as part of it a large empty region (of any fixed size) on its periphery. This should assist in withstanding the noise, because it limits what things the core part of the structure could be faced with, to things which can travel a distance)
The Stackexchange thread on the GoL quasi-computer (which is underlying the interpreter in the TFA) mentions that the metapixels have borders that swallow gliders from other metapixels. However, those gliders are apparently used to communicate the state of the neighbouring metapixels, so presumably they don't disappear without effects. Also the location of metapixels relative to each other is probably fixed.
Yep, restricting things to coming from the outside might be more of a realistic challenge. I think I had a more academic version of the question in mind. Seems like we already have tech that is designed to handle a certain degree of pure randomness. For example, we have error-checked RAM.
Is there any hardware designed to just run cellular automata?
I wonder if it could actually be a very efficient form of calculation, because cells are almost bits, but seem to possess more power than bits. For instance this Lisp in GoL. Could it run faster than lisps on "bit processors" if it ran on special purpose hardware?
So instead of 64-bit processors we might have "64-cell processors" ?
GoL implementations for big systems don't evaluate the cellular automata step-by-step. They implement HashLife - https://en.wikipedia.org/wiki/Hashlife . The first example on that page shows position 6.3+ octillion for a Turing machine, computed in under 30 seconds.
If the special purpose hardware ran at 1 THz, it would take about 6.3 quadrillion seconds to get to the same point, which is a bit over 200 million years.
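(Back-of-the-envelope check, assuming the hypothetical hardware does exactly one generation per clock cycle:)

    generations = 6.3e27              # ~6.3 octillion generations reached by HashLife
    clock_hz = 1e12                   # hypothetical 1 THz step-by-step hardware
    seconds = generations / clock_hz  # ~6.3e15 seconds
    years = seconds / (365.25 * 24 * 3600)
    print(f"{seconds:.1e} s ~= {years:.1e} years")   # ~2e8 years, i.e. about 200 million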
Special-purpose hardware for cellular automata would not evaluate it "step-by-step" either, but in parallel. Every cell could evaluate its next state independently of others.
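As a software stand-in for that all-cells-at-once update, here's a minimal sketch (numpy playing the role of the per-cell hardware; every cell's next state is computed from the same snapshot of the grid):

    import numpy as np

    def gol_step(grid):
        # Sum the 8 neighbors of every cell simultaneously (toroidal wrap-around).
        neighbors = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                        if (dy, dx) != (0, 0))
        # Birth on exactly 3 neighbors; survival on 2 or 3.
        return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(np.uint8)

    # Example: advance a glider on a 10x10 toroidal board by one generation.
    board = np.zeros((10, 10), dtype=np.uint8)
    board[1, 2] = board[2, 3] = board[3, 1] = board[3, 2] = board[3, 3] = 1
    board = gol_step(board)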
We could think of some kind of version of RAM where each 'cell' would implement this behavior.
Perhaps optimizations similar to HashLife could be developed for such special purpose hardware as well.
I implemented Hashlife for a different CA rule set earlier this year.
Hashlife's data structure is a quad tree embedded in a hash table. Most Hashlife instances only spend a few short moments simulating cellular automata at the "ground level"— the real work involves combining the future of the children of a quad tree node into the future of the node. That might be optimizable, but it feels extremely application-specific.
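A toy sketch of that structure (nothing like a full Hashlife, names made up): the point of the "quadtree in a hash table" is that identical sub-patterns collapse to one canonical node, which is what lets a single cached future be reused everywhere that pattern occurs:

    # Toy hash-consed quadtree node, roughly the shape Hashlife uses.
    class Node:
        __slots__ = ("nw", "ne", "sw", "se", "level", "result")
        def __init__(self, nw, ne, sw, se, level):
            self.nw, self.ne, self.sw, self.se = nw, ne, sw, se
            self.level = level
            self.result = None   # memoized future of this node's center region

    _cache = {}

    def make_node(nw, ne, sw, se, level):
        # Hash-consing: identical quadrants (compared by identity, since children
        # are themselves canonical) always yield the same shared Node object.
        key = (id(nw), id(ne), id(sw), id(se), level)
        node = _cache.get(key)
        if node is None:
            node = _cache[key] = Node(nw, ne, sw, se, level)
        return node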
"evaluate its next state" is what I meant by "step-by-step".
There could certainly be a dedicated hardware implementation of HashLife. It would not be based on the idea "cells are almost bits", nor would it be faster than a direct CPU implementation of Lisp.
Hi, I'm the author of this project. HashLife actually does just that, by memoizing and reusing past occurrences and outcomes of the same pattern. Since one cell can only interact with its 8 neighbors in one timestep, there is a limit to the distance over which a fixed-size pattern can interfere in a fixed amount of time, analogous to the speed of light in physics. (It is in fact called the speed of light for cellular automata as well, as described in https://en.wikipedia.org/wiki/Speed_of_light_(cellular_autom... .) Therefore, by remembering the outcome of a certain pattern over a certain timeframe, you can jump ahead by that amount of time in one step by using the memoized pattern. More details are explained in this article: https://www.drdobbs.com/jvm/an-algorithm-for-compressing-spa...
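A stripped-down illustration of that memoization (not the project's code, and without the quadtree machinery): brute-force a window forward k steps, keep only the core that the speed-of-light argument guarantees is unaffected by anything outside the window, and cache it:

    from functools import lru_cache

    def _step(window):
        # One brute-force GoL step on a tuple-of-tuples window (outside treated as dead).
        h, w = len(window), len(window[0])
        def alive(y, x):
            return window[y][x] if 0 <= y < h and 0 <= x < w else 0
        rows = []
        for y in range(h):
            row = []
            for x in range(w):
                n = sum(alive(y + dy, x + dx)
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                        if (dy, dx) != (0, 0))
                row.append(1 if n == 3 or (window[y][x] and n == 2) else 0)
            rows.append(tuple(row))
        return tuple(rows)

    @lru_cache(maxsize=None)
    def future_core(window, k):
        # Evolve `window` by k steps, then keep only the region at least k cells
        # from the border. Influence spreads at most one cell per step (the CA
        # "speed of light"), so this core is determined by `window` alone, and the
        # cached answer is valid wherever the same window shows up on the board.
        for _ in range(k):
            window = _step(window)
        return tuple(row[k:len(row) - k] for row in window[k:len(window) - k])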
By the way, with all the passion for Lisp, simple but Turing-complete systems, and programming that I put into this project, I'm very happy that other people have enjoyed it too. Thanks to everyone for checking it out!
Very interesting project. What I don't quite get is: if you can calculate the state after 100 steps in one go, what would make you select exactly 100? Why not 1007, or 10 million, etc.?
Thanks! That would be because the skippable timestep width is bounded by the size of the patterns that you've memorized. Say we have a square pattern of size W. The outcome of this pattern within this region in 1 timestep is completely determined by its surrounding square region of size W+2c, with c as the speed of light (pixels per timestep), since patterns beyond that can never interfere with it unless their influence travels faster than the speed of light. For two timesteps the surrounding region would be of size W+4c, since more pixels can interfere with it within that amount of time. Therefore, if you memorize a pattern of size W+4c, you can predict the outcome of the inner square of size W after 2 timesteps, or the inner square of size W+2c after 1 timestep, etc. This sets an upper bound on the number of timesteps you can skip, depending on the sizes of the patterns that you have memorized. So if you've memorized a pattern of size W+200c, you can predict the outcome of the inner square of size W after 100 timesteps, but predicting the 101st timestep would require information from a region beyond the memorized pattern.
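Or, boiled down to one line (my summary of the above, with the memoized pattern's side length as an explicit parameter):

    def max_skippable_steps(memorized_size, core_size, c=1):
        # Largest k with memorized_size >= core_size + 2*k*c, i.e. the farthest you
        # can jump the inner core using only information inside the memoized pattern.
        return (memorized_size - core_size) // (2 * c)

    # e.g. a memoized pattern of size W + 200c lets you jump the inner WxW core by
    # 100 steps: with W = 64 and c = 1, max_skippable_steps(264, 64) == 100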
Shameless plug: Have some fun with possible setups of cellular automata (of which Conway's Game of Life is one instance): http://aperocky.com/cellular-automata/
It's simple, small, clean, and Turing Complete. Most others show "interesting behaviour" (for some definition thereof) but are not Turing Complete (though some are).
I've not found a CA that's Turing Complete and as small and easy to understand (which is not to say there isn't one).
Wow, thanks for those links. I am well acquainted with Lisp, but I never stumbled upon these. The video was great too.
I have played with Wasp Lisp, which is very cool, and the Wasp VM. And check out how it can spawn drone nodes on different machines[1,2]. I think C and Lisp together are amazing. I have been using them for decades, but J and APL entered my life over 8 years ago, and I am hooked! Check out a rework of the famous APL GoL demo. It's much easier to understand than the original[3].
Thanks for the links. Stumbled upon Otus Lisp / Owl-Lisp while looking for a sh/lua replacement for OpenWrt - the small footprint and architecture-independent binary format, as well as the FFI for .so files, look great for that. Here is complete documentation for the language, which applies to both: https://haltp.org/posts/owl.html
The one block for me is that I work on Windows, Linux, and Mac, and I don't like using Cygwin or MinGW with Windows. I am using Corman Lisp on my Windows machine for Windows-specific stuff and Jane before I heard of Owl and Otus Lisp. I do like Wasp Lisp's VM and C FFI too. Are you sticking with Owl/Otus?
How come the GoL runtimes are only ~300 times longer than the VarLife runtimes? Is it because the metapixels can be simulated more efficiently than running the game of life rules directly? Or because of the sparsity of cells, or some such thing?
You can simulate the metapixels significantly faster using Hashlife by caching the results for previously seen patterns. There's a pretty interesting Dr. Dobb's article describing the core idea [1].
I still can't get over how much interest the game of life maintains since its invention. It might just be my favorite "field" of CS at this point. It's such a satisfying intersection of maths, coding, "biology", and "biological engineering".
One that Conway didn't like much, saying (roughly) that it had all been explored and proved and other work of his was more interesting - https://www.youtube.com/watch?v=E8kUJL04ELA in this Numberphile interview with him.
Yes, and both have my sympathy. Sometimes, a musician revisits their earlier work and finds new things to explore. Joni Mitchell and Ryuichi Sakamoto come to mind.
But other times, a musician feels that although their earlier work may be new and exciting to you, they are finding other things more engaging today and tire of answering questions about the old work with the same rote, trite answers. Over and over again.
For someone constantly seeking fresh ground to cover, this attitude is an asset.
p.s. In this discussion about comedy, Louis CK talks about a tip he got from George Carlin, namely to throw the old material out after a year and start over again, forcing yourself to find new things to say about the Universe.
In the same discussion, Jerry Seinfeld explains that some of his material is ten years old, and he is still finding ways to sharpen and refine it. There is clearly no one right way about this.
If you're Thom Yorke, you use that hatred to drive you to build your magnum opus. Though admittedly, I don't know how one could iterate on the Game of Life. Its simplicity is part of what makes it so great.
Well kinda. The Bends was next after Pablo Honey, which contained "Creep", the song Radiohead famously learned to hate.
The "magnum opus" is typically said to be OK Computer, their 3rd album, 2 after Pablo. While it was certainly a departure, the band generally said they wanted a total artistic change in direction.
Interesting though is the song "My Iron Lung" on The Bends, which is a song about "Creep" where they refer to it as, among other things, a "total waste of time".
Totally. It's amazing how much complexity arises from something so simple. Just like in the game itself :). We lost someone special when we lost John Conway.
In this case they're running Lisp running in VarLife running in the game of life.
This VarLife is simulated using the OTCA metapixel, which was designed to run the game of life inside the game of life, though it is more general and can therefore also be used to simulate VarLife.
Step after that, bitcoin but it's on DNA strands and new coins are generated every time the host cell reproduces. The maximum number of coins is reached when the biosphere is reduced to an amorphous grey goo.