Artificial Life Creation T-0 and Launching [video] (youtube.com)
65 points by akkartik on Jan 5, 2023 | hide | past | favorite | 51 comments



Hi folks. Thanks to akkartik for posting this and thanks to everybody who took a look. I do apologize that documentation is so scarce and scattered. To go a bit deeper, for hacker types I recommend https://ojs.aaai.org/index.php/AAAI/article/view/9802 as a fast and readable high level Q&A.


I found the Q&A pretty readable, and it does provide a decent basic overview of the broad strokes. Are there any deeper 'dives' on best-effort computing that you would recommend? I don't have any outstanding objections to the general ideas proposed, but as you have pointed out, the paradigm as a whole goes against almost all of the presently accepted practices. I also noted some superficial similarity to Chuck Moore's colorForth work and the 'multi-core' chip he was working with/on.


> I don't have any outstanding objections to the general ideas proposed, but as you have pointed out, the paradigm as a whole goes against almost all of the presently accepted practices.

Indeed, as a professor in fault tolerance, distributed systems, and parallel programming, I can confirm that almost nobody works on this. This work is quite fascinating, and it comes complete with an operational hardware/software combo. Never seen that before! My university has done work like "Cellular automata based S-boxes", but nothing as ambitious as a post-von Neumann paradigm.

Hopefully this work leads to a breakthrough in the ease of asynchronous software development. Academia never looks at this seriously because of the difficulty of coding for exotic parallel async architectures. Numerous people have dedicated their lives to making coding easy. Centuries of human effort, and we now have JavaScript; a million-plus people know how to code in that. Efficient coding in VHDL? Async FPGA stuff? 2D grid tiles? Not many.


Thanks for your thoughts and for taking a look. It's increasingly clear we're entering a time of architectural change, and I'm trying to help scout out lands ahead. It drives me crazy that after all these years, as far as I know, dynamic reconfigurability in FPGAs is still so limited and coarse. I like the look of GraphCore though.


I've been interested in the broad topic of programming language design/theory for a few years, after being inspired by the article "C is not a Low-Level Language" by David Chisnall. It really pushed me to consider designing for a future architecture. However, after seeing this thread yesterday, reading some papers and decks, and watching a few videos, I am going to add designing for a post-Von Neumann architecture to my explorations.

My theoretical and design work has been focused on concurrent modal languages (often with nondeterminism as a basic side effect), and I am hopeful that I can find useful common ground with the kind of ideas you've thrown my way.


There was Sutherland's Fleet a couple of years ago.

Errmm time flies. More than a decade ago.


I'm sorry but I'm just not sure I understand the purpose of this.


Can you ask a specific question, say, based on the Q&A?


The T2 Tile Project homepage is at https://t2tile.com/

"The T2 Tile project is an attempt to build the world's first indefinitely scalable computational stack. First, we suspend the idea that we must be bound to an architecture based on correct and efficient deterministic hardware and software. Instead, much like the physical world around us, we look to robustness as a foundational requirement, building living systems as vessels for digital computation that is firstly robust, then as correct as possible, and finally, as efficient as necessary."

Unfortunately, it seems to be devoid of textual documentation, instead comprising a long series of videos.


From what I can see, there are two (three?) things being interwoven here in a way that I find quite perplexing.

First, there is a "T2 Architecture" which appears to be an attempt to build a new computer architecture from scratch, including (I think) dedicated hardware that can be meshed together.

Second, there is some sort of cellular automaton (I think?) that operates on a grid, and a pattern has been produced which results in that pattern reproducing.

Thirdly (?), there is the rule-based grid system on which #2 is being built.

I gather that these are supposed to be related somehow, but I'm not sure exactly how. Is this something where there is a simple substrate with simple rules (built on, or perhaps the purpose of, the T2 architecture?), with a layer of reproducing programs ("organisms") on top of it, with the goal of eventually allowing the ecosystem defined by a population of those "organisms" to perform computations on behalf of a user?

It does not appear to be Conway's Game of Life, or at least nobody is saying that. I think a reproducing pattern would be a pretty significant accomplishment in itself. If this is some other rule-based grid system, then it seems like it's worth studying that on its own, independent of the underlying machine architecture?

I wish some of these videos would just come out and say it, because for the life of me I can't tell if they (the architecture and the system and the "organism") are related or just separate things happening at the same time.


It is pretty confusing but what you got is about right. It's about building a whole new computational stack - hardware, software, and systems - that ditches the traditional requirement of deterministic hardware, and replaces that with a requirement for 'indefinitely scalable' hardware.

So it absolutely cannot be Conway's Game of Life - though people often think it is - since GoL assumes deterministic execution (plus other issues).

The basic architecture is called the Movable Feast Machine (MFM) and yes, that is a 2D cellular automata engine with a bunch of unusual details (asynchronous, large R/W neighborhood, ..).

Looking downward, the T2 tiles are specific prototype hardware that implements the MFM. They could be replaced by some other tile that does the same, like x86 could be replaced with ARM given suitable software mods above.

Looking upwards from the MFM, some specific cellular automaton - called a 'physics' - is implemented in a custom language called 'ulam'. The physics in the T-0 video involves some 208 ulam classes: some deal with the 2D diamond grids, some deal with the 1D linked lists within the diamond grids, and there are all sorts of infrastructure classes and so on.

Then on top of all that, a subset of those classes represent instructions for a 1D 'assembly language' with operations like 'extend an arm one step', 'deposit a processor node', and so forth. The 'Ancestor 1312' organism/structure/pattern demonstrated in the video is encoded in a 1D chain of those instructions that is loaded (step by step, using MFM events) into an empty diamond grid at the beginning.
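
For rough intuition, here's a toy sketch (in Python, not the actual MFM code, and with made-up element names) of what an asynchronous event loop like this looks like: pick a random site, read only its local window, run that site's rule, write back. There is no global clock and no full-grid sweep, which is the key difference from something like the Game of Life.

    import random

    SIZE = 64                      # toy toroidal grid; the real MFM geometry differs
    EMPTY, RES = 0, 1              # invented element types, for illustration only

    grid = [[EMPTY] * SIZE for _ in range(SIZE)]
    grid[SIZE // 2][SIZE // 2] = RES

    def window(x, y, r=2):
        # An event may read/write only this local neighborhood.
        return {(dx, dy): grid[(y + dy) % SIZE][(x + dx) % SIZE]
                for dy in range(-r, r + 1) for dx in range(-r, r + 1)}

    def behave(x, y, win):
        # Toy 'physics': a RES element diffuses into a random empty neighbor.
        if win[(0, 0)] == RES:
            dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            if win[(dx, dy)] == EMPTY:
                grid[y][x] = EMPTY
                grid[(y + dy) % SIZE][(x + dx) % SIZE] = RES

    # Asynchronous events: random site, local rule, no global synchronization.
    for _ in range(100_000):
        x, y = random.randrange(SIZE), random.randrange(SIZE)
        behave(x, y, window(x, y))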


Thanks so much for the reply! This clears a lot up, and I'm happy to see that my basic understanding was not too far off.

I don't completely buy into the notion that this needs to be soup-to-nuts; that the T2 / MFM / Ulam stack is necessarily the right way forward.

But I think that what I see as the core insight -- that determinism and synchronicity are the deep rot at the core of our understanding of how to build distributed systems -- is true beyond all reasonable doubt. The idea of evolving (literally or figuratively) self-healing systems and components is, I think, at the core of the future of computer science.

And the dirty secret of all distributed systems of significant scale is that they have already escaped the attempt to confine them -- operational complexity has become a matter of botany, yet without any insights or tools to help us deal with the actual complexity; instead we apply layers of attempts to reduce complexity that usually just end up reducing the legibility of the system.


As I understood it: the T2 tiles are "normal" microcomputers that run a special software stack implementing the cellular automata. The tiles communicate over the connections and essentially just act like a bigger automaton grid. But the interesting part is that there is no global synchronization, and this architecture could be scaled indefinitely wide just by adding more tiles. The impressive/hard part is creating the asynchronous automata, where each cell can only see a fixed number of neighbors, and creating something from that.

Edit: I think this video explained the general idea quite well: https://www.youtube.com/watch?v=helScS3coAE


> I think a reproducing pattern would be a pretty significant accomplishment in itself.

Reproducing patterns are relatively common now, e.g. [0], but the whole game here is artificial life. That means different things to different people, but generally self-reproduction (even with mutation) is not considered sufficient, as it does not advance in complexity the way living systems seem to.

0 https://archive.is/9HnBg


From reading this, it appears that the pattern is not self-replicating in the sense that it can spawn a generation of descendants. Rather it just makes a copy of itself and then dies.

The pattern in the video in this article, on the other hand, can produce multiple viable offspring (although not always, as there is non-determinism). I am not aware of any such patterns in the Game of Life.


My understanding is that self-replication (whether singular, as in the above article from ten years ago, or multiple) in alife is a solved problem, but that it (being based in deterministic processes with pseudo-random seeding) does not lead to the continued formation of diverse forms of life.

The unique piece here is, I believe, the increased access to indeterminate state, or stochasticity. I can't speak specifically to whether such patterns exist in the Game of Life.


Alife is a huge field, and systems as old as Tierra demonstrate self-replication. The concept by itself is not revolutionary, but I am not aware of any examples specifically in Conway's game of life.

The question for me is more whether simpler Turing-complete rule systems can accomplish this on space-scales and time-scales that can give us insights into the nature of emergent behavior in evolved systems.


Excellent point! It would be great if the author could clarify the distinction (or not) between the hardware architecture and the emerging-life thing going on. I am sure it's there somewhere, but it would be great to see it in the intro video without having to navigate hours of footage to find out.


I've been following Dave's progress since the very start, so it's odd to see it here at HN! The project is incredible. He's building a whole new computational stack, from bottom-to-top. It's robust-first, indefinitely-scalable, and a huge inspiration to me. I wouldn't be working in tech right now if it wasn't for the T2 Tile project!


I'm very interested in this area, but I've never been able to get a handle on what's going on in this particular project. There's no coherent explanation of the technical aspects, no documentation, hours and hours of rambling videos, and the repository linked from the site is an unstructured grab-bag that revealed nothing enlightening after 15 minutes of rooting around in it. There are also a lot of far-fetched, hubristic ideas and mumbo-jumbo that doesn't pass the sniff test for me. I get whiffs of Terry Davis-style outsider art.

On the other hand, he seems to have an academic pedigree and has presented at serious conferences, so it might be that there's something interesting here that I'm missing. I just can't tell - if anyone else is more enlightened, I'd be interested to hear about it.


I can understand the outsider art take.. I'm exploring a style of computing that undercuts so many deeply-embedded design assumptions (e.g., CPU, RAM, deterministic execution) that it's a big lift even for people who really want to get it.

It's a bit outdated, but FWIW https://direct.mit.edu/artl/article/22/4/431/2851/The-ulam-P... is reasonably coherent and approachable, I think..


This is brilliant, and needed.

Do you see opportunity in quantum computing given the more direct access to indeterminate state?


I'm a quantum skeptic when it comes to delivering supraclassical power in full systems at scale, and I expect biochemical computing machines will likely be useful sooner. But yeah, if quantum can one day deliver 'nearly locally deterministic' cycles in volume, I want to have an architecture that can use them.


The MFM and ULAM repositories are reasonably legit, providing the simulation engine and the compiler respectively. FWIW, there are Ubuntu packages at ppa:ackley/mfm. I apologize if you found the T2Demos repository, yeah, that's a permanent WIP mess.


A couple of hours of rabbit hole burrowing later, this has to be the most interesting thing on Hacker News in a very long time.


How exactly is the T2 architecture "indefinitely" scalable? You're adding various computers next to each other, fine. You might also claim that you have some generic cellular automata that in the future would somehow dynamically scale according to some computation. This doesn't provide generic compute for arbitrary problems, and your limits are still based on the cellular automata running on classical computers. You can claim all you want about dynamically scaling computation that's implemented fundamentally via some cellular automata controller; that doesn't make it true.


It's indefinitely scalable exactly in the sense that a machine on this architecture can be made bigger as long as you can keep providing power, cooling, real estate, and tiles. There is no head node, no unique tile addresses, no global communications, no global clocking, and no pretense of deterministic execution. In that sense it's quite different than traditional cellular automata. Figuring out how to do useful work in this architecture is the research challenge.
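
As a toy illustration of what "no head node, no global anything" buys you (a sketch, not T2 code): each tile only ever knows the tiles physically attached to it, so growing the machine is just plugging in another tile; nothing else has to be told.

    import random

    class Tile:
        # Toy tile: local state, knows only its directly attached neighbors.
        def __init__(self):
            self.neighbors = {}        # direction -> Tile; no global addresses
            self.state = 0

        def attach(self, direction, other, opposite):
            self.neighbors[direction] = other
            other.neighbors[opposite] = self

        def step(self):
            # Each tile advances at its own pace; no global clock or barrier.
            self.state += 1
            if self.neighbors and random.random() < 0.5:
                peer = random.choice(list(self.neighbors.values()))
                peer.state = max(peer.state, self.state - 1)   # purely local exchange

    # "Scaling up" is just adding a tile and wiring it to whatever is adjacent.
    tiles = [Tile() for _ in range(4)]
    for left, right in zip(tiles, tiles[1:]):
        left.attach("east", right, "west")
    tiles.append(Tile())
    tiles[-2].attach("east", tiles[-1], "west")    # only its neighbor needs to know

    for _ in range(1000):
        random.choice(tiles).step()    # tiles run asynchronously, in any order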


What aspect(s) in the premise of this interesting/exotic system suggests that useful work may be done by/with it?


The most basic aspect is: Natural life does useful work, suggesting that artificial life might also.

I'm like, if you want a spreadsheet, fine, use a von Neumann machine. But if you want to do inherently robust system control, that has a chance of doing something sensible even in situations that were neither programmed in nor trained upon, what you want is an overprovisioned system that is intrinsically aware of its deployment in space, and is constantly repairing and rebuilding itself.. and this video is another baby step on that road.


Can you quantitatively say that some cellular automata compute framework would give a broader or more accurate output than a typical linear compute framework? I fail to see how embedding some complex computation into a cellular automata framework with over-provisioned resources gives unique compute insight; it almost seems synonymous with some ML auto-scaler or ML controller that dynamically scales compute when needed.

What specific problems are not tractable via traditional autoscaling methods that cellular automata can compute more efficiently or accurately? I understand you think stochastic type/life type computations are better suited for this, but that would be more of a hunch than verifiable proof.


I accept that lots of folks don't and won't get this, but one specific problem I think is intractable via traditional means is: Actual computer security.


I think if you can write some pseudocode that summarizes the computation, it would help. I think I and others have seen the Github repo, but generally don't know where to start to analyze what it's doing.

For example, for computer security, if you wrote pseudocode for a stochastic algorithm such that the cellular automata are essentially doing a "search" for something, and the cellular automata replicate and scale for the purpose of this stochastic search, I think that would help people understand your computational model better. At least it would for me!
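
Something like this toy sketch (entirely hypothetical, not drawn from your repo) is the level I mean: agents scattered on a grid do a local random search and replicate when they find something, so the search scales itself.

    import random

    SIZE, TARGETS = 50, 20
    targets = {(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(TARGETS)}
    agents = [(SIZE // 2, SIZE // 2)]        # start with a single searcher
    found = set()

    for _ in range(20_000):
        i = random.randrange(len(agents))    # asynchronous: pick a random agent
        x, y = agents[i]
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = (x + dx) % SIZE, (y + dy) % SIZE
        agents[i] = (x, y)
        if (x, y) in targets and (x, y) not in found:
            found.add((x, y))
            agents.append((x, y))            # replicate on success: the search grows itself

    print(f"found {len(found)}/{TARGETS} targets with {len(agents)} agents")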


Would it be fair to say that the next challenge would be to develop a new kind of programming language architecture/concept that would take advantage of a t2-tile system? Otherwise, what is the strictest bottleneck?


We've got the ulam programming language custom tailored for the MFM, and the SPLAT spatial programming language built on top of ulam. I expect we'll want more languages or language features as we scale up, but we need to earn our way to them with design experience. I'd love to build a T3 tile, probably FPGA-based, using lessons learned, that could provide perhaps 10x or 100x the average event rate of the T2s. And get them out to researchers and hackers in quantity.


Any chance of re-purposing PiZeros for this? That might be a softer approach that can be tried relatively cheaply.


Maybe! One main challenge is the (at least) six-way local intertile communications. The T2s use BeagleBone Greens, which have two PRUs that I slice three ways each to do packet transfers. The RP2040 anyway has two 'PIO' instances that seem similar.

But I'm unsure that any redo at that scale would improve delivered performance that much.

If I had it to do over I think I'd've tried to put an ethernet router chip on each tile and basically do backplane ethernet between tiles.

But I dream of LVDS serdes between tiles with low-level packet stuff handled in FPGA fabric..


Are you aware of Jecel de Assumpção? If not, you really should hit him up; it sounds like the two of you would have a lot to talk about.


I wasn't, thanks. "SiliconSqueak", nice!


You're welcome, he's here on HN as well ('jecel').


Shifting contexts this much tends to shift existing intractable problems into new classes of tractable (and still intractable) problems. Answering this will require a lot of people with differing domain expertise discovering it and trying it out. This is a potential (I’m inclined to say inevitable) revolution, once all the pieces are in place.


Yay, thanks for this!


More, non-video background on this project and the person behind it:

https://www.cs.unm.edu/~ackley/


I have no clue what this is. Anyone care to explain?


I've spent the last half hour skimming through some of these videos and I'm only sure of three things:

- This guy is either mad, a genius, or both.

- I am not smart enough to figure out which it is.

- I need a swarm of monitors on my wall.


Video is by David Ackley (https://www.cs.unm.edu/~ackley/#rh-is):

> I do research, development, and advocacy of robust-first and best-effort computing on indefinitely scalable computer architectures. As of August 2018 I am an emeritus professor of Computer Science at the University of New Mexico. My academic degrees are from Tufts and Carnegie Mellon. Prior work has involved neural networks and machine learning, evolutionary algorithms and artificial life, and biological approaches to security, architecture, and models of computation.

Probably most notable around HN for his Movable Feast Machine, which has been posted here a few times: https://movablefeastmachine.org The goal of that is an indefinitely scalable computer architecture which is very, very robust.


Hmmm, looks like one of those game-of-life-implemented-in-game-of-life things, set up in a way which simulates actual living cells on the 2nd level, including cell division.


Looks like an art installation of the game of life, I think


The Game of Life relies on computers as an environment to develop in, which might not be there in a century.

A real-world 'thing' (whose very nature is still up for debate): https://en.wikipedia.org/wiki/Xenobot


I can't find their project Wiki (mentioned in one of his introductory videos). Anyone?



Got it, tks!



