When I was learning to program, I tried to make a toy artificial life evolution simulation. Particle organisms on a 2D plane had 'dna': a list of heritable traits like size, speed, and number of offspring. Bigger organisms could eat smaller organisms, but they burned energy faster. 0 energy = death. When two organisms of opposite gender collided and had sufficient energy, they'd split some of their energy among their offspring, with each offspring's 'dna' values set to one of the parents' values +/- 5%.
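The inheritance rule was roughly the following (a minimal Python sketch; the trait names and the +/- 5% jitter come from the description above, everything else is assumed):

    import random

    TRAITS = ["size", "speed", "offspring_count"]  # heritable 'dna' values

    def make_child(mom, dad):
        """Each trait copies one parent's value, jittered by +/- 5%."""
        child = {}
        for trait in TRAITS:
            value = random.choice([mom[trait], dad[trait]])
            child[trait] = value * random.uniform(0.95, 1.05)
        return child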
As I was developing this, I hadn't figured out how I wanted to do food yet, so as an easy first step, I just had a constant amount of energy that was split amongst all organisms on the screen. Lots of little dots buzzing around - it was kind of neat, but nothing too special. I left it to run overnight.
When I came back I was very surprised: previously it was running at about 30 FPS - now it was running at about 4 seconds per frame. The screen was filled with dense expanding circles of tiny, slow organisms emanating from wherever organisms had mated, and nothing else.
My simulation had evolved to outsmart my simple food algorithm: when food is divided equally among all organisms, the best strategy is to use minimal energy and maximize offspring count. I had populated the world with a default offspring count of ~5, and they had evolved into the tens of thousands. The more offspring an organism had, the larger the share of the energy pool that went to its lineage.
It was a very cool "Life, uh, finds a way" moment - that such a simple toy simulation of evolution was able to find an unanticipated optimal solution to the environment I created overnight was very humbling and gave me a lot of respect for the power of evolution.
I worked on a similar project. One of the heritable traits I had was a quantity of energy that would be passed on to a child: a mother and father organism contributed a random half of their stats to a new child, and each parent deducted its caretaker energy and increased the child's energy by the same amount.
I had a lot of different graphs to show me stats as the simulation continued. One thing I noticed was that, after the simulation had run for a while, "average age" started to go way up.
At first, I was proud. I thought I had evolved creatures that could live indefinitely in my simulated environment. I kind of had - but it didn't work like I thought. At some point the creatures seemed to become immortal, and all new creatures died off. I was monitoring "average age at death", which confirmed the dying creatures were all very young, and "average generation count", which stabilized midway through the simulation and then locked in place. The population ended up as a bunch of immortal organisms running around while every new organism died off.
I finally figured out what had gone wrong. The stats, including caretaker energy, could be randomly modified by a small value up or down whenever a child was produced. Nothing prevented caretaker energy from going negative, and indeed, that's what happened. The simulation worked for a while, as long as only a small number of organisms had negative caretaker energy, but eventually these guys took over and became the whole population. They could sustain themselves indefinitely by having children, but their children (spawned by two parents who passed on negative energy) would instantly die.
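The failure mode fits in a few lines (a sketch under my own assumptions about the mechanics; the dataclass, thresholds, and mutation range are invented for illustration):

    import random
    from dataclasses import dataclass

    @dataclass
    class Organism:
        energy: float
        caretaker_energy: float  # heritable stat, jittered per child

    def spawn_child(mom: Organism, dad: Organism) -> Organism:
        child_energy = 0.0
        for parent in (mom, dad):
            # The bug: if caretaker_energy has mutated below zero, the
            # parent *gains* energy here and the child starts in the red.
            parent.energy -= parent.caretaker_energy
            child_energy += parent.caretaker_energy
        inherited = random.choice([mom.caretaker_energy, dad.caretaker_energy])
        # The fix would be one clamp: max(0.0, inherited + jitter)
        return Organism(energy=child_energy,
                        caretaker_energy=inherited + random.uniform(-0.5, 0.5))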
Decades ago I read about a simulation aiming to evolve creatures that could walk, or one day run. The fitness function that determined how many offspring you got was “maximum speed ever attained in your life”.
They let it run for a while and came back to find all the creatures had evolved into extremely tall, thin stalks that would tip over and never move again. The top would be moving very fast before it hit the ground.
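There's a nice back-of-envelope reason the stalks win, assuming a rigid pole of height h pivoting at its base: equating the potential energy drop m·g·h/2 with the rotational energy (1/2)·(m·h²/3)·ω² gives a tip speed of √(3gh) at impact, so recorded "maximum speed" grows with the square root of height at zero locomotion cost:

    import math

    def tip_speed(height_m, g=9.81):
        """Tip speed of a rigid pole falling from vertical: v = sqrt(3*g*h)."""
        return math.sqrt(3 * g * height_m)

    for h in (1, 10, 100):
        print(f"{h:>4} m pole -> {tip_speed(h):5.1f} m/s at the tip")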
Our own human intelligence could also be seen as this kind of side effect. It evolved to make us better at hunting other animals and gathering extra food; instead, in just a few hundred thousand years, we've built a bunch of 'buildings' and managed to throw the entire ecosystem out of balance.
There is a 2018 paper called "The Surprising Creativity of Digital Evolution" which is a collection of similar anecdotes: https://arxiv.org/abs/1803.03453
Thanks, great paper!
"Many researchers in the field of digital evolution have observed their evolving algorithms and organisms subverting their intentions, producing unexpected adaptations, or exhibiting outcomes uncannily convergent with ones in nature. Such stories routinely reveal creativity by evolution in these digital worlds."
My goal with this project was to be able to seed the world with a single proto-organism and have two distinct species - one predator and one prey - evolve to create a stable ecosystem.
I eventually localized food sources and added a bunch of additional rules, but was never able to realize this goal. I think for predator-prey relationships to evolve in my system it would have required sensory organs and methods to react to local environment.
Seems like ALiEn is able to simulate food chains with distinct species - alas, I don't have a CUDA GPU - but I'm curious whether they've been able to create an ecosystem where predators and prey coexist in a balanced, stable way. (In my experiments, it was very easy to get a predator population explosion; all of the prey gets eaten, and then all of the predators die.)
I think this probably happens sometimes in the real world. However, there usually aren't ecosystems with only two species. Most predators eat multiple prey species, so a predator may hunt the "easy" species to extinction, leaving only the harder-to-hunt species, which causes the predator population to fall as only a subset of the predators are able to succeed in these harder conditions.
Cicadas could be another example of a strategy to deal with prey decimation: by only emerging every N years, food sources have time to regenerate between cycles. Similarly, many large predators are nomadic - as they reduce prey availability in one area, they look elsewhere, giving the prey in that area time to recover.
I think geography and terrain also help a lot in the real world: prey is usually smaller than predators and thus has more hiding spots. Maybe I should have implemented a 'turtling' mode, where organisms could spend part of their time immobile and invulnerable, but also not gaining energy, as a way to prevent predation. I think sensory organs would still probably be necessary to make that strategy work.
Yeah, I was able to achieve short temporary equilibria, but in every case the predators would slowly die out (and re-evolve later, and die out again), or they'd be too successful and kill everything.
I too implemented GOL at some point (when I had a look at SDL) and for fun changed the (boolean) game field to integers that I mapped to grayscale (and later RGB). So instead of killing/giving birth to cells you just decrease/increase their integer value. The result looks like a spreading fungus (with the classic horizontal/vertical/diagonal patterns) which can be very chaotic when numbers start to overflow and underflow.
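That variant is only a few lines with NumPy (a sketch of the idea as I understand it; the >127 "alive" threshold and the ±16 step are made-up parameters, and uint8 arithmetic supplies the overflow/underflow wraparound):

    import numpy as np

    def step(grid):
        """Life-like update on a uint8 grid: instead of setting cells
        alive/dead, nudge their values up or down and let them wrap."""
        alive = (grid > 127).astype(np.uint8)
        n = sum(np.roll(np.roll(alive, dy, 0), dx, 1)
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0))
        birth = ~alive.astype(bool) & (n == 3)
        survive = alive.astype(bool) & ((n == 2) | (n == 3))
        # uint8 addition wraps around, producing the chaotic look
        return grid + np.where(birth | survive, 16, -16).astype(np.uint8)

    grid = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
    for _ in range(100):
        grid = step(grid)  # map `grid` to grayscale to render each frame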
It's a really fun and engaging way to play with 2d graphics and simulation.
My dad, who was teaching me, wrote the canvas+JS 2D visualization for me, while I built the engine that managed the state of the world. When you're learning, asking for help from those who are more experienced than you is huge. Also, don't be shy about taking someone else's thing and modifying it until it does what you want it to do: these are educational projects, so you don't really need to worry about licensing or those kinds of things. I learned a lot from 'hacking' in-browser web games before trying to write my own: cheat at the game, then add a button that improves QoL, then try to add a feature.
Try to simulate something that you're interested in! Everyone has their own interests, but I find these kinds of problems a lot of fun to work on.
When you're learning, simulation forces you to make reductive approximations and simplifications - you just can't do it the "right" way, so you have to find something close that behaves similarly. Try to model a bunch of simplistic rules that replicate a phenomenon: flocking/crowd/traffic behavior, the spread of memes or viruses, growing plants, etc. The sorts of problems where you have a bunch of tiny particles/cells that each have simple behavior but can interact with each other are very rewarding to get working, because simple rules can produce complex system behavior.
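For example, spontaneous traffic jams fall out of a handful of per-car rules - here's a sketch of the classic Nagel-Schreckenberg cellular automaton (road length, car count, and parameters arbitrary):

    import random

    ROAD, VMAX, P_SLOW = 100, 5, 0.3      # cells, speed cap, dawdle chance
    cars = sorted(random.sample(range(ROAD), 20))
    speed = {x: 0 for x in cars}

    def step(cars, speed):
        new = {}
        for i, x in enumerate(cars):
            gap = (cars[(i + 1) % len(cars)] - x - 1) % ROAD
            v = min(speed[x] + 1, VMAX, gap)        # accelerate, don't crash
            if v > 0 and random.random() < P_SLOW:  # random slowdown
                v -= 1
            new[(x + v) % ROAD] = v
        return sorted(new), new

    for _ in range(50):
        cars, speed = step(cars, speed)  # jams emerge with no cause at all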
>The Autoverse is an artificial life simulator based on a cellular automaton complex enough to represent the substratum of an artificial chemistry. It is deterministic, internally consistent and vaguely resembles real chemistry. Tiny environments, simulated in the Autoverse and filled with populations of a simple, designed lifeform, Autobacterium lamberti, are maintained by a community of enthusiasts obsessed with getting A. lamberti to evolve, something the Autoverse chemistry seems to make extremely difficult.
He is, but sometimes I go cross-eyed reading his books. He likes to explore some crazy topics, which makes for great reading but sometimes confusing reading too.
Pick up Axiomatic, try the first story (The Infinite Assassin) and see if it grabs you. It's got the best parts of something like Snow Crash, which drops you in without much exposition, and lets the visuals/action lead. Learning to Be Me from the same collection really stood out to me, it's a unique take on the "uploading your consciousness" trope.
I've written a lot of software against GPUs, albeit some years back. The main challenge was that many of the best libraries had CUDA support (or cuDNN support on top) but no support for other vendors' GPU frameworks.
Getting CUDA to work well is hard. Not hard on your laptop, not hard on a particular machine, but hard to make work everywhere when you don't know the target environment beforehand - there are different OSs, different OS versions, different versions of CUDA, and different cards with different capabilities. But we did get it to work fairly widely across client machines.
The same effort needs to be put into getting things to work for other manufacturers, except a layer deeper, since now you're not even standardized on CUDA. Many companies just don't make the investment. Our startup didn't, because we couldn't find people who could make it work cost-effectively.
What I really wish is that the other manufacturers would themselves test popular frameworks against a matrix of cards under different operating systems and versions. We see some of that, for example, with the effort of getting TensorFlow to run on Apple's M1 and Metal. I just don't see a random startup (e.g., mine, with 12 employees) being able to achieve this.
For example, if I knew from the manufacturer that I could get TensorFlow X to work on GPU Y on {CentOS N, Ubuntu 18/20}, I would gladly expand support to those GPUs. But sometimes you don't know if it is even possible, and you spin your wheels for days or weeks - and if the market share for the card is limited, the business justification for the effort is hard to make. The manufacturers can address this issue.
Many organizations writing GPU-accelerated software are not actually "writing CUDA": either they are using key libraries which use CUDA (e.g., TensorFlow), or it is a layer deeper (e.g., I use a deep learning library, the deep learning library uses cuDNN, and cuDNN uses CUDA).
Other orgs are using something written in another language that compiles into CUDA.
Either way, to replace CUDA, that middle component needs to be replaced by someone, and ideally it should be the card manufacturers themselves (IMHO). I can't imagine any small/medium organization having sufficient engineering time to write the middle component and keep it up to date with the slew of new GPUs, OS updates, and new GPU features - unless it is their core business.
Technically it is entirely viable. Vulkan/OpenGL compute shaders offer a more or less 1:1 equivalent to every CUDA feature.
It is more of a usability issue. CUDA was designed as a GPGPU API from the get-go and therefore tends to be "easier" to use. OpenCL could have been a better replacement, but the API was really not on par with CUDA when it comes to usability. SYCL finally looks like a good answer from the Khronos Group, but it is so late: you already have a lot of people who know how to use CUDA, a lot of learning resources, etc.
The Nvidia Docker runtime is more recent. There were some issues using it with k8s - you couldn't allocate a non-integer number of GPUs (e.g., you can't split a GPU across allocations).
That said, the NVIDIA Docker runtime is awesome now - however, all this just underscores how far behind the non-NVIDIA stack is!
OpenCL certainly has the potential to be a universal API but support for it is surprisingly spotty given its age.
For proprietary implementations, Intel appears to have the broadest and most consistent support. Nvidia skipped OpenCL 2.x for some technical reason (IIUC). AMD is a complete mess, for some reason not bothering (!!!) to roll out ROCm support for their two most recent generations of consumer GPUs.
In open source "Linux only" land, Mesa mostly supports OpenCL 1.2 (https://mesamatrix.net/#OpenCL) at this point. So if you're targeting Linux specifically then that's something at least.
Good luck shipping an actual product using OpenCL that will "just work" across a wide variety of hardware and driver versions. POCL and CLVK are both experimental but might manage this "some day". In the mean time, resign yourself to writing Vulkan compute shaders. (Then realize that even those will only run on Apple devices via MoltenVK, and despair at the state of GPGPU standardization efforts.)
OpenCL feels pretty stagnant. Showstopping bugs staying open for years. Card support is incredibly spotty. Feature support isn't even near parity with CUDA.
This despite v3.0 being released just last year... And completely breaking the API.
A simple artificial life / cellular automaton framework would be a great demo for portable compute shaders. I'm looking at this as a potential starting point in my compute-shader-101 project. If someone is interested in coding something up, please get in touch.
Yeah, that looks like probably the most promising stack for the future, but there are certainly rough edges today. See [8] for a possible starting point (a pull request into that repo or a link to your own would both be fine here).
OpenCL is sadly stagnant. Vulkan is a good choice but not itself portable. There are frameworks such as wgpu that run compute shaders (among other things) portably across a range of GPU hardware.
In what way is Vulkan not portable? It runs on all operating systems (Windows 7+, Linux, Android, and Apple via MoltenVK) and all GPUs (AMD GCN, nVidia Kepler, Intel), and shaders (compute and rendering) are to my knowledge standardized in the portable SPIR-V bytecode.
WGPU is more portable, since it can use not only Vulkan but also other APIs like OpenGL and Direct3D 11, but Vulkan is already very highly portable for almost everyone with a computer modern enough to run anything related to GPU compute.
It's kinda portable, but I've had not-great experiences with MoltenVK - piet-gpu doesn't work on it, for reasons I haven't dug into. It may be practical for some people to write Vulkan-only code.
Vulkan is supported on basically all modern platforms except for Apple operating systems, Apple refuses to support open graphics APIs on their platform and there's nothing anyone can do about it - this isn't a Vulkan problem. Even OpenGL is deprecated and support hasn't been updated for years, and that's basically the most open graphics API in existence.
You're basically complaining about Vulkan not being portable enough because Apple made their own™ Vulkan-like API instead of actually supporting Vulkan. And some other people got a subset of Vulkan working on top of that.
Why don't you complain about Apple not supporting Vulkan instead?
Nowadays I think it would be SYCL. It uses the same kind of single-source API that CUDA offers, and it is portable. Technically it can even use a CUDA backend.
Also Intel. Being Nvidia-only is not very good from an accessibility point-of-view. It means that only ML researchers and about 60% of gamers can run this.
No they don't. Also, OptiX isn't a renderer; it just traces rays and runs shaders on the ray hits on Nvidia cards. Memory limitations and immature renderers hinder GPU rendering. The makers of GPU renderers want you to think it's what most companies use, but it is not.
Also, Hollywood is a city, and most computer animation is not done there. The big movie studios aren't even in Hollywood, except for Paramount.
Octane is exactly the type of thing I'm talking about. This is not what film is rendered with. It is mostly combinations of PRMan, Arnold, or proprietary renderers - all software.
I don't know where you are getting "nvidia hate", studios that use linux usually use nvidia, mostly because of the drivers.
None of this changes the fact that OptiX is not a renderer.
The difference between current AMD and Nvidia GPUs isn't even that large if viewed in terms of price/performance ratio...
Comparing cards at similar prices, AMD has slightly less performance while offering significantly more GDDR memory.
I still use an RTX 3080 though - thankfully I got one before the current craze started.
The difference between AMD and Nvidia is _huge_ when you look at software support, drivers, etc. Part of this is network effects and part of it is just AMD itself. But the hard reality is I'd never buy AMD for compute, even if it were better on specs.
Just as a random anecdote, I grabbed an AMD 5700xt around when those came out (for gaming). Since I had it sitting around between gaming sessions, I figured I'd try to use it for some compute, for Go AI training. For _1.5 years_ there existed a showstopping bug with this, it just could not function for all of that time. They _still_ do not support this card in their ROCm library platform last I checked. The focus and support from AMD is just not there.
He was a professor of mine in grad school, he also did visual effects for The Last Starfighter, and the early work on character recognition in the Apple Newton. Cool dude.
Continuing the thread of other individuals who've done interesting work in this area, I've always been a huge fan of Jeffrey Ventrella's "Gene Pool": http://www.swimbots.com/genepool/
Yes, cool dude indeed! Glad you mention him, I was thinking of him when I saw this article. He visited the group where I was doing my PhD many years ago and also gave a fantastic talk on evolution, learning, and artificial life -- I was quite impressed. He has some cool stuff in his website too (http://shinyverse.org/larryy/).
This is amazing. My question is whether there are emergent structures in a long-running sandbox environment? The videos that were posted appeared to have quite complex structures but it was unclear whether they were designed or if they "evolved" from earlier more-basic structures. Would be curious to get the author's take.
I wrote a (much less fancy) cellular automata program I called "evol" [0]. It simulates organisms on a flat grid. They have opcodes which are randomly permuted from time to time. If they can collect enough energy, they split; if they lose too much, they die. Having more opcodes costs more energy. There is no hinting or designing; everything starts with a simple "MOVE_RANDOM".
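The core loop of something like this is tiny (a sketch, not the actual evol code; opcode names other than MOVE_RANDOM, plus all costs and thresholds, are invented):

    import random

    OPCODES = ["MOVE_RANDOM", "MOVE_OPPOSITE", "HARVEST"]  # mostly invented

    class Org:
        START, SPLIT_AT = 10.0, 20.0

        def __init__(self, code):
            self.code, self.energy = code, self.START

        def tick(self, cell_energy):
            self.energy += cell_energy            # share of the cell's energy
            self.energy -= 0.1 * len(self.code)   # more opcodes cost more
            if self.energy <= 0:
                return None                       # starved: dies
            if self.energy >= self.SPLIT_AT:      # well-fed: splits
                self.energy /= 2
                child = Org(list(self.code))
                if random.random() < 0.05:        # occasional random mutation
                    i = random.randrange(len(child.code))
                    child.code[i] = random.choice(OPCODES)
                return child
            return None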
If you leave the program running long enough, they do actually evolve different behavior. Specifically, they learn to recognize that there are other lifeforms in one direction and then move in the opposite direction, reducing competition over the fixed amount of energy in a cell. You can actually see the population density rise when this happens. Since the grid wraps, you generally get them "flowing" in one direction, cooperatively.
The world is simple and boring, and it doesn't have graphics. Also, since the naive "DNA"/opcodes I chose use branching and random number generation, it's very slow and can't be simulated on a GPU.
Fun project nevertheless. The last few months, I've been slowly rewriting it in Rust and adding more stuff like terrain height. Haven't published the Rust version yet as it's incomplete—got hung up on the poor state of its terminal libraries.
Very cool. Interesting to hear that they actually managed to evolve. Would be curious to see what happens when they can eat each other, though I recognize that might be significantly more complicated.
One of the YouTube videos claims that they are self-replicating structures that were "evolved" in another simulation. So possibly the appearance of being designed comes from the fact that they were selected from the best of whatever was produced by that other simulation and placed together for a video.
Not a biologist, but I understand that isolation is an important factor in diversity, and by default this simulation wouldn't have that. So it makes sense to evolve them in different areas and then put them back into the same area.
Have you ever seen http://boxcar2d.com/? It requires Flash so it probably doesn't work anymore, but it used genetic algorithms to "design" a 2d car to travel over bumpy terrain.
Not sure if this is just for fun or for research. Artificial Life is/was a field of research for a while; papers were written, books were published [1][2], etc. The field sought to study biology and the complexity of living things by experimenting with simulations of the real thing(s).
This reminds me of what IMHO is best use of artificial life in a game, Unnatural Selection. In the game you had to select and breed creatures to go against other enemy creatures. [3][4]
Then there's Core War from 1984 [1]. 11 years ago I computationally evolved a warrior (a program competing for virtual resources) and submitted it to the nano hill [2], it's still ranked top 20 to this day. Every few months the hill emails me the stats of someone trying to beat us with a new warrior :)
There is an old artificial life simulator (darwinbots http://wiki.darwinbots.com/w/Main_Page ) that is inspired by grobots-like programming games. Each organism is driven by its own code that can mutate randomly at each reproduction.
I've been trying to produce a web version of it. This is how far I got (before more or less giving up):
Pretty cool! I see some rare behaviors - maybe there is a weak magnetic property, and certain combinations of particles are more prone to it than others? I'm trying to rationalize the behavior I see after about a minute, where some "molecules" seem to start trailing others.
Not really. If we're in a simulation, that just raises the question: what is the "real" universe that the simulation is running in? It pushes the question of the nature of our universe up a level where we have zero visibility. No more satisfying than "where was god before he created the universe?"
The mathematical universe sidesteps this problem. If there is a concise and complete model of the universe, that is sufficient for it to exist. A simulation might also be considered a mathematical model, and it would exist in the same way even if nothing ever runs the simulation. So I guess maybe it could be a simulation, but we mustn't ask what it runs on, but what is the program?
> The mathematical universe sidesteps this problem. If there is a concise and complete model of the universe, that is sufficient for it to exist.
This then leads to how does math exist instead of nothing? Math is a concept, and if concepts exist then that is not "nothing".
Many people confuse "nothing" with the vacuum of space and particles appearing out of nowhere. In this case, we have something (space, vacuums, and particles), not nothing.
Because nothing is precisely what does exist.
But nothing implies something, so my working theory is that nothing's implied opposite something is itself the first thing, then some cellular automata like progression results from similar logical self-reference and down the line our physics (and the entirety of every logical permutation of information n-dimensionally) results from that.
A similar conception I've heard is that it's like something and nothing, at the beginning of time, made a bet about whether there'd be something or nothing, but the act of making the bet was already something, rigging it in something's favor. Nothing thought that was bullshit and tried to call it so, and they've been battling it out ever since.
Put another way, nothing has absolutely no properties - including the property of being nothing, or empty. If an empty nothing lacks the property of being empty, or nothing, then something must arise.
I'm working on writing a paper along those lines. I do believe the answer to "why is there something rather than nothing?" may be that nothing is the only thing that exists, but its instability creates our apparent reality through a self-referential observer-observed loop. I would love to chat - use my research email.
Does science fiction exist? Or Pokemon? If your view is that they don't, you might similarly argue that math is a human-made construct (which happens to work well to describe our universe, but that may just be survivorship bias, as in physics we only use the math that works; for instance, we discard imaginary solutions to classical equations of motion).
I do believe that is the right view: math is a man-made "language" inspired by physics, which is more fundamental.
If the universe will never leak any information about its origins, then those origins cannot affect us in any way, ever. This doesn't make any such hypothesis less likely, but it makes them irrelevant to us.
If we are in a simulation and this simulation obeys similar constraints to our computational models, we can test hypotheses on the basis of information theory. Or possibly find error-correction codes encoded in string theory as some quantum physicists have suggested.
A lot of physics seems unnecessarily expensive to compute. Quantum mechanics suggests we have either nonlocality or exponential blowup, both of which cause simulation challenges. With just classical physics you don't need to deal with that.
On the other hand, there are a lot of things that make physics tractable to compute, such as the +++- metric tensor and other factors forbidding causality violations. A universe with closed timelike curves becomes very expensive to compute because you usually have to use implicit solvers that are slow and might not even converge, corresponding to various time travel paradoxes.
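For a sense of the "exponential blowup" horn of that dilemma: a classical simulation that tracks the full quantum state of n entangled two-level systems needs 2^n complex amplitudes (standard statevector counting; the 16 bytes per amplitude below just assumes complex doubles):

    def statevector_bytes(n_qubits, bytes_per_amplitude=16):
        """Memory needed to store the full state of n two-level systems."""
        return (2 ** n_qubits) * bytes_per_amplitude

    for n in (10, 50, 300):
        print(f"{n:>3} qubits -> {statevector_bytes(n):.2e} bytes")
    # 300 qubits already needs more bytes than there are atoms
    # in the observable universe (~10^80)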
Everyone saying it would be computationally expensive to simulate our universe is failing to put their mind outside of the box of our universe. Imagine for a moment that there's a universe which compares to our universe similarly to how our world compares to one inside of Conway's Game of life.
Granted, this scenario doesn't provide us with anything we can take action on, but the idea that we're in a simulation at all doesn't, either.
Some self-replicating "creature" in Conway's Game of Life could rowhammer the machine it runs in such that the creature (or a copy of it) now exists outside the Game and is able to replicate across the machine and maybe even across the network. If it takes control of a robot, you could argue it's "escaped" its simulation and now exists in our physical world.
The odds of all that happening without it being prevented are all but zero.
We have a better chance that one of the simulators grows attached and—against protocol—decides to uplift us from the simulation into a form where we can directly communicate with them.
So you only have to assume physics totally different from ours, and we can’t observe it. Isn’t that a bit of a weak point? And what would be the point of this simulation that has been running for billions of years?
>you only have to assume physics totally different from ours
You don't have to assume anything is true if you don't want to, but if you want to consider whether we're living in a simulation, it's probably worth considering.
>And what would be the point of this simulation that has been running for billions of years?
First, it's billions of years in our time. Second, what's the point of Conway's Game of Life?
> You don't have to assume anything is true if you don't want to, but if you want to consider whether we're living in a simulation, it's probably worth considering.
Which, for me, is a dead end. It's the same as assuming there's a God, except the moral implications are worse.
> First, it's billions of years in our time.
So, if billions of years of our time fly by like your average simulation run in "their" universe, the simulation can't be very meaningful to them. And it makes the distance between our and "their" physics even larger.
> what's the point of Conway's Game of Life?
None, and that's why nobody runs one with 10^120 cells for billions of years. And if somebody did, the result would be incomprehensible. The gap between us and our creators must then be incomprehensible for us. All this is so outlandish, that the word "likely" shouldn't be anywhere near this discussion.
> And what would be the point of this simulation that has been running for billions of years?
There is zero evidence for or against the simulation hypothesis, so why would some random person on HN be able to have the answer to this question even if we are in a simulation or even if we simply assume that we are?
Even if it's easy, simple simulations would still dominate the space of all possible simulations if the resources of the simulators are finite. So simpler simulations are more likely. (https://osf.io/ca8se , disclaimer I'm the author)
> A lot of physics seems unnecessarily expensive to compute.
When you make a simple simulation of rigid bodies with classical physics you often get numerically unstable results - bodies jerking against each other, slowly passing through, etc. One common way to solve this is to introduce a "frozen" state. When objects are close enough to be at rest with balanced forces - you mark them as frozen and don't compute them every frame to save computing power. You only unfreeze them when some other unfrozen object interacts with them.
Additionally, hierarchical space-indexing algorithms are often used to avoid n^2 comparisons when calculating what interacts and what doesn't. These algorithms often use heuristics and hashing functions with collisions to subdivide the problem, which might result in objects becoming unfrozen without actually touching each other.
The result from inside this simulation would be weird, nonlinear, nonlocal and look a little like wave function collapse (if particle A whose coordinates hashed through this weird function are the same as those of particle B happens to unfreeze - the particle B unfreezes as well despite not interacting in any way). And this would be probably considered "hard to compute" compared to the simple equations the system developer wanted to simulate.
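A broad-phase sketch of the kind of thing I mean (made-up constants; the hash multipliers are the sort commonly used for spatial hashing) - note how two objects in distant cells can land in the same bucket and get paired anyway:

    from collections import defaultdict

    def cell_hash(x, y, cell=4.0, buckets=1 << 16):
        # hash a position's grid cell into a fixed table; distinct cells
        # can collide, pairing objects that never actually touch
        ix, iy = int(x // cell), int(y // cell)
        return (ix * 92837111 ^ iy * 689287499) % buckets

    def broad_phase(objects):
        """Yield candidate pairs; only same-bucket objects are tested."""
        table = defaultdict(list)
        for obj in objects:
            table[cell_hash(obj["x"], obj["y"])].append(obj)
        for bucket in table.values():
            for i in range(len(bucket)):
                for j in range(i + 1, len(bucket)):
                    yield bucket[i], bucket[j]  # possibly spurious pair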
Example that might be more relatable for scientists: it's much easier and computationally cheaper to simulate the 3-body problem numerically than analytically. But describing the numerical simulation's behavior in terms of physical equations requires a much more complex model than the equations you wanted to compute in the first place. You have to include implementation details like floating-point accuracy, overflows, etc. And if you go far enough, you have to add the possibility of a cosmic ray hitting a memory cell in the computer that runs your simulation.
I'm not saying this is the reason QM is weird - I don't understand QM well enough to form valid hypotheses ;) - but I'm saying we might be mistaking the intention of The Developer for the compromises (s)he made to get there. If you take any imperfect implementation of a simple model and treat it as perfect, the model becomes much more complex.
The physics of the simulator would have to be totally different to support a simulation with exponential computational costs. You probably couldn’t have anything like conservation of energy. Polynomial overhead would feel much more plausible.
Consider the physics of the simulator is literally the physics of our current universe. It need not be running on a binary substrate, the computation platform could just be the mass of the universe over time.
Nope, long distance entanglement collapse breaks this. You either need exponential blowup to simulate all possible eigenvalues or you need superluminal coordination.
There isn’t actually an observed/unobserved distinction in physics. Unless you mean the simulation is specifically targeting humans, which is a vastly more complicated proposal.
> Unless you mean the simulation is specifically targeting humans, which is a vastly more complicated proposal.
It's also the most likely proposal (with current understanding of universe).
The "Axis of Evil" (cosmology) calls into question the Copernican view of the universe, essentially suggesting our solar system is somehow back at the center of the universe.
If WE are the subject of the simulation, it's likely everything our instruments observe are like the sky on the Truman show - not there, just phantoms of what we would expect to be there with what the simulation wants us to know about physics.
There's a max speed, the speed of light - what if this is the max processing ability of the computer we're running on? What if we're not on a computer at all but some sort of wetware computer system that grows as it needs to and never runs out of resources?
What if the speed of light in the parent sim is 500x bigger for them, or ours is like a centimeter in comparison.
A dream is a simulation, we could all be dream creatures to some huge extra-dimensional being. Not everything pre-supposes human technology.
I've seen literal "glitches" in reality, so it's pretty easy for me to believe that reality isn't something completely set in stone. For others it challenges everything they believe in, for that I say open your mind.
Donald Hoffman believes that what we see is like what someone in a VR headset sees: outside the VR headset, who knows what that world is like, but in this one everything except math (which he believes is universal and extra-universal) is made to fit this universe. Physics, science, all of it is unique only inside the headset. There could be many headsets with different settings running in parallel (parallel worlds/universes) - maybe the speed of light is faster in one than the other, maybe gravity works differently, etc. So many things in our understanding are really like "settings": Planck's constant, pi, the speed of light. Almost reads like a config file.
I mean, if you buy into a "God" being: computing is a thing we have, so why wouldn't God have it? Wouldn't it even make more sense for him to just code up a simulation? It's got to be a lot less demanding than building a whole universe from nothing.
Yes, any evidence of complexity (assuming that simpler universes are more likely) is evidence against the simulation hypothesis: The Simplicity Assumption and Some Implications of the Simulation Argument for our Civilization
https://osf.io/ca8se
(I'm the author :) )
Complexity and simplicity seem pretty biased toward human understanding?
Just as a thought experiment, I'd propose that our universe and the human experience is incredibly simple. Humans were only given a limited number of senses so that the simulation can be run in this "low fidelity". Compared to the thousands of senses a level or two up. We also are simulated in a simple linear time model, only able to experience a single time at once, greatly reducing the complexity and fidelity needed. Same for the number of dimensions we are able to sense.
Yes, you need to remain inside the reference class of human simulations, so in a sense there is a bias in where you want to draw the line of what counts as a human simulation. But once you do, the result is not ambiguous.
Running an AI simulation on an 8-bit Nintendo is going to be a lot more complicated and difficult than running one on a 512^e38-bit (pulled out of ass) 100k-th gen Radeon GPU that won't be developed for another 1000 years...
In a universe where time itself could be fluid - where it could be easy to reverse events, rewrite events, etc. - quantum computers might work even better than ours ever could, since we're limited by causality.
I mean the people beyond this universe could have 50 senses, like a sense of how far up or down they are, or how much water they can breathe in before they need oxygen if o2 is even a thing, or a sense of time so they can go back/forward through time. If they have 50 senses, our 5 sounds like "nothing" to simulate.
It's all a matter of perspective, I'm sure an ant feels like they keep pretty busy and nothing could possibly simulate their colonies, but I'm sure that would be pretty easy.
It is actually possible :) (with some assumption on the distribution of the simulations)
Complex sims are less likely, so the likelihood of increasing the optional complexity of our sim (for instance, interstellar travel) should be slim. It's still unsolved how much more unlikely, but with a large enough increase in complexity (say, interstellar travel over billions of light years) you will hit sims that are unlikely enough.
For folks who like this sort of thing, I will once again make my monthly plug for folks to check out "The Evolution of Cooperation", by Robert Axelrod.
Also, "The Selfish Gene". Super fun read. Also, there are a bunch of really interesting videos made to demonstrate concepts like the evolution of altruism on youtube: https://www.youtube.com/watch?v=goePYJ74Ydg
Bought it now! I really liked Steven Levy's "Artificial Life" as a light introduction to this world. Sadly, the book isn't too far out of date despite being 30 years old now.
I think you need to read the first few chapters of After Virtue to appreciate the concept behind creating things for people to enjoy directly; it's a form of art, essentially reducing the overgrown calculator known as a "computer" to a beautiful vase holding a flower arrangement.
This is a sort of test playground for marketing, brands, and so on since the programs occupy a no man's land between games, academic research, toys, entertainment, and programming.
It also satisfies the self-feeding dogfood condition: similar to games such as RoboWar, it is difficult to resist the temptation to experiment at the simulation level, a phenomenon that could be described as the MFTL effect.
Reminds me of ParticleLife (for example [1]), which has much simpler, but more random, rules. Lots of interesting organisms emerge there too, but I rarely see replicators.
I used this as inspiration to learn about Unity ECS and made a 3D version with WebGL support [2]; native builds obviously have much better performance. But it is all CPU-only. Anyway, what I found very interesting is dynamically switching between 2D and 3D. Most organisms survive the dimensional increase or reduction and just reconfigure themselves into a similar structure.
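For anyone curious, the whole particle-life-style rule set is roughly this per-pair force (a sketch; the radii, the 0.3 knee, and the random matrix are my guesses at typical parameters, not any specific implementation):

    import random

    N_TYPES, R_MAX = 4, 0.1

    # asymmetric attraction matrix: how strongly type a pulls on type b
    attract = [[random.uniform(-1, 1) for _ in range(N_TYPES)]
               for _ in range(N_TYPES)]

    def force(dx, dy, a, b):
        """Hard repulsion up close, type-dependent pull further out."""
        r = (dx * dx + dy * dy) ** 0.5
        if r == 0 or r > R_MAX:
            return 0.0, 0.0
        if r < 0.3 * R_MAX:
            f = r / (0.3 * R_MAX) - 1.0            # universal repulsion core
        else:
            f = attract[a][b] * (1.0 - r / R_MAX)  # fades to zero at R_MAX
        return f * dx / r, f * dy / r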
I don't think you could really call this a cellular automaton, as that's defined by the cellular-neighbourhood-processing update rule. To put it another way, this looks like a 'vector' simulation (or automaton) compared to GoL's 'raster' update.
There are certainly a lot of other fascinating cellular automata though! Even within 2-state 2D totalistic automata (the class GoL is from) there's loads to see and lots of surprises! Well worth exploring! (There's an app called 'golly' that's good for that, and its cousin 'ready' does related (also 'raster') reaction-diffusion simulations.)
In no way comparable in sophistication, but I did find making an n-body simulation to be unexpectedly profound.
My universe started as a uniform random distribution of small stationary objects, the only rules that existed were gravity (F=gm1m2/r^2) and inertia (F=ma).
Mass started clumping together, orbiting each other, eventually forming a relatively stable arrangement of what we recognize as stars orbited by planets orbited by moons.
With two simple rules to govern my universe, an emergent order had occurred that mirrored my reality.
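A sketch of those two rules in code (units, timestep, and the softening term are arbitrary choices on my part; softening is a standard trick to avoid numerical blowups at tiny separations):

    import random

    G, DT, SOFT = 1.0, 0.01, 0.05

    bodies = [{"x": random.uniform(-1, 1), "y": random.uniform(-1, 1),
               "vx": 0.0, "vy": 0.0, "m": 0.001} for _ in range(200)]

    def step(bodies):
        for a in bodies:
            ax = ay = 0.0
            for b in bodies:
                if a is b:
                    continue
                dx, dy = b["x"] - a["x"], b["y"] - a["y"]
                r2 = dx * dx + dy * dy + SOFT * SOFT
                r = r2 ** 0.5
                ax += G * b["m"] * dx / (r2 * r)  # F = G*m1*m2/r^2, a = F/m
                ay += G * b["m"] * dy / (r2 * r)
            a["vx"] += ax * DT
            a["vy"] += ay * DT
        for a in bodies:
            a["x"] += a["vx"] * DT
            a["y"] += a["vy"] * DT

    for _ in range(1000):
        step(bodies)  # clumps, orbits, and stable systems emerge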
Definitely going to play around with this. I'd love to see examples where you have a ton of at first non-usable energy spread out in the world, with some "hot spot" of energy in a corner with some setup allowing for evolving mechanisms. Seeing mechanisms form that are able to utilize the spread-out energy would be really fascinating.
Reading through the comments, the number of folks who simply can't get this running on their system seems to be fairly large.
The GPU compute ecosystem is truly in a very sorry state, and NVidia is very much to blame for this: in their quest to get a stranglehold on the market, they've reached a point where things don't even work reliably on their own products.
WebGL seems to be the only kind-of-robust way to do portable GPU code now, if you don't have encyclopedic experience of deploying native GPU apps - or the time, $$ budget, and opportunity-cost budget to engage in multiplatform testing and fixing.
Are there any similar projects that would help newbies like me to learn a bit about any real area of life sciences, e.g., biochemistry, cell biology, neuroscience in a fun and engaging way? This project looks like tons of fun, but I am too ignorant to judge whether it would teach me anything applicable beyond the scope of this particular bit of software.
This reminds me of the awesome Scriptbots project by Karpathy (of Tesla self driving fame) from 10 years ago, which I spent countless hours playing with: https://sites.google.com/site/scriptbotsevo/
This is going to be a random comment, but the thing that struck me most was how close this person's GitHub username is to my own! github.com/chrxr vs github.com/chrxh. Feels bizarre. And seeing their actual name, it appears their username has the same relationship to their real name as my own username does.
Didn't work on my AMD 3700x + 64GB + GTX 1070.
Windows 10 is updated and nvidia drivers too.
Got only a black screen after clicking "play"
:(
Tested both 2.52 and 2.47 versions.
Nope. Appears to be broken for Nvidia 10 series. I'm on Intel with a 1070ti + another user reports issues on the 1080ti (see similar post in this thread; reported to author here: https://github.com/chrxh/alien/issues/21).
Same problem with my 1070 Ti (also Pascal architecture). When first started, I can pan, zoom, edit, etc. But as soon as Run is hit, rendering completely breaks: scroll bars indicate zoom is working, but the display never updates. In addition, the program hangs on exit (one CPU pegged at 100%).
Have updated to current CUDA (11.3.1) and current NVidia driver (466.77) with no luck.
Could someone explain the obsessive devotion to doing all Artificial "Life" research in terms of Cellular Automata? If you could supply the mathematical reasoning for this, I would be very interested in hearing the answer.
And preferably an answer that goes beyond just saying that Von Neumann used Cellular Automata in his ALife research.
Also, if anyone knows of alternative methods to CA in studying the properties of life then I would also be interested in learning of these.
All in all, I find these procedurally generated art pieces rather underwhelming as any serious attempt to study what artificial life is or can be.
> digital organisms and evolution
This is a claim without definition of what a non-biological organism even is. Could we just claim that any CA, any program is "living" while it is running?
I would love to see some formality before claims are made in this area.
EDIT: After watching the "Planet Gaia" video [0], I feel even more like the excitement about this is no different than the excitement for a video game and not for actual scientific progress. Cool code and cool visuals. Very little in the way of understanding life better.
I think many artificial life simulations end up underwhelming because life is incredibly complex, and so it's very hard to simulate at scale. This ALiEn is perhaps the most advanced one I've seen, and it looks like even still they take some shortcuts (like copypasting interesting organisms from previous simulations together to create interesting interactions).
What you see as a criticism of this line of research I think is actually its reason: Life is arguably the most interesting thing in the universe, and if we can create it digitally it will surprise us. Evolution yields insights and solutions that you cannot predict. If we can synthesize what the minimal set of key properties are necessary for artificial lifeforms to create interesting unexpected outcomes, it helps us clarify the definition of what a non-biological organism could be.
I'm personally fascinated by the idea of autonomous digital agents that exist and self replicate while trying to earn cryptocurrency, which is used to pay for the hosting costs of themselves and their progeny. I think we are about two decades away from this being realized, but in the future, software services could self assemble, replicate imperfectly and evolve to please humans without any humans writing additional code: we'd just have to code a profitable LUCA, create suitable 'nests' and pay the organisms that please us. "What is life" is debatable, but IMO this would be a valid digital lifeform.
But this is a very unaddressed point: why focus on "simulation" when mathematical formalisms and theories could potentially be even more useful? Especially when most "simulations" run on some arbitrary set of hard-coded assumptions.
> What you see as a criticism of this line of research
To clarify, I was in no way criticizing ALife research. Quite the opposite. I am actually trying to help ensure it does not get stuck in a rut.
Ah. Well, speaking personally, mathematical formalisms and theories sound very intimidating, whereas CA-type simulations are so approachable that many are 'fun toys' kids can enjoy playing with.
A mathematically formal approach does sound potentially more useful, but I'd have no idea how to approach that sort of problem. I speculate that the Venn diagram of people who want to work on these types of problems and also have the depth of formal math understanding to actually achieve it is a small handful of people who have plenty of other interesting problems to work on.
Or maybe someone has done this work successfully, but the depth of knowledge required to understand it has prevented wider awareness?
> Plenty of A-life research doesn't use cellular automata as a model.
While I would like to believe you on that, one link does not seem sufficient to support the word "plenty" when the ratio of ALife projects built around CAs to those that aren't is extremely high.
It's not grid based. But that seems like a rather pedantic differentiation to make. It does quite literally utilize the concept of "cells". [0]
I find it interesting that the project seems to have hard-coded emergence with the concept of "tokens".
So, I am much less intrigued by the simulation examples when most of what we are seeing is just a procedurally-generated video game with pre-defined game rules. Much of it is not truly emergent.
Again, it's a "oh, that's cool" kind of factor, but a far cry from contributing to anything in the way of "artificial life" research.