
This makes me wonder. If cells were occasionally activated in random positions, what sorts of modifications to the architecture would be necessary to make the system robust and still able to carry out its computations with a reasonable probability of success? Would that even be possible? In other words, what kinds of modifications to this system would make it more like a natural biological system that has to cope with noise and interference from its environment?
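
One way to make the question concrete is to run a known GoL construction with a small per-cell, per-generation activation probability and see how quickly it breaks. A minimal sketch, assuming a toroidal numpy board; the names step/noisy_run and the noise model (each cell switched on independently with probability p) are my own framing, not anything from the article:

    import numpy as np

    def step(board: np.ndarray) -> np.ndarray:
        """One Game of Life generation on a toroidal board of 0s and 1s."""
        # Count the 8 neighbours by shifting the board in every direction.
        nbrs = sum(np.roll(np.roll(board, dy, axis=0), dx, axis=1)
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                   if (dy, dx) != (0, 0))
        alive = board.astype(bool)
        # Birth on exactly 3 neighbours, survival on 2 or 3.
        return ((nbrs == 3) | (alive & (nbrs == 2))).astype(np.uint8)

    def noisy_run(board: np.ndarray, generations: int, p: float,
                  seed: int = 0) -> np.ndarray:
        """Run GoL while switching each cell on with probability p per generation."""
        rng = np.random.default_rng(seed)
        for _ in range(generations):
            board = step(board)
            board |= (rng.random(board.shape) < p).astype(np.uint8)
        return board

    # e.g. seed a known machine, run noisy_run(board, 10_000, 1e-6),
    # and check whether it still carries out its computation.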



I wrote about the state of the art for this here: https://www.lesswrong.com/posts/mL8KdftNGBScmBcBg/optimizati...


Cells activating in random positions seems like a high bar. How robust are you to a rock suddenly appearing inside your body? (e.g. inside your heart?)

A lower bar might be being robust to individual spaceships/gliders coming in from outside. But even there, a collection of gliders is probably going to break through.


Fascinating question. I suppose it depends on the size of the rock? Anything "big enough" would cause real problems, but a single molecule of rock would probably not matter much. Presumably cosmic rays are triggering individual molecules in neurons all the time, but because there is a lot of redundancy, this does not propagate into real activity.


I think it would be cool to have a structure with a line where, no matter where vertically (within a large range of vertical positions) a single glider crosses the line moving to the right, and no matter which of the glider's four phases it is in, signals would be sent somewhere indicating roughly where the glider crossed, with the structure eventually returning to how it was (other than the signals it sent out).

Is such a structure possible? If so, how short can its recovery time be?

How fine can the resolution of the reported crossing locations be?
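
For reference, a glider crossing a vertical line to the right is really moving southeast or northeast, and it cycles through four phases. The sketch below is not a GoL construction, just a software stand-in that makes the detection problem concrete; the names GLIDER_SE and find_crossings are made up for the sketch:

    # The four phases of a southeast-bound glider as 3x3 bitmaps;
    # a northeast-bound glider is the vertical mirror image.
    GLIDER_SE = [
        (".X.", "..X", "XXX"),
        ("X.X", ".XX", ".X."),
        ("..X", "X.X", ".XX"),
        ("X..", ".XX", "XX."),
    ]

    def matches(board, top, left, phase):
        """True if the 3x3 window at (top, left) equals the phase bitmap."""
        return all(board[top + r][left + c] == (phase[r][c] == "X")
                   for r in range(3) for c in range(3))

    def find_crossings(board, strip_col):
        """Report (row, phase) for every glider sitting in the 3-column
        strip starting at strip_col (assumes strip_col + 2 is in range).
        Resolution here is exact; a real GoL detector built out of cells
        would presumably be much coarser."""
        hits = []
        for top in range(len(board) - 2):
            for i, phase in enumerate(GLIDER_SE):
                if matches(board, top, strip_col, phase):
                    hits.append((top, i))
        return hits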

We know that GoL is Turing complete, and if the field is initialized with random noise, then for any fixed finite pattern, as the board size goes to infinity, the probability that the pattern appears somewhere in the noise approaches 1. (Of course, I'm talking about enormous board sizes, possibly with many, many more cells than there are protons in the visible universe, not anything resembling practicality.) If intelligence and agency are computable (and it seems like they should be), "any fixed finite pattern" includes structures which, for some amount of time before they are destroyed by the surrounding noise, would simulate an intelligent agent.
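
The arithmetic behind "enormous board sizes", as a rough lower bound: tile the board into disjoint w-by-h blocks, so each block is an independent trial that matches a fixed pattern with probability 2^-(w*h). A sketch, assuming each cell of the initial noise is an independent fair coin flip (function names are mine):

    import math

    def prob_pattern_appears(w: int, h: int, side: int) -> float:
        """Lower bound on P(a fixed w-by-h pattern appears somewhere in a
        side-by-side random board), via disjoint, independent tiles."""
        trials = (side // w) * (side // h)
        p_one = 2.0 ** -(w * h)   # one tile matches the pattern exactly
        # -expm1(t * log1p(-p)) = 1 - (1 - p)^t, numerically stable
        return -math.expm1(trials * math.log1p(-p_one))

    def side_needed(w: int, h: int) -> float:
        """log2 of the board side at which the expected number of disjoint
        occurrences reaches 1 -- the scale where appearance becomes likely."""
        return (w * h + math.log2(w * h)) / 2

    # side_needed(32, 32) -> ~517, i.e. a board of side ~2^517 for a mere
    # 32x32 pattern -- hence "more cells than protons in the visible universe".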

If there are structures which, when surrounded by cells initialized with noise, have a high probability of withstanding that noise, and then proceed to clear out the noise in order to e.g. make copies of themselves, or just to grow, then we would expect the fraction of an infinite board containing such patterns (or things derived from them) to grow over time.

But whether such structures can exist in GoL depends in part, I think, on whether any large structure can withstand noise (or at least has a high chance of withstanding it).

(I am defining "structure" in a way that allows a structure to include, as part of it, a large empty region (of any fixed size) on its periphery. This should help with withstanding the noise, because it limits what the core part of the structure can be confronted with to things that can travel that distance.)


It's beginning to look like such patterns do exist. See this comment and the preceding thread: https://www.conwaylife.com/forums/viewtopic.php?p=137171&sid...


Oh! Very nice! Thank you!


The Stack Exchange thread on the GoL quasi-computer (which underlies the interpreter in TFA) mentions that the metapixels have borders that swallow gliders from other metapixels. However, those gliders are apparently used to communicate the states of the neighbouring metapixels, so presumably they don't disappear without effect. Also, the positions of the metapixels relative to each other are probably fixed.


Yep, restricting things to coming in from the outside might be a more realistic challenge; I think I had a more academic version of the question in mind. It seems like we already have tech designed to handle a certain degree of pure randomness. For example, we have error-correcting (ECC) RAM.
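
For a sense of the kind of redundancy involved, here is the textbook Hamming(7,4) code, a miniature version of what ECC RAM does: any single flipped bit in a stored 7-bit word is located and corrected. (Real ECC modules use a wider SECDED code, but the principle is the same.)

    def hamming74_encode(nibble: int) -> int:
        d = [(nibble >> i) & 1 for i in range(4)]   # data bits d0..d3
        p1 = d[0] ^ d[1] ^ d[3]   # covers positions 3, 5, 7
        p2 = d[0] ^ d[2] ^ d[3]   # covers positions 3, 6, 7
        p4 = d[1] ^ d[2] ^ d[3]   # covers positions 5, 6, 7
        # bit positions 1..7 hold: p1 p2 d0 p4 d1 d2 d3
        bits = [p1, p2, d[0], p4, d[1], d[2], d[3]]
        return sum(b << i for i, b in enumerate(bits))

    def hamming74_decode(word: int) -> int:
        bits = [(word >> i) & 1 for i in range(7)]
        # XOR together the positions (1-based) of all set bits; for a valid
        # codeword this is 0, and a single flip makes it spell out the
        # position of the flipped bit.
        syndrome = 0
        for pos in range(1, 8):
            if bits[pos - 1]:
                syndrome ^= pos
        if syndrome:                     # single-bit error: flip it back
            bits[syndrome - 1] ^= 1
        return bits[2] | (bits[4] << 1) | (bits[5] << 2) | (bits[6] << 3)

    # Every single-bit flip of every codeword decodes back to the original:
    assert all(hamming74_decode(hamming74_encode(n) ^ (1 << e)) == n
               for n in range(16) for e in range(7))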


Dave Ackley explores this aspect here: https://www.youtube.com/watch?v=oXiqMGhn9rk



