Reminds me of an old article about evolutionary circuit design. The computer was tasked with creating an oscillator using physical hardware. It came up with a really complex and unconventional design that no one understood, but it worked; only it didn't work outside of the lab. As it turned out, the algorithm had designed the circuit to use the radio noise from the computer it was running on as a source. It had effectively made an antenna.
This is the origin of that story, from "The Evolved Radio and its Implications for Modelling the Evolution of Novel Sensors":
"It seems that some circuits had amplified radio signals present in the air that were stable enough over the 2 ms sampling period to give good fitness scores. These signals were generated by nearby PCs in the laboratory where the experiments took place."
There have been quite a few papers on this problem. If you don't get your simulator AND your cost function right ... fun things happen. Things you probably didn't intend. Last year someone compiled a paper about a series of fun techniques algorithms have used to satisfy the demands of their creators ... while not solving the problem. It was featured in Popular Mechanics.
The way to solve it is something engineers hate for some reason: you explicitly design (and simulate) VERY bad hardware. What's bad hardware? A camera that has a noise floor at 30% of its measurements, yes, even in low-light conditions (also: the noise floor must vary a lot between runs of the algorithm). An actuator that goes the right way 90% of the time, where the result of a particular voltage on the motor varies by 20-30%. And in 10% of cases, it's just entirely stuck, without giving any feedback about that.
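To sketch what I mean (a toy model; every class name and constant here is illustrative, not from any real simulator):

```python
import random

# Deliberately bad simulated hardware: proportional sensor noise that is
# re-drawn every run, and an actuator with gain variation, occasional
# direction flips, and a chance of being silently stuck.

class NoisySensor:
    def __init__(self, rng):
        self.rng = rng
        # Re-draw the noise floor per run so the algorithm can't
        # overfit to one particular noise pattern.
        self.noise_floor = rng.uniform(0.2, 0.4)

    def read(self, true_value):
        noise = self.rng.gauss(0.0, self.noise_floor * abs(true_value) + 1e-3)
        return true_value + noise


class NoisyActuator:
    def __init__(self, rng):
        self.rng = rng
        self.stuck = rng.random() < 0.10  # 10% of runs: dead, with no feedback

    def apply(self, voltage):
        """Returns how far the motor actually moved."""
        if self.stuck:
            return 0.0  # silently does nothing
        direction = 1.0 if self.rng.random() < 0.90 else -1.0  # right way 90% of the time
        gain = self.rng.uniform(0.7, 1.3)  # response varies 20-30% around nominal
        return direction * gain * voltage
```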
The lesson is that what engineers always do, open-loop designs (I send voltage, motor moves), can be dramatically outperformed in control, accuracy, resiliency, and more by closed-loop designs on much worse hardware (I send voltage, motor moves, I check how it moves, I adjust the voltage).
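To make the contrast concrete, a sketch building on the toy classes above (the gain and step count are arbitrary):

```python
def open_loop(actuator, target, steps=50):
    position = 0.0
    for _ in range(steps):
        position += actuator.apply(target / steps)  # trust the model blindly
    return position

def closed_loop(actuator, sensor, target, steps=50, k_p=0.5):
    position = 0.0
    for _ in range(steps):
        error = target - sensor.read(position)   # measure instead of assuming
        position += actuator.apply(k_p * error)  # correct based on the measurement
    return position

rng = random.Random(1)
act, sens = NoisyActuator(rng), NoisySensor(rng)
print("open loop:  ", open_loop(act, 1.0))          # lands wherever the gain errors put it
print("closed loop:", closed_loop(act, sens, 1.0))  # converges near 1.0 despite the noise
```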
And yet somehow people seem to have incredible issues trusting such systems. For instance, autopilots are mostly open-loop designs. That's like a pilot flying a plane with their eyes glued shut and no sense of balance (i.e. they HAVE to trust one instrument, with no way to verify that, say, the plane actually goes up when they pull the stick. So if it's keeling over backwards or something, they'd just keep pulling the stick right up to the point they hit the ground). They implicitly trust that the plane does what the autopilot orders it to do. If for some reason it doesn't ... it's not going to end well.
The issue is that closed-loop designs are much harder to write. The solution, of course, is to not write them but to learn them: an autopilot that flies a plane and, when the plane breaks, rapidly learns how to fly the broken plane rather than trusting its (now inaccurate) model and killing everyone by doing so.
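A hand-wavy sketch of what "learning it" could mean, again using the toy hardware above. The moving-average gain estimate is purely illustrative; real adaptive control uses much more careful estimators:

```python
def adaptive_loop(actuator, sensor, target, steps=100, k_p=0.5, alpha=0.2):
    position, gain_est = 0.0, 1.0
    for _ in range(steps):
        error = target - sensor.read(position)
        command = k_p * error / max(gain_est, 0.1)  # compensate for the learned gain
        moved = actuator.apply(command)
        position += moved
        if abs(command) > 1e-6:
            observed = moved / command              # how the hardware *really* responded
            gain_est += alpha * (observed - gain_est)
    return position
```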
Cf. "The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities"
> Biological evolution provides a creative fount of complex and subtle adaptations, often surprising the scientists who discover them. However, because evolution is an algorithmic process that transcends the substrate in which it occurs, evolution's creativity is not limited to nature. Indeed, many researchers in the field of digital evolution have observed their evolving algorithms and organisms subverting their intentions, exposing unrecognized bugs in their code, producing unexpected adaptations, or exhibiting outcomes uncannily convergent with ones in nature. Such stories routinely reveal creativity by evolution in these digital worlds, but they rarely fit into the standard scientific narrative. Instead they are often treated as mere obstacles to be overcome, rather than results that warrant study in their own right. The stories themselves are traded among researchers through oral tradition, but that mode of information transmission is inefficient and prone to error and outright loss. Moreover, the fact that these stories tend to be shared only among practitioners means that many natural scientists do not realize how interesting and lifelike digital organisms are and how natural their evolution can be. To our knowledge, no collection of such anecdotes has been published before. This paper is the crowd-sourced product of researchers in the fields of artificial life and evolutionary computation who have provided first-hand accounts of such cases. It thus serves as a written, fact-checked collection of scientifically important and even entertaining stories. In doing so we also present here substantial evidence that the existence and importance of evolutionary surprises extends beyond the natural world, and may indeed be a universal property of all complex evolving systems.
They used an FPGA for voice detection. What was fascinating is that they didn't understand how it worked, and it wasn't a universal design because it depended on manufacturing variation.
I remember reading this years ago, and always found this part by far the most interesting:
> A further five cells appeared to serve no logical purpose at all--there was no route of connections by which they could influence the output. And yet if he disconnected them, the circuit stopped working.
> It appears that evolution made use of some physical property of these cells--possibly a capacitive effect or electromagnetic inductance--to influence a signal passing nearby. Somehow, it seized on this subtle effect and incorporated it into the solution.
This article is what made me decide to major in CS. I very much remember reading it in my high school library during the final few days before graduation. Every time I see it mentioned somewhere I get that feeling . . . not quite nostalgia, but a reminder of why I love what I do.
Thanks for sharing that. This was an absolutely fascinating read. I assume there are many contemporary projects that have replicated this workflow with modern FPGAs, right? Anything on GitHub? I have a TinyFPGA sitting here; I could try stuff on that.
I read it in a Dutch magazine, but it could very well have been based on the article you're referring to. It was a lot shorter, but 1998 seems about the right time.
There's a reason that conventional design uses models that abstract away the particulars -- in this case the evolved solution isn't robust to process variation in the FPGA. It's really easy for optimization to exploit irrelevant details to give you designs that aren't very useful. I'm no skeptic when it comes to EAs -- I did my dissertation using them, and I've subsequently written one of my own -- but you have to have realistic expectations for them. They get a lot less magical when you look at them up close.
Are you aware of anyone who has recreated the process, except with a large pool of different FPGAs from different manufacturers and running designs on a random one each time?
I'd imagine that with even a moderate pool (~10?) the variation would fall away and you'd be left with a robust design, although the process would take significantly longer.
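If anyone tried it, I'd imagine the selection pressure would look roughly like this sketch, where `evaluate_on_board` is a hypothetical stand-in for programming and measuring one physical device:

```python
import random

def robust_fitness(candidate, boards, evaluate_on_board, sample_size=3):
    """Worst-case score over a random sample of physical boards, so only
    designs that work on every board survive."""
    sampled = random.sample(boards, min(sample_size, len(boards)))
    return min(evaluate_on_board(candidate, board) for board in sampled)
```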
At that point, why not use an HDL simulator? I think that would be the best approach anyway. The constraint that the circuit can only rely on behavior that can be described by an HDL is a good thing.
I'm getting well out of my domain here, but I'm wondering if there's some "common non-idealness" in FPGAs to be exploited which isn't accurately captured in simulators.
Things like this:
> A further five cells appeared to serve no logical purpose at all--there was no route of connections by which they could influence the output. And yet if he disconnected them, the circuit stopped working.
> It appears that evolution made use of some physical property of these cells--possibly a capacitive effect or electromagnetic inductance--to influence a signal passing nearby. Somehow, it seized on this subtle effect and incorporated it into the solution.
which could be implemented in a more general manner and don't rely on the peculiarities of a single board. It may not prove to be robust, but in a pure research sense I'd love to see it.
As dnautics alludes in a sibling comment, the effect you're talking about is subtly floorplan dependent, and that's going to vary from manufacturer to manufacturer. I don't think it's completely impossible that there's a common non-idealness to FPGAs, but I also don't think that the common non-idealness is going to involve using disconnected cells as some kind of antenna.
Seems like an entertaining story, but for that to work the computer would have to either have physical access to each version of the circuit it designed, or have the radiation conditions in the lab programmed into its fitness functions. Neither of these conditions seems at all likely. Any evolutionary circuit design would involve the evolution of properties as modeled by some equations (as the antenna design process does) rather than physical manipulation of circuits.
I suspect you're remembering some entertaining "how AI could hypothetically escape" scenario instead. Sure, maybe a "real AI" is going to escape, but an evolutionary algorithm, which is just several loops around fitness and mutation functions, probably isn't going to come up with such novel approaches.
An FPGA is a physical circuit that can be reconfigured via software on the fly, so it should be possible to run an evolutionary algorithm with every iteration evaluated on actual hardware... and thus also possible for the algorithm to optimize for amplifying some local interference.
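Something like this rough sketch of a hardware-in-the-loop run, where `program_fpga` and `measure_fitness` are hypothetical stand-ins for vendor-specific configuration and measurement code:

```python
import random

def mutate(bitstream, rate=0.001):
    # Flip each configuration bit with a small probability.
    return [bit ^ (random.random() < rate) for bit in bitstream]

def evolve(population, generations, program_fpga, measure_fitness):
    for _ in range(generations):
        # Every candidate is evaluated on the real chip, so the algorithm
        # is free to exploit any physical effect, interference included.
        scored = []
        for bits in population:
            program_fpga(bits)
            scored.append((measure_fitness(), bits))
        scored.sort(key=lambda pair: pair[0], reverse=True)
        survivors = [bits for _, bits in scored[: len(scored) // 2]]
        population = survivors + [mutate(b) for b in survivors]
    return population
```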
Could be; it was an article in a young-scientist magazine maybe 20 years back or so. As I remember, they iterated over actual hardware versions using modular passive components (resistors, capacitors, coils), much like you would find in a school lab.