Evolved antenna (wikipedia.org)
192 points by zilic on Nov 15, 2018 | 67 comments



Reminds me of an old article about evolutionary circuit design. The computer was tasked with creating an oscillator using physical hardware. It created a really complex and unconventional design that no one understood, but it worked; it just didn't work outside of the lab. As it turned out, the algorithm had designed it in a way that used the radio noise from the computer it was running on as a source. It had effectively made an antenna.


This is the origin of that story, from "The Evolved Radio and its Implications for Modelling the Evolution of Novel Sensors":

"It seems that some circuits had amplified radio signals present in the air that were stable enough over the 2 ms sampling period to give good fitness scores. These signals were generated by nearby PCs in the laboratory where the experiments took place."

https://people.duke.edu/~ng46/topics/evolved-radio.pdf


There have been quite a few papers on this problem. If you don't get your simulator AND your cost function right ... fun things happen. Things you probably didn't intend. Last year someone made a compilation paper about a series of fun techniques algorithms have used to satisfy the demands of their creators ... while not solving the problem. It was featured in Popular Mechanics.

https://www.popularmechanics.com/technology/robots/a19445627...

The way to solve it is something engineers hate for some reason. You explicitly design (and simulate) VERY bad hardware. What's bad hardware? A camera with a noise floor of 30% of its measurements. Yes, even in low-light conditions (also: the noise floor must vary a lot between runs of the algorithm). An actuator that goes the right way 90% of the time, where the result of a particular voltage on the motor varies by 20-30%. And in 10% of cases, it's just entirely stuck, without giving any feedback about that.

https://arxiv.org/pdf/1804.10332.pdf

Or for the more visually inclined: https://www.youtube.com/watch?v=lUZUr7jxoqM
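
To make the "bad hardware" idea concrete, here is a minimal Python sketch; `run_sim` and its parameters are hypothetical placeholders, not anything from the linked papers. Each candidate is scored over several runs with the noise floor, actuator gain, and stuck probability re-randomized, and only the worst run counts:

    import random

    def robust_fitness(candidate, run_sim, n_runs=10):
        """Score `candidate` across simulator runs whose 'bad hardware'
        parameters are re-randomized each time (domain randomization).
        `run_sim` is your simulator; it returns a score for one episode."""
        scores = []
        for _ in range(n_runs):
            scores.append(run_sim(
                candidate,
                sensor_noise=random.uniform(0.1, 0.3),   # up to a 30% noise floor
                actuator_gain=random.uniform(0.7, 1.3),  # response off by 20-30%
                stuck_prob=0.1,                          # silently stuck 10% of the time
            ))
        # Score the worst run, not the average, so solutions that only
        # exploit one lucky hardware configuration are weeded out.
        return min(scores)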

The lesson is that the open-loop designs engineers habitually reach for (I send voltage, motor moves) can be dramatically outperformed in control, accuracy, resiliency, and more by closed-loop designs running on much worse hardware (I send voltage, motor moves, I check how it moved, I adjust the voltage).

And yet somehow people seem to have incredible issues trusting such systems. For instance, autopilots are mostly open-loop designs. That's like a pilot flying a plane with their eyes glued shut and no sense of balance (i.e. they HAVE to trust one instrument, with no way to verify that, say, the plane actually goes up when they pull the stick; so if it's keeling over backwards or something, they'd just keep pulling the stick right up to the point they hit the ground). They implicitly trust the plane to do what the autopilot orders it to do. If for some reason it doesn't ... it's not going to end well.

The issue is that closed-loop designs are much harder to write. The solution, of course, is to not write them but to learn them: an autopilot that flies a plane and, when the plane breaks, rapidly learns how to fly the broken plane rather than trusting its (now inaccurate) model and killing everyone.
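
A toy sketch of the open- vs closed-loop distinction above (the `read_sensor` and `set_voltage` callables are placeholders for whatever drives the real hardware):

    def open_loop(set_voltage, target, model_gain=1.0):
        # Trust the model: one command, no verification.
        set_voltage(model_gain * target)

    def closed_loop(read_sensor, set_voltage, target, gain=0.5, steps=100):
        # Measure, compare, correct: each step starts from what the
        # sensor actually reports, so a wrong or drifting model of the
        # motor is continually corrected instead of blindly trusted.
        voltage = 0.0
        for _ in range(steps):
            error = target - read_sensor()
            voltage += gain * error
            set_voltage(voltage)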


Cf. "The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities"

https://arxiv.org/abs/1803.03453v1

> Biological evolution provides a creative fount of complex and subtle adaptations, often surprising the scientists who discover them. However, because evolution is an algorithmic process that transcends the substrate in which it occurs, evolution's creativity is not limited to nature. Indeed, many researchers in the field of digital evolution have observed their evolving algorithms and organisms subverting their intentions, exposing unrecognized bugs in their code, producing unexpected adaptations, or exhibiting outcomes uncannily convergent with ones in nature. Such stories routinely reveal creativity by evolution in these digital worlds, but they rarely fit into the standard scientific narrative. Instead they are often treated as mere obstacles to be overcome, rather than results that warrant study in their own right. The stories themselves are traded among researchers through oral tradition, but that mode of information transmission is inefficient and prone to error and outright loss. Moreover, the fact that these stories tend to be shared only among practitioners means that many natural scientists do not realize how interesting and lifelike digital organisms are and how natural their evolution can be. To our knowledge, no collection of such anecdotes has been published before. This paper is the crowd-sourced product of researchers in the fields of artificial life and evolutionary computation who have provided first-hand accounts of such cases. It thus serves as a written, fact-checked collection of scientifically important and even entertaining stories. In doing so we also present here substantial evidence that the existence and importance of evolutionary surprises extends beyond the natural world, and may indeed be a universal property of all complex evolving systems.


Was it "Creatures From Primordial Silicon"?

http://www.netscrap.com/netscrap_detail.cfm?scrap_id=73

They used an FPGA for voice detection. It was fascinating: they didn't understand how it worked, and it wasn't a universal design because it depended on manufacturing variation.


I remember reading this years ago, and always found this part by far the most interesting:

> A further five cells appeared to serve no logical purpose at all--there was no route of connections by which they could influence the output. And yet if he disconnected them, the circuit stopped working.

> It appears that evolution made use of some physical property of these cells--possibly a capacitive effect or electromagnetic inductance--to influence a signal passing nearby. Somehow, it seized on this subtle effect and incorporated it into the solution.


I read about the same research in an article in Discover Magazine: http://discovermagazine.com/1998/jun/evolvingaconscio1453


This article is what made me decide to major in CS. I very much remember reading it in my high school library during the final few days before graduation. Every time I see it mentioned somewhere I get that feeling . . . not quite nostalgia, but a reminder of why I love what I do.


Thanks for sharing that. This was an absolutely fascinating read. I am assuming there are many contemporary projects that replicated this workflow with modern FPGA, right? Anything on GitHub? I have a TinyFPGA sitting here, I could try stuff on that.


I read it in a Dutch magazine, but it could very well have been based on the article you're referring to. It was a lot shorter, but 1998 seems the right time.


Maybe this?

https://www.damninteresting.com/on-the-origin-of-circuits/

I've re-read that piece dozens of times; the implications are fascinating.


There's a reason that conventional design uses models that abstract away the particulars -- in this case the evolved solution isn't robust to process variation in the FPGA. It's really easy for optimization to exploit irrelevant details to give you designs that aren't very useful. I'm no skeptic when it comes to EAs -- I did my dissertation using them, and I've subsequently written one of my own -- but you have to have realistic expectations for them. They get a lot less magical when you look at them up close.


Are you aware of anyone who has recreated the process, except with a large pool of different FPGAs from different manufacturers and running designs on a random one each time?

I'd imagine with even a moderate pool (~10?) the variation falls away and you're left with a robust design, although the process will take significantly longer.
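
Something like this, perhaps (a hypothetical sketch; `boards` and `program_and_measure` stand in for whatever harness drives the real chips):

    import random

    def pooled_fitness(bitstream, boards, program_and_measure, samples=3):
        """Evaluate one candidate on several randomly drawn FPGAs so the
        search can't latch onto a single chip's manufacturing quirks."""
        drawn = random.sample(boards, samples)
        scores = [program_and_measure(board, bitstream) for board in drawn]
        return min(scores)  # the design must work on the worst board drawn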


Apparently it is not possible, because contemporary FPGAs do not give you low-level access to modify the floorplan.


How low-level do you need? Vivado lets you place individual LUTs where you want, and to some extent dictate the routing.


They don't force-sign the bitstream. The iCE40 has also been reverse-engineered, AFAIK.


At that point, why not use an HDL simulator? I think that would be the best approach anyway. The constraint that the circuit can only rely on behavior that can be described by an HDL is a good thing.


I'm getting well out of my domain here, but I'm wondering if there's some "common non-idealness" in FPGAs to be exploited which isn't accurately captured in simulators.

Things like this:

> A further five cells appeared to serve no logical purpose at all--there was no route of connections by which they could influence the output. And yet if he disconnected them, the circuit stopped working.

> It appears that evolution made use of some physical property of these cells--possibly a capacitive effect or electromagnetic inductance--to influence a signal passing nearby. Somehow, it seized on this subtle effect and incorporated it into the solution.

which could be implemented in a more general manner and don't rely on the peculiarities of a single board. It may not prove to be robust, but in a pure research sense I'd love to see it.


As dnautics alludes in a sibling comment, the effect you're talking about is subtly floorplan-dependent, and that's going to vary from manufacturer to manufacturer. I don't think it's completely impossible that there's a common non-idealness to FPGAs, but I also don't think that the common non-idealness is going to involve using disconnected cells as some kind of antenna.


It was also in this list:

https://news.ycombinator.com/item?id=18415031

> Genetic algorithm is supposed to configure a circuit into an oscillator, but instead makes a radio to pick up signals from neighboring computers


Adrian Thompson's work


That is super fascinating. Do you happen to have a link?


Not the article the root comment is referring to. This one has a GA creating timing circuits by exploiting subtle flaws in FPGAs: https://www.damninteresting.com/on-the-origin-of-circuits/


It was in a magazine I read in my youth, 20 years or so back. I'll try to find it online if it's there.


Seems like an entertaining story, but for that to work, the computer would have to have physical access to each version of the circuit it designed, or have the radiation conditions in the lab programmed into its fitness function. Neither of these conditions seems at all likely. Any evolutionary circuit design would involve the evolution of properties as modeled by some equations (as the antenna design process does) rather than physical manipulation of circuits.

I suspect you're remembering some entertaining "how AI could hypothetically escape" scenario instead. Sure, maybe a "real AI" is going to escape, but an evolutionary algorithm, which is just several loops around fitness and mutation functions, probably isn't going to be coming up with such novel approaches.


An FPGA is a physical circuit that can be reconfigured via software on the fly, so it should be possible to run an evolutionary algorithm by running every iteration on actual hardware... and therefore also possible for the algorithm to optimize for amplifying some local interference.


The one I'm familiar with (linked in another comment) used FPGAs, which the algorithm could control the circuit design of.


Could be; it was an article in a young scientists' magazine maybe 20 years back or so. As I remember, they iterated over actual hardware versions using modular passive components (resistors, capacitors, coils), much like you would find in a school lab.


Interesting to see this antenna here. I heard the creator speak at a local IEEE event a few years ago. Quite remarkable. Nobody would intuit an antenna design like that. Great demonstration of the power of GA.

Antenna optimization software has changed antenna design completely. Before widespread computer simulation, there was an awful lot of antenna range cut-and-try. Antennas are much better now. Learning to drive the modeling software is still a huge amount of work, but the results are well worth it.

An example from the world of amateur radio: the only popular HF tri-band beam to survive from the pre-simulation days is the KT-34 and big brother KT-34XA, originally from KLM. I was talking to Mike Stahl about it (The M in KLM, also one of the M's in M-squared) and he said he spent months going up and down towers at an antenna range. Pretty much all other hand-tuned competitors from that era have fallen by the wayside, the new simulation-verified designs being much better.

Mike is a great hands-on antenna designer -- when the Stanford dish was new he designed several feed horns for it.


Another good example of this is fractal antenna design that allowed significant efficiency/space improvements in mobile phones and allowed one physical antenna to work for multiple frequencies quite well.

https://en.wikipedia.org/wiki/Fractal_antenna


One of my high-school science-fair projects was a fractal high voltage insulator. The idea was that since high voltage follows the surface of an object, why not give it a lot of surface to run out of energy on? I failed of course - the ceramic was too hard to work (I dulled so many saw blades). And I didn't have an oven to bake it at the required temperature profile, nor a high-voltage lab to test my designs. But a cool idea anyway.


You want a sphere: maximize distance and minimize the maximum curvature.


An unanswered question was whether the voltage would just jump across the edges of the fractal or follow the surface of the ceramic. Which would likely vary with humidity. I'm sure that General Electric & Siemens test in all kinds of weather conditions - another thing I couldn't do. ;)


> ...there was an awful lot of antenna range cut-and-try.

My mother's beach house still has a shed full of various aluminium pieces, many of them part of a range of semi- or fully-built customised Yagi designs, the legacy of years of my father's cut-and-try attempts to optimise very marginal UHF television reception.


Are the software radiation simulations accurate? Are they based on solving complex physics formulas?


Well..... for some definition of accurate.

For large antennas with thin structures, the typical solver is the Method of Moments. NEC2 was developed by the government and is public domain. It is quite popular and does some things well, but it is also easy to stumble into modeling bugs/deficiencies, and it isn't much use above high UHF. Still, it is very useful if you know how not to step on the bugs, and it is free. NEC4 falls under ITAR, the last I heard. It isn't particularly hard to get a license, but you have to clear ITAR.

Microwave structures are more often done with a finite-element model, as I understand it.

Both rely on numerical approximations to Maxwell's equations. At least for MoM, each element cheats the boundary conditions a bit in order to make the problem tractable. With a fine enough grid, you get a good enough answer.

Another friend who has started two antenna companies is an NEC4 guru. I asked him: "How can I tell if my model has a small enough grid?" Him: "Keep reducing the mesh until the answer stops changing. When it stops, you had enough elements in the previous try." Antenna modelling is a bit of an art; I'm no expert, I just hack a few as a hobby.
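
That heuristic is easy to mechanize. A rough sketch, where the `solve` callable stands in for a run of the NEC model:

    def refine_until_stable(solve, n0=11, rel_tol=0.01, max_iter=8):
        """Keep doubling the segment count until the figure of merit
        (gain, impedance, ...) stops moving. `solve(n)` runs the model
        with n segments per element and returns that figure."""
        n, prev = n0, solve(n0)
        for _ in range(max_iter):
            n *= 2
            cur = solve(n)
            if abs(cur - prev) <= rel_tol * abs(prev):
                return prev, n // 2  # the previous mesh was already fine
            prev = cur
        raise RuntimeError("did not converge; model may be ill-conditioned")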


Sounds like there are some manual steps to create the simulation. I'm wondering how these genetic algorithms can automatically feed into the simulation and get back a single number to drive the GA fitness function.


Well, I used a Yagi optimizer at one point. With that system, the antenna was parameterized. I specified certain fixed values, like overall boom length and nominal element diameter, and total number of elements. The element lengths and spacings were free for optimization. Fitness function was a scoring of front-to-back ratio, front-to-side ratio, and max allowable SWR over a frequency range of interest, with a goal of maximum forward gain. The optimizer tuned the free variables.

For different applications the various scoring measurements would be adjusted differently. (Sometimes you care greatly about minimizing back and side lobes, other times not so much, for instance.)
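
A rough sketch of what such a scoring function could look like; the `simulate` callable and the weights are illustrative, not the actual optimizer described above:

    def yagi_fitness(lengths, spacings, simulate):
        """Combine the pattern measures into one number for the optimizer.
        `simulate` is an NEC-style solver returning forward gain,
        front-to-back, front-to-side (all dB), and worst-case SWR."""
        gain, f2b, f2s, max_swr = simulate(lengths, spacings)
        if max_swr > 2.0:          # hard constraint: reject unusable matches
            return float("-inf")
        # Weights are illustrative; tune per application (e.g. weight
        # f2b heavily when rear rejection matters more than raw gain).
        return gain + 0.5 * f2b + 0.25 * f2s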


That's the exact same trick you use to determine model granularity when using finite element analysis for construction purposes. It is actually quite surprising how coarse models can be and still give useful answers.


They're typically done by splitting the antenna up into infinitesimal pieces, then summing up all the effects. Like a 3D integral.


Took a few clicks to find the referenced paper as a PDF, so here it is:

https://www.researchgate.net/profile/Greg_Hornby/publication...


Skipper: "Professor, why did our rescue radio fail?"

Professor: "Hmmm, it looks like somebody straightened out the antenna. It had an unusual shape for a reason."

Skipper: "Giillligan?! Do you know anything about this antenna?"

Gilligan: "Uh, I'm sorry guys, the antenna looked all bent up, so I straightened it. See how nice it looks now?"

(They both bop Gilligan on the noggin.)


I accidentally "evolved" a microwave antenna in my microwave oven, once.

I put some hematite sand on top of a glass plate that was suspended about 5 inches from the bottom of the microwave oven. I wanted to see what kind of pattern the standing waves would heat into the hematite (this microwave didn't have an RF stirring fan).

Instead of some standing wave pattern, I ended up with a fractal-like antenna that grew from a molten blob in the middle. The initial molten blob extended into arms of molten material, with each arm necessarily extending in the direction that maximized RF absorption, causing additional material to melt. It grew brighter and brighter as it extended, reached some peak, then some of the arms shorted and it dimmed. Then the plate shattered from the heat.

I wrote some software that converted the picture into antenna elements for some free antenna simulation package I found at the time, but it couldn't support enough elements to get close to the shape of the antenna and would just crash beyond 10 or so.

I'm not sure how useful an antenna it would be, since it was optimized for generating heat, but it was a neat kitchen experiment.


I would love to see pictures or a video of this in action!


I don't have a video, and I can't find the original picture, but here's a very low resolution version: https://pbs.twimg.com/profile_images/378800000340843432/c1f1...


Sounds like a Star Trek episode: a creature evolving to avoid weapons, morphing into something unstoppable.


If you’re interested in generative design beyond antenna construction, check out what Autodesk has been doing: https://www.autodesk.com/solutions/generative-design


And then there are phased arrays: a computer-controlled array of antennas that creates a beam of radio waves which can be electronically steered to point in different directions without moving the antennas.

Pure math (a matrix of delays per element). Go crazy :)

https://en.wikipedia.org/wiki/Phased_array
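
The per-element math really is simple. A small sketch for a uniform linear array, using the standard steering relation (delay per element = d*sin(theta)/c):

    import math

    C = 3e8  # speed of light, m/s

    def element_delays(n, spacing_m, steer_deg):
        """Per-element time delays (seconds) that steer a uniform
        linear array to steer_deg off broadside."""
        dt = spacing_m * math.sin(math.radians(steer_deg)) / C
        return [i * dt for i in range(n)]

    def element_phases(n, spacing_m, steer_deg, freq_hz):
        """The same steering expressed as per-element phase shifts (radians)."""
        wavelength = C / freq_hz
        dphi = 2 * math.pi * spacing_m * math.sin(math.radians(steer_deg)) / wavelength
        return [i * dphi for i in range(n)]

    # 8 elements at half-wavelength spacing for 3 GHz, steered 30 degrees
    # off broadside: successive elements are shifted by pi/2 radians.
    print(element_phases(8, 0.05, 30, 3e9))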


The same thing is used in ultrasounds (except with sound) which is how they can "scan" without having any moving parts. Super cool.


They say this is only used rarely. I'm curious why people wouldn't design every antenna this way.


Because most antennas don't have weird enough requirements, so standard designs work fine. For stuff down here on Earth we deal with a lot of the same problems, so they've generally been solved to a reasonable approximation, and there are additional constraints like shape and space. Standard designs are also just cheaper, because they're made by the thousands and machines exist to make them in massive quantities.


Talking less about antennas and more about the overall concept: most evolutionary algorithms don't take manufacturing into account at all, so their outputs tend to be a pain to actually build.


Because that would be unnecessary. This is only worth it when there are "unusual radiation patterns" (from the wiki article).


I do a lot of antenna design. There are far too many variables for optimization, and also size/geometry constraints. That's not to say you wouldn't start with a base design and use an optimizer (which could be genetic). There was a company commercializing this exact algorithm several years ago. I don't think it went anywhere.


Genetic optimisers don't tend to do well beyond a few hundred parameters, but stochastic gradient descent does well up to billions of parameters, as long as you have sufficient compute and RAM. Obviously your fitness function needs to be differentiable with respect to the parameters, which most simulators don't yet support.


Evolved antennas are good for certain purposes, but for highly directional links, the laws of physics say that there isn't much better than some variation on a human + CAD + Matlab designed parabolic reflector (whether it's center feed, compact cassegrain, elliptical offset feed, etc).

Take a look at the dishes used for 71-86 GHz band mmwave links for instance.


"Better" when the goal is a very narrow beam pattern and minimal side- and back-lobes. That isn't always the goal.

For instance, putting your entire FCC city-of-license within your grade A signal contour from X miles away at a bearing of B. Or, in one case I am aware of, covering your puny city of license in one direction, and incidentally getting good coverage into the much bigger city off the back of the beam so that your sales department can get some traction selling advertising.


Not my field, only related to it, but looking at the ERI antenna catalog, they resemble human-designed variations on Yagi, dipole, and log-periodic antennas...

https://www.eriinc.com/catalog/antennas/


Or using a phased array, which can generate practically any beam shape.


> This sophisticated procedure

Evolutionary algorithms are anything but sophisticated. I love them, but sophisticated they are not.


The results are sophisticated.


Evolutionary algorithms are just a variation of pseudo-random search.
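
In sketch form, the whole loop is just this (with hypothetical `fitness`, `mutate`, and `random_candidate` callables):

    import random

    def evolve(fitness, mutate, random_candidate, pop_size=50, generations=200):
        """Bare-bones evolutionary loop: random variation plus selection.
        The 'pseudo-random search' is biased toward what scored well so far."""
        population = [random_candidate() for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            parents = population[: pop_size // 2]  # truncation selection
            children = [mutate(random.choice(parents)) for _ in range(pop_size // 2)]
            population = parents + children
        return max(population, key=fitness)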


Sophisticated is (also) a synonym of convoluted.


All of us are products of evolution. Either you are sophisticated or convoluted. Choose.


Not in the article (or the linked paper): over the lifetime of the spacecraft, did it perform as well as or better than a conventional antenna? Does anyone know?


Probably worth revisiting with Q-learning.


I <3 NASA





