I'm a modeler who works with experimenters. Their general attitude is that unvalidated models are about as useless as unexamined measurements.
In that context, this sounds like a nice piece of work-- it integrates the data analysis and model validation steps.
However, it's also clear that the algorithms were fed pertinent data from a meaningful physical setup. In other words, good experimental design carried far more of the burden of the discovery than the computer algorithm that sorted through the data.
I spend a lot of time modeling and have used genetic algorithms and similar paradigms in the past. They are great -- if you can somewhat narrow down the relevant parameters. If you can't, the search space is insanely large.
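To put a rough number on that search-space point (hypothetical figures, nothing from the paper), here's a quick sketch of how the count of candidate expression trees blows up once you stop restricting the operators and variables:

    # Back-of-the-envelope, hypothetical numbers (not from the paper): count the
    # expression trees a symbolic search has to consider when nothing narrows
    # the operator/variable set down in advance.

    def tree_count(depth, n_terminals, n_binary_ops):
        """Trees of at most `depth` levels: each leaf is one of `n_terminals`
        variables/constants, each internal node one of `n_binary_ops` operators."""
        if depth == 0:
            return n_terminals
        smaller = tree_count(depth - 1, n_terminals, n_binary_ops)
        # a tree is either a bare terminal or an operator applied to two subtrees
        return n_terminals + n_binary_ops * smaller ** 2

    print(tree_count(4, 4, 4))    # 4 variables, 4 operators: roughly 7e18 trees
    print(tree_count(4, 10, 8))   # 10 variables, 8 operators: roughly 4e29 trees

Even modest-depth trees over a handful of primitives number in the quintillions, which is why narrowing the relevant variables up front matters so much.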
This is very neat, but for some reason I'm not super impressed. I don't mean to be glib, but genetic algorithms have been producing impressive results for a while, and if you feed them a huge amount of raw experimental data, I'm not the least bit surprised that they can discover and fit equations to it.
I suppose there is a possibility that for some more complicated things the formulas don't converge, and thus a brute-force search would not be able to optimize a solution. For most things, though, I expect solutions to derive from other solutions and to converge.
Still, often the most brilliant things seem obvious in hindsight, kudos to the team.
The article seemed to say that their system derived the law of conservation of energy from pendulum data. That seems pretty impressive to me, but maybe that's because I don't know enough physics to see how it's an obvious conclusion?
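For what it's worth, the impressive part isn't the law itself but that the system picked the invariant combination of measured variables out of the space of all candidate expressions. Here's a quick sketch, assuming an ideal pendulum with made-up parameters rather than the authors' actual apparatus, of the kind of quantity it would have to land on:

    import math

    # Illustration only (ideal pendulum, assumed length and gravity -- not the
    # authors' actual apparatus): simulate the motion and check that an
    # energy-like quantity really does stay constant in the data.
    g, L = 9.81, 1.0            # assumed values
    dt, steps = 1e-4, 200_000
    theta, omega = 1.2, 0.0     # initial angle (rad) and angular velocity

    energies = []
    for _ in range(steps):
        # semi-implicit Euler keeps the numerical energy drift small
        omega -= (g / L) * math.sin(theta) * dt
        theta += omega * dt
        # energy per unit mass: kinetic + potential
        energies.append(0.5 * (L * omega) ** 2 - g * L * math.cos(theta))

    print(max(energies) - min(energies))  # small relative to the mean energy magnitude

Checking that this quantity stays flat is trivial once someone hands you the expression; finding that expression among all functions of theta and omega is the hard part.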
Hm. They were looking for conservation laws, threw out all the trivial equations, and kept the next set they found. Weren't they in fact building an engine to find what they knew was there? Isn't this just a program to fit coefficients? When did they stop running data through their engine - when it found the "right" equations?
Not at all. Coefficient fitting is when you know what output you want, and you have a lot of inputs, and you have some idea of the functional form. In this case, they didn't say "I want to predict the position of the pendulum based on time given these initial conditions." They just said "here is a bunch of data" and it came back with useful equations.
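To spell out the difference with a toy example (hypothetical small-angle data, not the paper's setup): coefficient fitting looks like this, where the functional form is decided by a human and only its constants are tuned:

    import numpy as np
    from scipy.optimize import curve_fit

    # Toy coefficient fitting (hypothetical small-angle pendulum data, not the
    # paper's setup): a human has already committed to the functional form,
    # and the optimizer only tunes its constants.
    t = np.linspace(0, 10, 500)
    theta = 0.1 * np.cos(3.13 * t) + np.random.normal(0, 0.002, t.size)

    def assumed_form(t, A, w):        # form chosen by a human in advance
        return A * np.cos(w * t)

    (A, w), _ = curve_fit(assumed_form, t, theta, p0=[0.1, 3.0])
    print(A, w)  # recovers the amplitude and frequency -- but only because
                 # we told it up front that the answer is a cosine

What the article describes is a search over the expressions themselves (symbolic regression), so something like A*cos(w*t) is an output of the process, not an input to it.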
Computers are fantastic at deductive logic - taking a set of data and performing operations according to a ruleset to get something useful out of it. But they've been terrible at forming the rules themselves up until now.
If we can give computers inductive abilities, then that's world-changing.
"Given crude initial conditions and some indication of what variables to consider"
The only induction done in the whole process was done by the researchers in this step - otherwise the algorithm wouldn't have had anything to "evolve" to.
Quite right. I would say that if the inductive capacities of this system can be generalized beyond finding natural physical laws then this is a real start for Hard AI.
I don't understand why computers need to figure out the laws of the universe. There are so many pressing concerns already on this planet that could be solved by man-machine symbiosis. Personally I think such ideas are cool, but not of great or immediate use. It reminds me of the machine built to find the answer to the question of life, the universe, and everything in "The Hitchhiker's Guide to the Galaxy". 42!