Hacker News

100%

I talked about this in my paper. Incredibly, a paper by the top GP researchers had come out the year before (iirc) talking about these very problems. They offered a set of standard benchmarks they hoped the field would adopt, so naturally I did, but I haven't paid attention since I graduated so I don't know if it actually took hold.




Benchmarks. That’s one of the big problems in the field; since it’s so hard to get people to agree on what we’re even trying to accomplish with an evolutionary search heuristic, it’s difficult to measure performance objectively.


Yup, plus methods and reproducibility, as you also mentioned. My work focused on Symbolic Regression, the problem, which I tried to separate from Genetic Programming, the algorithm. The literature often refers to SR as if it were the algorithm, but I think that is incorrect. I wrote a deterministic algorithm, Prioritized Grammar Enumeration, and put my code & data on GitHub: https://github.com/verdverm/pypge (pdf there too, don't tell ACM)
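To make the "deterministic alternative to GP" idea concrete, here is a toy sketch of priority-ordered grammar enumeration for symbolic regression: grow candidate expressions from a tiny grammar and always expand the best-fitting one next, with no randomness anywhere. This is a much simplified illustration of the general idea, not the actual PGE implementation; all names, the grammar, and the expansion rule are assumptions for the example.

```python
import heapq
import itertools

# Toy symbolic regression by deterministic enumeration (illustrative only,
# not the real pypge algorithm). Target data: y = 2x + 1.
DATA = [(x, 2 * x + 1) for x in range(-5, 6)]

def evaluate(expr, x):
    """Evaluate a nested-tuple expression at x."""
    if expr == "x":
        return x
    if isinstance(expr, (int, float)):
        return expr
    op, a, b = expr
    va, vb = evaluate(a, x), evaluate(b, x)
    return va + vb if op == "+" else va * vb

def error(expr):
    """Sum of squared errors against the data set."""
    return sum((evaluate(expr, x) - y) ** 2 for x, y in DATA)

def expand(expr):
    """Grow an expression by combining it with each terminal (toy grammar)."""
    for op in ("+", "*"):
        for term in ("x", 1, 2):
            yield (op, expr, term)

def pge_search(max_steps=200):
    """Best-first enumeration: always expand the lowest-error candidate."""
    counter = itertools.count()  # tie-breaker so heap ordering is total
    frontier = []
    for t in ("x", 1, 2):
        heapq.heappush(frontier, (error(t), next(counter), t))
    best_err, best_expr = float("inf"), None
    for _ in range(max_steps):
        if not frontier:
            break
        err, _, expr = heapq.heappop(frontier)
        if err < best_err:
            best_err, best_expr = err, expr
        if err == 0:
            break  # exact fit found
        for child in expand(expr):
            heapq.heappush(frontier, (error(child), next(counter), child))
    return best_expr, best_err

expr, err = pge_search()
print(expr, err)  # finds an exact fit for y = 2x + 1
```

Because the frontier is a priority queue with a deterministic tie-breaker, every run visits candidates in exactly the same order, which is the reproducibility property the thread is talking about.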


Thanks, I’ll have a look!


Determinism is interesting; I’ve often wondered how much benefit we actually obtain from pseudo-randomness, versus it being a way of coping with the fact that we don’t really have a convincing account of why GAs work.
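One wrinkle worth noting: a "stochastic" GA is itself deterministic once the PRNG seed is fixed, so the pseudo-randomness is really a design choice about search order, not a source of true nondeterminism. The toy GA below (illustrative names and parameters, maximizing ones in a bitstring) demonstrates that the same seed reproduces the same run bit for bit.

```python
import random

# Minimal GA on a bitstring (one-max problem), seeded for reproducibility.
# All parameters here are illustrative, not from any particular paper.
def run_ga(seed, n_bits=16, pop_size=20, generations=50):
    rng = random.Random(seed)  # a private PRNG: same seed, same run
    fitness = lambda ind: sum(ind)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]  # one-point crossover
            if rng.random() < 0.1:
                child[rng.randrange(n_bits)] ^= 1  # point mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

assert run_ga(42) == run_ga(42)  # identical seeds give identical results
```

So the interesting question isn't randomness per se, but whether a population of seeded trajectories explores the search space any better than a single well-ordered deterministic enumeration.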




