>> Pluribus is also unusual because it costs far less to train and run than other recent AI systems for benchmark games. Some experts in the field have worried that future AI research will be dominated by large teams with access to millions of dollars in computing resources. We believe Pluribus is powerful evidence that novel approaches that require only modest resources can drive cutting-edge AI research.
That's the best part of all of this. I'm not convinced by the authors' repeated claim that this technique will translate well to real-world problems. But I'm hoping there will be more results like this, signalling a shift away from Big Data and huge compute and towards well-designed, efficient algorithms.
In fact, I kind of expect it. The harder it gets to do the kind of machine learning that only large groups like DeepMind and OpenAI can do, the more smaller teams will push the other way and find ways to keep making progress cheaply and efficiently.
Yes! I work for a company that does just this: pull big gears on limited data and try to generalise across groups of things to get intelligent results even on small data. In many ways, it absolutely feels like the future.
Does "Bayesian methods" mean anything specific? Parts of the core algorithms were written before I joined, and they are very improvised in the dog-in-a-lab-coat way. I haven't analysed them to see how closely they follow Bayes' theorem or how strictly they use conjugate priors etc. (we also lean heavily on simple empirical distributions), but the general idea of updating priors with new evidence is what it builds on, yes. I have a hard time imagining doing things any other way and still getting quality results, but that is probably a reflection of my shortcomings rather than a technical fact.
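For anyone unfamiliar with what "updating priors with new evidence" looks like in practice, here's a minimal sketch of the textbook conjugate case, a Beta-Bernoulli update (purely illustrative, not the poster's actual code; function names are made up):

```python
# Illustrative Beta-Bernoulli conjugate update: the simplest concrete
# example of "updating a prior with new evidence".

def beta_update(alpha, beta, successes, failures):
    """Update a Beta(alpha, beta) prior with observed Bernoulli trials.

    Conjugacy makes the posterior another Beta: just add the counts.
    """
    return alpha + successes, beta + failures

def beta_mean(alpha, beta):
    """Posterior mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Start from a weak uniform prior Beta(1, 1), then observe 7 successes
# and 3 failures; the posterior mean shifts toward the empirical rate.
a, b = beta_update(1.0, 1.0, successes=7, failures=3)
print(beta_mean(a, b))  # posterior mean = 8/12 ≈ 0.667
```

The appeal on small data is that the prior regularises the estimate: with only a handful of observations, the posterior mean stays pulled toward the prior instead of overfitting the sample.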
Science article: https://science.sciencemag.org/content/early/2019/07/10/scie...