Hacker News

As someone who has been into AI since the late '70s, I am impressed with the overall rate of progress in the last 10 years. The whole field was stuck from the mid-1980s to about 2000. That period is called the "AI Winter", and it really sucked.

AI used to be tiny - 20-50 people at CMU, Stanford, and MIT, and a few smaller groups elsewhere. All the '80s startups went bust. Now there are real applications that work and are widely deployed. Many classic problems, such as reading text and handwriting, have been solved and the solutions widely deployed commercially. Speech recognition is getting good and is deployed widely. Face recognition works. Automatic driving is working experimentally. There are now hundreds of thousands of people doing AI-type research.

Each new idea in AI has had a ceiling. Machine learning has a ceiling, but it's one high enough that the technology is good for something and generates enough revenue to finance its own R&D.

Machine learning is basically a form of high-dimensional multivariate optimization in possibly bumpy spaces. That's a hard problem, but considerable progress has been made, and the massive compute power necessary is now available. This is great. It's not everything, but it's real progress.
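To make that concrete, here's a minimal sketch (my own toy example, not something from the original comment): gradient descent on the Rastrigin function, a standard "bumpy" non-convex test surface. Plain gradient descent only finds the global minimum when started near its basin, which is exactly the difficulty with high-dimensional optimization in bumpy spaces.

```python
import numpy as np

def rastrigin(x):
    # Bumpy test function: global minimum 0 at x = 0,
    # surrounded by a grid of local minima.
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def grad_rastrigin(x):
    return 2 * x + 20 * np.pi * np.sin(2 * np.pi * x)

def gradient_descent(x0, lr=0.002, steps=2000):
    # Learning rate chosen small enough for the function's
    # steep curvature (a toy choice, not tuned carefully).
    x = x0.copy()
    for _ in range(steps):
        x -= lr * grad_rastrigin(x)
    return x

rng = np.random.default_rng(0)
x0 = rng.uniform(-0.3, 0.3, size=50)  # start near the global basin
x = gradient_descent(x0)
print(rastrigin(x))  # close to 0 from this starting region
```

Start the same descent from a random point in, say, (-5, 5) per coordinate and it almost always lands in one of the many local minima instead.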

We're going to need another big idea to get beyond the ceiling of machine learning. No idea what that idea is.




I’m an outsider, but what’s the recent “big idea”? It seems to be throwing more compute power and larger training sets at essentially an old technique. This has led to a big improvement in performance, but I don’t see the big conceptual breakthrough.


I think that Bayesian Program Learning, as pioneered by Lake et al., is a big idea.

I think that GANs probably qualify - although you can see that emerging in the SAB series of conferences in the '90s if you read the papers.

On the other hand, I do see a lot of small innovations that are enabling many people to create incremental improvements and applications. I feel that the exploration of the field has been very weak and our overall knowledge is limited and not widely shared. Perhaps improvements like MCMC search for Bayesian reasoning, causality, counterfactuals, GPUs and TPUs and FPGAs, access to very large data sets for training, forward training, and so on will be the actual breakthrough.
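For one item on that list, MCMC for Bayesian reasoning, the core idea fits in a few lines. This is a minimal random-walk Metropolis sampler; the target distribution (a standard normal "posterior") and the step size are my own toy choices for illustration.

```python
import numpy as np

def metropolis(log_target, x0, steps=20000, step=1.0, seed=0):
    # Random-walk Metropolis: propose a nearby point, accept it
    # with probability min(1, target(prop) / target(x)).
    rng = np.random.default_rng(seed)
    x, lp = x0, log_target(x0)
    samples = []
    for _ in range(steps):
        prop = x + step * rng.standard_normal()
        lp_prop = log_target(prop)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return np.array(samples)

# Toy target: a standard normal, up to a constant.
log_std_normal = lambda x: -0.5 * x**2
draws = metropolis(log_std_normal, x0=0.0)
print(draws.mean(), draws.std())  # approximately 0 and 1
```

The point is that you only need the target density up to a normalizing constant, which is what makes the trick useful for Bayesian posteriors.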



