> but a migration of research interest away from neural nets seemed increasingly promising, and today, the migration seems largely complete.
What are you talking about? Deep learning is one of the hottest areas of research today, and a lot of it has to do with neural networks. NN's are the state of the art in several domains. Case in point: http://image-net.org/challenges/LSVRC/2014/results. All of the top entries use convolutional networks; in fact, almost all of the entries do.
The fact that the loss function represented by a neural network can be highly nonconvex is part of what makes these models so effective in the domains where they are used. See this presentation by Yann LeCun for more info: http://www.cs.nyu.edu/~yann/talks/lecun-20071207-nonconvex.p...
"ML theory has essentially never moved beyond convex models, the same way control theory has not really moved beyond linear systems. Often, the price we pay for insisting on convexity is an unbearable increase in the size of the model, or the scaling properties of the optimization algorithm ... This is not by choice: nonconvex models simply work better.
Have you tried acoustic modeling in speech with a convex loss? ... To learn hierarchical representations (low-level features, mid- level representations, high-level concepts....), we need “deep architectures”. These inevitably lead to non-convex loss functions."
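To make the nonconvexity point concrete, here's a rough numpy sketch (my own toy example, not taken from the slides): in a one-hidden-layer net, swapping the two hidden units gives a second, distinct weight setting with exactly the same loss, while the midpoint between the two settings usually scores worse, which a convex loss would never allow.

    import numpy as np

    # Toy illustration (my own, hedged): permutation symmetry of hidden units
    # means the loss surface has multiple equivalent minima, so it can't be convex.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 3))                      # toy inputs
    y = np.tanh(X @ np.array([1.0, -2.0, 0.5]))       # toy targets

    def loss(W1, w2):
        # Mean squared error of a tiny tanh network: x -> tanh(x W1) w2
        pred = np.tanh(X @ W1) @ w2
        return np.mean((pred - y) ** 2)

    W1 = rng.normal(size=(3, 2))
    w2 = rng.normal(size=2)

    # Swap the two hidden units: same function, different parameter vector.
    W1_perm, w2_perm = W1[:, ::-1], w2[::-1]

    # Midpoint between the two equivalent parameter settings.
    W1_mid, w2_mid = (W1 + W1_perm) / 2, (w2 + w2_perm) / 2

    print(loss(W1, w2))            # some value L
    print(loss(W1_perm, w2_perm))  # exactly the same value L
    print(loss(W1_mid, w2_mid))    # typically larger than L, violating convexity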
This isn't to say that NN's are going to solve all our problems, but to say that there has been a shift in interest away from NN's is absurd.
Parent might be living in the recent past. There was a migration away from NNs in the 90s/early 00s, then Hinton and others brought them back to life... with a vengeance :)
Exactly. The history of NNs is full of ups and downs, and they're becoming increasingly popular again in the form of Deep Learning, thanks to growing cloud processing power and advances by Hinton and others. Most of the traditional criticism of NNs relates to shallow nets; deeper and far more complex structures, like those in animal brains, haven't been explored enough.