
I don't understand the comparison between the neural nets of the 90s and deep learning today. There was _human intelligence_ driving the transition between these two technologies. How does evolution compare to that?

(honest question, because I probably didn't get the angle and problem space thing)




It's not about the design of the network, it's about the training. Optimizing the weights of a DNN is a non-convex optimization problem, solved using gradient-based search for weights that minimize a loss function. If you can only search in 10 dimensions, it's harder to move in the correct direction; once you're searching in 1000 dimensions, you can usually find a path that gets you to the right place. Think of it this way: a 10-dimensional solution may exist, but it requires learning a very dense representation, and the search has few directions in which to escape when it gets stuck.
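
Here's a rough numpy sketch of that intuition (a toy example of my own, not anything from the article or the thread; the target function, widths, learning rate, and init scales are all arbitrary choices for illustration). It fits the same 1-D regression target with a one-hidden-layer ReLU net of width 10 vs. width 1000, using plain full-batch gradient descent from the same kind of random start:

  # Toy illustration (not from the thread; all constants are arbitrary):
  # the narrow net can represent the target reasonably well, but its search
  # has few directions to move in and often stalls; the wide net usually
  # drives the training loss much lower from a similar random start.
  import numpy as np

  rng = np.random.default_rng(0)
  X = np.linspace(-1.0, 1.0, 40).reshape(-1, 1)
  Y = np.sin(6.0 * X)  # a wiggly but fittable 1-D target

  def train(width, steps=20000, lr=0.2):
      n = len(X)
      # 1/sqrt(width) output scaling keeps the function and its gradients at
      # a comparable scale for both widths, so one learning rate works for both.
      W1 = rng.normal(size=(width, 1))
      b1 = rng.normal(size=(1, width))
      W2 = rng.normal(size=(1, width))
      b2 = np.zeros((1, 1))
      s = 1.0 / np.sqrt(width)
      for _ in range(steps):
          Z = X @ W1.T + b1            # (n, width) pre-activations
          A = np.maximum(Z, 0.0)       # ReLU features
          pred = A @ W2.T * s + b2     # (n, 1) network output
          d = 2.0 * (pred - Y) / n     # dLoss/dpred for mean squared error
          dW2 = (d.T @ A) * s
          db2 = d.sum(axis=0, keepdims=True)
          dZ = (d @ W2) * s * (Z > 0)  # backprop through the ReLU
          dW1 = dZ.T @ X
          db1 = dZ.sum(axis=0, keepdims=True)
          W1 -= lr * dW1; b1 -= lr * db1
          W2 -= lr * dW2; b2 -= lr * db2
      pred = np.maximum(X @ W1.T + b1, 0.0) @ W2.T * s + b2
      return float(np.mean((pred - Y) ** 2))

  for width in (10, 1000):
      print(f"hidden width {width:4d}: final training MSE = {train(width):.5f}")

On most random seeds the width-1000 net ends with a clearly lower training loss. Both nets could in principle represent the target; the difference is whether gradient descent can find its way there, which is the low- vs. high-dimensional search point above.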


Ok, that made sense.


The question is not about how we got from old neural nets to new neural nets.

The question is about how a search works within old neural nets, vs. how a search works within new neural nets.


The single largest difference is simply the size of the networks. We are tossing billions of times more processing power at the same problems, so you see progress simply by changing some constants.

PS: Of course there are also software/algorithm changes, but often the same problem with 1,000,000,000x the processing power just works.


It sounds like @fpgaminer is saying that when humans created something similar (the neural nets of the 90s), it had issues (getting stuck in local optima) until we increased the complexity (the number of variables that the system could/did respond to) in a way that started approaching the complexity of evolution.


Thank you.



