
You have to be a little careful here. Neural networks are not "very general computation techniques." A dot product and a rectified linear function (or some other function of choice) are not "general computation techniques" in the sense you seem to use; they are a very specific set of operations. And the fact that you can show that two layers of these operations form a universal approximator is a red herring: decision trees and k-nearest neighbors are also universal approximators.
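
(For concreteness, here is a minimal NumPy sketch of what "two layers of these operations" means: a dot product, a ReLU, then another dot product. The shapes and weights below are made up purely for illustration, not taken from any particular system.)

    import numpy as np

    def relu(x):
        # Rectified linear function: max(0, x), applied elementwise.
        return np.maximum(0.0, x)

    def two_layer_net(x, W1, b1, W2, b2):
        # "Two layers of these operations": a dot product, a ReLU,
        # then another dot product. With enough hidden units this form
        # can approximate any continuous function on a compact set --
        # the universal approximation property discussed above.
        hidden = relu(x @ W1 + b1)   # layer 1: dot product + ReLU
        return hidden @ W2 + b2      # layer 2: dot product (linear readout)

    # Illustrative shapes (hypothetical): 3 inputs, 16 hidden units, 1 output.
    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(3, 16)); b1 = np.zeros(16)
    W2 = rng.normal(size=(16, 1)); b2 = np.zeros(1)
    print(two_layer_net(rng.normal(size=(5, 3)), W1, b1, W2, b2).shape)  # (5, 1)

The point being: the existence of such an approximation theorem says nothing unique about this particular set of operations; the same property holds for other function classes.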



Sure, but I thought it was clear that I wasn't speaking in a precise or mathematical way. I was referring to the amazingly general abilities of certain successful deep-learning-ish systems, and potentially also to the similarly general capabilities of some imagined system a little like those, but built on general automatic differentiation rather than backpropagation specifically.

Do you really think these abilities are not, or cannot be, as general as I imagine? I look at systems like the DeepMind projects, reading pixels in 2D and 3D and having this low-level sensory grounding. It seems likely to me that we will see very exciting and general further results, and then, down the line, similar things on perhaps much smaller computers with non-NN efficiencies. Do you think this is impossible or unlikely?


If you cannot describe something in a way that would allow for a precise formulation, then I don't know what you're trying to say. This is the difference between something that is implementable / testable, and something that's just a fantasy.

As far as I can tell, your definition of "general computing" is: "it works, so it must be general." While that is, ironically, the attitude of a lot of people in deep learning, it is just faulty logic.



