
Mark my words, in the next couple of years we'll see custom silicon that massively improves performance per watt for DNNs by using fixed point, quantization, and saturation arithmetic. The gain in performance per watt will be at least an order of magnitude. This will make DNNs worthwhile for a lot more classification problems where they are currently simply too slow.
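
For concreteness, here is a toy sketch of what int8 quantization with saturating fixed-point arithmetic looks like in NumPy. This is purely illustrative, not any particular chip's design; the shapes, scales, and function names are all assumptions:

    import numpy as np

    def quantize_int8(w, scale):
        q = np.round(w / scale)
        return np.clip(q, -128, 127).astype(np.int8)   # saturate instead of wrapping

    def int8_matmul(x_q, w_q, x_scale, w_scale):
        # Wide (int32) accumulator, as fixed-point hardware typically uses,
        # then dequantize the result for the next layer.
        acc = x_q.astype(np.int32) @ w_q.astype(np.int32)
        return acc.astype(np.float32) * (x_scale * w_scale)

    w = np.random.randn(256, 128).astype(np.float32)
    x = np.random.randn(1, 256).astype(np.float32)
    w_scale = np.abs(w).max() / 127.0
    x_scale = np.abs(x).max() / 127.0
    y = int8_matmul(quantize_int8(x, x_scale), quantize_int8(w, w_scale), x_scale, w_scale)
    print(np.abs(y - x @ w).max())   # quantization error vs. float32

The hardware win comes from replacing 32-bit float multiply-accumulates with 8-bit integer ones, which are far cheaper in area and energy.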



Mark my words, DNNs are not really the most efficient structure for a predictive model: the "distributed" representation that makes them good predictors also makes them hard to train and resource-intensive to apply. In a few years DNNs will be replaced by more efficient models.


Any pointers to approaches for more efficient alternatives?


Sorry, nothing definite yet. Any sort of tree-based computation that makes early pruning decisions is more efficient than a full matrix multiplication, even if the matrix is sparse. Then, high-order computation like powers is again more efficient than a simple linear model. Those two directions are quite probable.
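
As a toy sketch of the first direction (my own illustration, not a concrete proposal; the node layout and names are made up): a decision tree only touches about depth-many values per example and exits early, while a dense linear layer reads every weight.

    import numpy as np

    # Each node is (feature, threshold, left, right, value); leaves use left == -1.
    NODES = [(0, 0.0, 1, 2, 0.0),
             (-1, 0.0, -1, -1, -1.0),   # leaf
             (-1, 0.0, -1, -1, 1.0)]    # leaf

    def tree_predict(x, nodes):
        i = 0
        while True:
            feat, thr, left, right, value = nodes[i]
            if left == -1:                      # early exit after ~depth comparisons
                return value
            i = left if x[feat] <= thr else right

    def linear_predict(x, W):
        return W @ x                            # full matmul: every weight is read

    x = np.random.randn(1000)
    W = np.random.randn(10, 1000)
    print(tree_predict(x, NODES), linear_predict(x, W).shape)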


Reminds me of the "Expert Systems" from the 1980s. They saw similar levels of (inflation-adjusted) hype too. Decision trees have had their heyday; wonder if they'll be back in fashion soon.


> high-order computation like power

What is power?


You do realize you came up with this text using a deep biological neural network, right?


Please don't compare the complexity of the brain to artificial neural networks. A single biological neuron is far more sophisticated than the units we use for machine learning. Many people are starting to refer to machine learning "neurons" as "units" to avoid this comparison.


Sure, but they aren't necessarily the most efficient model possible.


Seems pretty efficient to me, at least per watt.



