
Slightly unrelated question: has there been any effort toward hardware acceleration of such networks? How amenable are modern machine learning algorithms to hardware acceleration?



The GPU is pretty well optimized for the sort of operations an RNN needs.
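Concretely, a vanilla RNN step boils down to a couple of dense matrix multiplies plus a pointwise nonlinearity, which is exactly the kind of work GPU BLAS kernels are built for. A rough numpy sketch (illustrative sizes, not any particular framework's API):

    import numpy as np

    # One step of a vanilla RNN: h_t = tanh(x_t @ W_xh + h_prev @ W_hh + b)
    # The sizes are made up; the point is that the work is dense matmuls.
    batch, n_in, n_hidden = 64, 256, 512
    W_xh = np.random.randn(n_in, n_hidden) * 0.01
    W_hh = np.random.randn(n_hidden, n_hidden) * 0.01
    b = np.zeros(n_hidden)

    def rnn_step(x_t, h_prev):
        # Two dense matrix products plus an elementwise tanh -- GPU-friendly work.
        return np.tanh(x_t @ W_xh + h_prev @ W_hh + b)

    h = np.zeros((batch, n_hidden))
    x = np.random.randn(batch, n_in)
    h = rnn_step(x, h)   # shape (batch, n_hidden)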

There were a few efforts to make actual silicon neurons, plus the whole neuromorphic movement, but they generally fell short of expectations: they were slow and difficult to interface with.


I've seen some work that attempts to build "spiking" neural networks (i.e. neurons that fire when their inputs cross a threshold), intended to mimic the biochemistry of real neurons.
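In its simplest form, "fires when inputs cross a threshold" looks something like a leaky integrate-and-fire update. A toy numpy sketch of the general idea (not any specific paper's model):

    import numpy as np

    def lif_step(v, input_current, v_thresh=1.0, v_reset=0.0, leak=0.9):
        # Leak the membrane potential, add input, then emit a binary spike
        # wherever the potential crosses the threshold, and reset those units.
        v = leak * v + input_current
        spikes = (v >= v_thresh).astype(np.float32)
        v = np.where(spikes > 0, v_reset, v)
        return v, spikes

    v = np.zeros(100)                          # 100 neurons
    for _ in range(50):                        # 50 timesteps
        v, spikes = lif_step(v, 0.3 * np.random.rand(100))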

That work seems to frame its contribution as reducing the power required to evaluate the network, though. If I recall correctly, the accuracy of those models on everyday tasks is typically much lower than that of conventional ANNs, and they're a pain to train. So, still not very common.


That is exactly what I made circa 2008. I used the Izhikevich model for spiking. It was certainly faster on the GPU (2000x), but, yeah, getting the network to converge on anything was terrible. Debugging it was fun/awful though.

1: "Hey, do you see the first squiggle with the two fuzzes after it?"

2: "Next to Beaker's eyebrows?"

The low-power work seems to have been aimed at being a rough filter rather than a full system. Still fun to use.
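For anyone curious, the Izhikevich model mentioned above is just two coupled variables per neuron plus a threshold/reset rule, which is part of why it maps so well onto a GPU. A rough numpy sketch of the published equations (illustrative parameters, not the GPU code from that project):

    import numpy as np

    # Izhikevich (2003) spiking neuron model, vectorized over N neurons:
    #   v' = 0.04*v^2 + 5*v + 140 - u + I
    #   u' = a*(b*v - u)
    #   if v >= 30 mV: v <- c, u <- u + d
    N = 1000
    a, b, c, d = 0.02, 0.2, -65.0, 8.0   # "regular spiking" parameters
    v = np.full(N, -65.0)                # membrane potential (mV)
    u = b * v                            # recovery variable

    for t in range(1000):                # 1 ms time steps
        I = 5.0 * np.random.randn(N)     # noisy input current
        fired = v >= 30.0
        v[fired] = c
        u[fired] += d
        # two 0.5 ms half-steps for v, as in the original paper's script
        v += 0.5 * (0.04 * v * v + 5 * v + 140 - u + I)
        v += 0.5 * (0.04 * v * v + 5 * v + 140 - u + I)
        u += a * (b * v - u)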



