
It absolutely is. DeepMind reported that 1 second of audio takes about 90 minutes to generate.



Assuming it's computation-bound, that's a factor of 5400 (~13 doublings in CPU performance required to get to real-time, assuming no algorithmic improvements).
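
Back-of-the-envelope behind those numbers (my own arithmetic, assuming generation time shrinks linearly with available compute):

    import math

    slowdown = 90 * 60           # 90 minutes of compute per 1 second of audio
    print(slowdown)              # 5400
    print(math.log2(slowdown))   # ~12.4, i.e. roughly 13 doublings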


If I'm not mistaken, the current limitation is that the audio has to be produced sequentially, since each sample depends on the previous ones. Perhaps independent sentences could be run simultaneously using copies of the net, assuming no memory limitations. I wonder if it's already possible to create an audiobook, for instance, in a reasonable amount of time.
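
A hypothetical sketch of that sentence-level parallelism; synthesize here is a made-up stand-in, not a real WaveNet API:

    from concurrent.futures import ProcessPoolExecutor

    def synthesize(sentence):
        # stand-in for running one copy of the net on one sentence;
        # returns a placeholder waveform rather than real audio
        return [0.0] * 16000

    if __name__ == "__main__":
        sentences = ["First sentence.", "Second sentence.", "Third sentence."]
        with ProcessPoolExecutor() as pool:
            waveforms = list(pool.map(synthesize, sentences))
        audiobook = [sample for w in waveforms for sample in w]
        print(len(audiobook))    # 48000 samples, sentences generated concurrently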


Do they mention it was CPU-trained? I assumed GPU. If it was CPU-trained, I wonder which operations kept it off the GPU.


Google has special neural net ASICs now.


Google has never stated that they use those to train models, as far as I know. It seems they are primarily used to save energy when deploying trained models at scale.


There's no reason they couldn't use them to train, as long as they can account for the lower-precision operations. I think it would be much cheaper to train on them, at that scale anyway.


AFAIK the Google TPU does inference only, at 8 bits. I don't think it's possible to train a neural network at 8-bit precision at this point in time. FP16 works for training though, and is twice as fast as FP32 on certain Nvidia chips.


Backpropagation can work at any precision, as long as you use stochastic rounding (so that the rounding errors are not correlated). Without stochastic rounding, even 16 bits will have rounding-error bias.

http://arxiv.org/abs/1412.7024
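
Roughly: instead of always rounding to the nearest representable value, you round up or down with probability proportional to the remainder, so the expected rounding error is zero and small updates survive on average. A minimal numpy sketch (my own illustration, not from the paper):

    import numpy as np

    def stochastic_round(x, step):
        # quantize x to a grid of spacing `step`, rounding up or down with
        # probability proportional to the remainder, so the expected
        # rounding error is zero
        scaled = np.asarray(x, dtype=np.float64) / step
        low = np.floor(scaled)
        frac = scaled - low
        return (low + (np.random.random(scaled.shape) < frac)) * step

    # a tiny weight update below the quantization step still registers
    # sometimes, instead of always being rounded away:
    w = np.zeros(100000)
    w = stochastic_round(w + 0.001, step=0.01)
    print(w.mean())   # ~0.001 on average; plain round-to-nearest gives 0.0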


OK. I was going by this - https://petewarden.com/2016/05/03/how-to-quantize-neural-net...

I haven't seen 8-bit training implemented in any (public) frameworks yet - that's not to say it's not possible. If it works, that's great, especially for specialised hardware.


That doesn't imply they can run WaveNet yet - for inference this net is sort of worst-case serial. Their TPU ASIC is almost certainly highly parallel, like a GPU - it actually has to be that way for energy efficiency (which is its claimed benefit).

WaveNet actually looks like it could have been designed to run on CPUs in production, at least after some further optimization. Sampling is super slow right now because it requires an enormous number of tiny dependent TF ops, and thus kernels with huge overhead for tiny amounts of work. A custom implementation could probably circumvent that by evaluating all the layers sequentially in local cache on a fast CPU.

Or they just designed it without much concern for production plausibility yet.
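
A toy sketch of why sampling has that shape (dummy matrices standing in for the dilated layers, nothing to do with the actual TF graph): each new sample needs a full pass through the stack, and step t+1 can't start until step t's output exists.

    import numpy as np

    rng = np.random.default_rng(0)
    layers = [rng.standard_normal((64, 64)) * 0.01 for _ in range(30)]

    def run_net(x):
        # stand-in for the stack of dilated causal conv layers:
        # many small dependent ops per generated sample
        for w in layers:
            x = np.tanh(w @ x)
        return x

    state = rng.standard_normal(64)
    samples = []
    for t in range(16000):          # one second of audio at 16 kHz
        state = run_net(state)      # step t+1 can't begin until this returns
        samples.append(state[0])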


I'm not sure how this algorithm is serial. The neural net layers still involve huge convolutions that can all be done in parallel.


Building an ASIC for it would be another option to speed things up on the computation side.


Was that in the paper? I was looking for a source for it last night but couldn't come up with it


Why would an honest researcher mention the downsides of his work in a paper? No, it was on Twitter: https://www.reddit.com/r/MachineLearning/comments/51sr9t/dee...


https://news.ycombinator.com/item?id=12463263

Looks like the source deleted their tweet.


Can we just use 90 cores?


Unfortunately no, see Amdahl's Law.

https://en.wikipedia.org/wiki/Amdahl%27s_law
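
i.e. speedup(N) = 1 / ((1 - p) + p / N), where p is the parallelizable fraction of the work. With WaveNet's sample-by-sample dependency the serial fraction is large, so for example:

    def speedup(p, n):
        # Amdahl's law: p = parallelizable fraction of the work, n = cores
        return 1.0 / ((1.0 - p) + p / n)

    # even generously assuming 90% of the per-sample work parallelizes
    # perfectly, 90 cores give nowhere near a 90x speedup:
    print(speedup(0.9, 90))    # ~9.1x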


Even if we did, the strong scaling would likely not be perfect.



