
Holy cr*p on a cracker!

150 layers? It boggles the mind.

How do you even propagate gradients through 150 layers? Do you assign specific functions / targets to some of the inner layers?




Deep Residual Learning for Image Recognition https://arxiv.org/abs/1512.03385

Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

And some good answers here: https://www.quora.com/How-does-deep-residual-learning-work
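
For anyone wondering what "learning residual functions with reference to the layer inputs" looks like in code, here is a minimal sketch of a residual block, assuming PyTorch (the ResidualBlock name and layer sizes are illustrative, not taken from the paper):

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        def __init__(self, channels):
            super().__init__()
            # F(x): a small stack of convolutions that learns the residual
            self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            self.bn1 = nn.BatchNorm2d(channels)
            self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            self.bn2 = nn.BatchNorm2d(channels)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            f = self.relu(self.bn1(self.conv1(x)))
            f = self.bn2(self.conv2(f))
            # Identity shortcut: the block outputs F(x) + x, so each block
            # only has to learn a correction to its input.
            return self.relu(x + f)

    # Usage: shapes are preserved, so blocks stack to arbitrary depth.
    block = ResidualBlock(64)
    y = block(torch.randn(1, 64, 32, 32))  # -> (1, 64, 32, 32)

The addition is what answers the "how do you propagate through 150 layers" question: gradients flow back through the identity shortcut unattenuated, so depth stops being an optimization barrier.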


Also, FractalNet, a very deep NN without residuals: https://arxiv.org/abs/1605.07648


Highway layers (http://arxiv.org/abs/1505.00387) also help with this propagation.
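
For comparison, a minimal sketch of a highway layer, again assuming PyTorch (the names and the gate-bias value are illustrative): it computes y = T(x) * H(x) + (1 - T(x)) * x, so a learned gate T decides per unit how much of the input to transform and how much to carry through unchanged.

    import torch
    import torch.nn as nn

    class HighwayLayer(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.H = nn.Linear(dim, dim)  # the usual nonlinear transform
            self.T = nn.Linear(dim, dim)  # the transform gate
            # Biasing the gate negative starts the layer close to the
            # identity, which the paper suggests helps early training.
            nn.init.constant_(self.T.bias, -2.0)

        def forward(self, x):
            h = torch.relu(self.H(x))
            t = torch.sigmoid(self.T(x))
            return t * h + (1.0 - t) * x

When t -> 0 the layer is a pure carry (identity), which is how gradients survive hundreds of layers; a residual block can be seen as the ungated special case.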



