
Better training techniques. Some noteworthy advancements:

1. Dropout and its variants, widely used in both vision and NLP.

2. Batch Normalization and its variants.

3. Inception-style cells.

4. Residual/skip connections.

5. Better optimizers: RMSProp and Adam.
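To make the list concrete, here is a toy NumPy sketch of a few of these techniques: inverted dropout, batch normalization, a residual block, and a single Adam update. All shapes and hyperparameters are illustrative, not taken from any particular paper or framework.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p=0.5, training=True):
    """Inverted dropout: zero each activation with prob p, rescale survivors."""
    if not training or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

def batch_norm(x, eps=1e-5):
    """Normalize each feature to zero mean / unit variance over the batch."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    return (x - mu) / np.sqrt(var + eps)

def residual_block(x, W):
    """y = x + f(x): the identity shortcut keeps gradients flowing."""
    return x + np.maximum(0.0, x @ W)  # ReLU(xW) plus skip connection

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: momentum plus a per-parameter adaptive step size."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)   # bias correction for the first moment
    v_hat = v / (1 - b2 ** t)   # bias correction for the second moment
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```

In a real network these would be layers with learned scale/shift (for batch norm) and gradients from backprop, but the core arithmetic is exactly this.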

The bigger news is actually the paradigm shift. Representation learning with gradient descent has swept the whole ML field and become the new norm. End-to-end learning is now widely accepted and preferred.

As for GANs: they are very exciting in research, and they have the potential to become a bigger deal than all the previously listed advancements combined, provided we can make them work on sequences as well as they do on images. For now, though, they have little practical impact in applications.
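The core of a GAN is just the adversarial objective: a discriminator D pushed to score real data high and generated data low, and a generator G pushed to fool D. A minimal NumPy sketch of those two losses, using a toy linear G and D (all names, shapes, and parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discriminator(x, w=2.0, b=0.0):
    """Toy linear discriminator: probability that x is real."""
    return sigmoid(w * x + b)

def generator(z, scale=0.5, shift=-1.0):
    """Toy generator: maps noise z to a sample."""
    return scale * z + shift

real = rng.normal(loc=1.0, scale=0.3, size=1000)    # "real" data
fake = generator(rng.standard_normal(1000))          # generated data

d_real = discriminator(real)
d_fake = discriminator(fake)

# Discriminator loss: wants D(real) -> 1 and D(fake) -> 0
loss_d = -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

# Generator loss (non-saturating form): wants D(fake) -> 1
loss_g = -np.mean(np.log(d_fake))
```

Training alternates gradient steps on these two losses; the difficulty on sequences comes from backpropagating through discrete sampling steps, which images do not have.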



