Hacker News

Deep Learning is heavily used in Kaggle competitions when there is image data.

Even in non-image competitions, a deep neural network is often one of the models chosen for an ensemble. They generally perform slightly worse than XGBoost models, but have the advantage that their predictions often aren't closely correlated with the trees', which helps fight overfitting when tuning ensembling hyperparameters.
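A minimal sketch of what that ensembling looks like in practice, assuming you already have out-of-fold predictions from both models (the arrays and the weight here are illustrative, not from any real competition):

```python
import numpy as np

# Hypothetical out-of-fold probability predictions from two models.
xgb_preds = np.array([0.82, 0.31, 0.65, 0.12, 0.90])
nn_preds = np.array([0.78, 0.40, 0.55, 0.20, 0.85])

# Correlation between the two models' predictions: when this is low,
# their errors tend to partially cancel out in the blend.
corr = np.corrcoef(xgb_preds, nn_preds)[0, 1]

# Simple weighted blend. The weight w is an ensembling hyperparameter
# that would normally be tuned on a validation set; 0.6 here just
# reflects the assumption that the XGBoost model is slightly stronger.
w = 0.6
blend = w * xgb_preds + (1 - w) * nn_preds
```

Tuning only this one weight on held-out data is much less prone to overfitting when the two models disagree in uncorrelated ways.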




For image competitions you are right: neural networks often appear in winning teams' ensembles, but they require a lot more work than something like xgboost (gradient-boosted decision trees). For a dataset that isn't images or NLP, xgboost is in general much more widely used than neural nets. Neural nets suffer from the amount of computing resources and knowledge needed to apply them well, though given unlimited knowledge and computing power they are probably on par with or better than xgboost. And if you need to analyze images, they are great.
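To illustrate how little work gradient-boosted trees need on tabular data: a sketch using scikit-learn's GradientBoostingClassifier as a stand-in for xgboost (xgboost's XGBClassifier exposes the same fit/predict style API), on a synthetic dataset standing in for a non-image competition:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Toy tabular dataset standing in for a non-image, non-NLP problem.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Gradient-boosted decision trees with default-ish settings; no feature
# scaling, architecture design, or GPU required.
model = GradientBoostingClassifier(n_estimators=100, random_state=0)
model.fit(X_tr, y_tr)
accuracy = model.score(X_te, y_te)
```

The point is the short distance from raw tabular features to a reasonable baseline; a neural net on the same data would need normalization, an architecture, and training-schedule choices before it was even competitive.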



