> GBTs win over NN for most tasks in practice, although they don't get much hype
I've always thought GBDTs get too much hype. As a data scientist, I see everyone wanting to immediately throw a random forest or GBDT at a problem without knowing anything else about it.
Yeah, I think in the data science community GBDTs are appropriately hyped, since their dominant performance on Kaggle has been well known for some time now. On top of that, GBDTs are so easy to run; taken together, it's probably always reasonable to run a GBDT as one of the first things you do after you've got the data wrangled. Of course, as a PhD-in-training data scientist, I feel disappointed (either in myself or in the task) if I can't think of a more interesting and better-performing method than a GBDT :)
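For what it's worth, here's roughly what that "first thing after wrangling" baseline tends to look like with scikit-learn's `HistGradientBoostingClassifier`. This is just a sketch for a generic tabular classification problem; the dataset is a placeholder and the defaults are left untouched on purpose:

```python
# Minimal GBDT baseline sketch (assumes a tabular classification task).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Placeholder dataset; swap in your own wrangled X / y.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Default hyperparameters are usually a decent starting point.
model = HistGradientBoostingClassifier(random_state=0)

# 5-fold cross-validated accuracy as a quick first benchmark.
scores = cross_val_score(model, X, y, cv=5)
print(f"baseline accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

A few lines like that give you a benchmark that anything fancier has to beat.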