Thousands to millions. There are GP approximations (e.g., sparse/inducing-point methods) that work very well even with millions of examples.
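As a minimal sketch of the kind of approximation meant here, a kernel model can be made scalable by replacing the exact kernel with a low-rank Nystroem approximation; the dataset, component count, and use of scikit-learn are all illustrative assumptions, not a prescription:

```python
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Synthetic 1-D regression problem with 100k points (illustrative data).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(100_000, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=100_000)

# Nystroem approximates the RBF kernel using 100 landmark points,
# avoiding the O(n^3) cost of an exact GP on all 100k examples.
model = make_pipeline(
    Nystroem(n_components=100, random_state=0),
    Ridge(alpha=1.0),
)
model.fit(X, y)
r2 = model.score(X, y)
```

The resulting kernel-ridge pipeline is a rough stand-in for a GP posterior mean; exact GP libraries offer the same idea under names like sparse or inducing-point GPs.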
Neural networks are empirically outperformed by gradient boosted trees (look at Kaggle competitions) on most practical tasks except for image, sound, and video problems.
Neural networks can be very slow on large datasets. Training can often take days or weeks, even with a GPU. GBTs and GP approximations are faster.