Depends on the use case. Hybrid approaches have been dominating the M-Competitions, but the accuracy differences between statistical models and machine learning models are generally small in percentage terms.
At the end of the day, if training or inference with the ML model is massively more costly in time or compute, you'll iterate much less with it.
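To make the "hybrid" idea concrete, here's a minimal sketch of one common flavor: a statistical baseline (simple exponential smoothing) with an ML-style correction fit on its residuals (a linear autoregression). All function names here are illustrative, not from any particular library or competition entry.

```python
# Hybrid forecaster sketch: SES baseline + linear model on lagged residuals.
# Assumes a 1-D numeric series; names (ses, hybrid_forecast) are hypothetical.
import numpy as np

def ses(y, alpha=0.3):
    """Simple exponential smoothing.
    Returns one-step-ahead fitted values and the final smoothed level."""
    fitted = np.empty(len(y), dtype=float)
    level = y[0]
    for t, obs in enumerate(y):
        fitted[t] = level                       # forecast made before seeing y[t]
        level = alpha * obs + (1 - alpha) * level
    return fitted, level

def hybrid_forecast(y, alpha=0.3, n_lags=3):
    """Next-step forecast = SES baseline + linear correction on residual lags."""
    y = np.asarray(y, dtype=float)
    fitted, level = ses(y, alpha)
    resid = y - fitted
    # Lag matrix: predict resid[t] from the n_lags residuals before it.
    X = np.array([resid[t - n_lags:t] for t in range(n_lags, len(resid))])
    target = resid[n_lags:]
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    # Baseline for the next step is the final SES level; add the
    # predicted residual correction from the most recent lags.
    return level + resid[-n_lags:] @ coef
```

The appeal is that the cheap statistical component captures level/trend structure, so the ML component only has to model what's left over, which keeps training fast and iteration cheap.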
I also think it's a dead end to try to build foundation models for "time series" - it's a class of data! Like when people tried to build foundation models for arbitrary graph types.
You could make foundation models for data within that class - e.g. meteorological time series, or social network graphs. But for the abstract class itself it seems like a dead end.
Is there a ranking of the methods that actually work on benchmark datasets - hybrid, "ML", or classical stats? I remember eamonnkeogh doing this on r/ML a few years ago.
What about NeuralProphet, which came after Prophet? Some companies like Mixpanel mention in their documentation that they use Prophet for forecasting/anomaly detection.