
They're great because they immediately go to the best solution: the training problem is convex, so one epoch is all you need, no matter the size of the SVM.

They suck because they're limited to one layer.

But there's a good case to be made that machine learning introductions should always be done like this: linear classifier (when that works) -> SVM (when that works) -> NN -> Deep learning.
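
As a concrete sketch of that progression (scikit-learn and a toy dataset assumed here, purely for illustration), the idea is to escalate only when the simpler model isn't good enough:

    # Sketch of "escalate only when needed", assuming scikit-learn and a toy dataset.
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    for name, model in [
        ("linear", LogisticRegression(max_iter=5000)),
        ("svm (rbf)", SVC()),
        ("neural net", MLPClassifier(max_iter=2000, random_state=0)),
    ]:
        model.fit(X_tr, y_tr)
        print(name, model.score(X_te, y_te))
    # Stop at the first model whose score is good enough for the task.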




I contend that all real work should be done this way. And don't forget things like Bayes nets and random forests, which get no love.

in other words, KISS.


>___< I only know Bayes Nets and Random Forest.

I love tree-based algorithms; they are so good in many contexts compared to neural networks. Try doing that with a neural network in the medical field, where there is very little data since it's so costly to do R&D on humans.

Also, with a Bayesian network you can at least explain how the model reached its answer. A neural network is a magic black box.
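
On the tree side, a minimal sketch of what "explainable" buys you (scikit-learn and a stand-in dataset assumed): the learned rules can literally be printed and read by a domain expert.

    # Minimal sketch, assuming scikit-learn: a shallow decision tree whose rules
    # can be printed and checked by a human, unlike a neural network.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_iris()
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)
    print(export_text(tree, feature_names=list(data.feature_names)))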

---

edit: also regression, but meh, I think if you know random forests then you should know regression. Otherwise you don't really know random forests.


> I love tree-based algorithms; they are so good in many contexts

Random forest is delightful in that the algorithm has very few parameters, the default parameter values are generally okay, and it tends to do something reasonable when you throw it at "real world data" with missing values / categorical variables / useless noise features in the input / etc.
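
A rough sketch of that "throw defaults at it" workflow, assuming scikit-learn (how missing values and categoricals are handled varies by implementation, so only the noise-feature part is shown):

    # Sketch: random forest with default hyperparameters, with useless noise
    # features deliberately added to the input. Assumes scikit-learn.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = load_breast_cancer(return_X_y=True)
    rng = np.random.default_rng(0)
    X_noisy = np.hstack([X, rng.normal(size=(X.shape[0], 20))])  # 20 junk columns

    clf = RandomForestClassifier(random_state=0)  # everything else left at defaults
    print(cross_val_score(clf, X_noisy, y, cv=5).mean())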

Provided a single decision tree does not overfit, a random forest won't overfit either.


You missed one of the most important advantages of an ensemble tree model: since each tree is grown independently of the others, you can have full parallelization during training.
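
In scikit-learn terms (an assumption about tooling, not something the parent specified), that parallelism is a one-flag change:

    # Sketch: because the trees are independent, they can be grown across all
    # CPU cores at once. Assumes scikit-learn.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=20000, n_features=50, random_state=0)
    clf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
    clf.fit(X, y)  # the 500 trees are fit in parallel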


To go even simpler, KNN also works really well if you can get a good weighting for your inputs and measurement noise is low. And KNN works with online/soft-RT datasets as well, where constant learning is required.
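
A tiny sketch of that online angle (scikit-learn assumed): KNN has essentially no training step, so folding in a new labelled sample is just re-fitting on the grown dataset.

    # Sketch: "constant learning" with KNN is just appending samples and
    # re-fitting, since fit() only stores/indexes the data. Assumes scikit-learn.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    X = np.array([[0.0, 0.0], [1.0, 1.0], [0.1, 0.2]])
    y = np.array([0, 1, 0])
    knn = KNeighborsClassifier(n_neighbors=1).fit(X, y)

    X = np.vstack([X, [[0.9, 0.8]]])   # a new labelled observation arrives
    y = np.append(y, 1)
    knn = knn.fit(X, y)                # model is current again
    print(knn.predict([[0.95, 0.9]]))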


I have done a lot of work with classification algorithms, and KNN doesn't get nearly the love it deserves. It rarely turns in exceptional performance, but when used with Mahalanobis distance it is extremely robust.
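
A sketch of that setup, assuming scikit-learn (the dataset is a placeholder): the Mahalanobis metric takes the inverse covariance of the training features, which is what makes it robust to correlated and differently scaled inputs.

    # Sketch: KNN with a Mahalanobis distance metric. Assumes scikit-learn;
    # 'VI' is the inverse covariance matrix of the training features.
    import numpy as np
    from sklearn.datasets import load_wine
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_wine(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    VI = np.linalg.inv(np.cov(X_tr, rowvar=False))
    knn = KNeighborsClassifier(
        n_neighbors=5,
        algorithm="brute",            # brute force works with any metric
        metric="mahalanobis",
        metric_params={"VI": VI},
    )
    knn.fit(X_tr, y_tr)
    print(knn.score(X_te, y_te))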


> But there's a good case to be made that machine learning introductions should always be done like this : linear classifier (when that works) -> SVM (when that works) -> NN -> Deep learning.

Do you know of an ML introduction course/book/site that follows this order?


You can get linear -> SVM -> neural networks in Andrew Ng's Machine Learning course on Coursera (CS229A). You could then go more advanced with Stanford's CS229 on Academic Earth.


I feel like people are leaving out gradient boosting. For me, GBMs usually perform better and have faster training times than SVMs.
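
For what it's worth, a rough timing sketch (scikit-learn assumed; actual numbers depend heavily on data size and kernel choice):

    # Sketch comparing training time and accuracy of a GBM vs an SVM on a
    # synthetic dataset. Assumes scikit-learn; results will vary.
    import time
    from sklearn.datasets import make_classification
    from sklearn.ensemble import HistGradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=20000, n_features=40, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    for name, model in [("gbm", HistGradientBoostingClassifier()), ("svm", SVC())]:
        t0 = time.time()
        model.fit(X_tr, y_tr)
        print(name, round(time.time() - t0, 1), "s, acc:", model.score(X_te, y_te))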



