Hacker News

There’s a pedagogical reason to teach things with a kind of reductive definition. It makes a lot of sense.

I remember getting cornered by somebody in a statistics class and interrogated about whether I thought neural networks were statistical techniques. In that situation I’ll only answer yes, they are statistical techniques. As far as I can tell, a big chunk of what we do with machine learning is create complicated models with a large number of parameters. We’re not creating something other than statistical models. We just draw a kind of cultural line between traditional statistics and machine learning techniques.

Now that I think about it, maybe if you asked me about the line between traditional statistics and machine learning, I would say that in traditional statistics, you can understand the parameters.




> in traditional statistics, you can understand the parameters.

I also think that this is the key differentiator between ML and stats.

Statistical models can be understood formally, which means that not only do we know how each parameter affects the predictions, we also know their estimation uncertainties, under which assumptions those hold, and how to check that the assumptions are satisfied. Usually, we value these models not only because they're predictive but also because they're interpretable.
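As a concrete sketch of what "understanding the parameters" means, here is ordinary least squares on toy data I made up for illustration: the slope has a direct reading (expected change in y per unit of x), and under the usual linear-model assumptions it comes with a standard error quantifying its estimation uncertainty.

```python
import math

# Toy data (hypothetical): y is roughly 2x + 1 plus a little noise.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [3.1, 4.9, 7.2, 9.0, 10.8]

n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n

# Closed-form OLS estimates: each parameter has a direct interpretation.
sxx = sum((x - mx) ** 2 for x in xs)
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
slope = sxy / sxx              # expected change in y per unit of x
intercept = my - slope * mx    # expected y at x = 0

# Residual variance yields a standard error for the slope, valid under
# the usual assumptions (linearity, independent Gaussian noise).
residuals = [y - (intercept + slope * x) for x, y in zip(xs, ys)]
sigma2 = sum(r * r for r in residuals) / (n - 2)
se_slope = math.sqrt(sigma2 / sxx)

print(f"slope = {slope:.3f} +/- {se_slope:.3f}")   # -> slope = 1.950 +/- 0.050
print(f"intercept = {intercept:.3f}")              # -> intercept = 1.150
```

Checking the assumptions (e.g. plotting the residuals for non-random structure) is part of the same workflow, which is exactly the kind of formal accounting a large neural network doesn't offer.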

In ML there is neither the luxury nor the interest in doing this; all we want is something that predicts as well as possible.
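To make the contrast concrete, here is about the simplest predictor in that spirit: 1-nearest-neighbor regression (toy numbers of my own, not from the thread). There are no parameters to interpret at all, and the only question we ask of it is its held-out prediction error.

```python
# A minimal black-box predictor: 1-nearest-neighbor regression.
# The "model" is just the memorized training data; nothing in it
# is interpretable, and we judge it solely by predictive accuracy.
train = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 9.0), (5.0, 10.8)]

def predict(x):
    # Return the y of the training point whose x is closest
    # (ties resolve to the earliest point, as min() does).
    return min(train, key=lambda p: abs(p[0] - x))[1]

# Evaluate on held-out points: mean squared error is the whole story.
test = [(1.5, 4.0), (3.5, 8.1)]
mse = sum((predict(x) - y) ** 2 for x, y in test) / len(test)
print(f"held-out MSE = {mse:.2f}")   # -> held-out MSE = 0.81
```

Deep networks are of course far more capable than this, but the evaluation contract is the same: no standard errors, no assumption checks, just out-of-sample performance.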

So the difference is not the model itself but what you want to get out of it.



