The main difference, and the reason your analogy isn't entirely accurate, is that while with games you can see when things aren't working, with ML or stats you will _always_ get a number. Whether or not that number is meaningful often requires some amount of domain knowledge. I have a degree in stats, and someone at work who does not was trying to use these frameworks to analyse log files. When I had a look at it, his results showed statistical significance, but the data didn't look anything like a linear relationship, and fitting a regression to it wasn't a valid move. That's a simplistic example, but even in the relatively simple realm of linear regression there are harder traps to spot, like heteroscedasticity or non-normal errors.
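
For a concrete illustration (a minimal sketch with made-up exponential data, not the actual log-file data from my story): scipy's linregress will happily report a tiny, "significant" p-value on data that no straight line should be fit to, and only looking at the residuals gives it away.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = np.linspace(0, 5, 200)
    # Hypothetical stand-in data: a strongly non-linear relationship
    y = np.exp(x) + rng.normal(scale=5.0, size=x.size)

    # OLS still produces a number: a tiny p-value for the slope
    res = stats.linregress(x, y)
    print(f"slope p-value: {res.pvalue:.2e}")  # "statistically significant"

    # But the residuals are heavily structured, so the linear fit is invalid
    resid = y - (res.intercept + res.slope * x)
    print(np.corrcoef(resid[:-1], resid[1:])[0, 1])  # lag-1 autocorrelation near 1

The framework never complains; you have to know to check the residuals yourself.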



I agree that one shouldn't apply ML in a commercial context without understanding it. But I think that's true of almost anything. I can't think of a technology I use for which I don't have a corresponding "novices did it all wrong" story.

But here we're talking about a series of intro videos and the appropriate pedagogical approach. It really could be that ML has more subtle failure modes than programming, although I'm skeptical; I remember plenty of novice C mistakes where the program happened to appear to work, at least for short periods, even though the code was terrible. But if it does, I think the trick isn't to prescribe a heavier dose of theory, it's to get people to experience problems like the ones you describe in a way where they can quickly detect and learn from them.



