I first played video games, then modded them, then wrote my own.
For me that's a natural evolution, and I find it hard to accept that in certain fields you are supposed to get acquainted with the academic background first before jumping into actual usage. It's like having to understand the detailed physics of an internal combustion engine before driving or repairing a car.
I would agree that there's plenty of theory that isn't necessary to know, but anyone using a machine learning algorithm should at least understand what the mathematical machinery can and can't do.
A big part of this is knowing how best to fit a model to the data, which usually requires knowledge of mathematical obscurities to avoid things like overfitting, local optima, etc. It's not glamorous stuff though, so it usually isn't brought up in presentations like this one.
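To make the overfitting point concrete, here is a minimal sketch (numpy and scikit-learn assumed; the data and the polynomial degrees are invented purely for illustration): a high-degree model typically nails the training set while doing noticeably worse on held-out data, which is exactly the failure a pure "just fit it" workflow can miss.

    # Sketch: overfitting shows up as a gap between training error and held-out error.
    # numpy and scikit-learn assumed; data and degrees are illustrative only.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(60, 1))
    y = np.sin(X).ravel() + rng.normal(scale=0.3, size=60)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

    for degree in (1, 3, 15):
        model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
        model.fit(X_train, y_train)
        train_mse = mean_squared_error(y_train, model.predict(X_train))
        test_mse = mean_squared_error(y_test, model.predict(X_test))
        print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")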
I agree people should know that eventually, but I'm suspicious that they really need to know it first, or even early on.
My early coding years were spent typing in BASIC game programs from magazines (wumpus hunters represent!), tweaking them, and later making up my own. There was a lot of theory that I could have benefited from, but I never had the motivation to learn it until later, when learning the theory solved problems I had actually experienced.
The main difference, and the reason why your analogy isn't entirely accurate, is that while with games you can see when things aren't working, with ML or stats you will _always_ get a number. Whether that number is meaningful often takes some amount of domain knowledge. I have a degree in stats, and someone at work who does not was trying to use these frameworks to analyse log files. When I had a look at it, his results were coming back as statistically significant, but the data didn't look anything like a linear relationship and fitting a regression to it wasn't a valid move. That's a simplistic example, but even in the relatively simple realm of linear regression there are more difficult traps to spot, like heteroscedasticity or non-normal errors.
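A toy illustration of that specific trap (numpy and scipy assumed; the quadratic data is invented): a straight-line fit to clearly curved data still comes back with a tiny p-value, and only looking at the residuals reveals that the linear model was never valid in the first place.

    # Sketch: a linear fit on non-linear data still reports "statistical significance".
    # numpy and scipy assumed; the data below is invented for illustration.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    x = np.linspace(0, 10, 200)
    y = x**2 + rng.normal(scale=2.0, size=x.size)   # quadratic relationship, not linear

    fit = stats.linregress(x, y)
    print(f"slope={fit.slope:.2f}  r^2={fit.rvalue**2:.3f}  p-value={fit.pvalue:.1e}")

    # The tiny p-value says nothing about whether a line was the right model.
    # Residuals from a valid linear fit should look like noise; here they curve systematically.
    residuals = y - (fit.intercept + fit.slope * x)
    print("mean residual, middle half:", round(residuals[50:150].mean(), 2))
    print("mean residual, outer halves:", round(np.r_[residuals[:50], residuals[-50:]].mean(), 2))

The middle-vs-edges split is a crude stand-in for an actual residual plot, but it is enough to show the systematic pattern a valid fit shouldn't have.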
I agree that one shouldn't apply ML in a commercial context without understanding it. But I think that's true about almost anything. I can't think of a technology I use for which I don't have a corresponding "novices did it all wrong" story.
But here we're talking about a series of intro videos and the appropriate pedagogical approach. It really could be that ML has more subtle failure modes than programming, although I'm suspicious; I remember a lot of novice C issues where the program did appear to work, at least for short periods, even though my code was terrible. But if it does, I think the trick isn't to prescribe a heavier dose of theory, it's to get people to experience problems like the ones you describe in a way where they can quickly detect and learn from them.
Good point, I was thinking more about people using ML in a professional capacity. It's interesting to think about how best to teach it, and it seems totally reasonable to save the math for later in some cases. Another interesting challenge is that ML fails in less obvious ways than coding in general, and maybe intuition for that is something to teach early.
I also recommend Welch Labs' Neural Networks Demystified series [0]. It is a combination of the Coursera course and the YouTube videos. It gets into some of the math, while still keeping it basic.
This is so false. Applying machine learning to a real world problem requires correct intuition and the ability to quantify tradeoffs mathematically. This is developed by understanding the math behind the model and what the tradeoffs are.
Yes, we need to quantify tradeoffs between models mathematically, but that does not require knowledge of the mathematics behind the models themselves. With cross validation, I can estimate the effectiveness of many black box models without looking inside them. This step is called error estimation, and it comes before model selection.
I can arrive at a pretty good model by a combination of correct methodology and brute force. It is this methodology that makes up much more of the overall picture. You could give me a black box, a rough range of parameters it takes, and I can tell you how likely it is to work well. This approach doesn't scale well to bigger problems, but I doubt tackling Big Data problems is the intention behind this course.
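For what it's worth, that workflow is easy to sketch with scikit-learn (the dataset, models and parameter grids below are arbitrary choices for illustration, not anyone's recommendation): each candidate is treated as a black box, its error is estimated by cross-validation first, and only then is a model selected.

    # Sketch: black-box model comparison via cross-validated error estimation.
    # scikit-learn assumed; dataset, models and parameter ranges are illustrative.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GridSearchCV, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    X, y = load_breast_cancer(return_X_y=True)

    candidates = {
        "random forest": GridSearchCV(
            RandomForestClassifier(random_state=0),
            {"n_estimators": [50, 200], "max_depth": [None, 5]},
            cv=5),
        "rbf svm": GridSearchCV(
            make_pipeline(StandardScaler(), SVC()),
            {"svc__C": [0.1, 1, 10]},
            cv=5),
    }

    for name, search in candidates.items():
        # Outer CV gives the error estimate; the inner GridSearchCV does the
        # "rough range of parameters" brute force without opening the box.
        scores = cross_val_score(search, X, y, cv=5)
        print(f"{name}: accuracy {scores.mean():.3f} +/- {scores.std():.3f}")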
Tuning parameters and selecting features needs (a) an understanding of the model(s) used and (b) an understanding of the data.
'Brute forcing' these steps can grow exponentially in time (e.g. feature selection out of n features means 2^n combinations), which not only makes your approach very inefficient but also doesn't tell you whether you have a good model. It also makes sensitivity analysis very, very hard.
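To put a rough number on the 2^n point (counts only, no actual fitting): exhaustive subset evaluation explodes immediately, which is why greedy/stepwise selection or regularization gets used instead of brute force.

    # Sketch of the combinatorial blow-up: every non-empty subset of n features is
    # a separate model to fit and validate, versus roughly n*(n+1)/2 fits for
    # greedy forward selection.
    for n in (10, 20, 30, 40):
        exhaustive = 2**n - 1
        greedy_forward = n * (n + 1) // 2
        print(f"{n:2d} features -> exhaustive: {exhaustive:,}   greedy forward: {greedy_forward:,}")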
Don't worry, as with anything there's a certain subset of people who actually know the underlying principles behind a subject, and for some reason feel threatened when those principles are abstracted away, as if their knowledge is now wasted. But that's the natural progression of things. Sorry.
It's funny that it happens in a community of programmers though, where half of the tools used every day are black boxes that few really understand. Like the computer itself.
And it's completely fine to be the developer who uses pre-made algorithmic blocks for their specific problem. However you will always be several years behind the current state of the art.
For example, deep learning really revolutionized the state of the art in image recognition in 2012 by winning academic competitions. It took about 3-5 years for those deep learning algorithms to get productized into packages like TensorFlow, with high-production tutorials and videos, so that they were accessible to non-academics.
I don't think people who know the underlying principles of machine learning are threatened (that sounds like a pretty insecure worldview on your part). They operate in a different context where you want to push the state of the art in machine learning algorithms, instead of just applying existing best practices to your specific problem.
> However you will always be several years behind the current state of the art
I agree with your post, but 99.9% of the people who will be applying ML via black-box algorithms in the next decade won't be participating in, or at all concerned with, the state of the art. In the same way that most of us aren't concerned about state-of-the-art chip design.
I can do a regression analysis with a couple of clicks in Excel. I need little knowledge beyond how to interpret the results. Sure, the underlying data might violate some assumptions, but that's rare (and there are tools for that). And let's face it, the most popular applications by amateurs will be marketing-related, not cancer-curing.
Actually a regression analysis is a great example of something people often use incorrectly.
I have a degree in stats, and someone at work who is self-taught from a 'use the tools' perspective was trying to use these frameworks to analyse some log file patterns. When I had a look at it, his results were coming back as statistically significant, but the data didn't look anything like a linear relationship and fitting a regression to it wasn't a valid move. That's a simplistic example, but even in the relatively simple realm of linear regression there are more difficult traps to spot, like heteroscedasticity or non-normal errors.
But nothing you've said is complicated enough that it can't be explained through simple instructions or conquered through better tools. That's leaving aside the fact that a little bias in the estimate isn't the end of the world if you're only trying to figure out who clicks ads, not doing medical research.
Believe me, I run into the same issues as well, having to state "You can't do that..." when I watch co-workers try to apply even simple tests. I just think we draw the cut-off line at different skill levels.
Basically: "instructions", that become more simple over time. There are some nuances to, say, R^2. But the concept that it's "how much variance is explained by the model" isn't difficult to comprehend...or apply.
Let me clarify that I'm not saying it's unimportant to understand the underlying mathematics behind these processes. After all, someone has to design these things so that the layman can actually apply them. What I, and it seems others, are arguing is that it isn't necessary to have a deep understanding of the algorithms to get insight from their usage. Some creative person creates the tool, and other creative people figure out its best uses. They are rarely the same people.
I'll add: I'm not sure why you're being downvoted. This community seems to be developing that bad habit of disagree = downvote.
That's a good point, there has to be a line drawn at some point, and that line probably depends on the user. It seems like documentation and communication are important for making that boundary a bit softer too. E.g. looking at the mathematical definition of R^2 isn't as immediately clear as describing it as "variance explained".
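As a small illustration of how close the two descriptions are (numpy assumed, numbers invented): the formal definition R^2 = 1 - SS_res/SS_tot is literally "one minus the fraction of variance the model fails to explain".

    # R^2 as "variance explained": 1 - SS_res / SS_tot.
    # numpy assumed; the observations and predictions are invented for illustration.
    import numpy as np

    y = np.array([3.0, 5.0, 7.0, 10.0])       # observed values
    y_hat = np.array([2.8, 5.3, 7.1, 9.8])    # model's predictions

    ss_res = np.sum((y - y_hat) ** 2)          # variation the model leaves unexplained
    ss_tot = np.sum((y - y.mean()) ** 2)       # total variation around the mean
    r_squared = 1 - ss_res / ss_tot
    print(round(r_squared, 3))                 # near 1: almost all variance explained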
It is in fact physically impossible to fully understand a modern desktop PC. The sheer number of lines of code involved makes it impossible to ever read them all, even if you had them. And that's just the software side. I doubt anyone can fully comprehend in detail everything that happens in a modern CPU, let alone the entire hardware system.
Computers are like cities, they can be managed effectively only by dealing with aggregates and abstractions. It's impossible for someone to know every tile in the sidewalk, but it is possible for them to effectively manage sidewalk repair if they have the right abstractions.
It may not be possible to conceptualize all parts of a system simultaneously, but that doesn't mean nobody can fully understand the system. A layer of abstraction isn't about not having to understand what's beneath it; it's a tool to aid in understanding.
No, I literally mean there's too much information involved to review in a human lifetime. You can, at some point, understand any given part of it, but a human life is too short to have understood every part of it at some point.
If you're just using the algorithm to suggest items customers may be interested in buying, the math might not be important because nobody loses much when a mistake is made. Sometimes the stakes are higher and it's important to know what's really going on. If you're using an algorithm to decide who to give a loan to, all those "unimportant" small details might turn out to be very important once it's too late.