
I completely disagree.

Yes, we need to quantify tradeoffs between models mathematically, but that does not require knowledge of the mathematics behind the models themselves. With cross-validation, I can estimate the effectiveness of many black-box models without looking inside them. This step is called error estimation, and it comes before model selection.
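To make that concrete, here's a minimal sketch with scikit-learn (the dataset, the gradient-boosting model, and cv=5 are placeholder assumptions of mine, not anything from the course):

    # Estimate a black-box model's error via cross-validation,
    # without ever looking inside the model.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score

    X, y = load_breast_cancer(return_X_y=True)
    black_box = GradientBoostingClassifier()  # any estimator would do here

    scores = cross_val_score(black_box, X, y, cv=5)  # 5-fold CV accuracy
    print(f"estimated accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")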

I can arrive at a pretty good model by a combination of correct methodology and brute force. It is this methodology that makes up much more of the overall picture. You could give me a black box, a rough range of parameters it takes, and I can tell you how likely it is to work well. This approach doesn't scale well to bigger problems, but I doubt tackling Big Data problems is the intention behind this course.
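As a sketch of what I mean by methodology plus brute force (the model and the parameter grid below are placeholders I'm assuming):

    # Treat the model as a black box: brute-force a rough parameter range
    # and let cross-validation judge each candidate.
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = load_breast_cancer(return_X_y=True)
    param_grid = {"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.001]}

    search = GridSearchCV(SVC(), param_grid, cv=5)  # CV-scored grid search
    search.fit(X, y)
    print(search.best_params_, search.best_score_)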


And I disagree with that.

Tuning parameters and selecting features requires (a) an understanding of the model(s) used and (b) an understanding of the data.

'Brute forcing' these steps can take time that grows exponentially (e.g., exhaustive feature selection over n features means evaluating 2^n subsets), which makes your approach not only very inefficient but also gives you no guarantee that you end up with a good model. It also makes sensitivity analysis very, very hard.
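A toy illustration of that blow-up (just counting subsets, nothing model-specific):

    # Exhaustive feature selection means evaluating 2^n feature subsets.
    for n in (10, 20, 30, 40):
        print(f"{n} features -> {2**n:,} subsets")
    # 10 features -> 1,024 subsets
    # 40 features -> 1,099,511,627,776 subsets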


>I completely disagree

Don't worry, as with anything, there's a certain subset of people who actually know the underlying principles behind a subject and for some reason feel threatened when those principles are abstracted away, as if their knowledge is now wasted. But that's the natural progression of things. Sorry.

It's funny that it happens in a community of programmers though, where half of the tools that are used every day are black boxes that few really understand. Like the computer itself.


And it's completely fine to be the developer who uses pre-made algorithmic blocks for their specific problem. However, you will always be several years behind the current state of the art.

For example, deep learning really revolutionized the state of the art in image recognition in 2012 by winning academic competitions. It took about 3-5 years for those deep learning algorithms to get productized into packages like tensorflow, with high-production tutorials and videos, so they became accessible to non-academics.

I don't think people who know the underlying principles of machine learning are threatened (that sounds like a pretty insecure worldview on your part). They operate in a different context, where you want to push the state of the art in machine learning algorithms instead of just applying existing best practices to your specific problem.


>However, you will always be several years behind the current state of the art

I agree with your post, but 99.9% of people who will be applying ML via black-box algorithms in the next decade won't be participating in, or at all concerned with, the state of the art. In the same way that most of us aren't concerned about state-of-the-art chip design.

I can do a regression analysis with a couple of clicks in Excel. I need little knowledge beyond how to interpret the results. Sure, the underlying data might violate some assumptions, but that's rare (and there are tools for that). And let's face it, the most popular applications by amateurs will be marketing-related, not cancer-curing-related.
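The programmatic version is just as short; a sketch with scipy, on made-up numbers:

    import numpy as np
    from scipy import stats

    x = np.array([1, 2, 3, 4, 5], dtype=float)
    y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])  # made-up data

    res = stats.linregress(x, y)
    print(f"slope={res.slope:.2f}, intercept={res.intercept:.2f}, "
          f"r^2={res.rvalue**2:.3f}")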


Actually, a regression analysis is a great example of something people often use incorrectly.

I have a degree in stats, and someone at work who is self-taught from a 'use the tools' perspective was trying to use these frameworks to analyse some log-file patterns. When I had a look at it, his results came out as statistically significant, but the data didn't look anything like a linear relationship, and fitting a regression to it wasn't a valid move. That's a simplistic example, but even in the relatively simple realm of linear regression there are more difficult traps to spot, like heteroscedasticity or non-normal errors.
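To make that trap concrete, here's a synthetic sketch of my own (not his actual data): fit a line to plainly nonlinear data and the headline statistics still look great.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 200)
    y = x**2 + rng.normal(0, 5, x.size)  # clearly nonlinear relationship

    res = stats.linregress(x, y)
    print(f"p-value={res.pvalue:.1e}, r^2={res.rvalue**2:.3f}")
    # The p-value is tiny and r^2 is high, yet a residual plot
    # (y - res.slope*x - res.intercept against x) shows an obvious U shape:
    # the fit is statistically "significant" and still the wrong model.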


>like heteroscedasticity or non-normal errors.

But nothing you've said is complicated enough that it can't be explained through simple instructions or conquered through better tools. And that's before considering that a little bias in the estimation isn't the end of the world if you're only trying to figure out who clicks ads, not doing medical research.

Believe me, I run into the same issues as well, having to say "You can't do that..." when I watch co-workers try to apply even simple tests. I just think we draw the cut-off line at different skill levels.


Once you treat a model as a black box, how can you hope to interpret it?


>how can you hope to interpret it?

Basically: "instructions", that become more simple over time. There are some nuances to, say, R^2. But the concept that it's "how much variance is explained by the model" isn't difficult to comprehend...or apply.

Let me clarify that I'm not saying it's unimportant to understand the underlying mathematics behind these processes. After all, someone has to design these things so that the layman can actually apply them. What I, and it seems others, are arguing is that it isn't necessary to have a deep understanding of the algorithms to get insight from their usage. Some creative person creates the tool, and other creative people figure out its best uses. They are rarely the same people.

I'll add: I'm not sure why you're being down-voted. This community seems to be developing the bad habit of disagree = down-vote.


That's a good point; there has to be a line drawn somewhere, and that line probably depends on the user. It seems like documentation and communication are important for making that boundary a bit softer too. E.g. looking at the mathematical definition of R^2 isn't as immediately clear as describing it as "variance explained".


It is in fact physically impossible to fully understand a modern desktop PC. The sheer number of lines of code involved makes it impossible to ever read them all, even if you had them. And that's just the software side. I doubt anyone can fully comprehend in detail everything that happens in a modern CPU, let alone the entire hardware system.

Computers are like cities, they can be managed effectively only by dealing with aggregates and abstractions. It's impossible for someone to know every tile in the sidewalk, but it is possible for them to effectively manage sidewalk repair if they have the right abstractions.


It may not be possible to conceptualize all parts of a system simultaneously, but that doesn't mean nobody can fully understand the system. A layer of abstraction isn't about not having to understand what's beneath it; it's a tool to aid in understanding.


No, I literally mean there's too much information involved to review in a human lifetime. You can at some point understand any given part of it, but a human life is too short to ever understand every part of it.

