
> There are too many problems with the AV industry to detail here [...] The biggest, however, is that supervised machine learning doesn’t live up to the hype [...] It’s widely understood that the hardest part of building AI is how it deals with situations that happen uncommonly, i.e. edge cases. In fact, the better your model, the harder it is to find robust data sets of novel edge cases. Additionally, the better your model, the more accurate the data you need to improve it. Rather than seeing exponential improvements in the quality of AI performance (a la Moore’s Law), we’re instead seeing exponential increases in the cost to improve AI systems

This is exactly the problem with data-hungry machine learning approaches, specifically deep learning (and that's without even mentioning the compute resources needed for training). The only plausible way to circumvent that is to apply better inductive biases, and to fundamentally rethink what the field considers important.
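
To make "inductive bias" concrete, here's a toy sketch (my own NumPy example, not anything from the article): a convolution hard-codes translation equivariance, so a shifted input just produces a shifted output. The model gets that property by construction instead of having to learn it from data, which is exactly what a fully connected layer would have to do.

    import numpy as np

    rng = np.random.default_rng(0)
    kernel = rng.normal(size=3)   # stands in for a learned filter
    x = rng.normal(size=16)       # a 1-D input signal

    def conv1d(signal, k):
        # valid-mode convolution: the same fixed function applied at every position
        return np.convolve(signal, k, mode="valid")

    out = conv1d(x, kernel)
    out_shifted = conv1d(np.roll(x, 2), kernel)

    # Away from the wrap-around boundary, shifting the input by 2
    # just shifts the output by 2 -- equivariance for free, no data needed.
    print(np.allclose(out[:-2], out_shifted[2:]))  # True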


I think the obvious problem is that induction (which is what learning from data is) is only one tool in the huge space that is intelligence, and it will never be enough to emulate the skill of a human driver, which is more or less what autonomy in an open environment requires.


The "induction" that machine learning algorithms do also isn't the same as the induction that humans preform. We induce new concepts from experience (data) -- The description itself assumes consciousness in both "concepts" and "experience".

Thinking of computers as getting more "intelligent" like humans is a category error -- computers are dumb matter configured in an intelligent way by actual intelligence (humans) to perform certain tasks for us. We get better at telling them how to perform those tasks (software), but there's no reason to think we're moving along some continuum of intelligence towards us.


> We induce new concepts from experience (data)

That's the premise of deep learning - inducing high-level concepts from experience, without manual feature engineering.

DL models are good at induction; what they can't handle is generalisation (staying accurate out of distribution). And self-driving has a long tail, which is why it's so hard.
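
A toy illustration of that failure mode (my own sketch, with an assumed setup, not anything from the thread): a small net fits sin(x) fine inside its training range and falls apart the moment inputs leave it.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    x_train = rng.uniform(-3, 3, size=(2000, 1))  # the training distribution
    y_train = np.sin(x_train).ravel()

    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                       random_state=0)
    net.fit(x_train, y_train)

    # In distribution the fit is close; at x = 9 a ReLU net typically
    # extrapolates linearly and lands nowhere near sin(9).
    print(net.predict([[1.5]]), np.sin(1.5))  # near-identical
    print(net.predict([[9.0]]), np.sin(9.0))  # usually far apart

Driving is that same problem at scale: the long tail is one out-of-distribution scene after another.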


> and it will never be enough to emulate the skill of a human driver

It may never be enough to beat the best human drivers, but it only needs to beat most human drivers to be worth it. We're not that far off.

Your objection reminds me of the skeptics of spell check and grammar check. In principle, a perfect spell and grammar checker would also need general AI to fully understand a language and what you're intending to express. Fortunately, imperfect spell and grammar checkers are all that most of us need.

