
> not with traditional feedforward networks (LeNet, etc.)

I'd argue they are implicitly doing this.

> You can't run the classifier and find tires, then wheels, and then a car;

Why can't you run a classifier for tires, one for wheels, one for cars, then combine their outputs in a final classifier, maybe based on a decision tree? You can train all the networks at the same time, and it will give you probability distributions for all four outputs (tires, wheels, cars, blended). What am I missing?
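To make the idea concrete, here is a minimal sketch of that composition step, assuming you already have three trained part detectors that each emit a probability. The detector functions and the averaging rule are placeholders I made up; the real "blended" stage would be a learned model (e.g. the decision tree mentioned above), not a hand-written threshold.

```python
# Hypothetical sketch: compose per-part classifier outputs into a final
# car/no-car decision. The three detectors are stubs standing in for
# trained networks; the combining rule is a naive average + threshold.

def detect_tires(image):
    # placeholder for a trained tire classifier; returns P(tires present)
    return 0.9

def detect_wheels(image):
    # placeholder for a trained wheel classifier
    return 0.85

def detect_car_body(image):
    # placeholder for a trained car-body classifier
    return 0.7

def is_car(image, threshold=0.6):
    """Combine part probabilities with a simple rule: call it a car
    when the average part evidence clears the threshold."""
    parts = [detect_tires(image), detect_wheels(image), detect_car_body(image)]
    score = sum(parts) / len(parts)
    return score, score >= threshold

score, car = is_car(None)
print(score, car)
```

The catch the replies below point at is exactly this combining stage: a fixed rule like the average here bakes in a brittle ontology (how many tires? attached to what?), and a learned combiner needs labeled data for every part.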




That would just be your opinion. It has not been shown; what neural nets are actually learning in their intermediate layers is still an open research question.

You're going to need large amounts of fine-grained labeled data for each category. You've also just manually committed to some sort of (brittle) object ontology. What if there are only 3 tires? What if there are four tires on the road but no car? There are all sorts of edge cases, and all you've done is train a classifier for cars, not actually solved driving in any meaningful way.


Doesn't scale. You don't have N brains to compose every representation.



