
Classification is not the right metric to use here. Lidar doesn't classify the objects it's looking at; it just tells you their direction and distance.

Cameras can also gauge distance pretty effectively from parallax, either using multiple cameras, using the motion of the vehicle itself, or both. From that it should be possible to gauge where obstacles are and drive safely.
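The parallax idea boils down to stereo triangulation: a point's depth is inversely proportional to its disparity (the horizontal shift between the two views). A minimal sketch, with the focal length and baseline numbers purely illustrative:

```python
# Depth from stereo parallax: Z = f * B / d, where f is the focal length
# in pixels, B the baseline between cameras in metres, and d the disparity
# in pixels. The specific numbers below are hypothetical, not from any
# particular vehicle.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulated depth of a point matched across both camera views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# e.g. f = 700 px, B = 0.54 m, d = 35 px
print(depth_from_disparity(700, 0.54, 35))  # -> 10.8 (metres)
```

The same relation explains why stereo depth degrades with range: at large Z the disparity shrinks toward zero, so small matching errors translate into large depth errors, which is one place lidar keeps an edge.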

But NNs give the possibility of gathering much more information from recognizing objects. Information that Lidar systems don't have.




> But NNs give the possibility of gathering much more information from recognizing objects. Information that Lidar systems don't have.

Wouldn't you feed both the depth map from the lidar and imagery from the cameras into the neural network? I imagine that a variety of different sensors as input would make it easier to do classification. As an analogy, someone who has lost their sense of smell might have a harder time telling the difference between a clean sock and a dirty sock than I would.
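One common shape for that idea is "early fusion": project the lidar returns into the camera frame as a per-pixel depth map and stack it with the RGB image as an extra channel, so one network sees colour and geometry together. A minimal NumPy sketch, with shapes and names illustrative rather than taken from any real system:

```python
import numpy as np

# Hypothetical sensor frames: an RGB image and a lidar depth map that has
# already been projected into the same camera frame (both H x W).
H, W = 480, 640
rgb = np.zeros((H, W, 3), dtype=np.float32)          # camera image, 3 channels
lidar_depth = np.zeros((H, W, 1), dtype=np.float32)  # per-pixel range in metres

# Early fusion: concatenate along the channel axis; the classifier's input
# layer then takes 4 channels instead of 3.
fused = np.concatenate([rgb, lidar_depth], axis=-1)
print(fused.shape)  # (480, 640, 4)
```

The projection step (lidar points to camera pixels) is where the real work is; once both modalities live on the same grid, the fusion itself is just a concatenation.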

Please let me know if I'm wrong here, but I assume that the depth information that can be derived from parallax is not a superset of what you get from lidar (I'm thinking about low light, glare, objects with complicated geometries, similar-colored objects obscuring each other, etc).


You could do that. The advantage of using cameras alone is that they're cheaper and simpler, and they don't stop functioning in bad weather.



