
So both Nvidia and Tesla are working on self-driving cars based mainly on sensory data from cameras mounted on the car, which is then run through some number of RNNs to generate models to operate on? While Google pursues its LIDAR approach?

What other players are operating in this space? And what's their approach?
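For concreteness, here's a toy sketch of what I mean by "run through an RNN" (not the real Nvidia or Tesla pipeline; the feature sizes and weights are all invented): per-frame features go in, a recurrent state folds them over time, and a control value comes out.

  import numpy as np

  # Toy camera -> RNN -> control sketch (NOT an actual production
  # architecture; all shapes, names, and weights are made up).
  rng = np.random.default_rng(0)

  FEAT = 128   # hypothetical size of per-frame features from a vision frontend
  HIDDEN = 64  # hypothetical recurrent state size

  # Random weights stand in for a trained model.
  W_xh = rng.normal(0, 0.1, (HIDDEN, FEAT))
  W_hh = rng.normal(0, 0.1, (HIDDEN, HIDDEN))
  W_hy = rng.normal(0, 0.1, (1, HIDDEN))   # single output: steering angle

  def step(h, x):
      """One recurrent step: fold the current frame's features into the state."""
      return np.tanh(W_xh @ x + W_hh @ h)

  h = np.zeros(HIDDEN)
  for frame in range(10):            # pretend video stream
      x = rng.normal(size=FEAT)      # stand-in for CNN features of one frame
      h = step(h, x)
      steering = (W_hy @ h).item()   # regressed control output
      print(f"frame {frame}: steering = {steering:+.3f}")

The point is just that the state carries context across frames, instead of each frame being classified in isolation.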





The LIDAR idea seems better to me: the car can build an accurate 3D model of its surroundings. I'm sure Tesla's artificial vision is good, but it will probably get fooled too.


Correct me if I'm wrong, but I was under the impression that LIDAR-only doesn't work at all during heavy rain or snow. You'll probably always need some sensor fusion across a mix of optical/ultrasonic/radar/LIDAR sensors.
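By sensor fusion I mean something like this toy inverse-variance sketch (all numbers invented): each sensor reports a distance estimate plus an uncertainty, you weight each by the inverse of its variance, and rain just inflates LIDAR's variance so its vote counts for less.

  # Toy inverse-variance sensor fusion sketch (all numbers invented).
  def fuse(estimates):
      """estimates: list of (value, variance). Returns fused (value, variance)."""
      weights = [1.0 / var for _, var in estimates]
      total = sum(weights)
      value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
      return value, 1.0 / total

  raining = True

  camera = (25.3, 4.0)                          # meters, variance
  radar  = (24.8, 1.0)
  lidar  = (25.1, 9.0 if raining else 0.2)      # rain degrades LIDAR badly

  dist, var = fuse([camera, radar, lidar])
  print(f"fused distance: {dist:.2f} m (variance {var:.2f})")

Real systems use proper filtering (Kalman-style) rather than a one-shot weighted average, but the degrade-gracefully idea is the same.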


Makes sense to me. I wonder whether a camera would be fooled by a Bugs Bunny-style tunnel painted on a wall.


Pretty sure NVIDIA is using LIDAR with sensor fusion?




