So, both Nvidia and Tesla are working on self-driving cars based mainly on sensory data from cameras mounted on the car, which is then run through some number of neural networks to generate models to drive on? While Google pursues its LIDAR approach?
What other players are operating in this space? And what's their approach?
The LIDAR approach seems better to me: the car can build an accurate 3D model of its surroundings. I'm sure Tesla's computer vision is good, but it will probably get fooled at some point too.
Correct me if I'm wrong, but I was under the impression that LIDAR alone doesn't work at all in heavy rain or snow. You'll probably always need some sensor fusion across a mix of optical/ultrasonic/radar/LIDAR.
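To make the sensor-fusion point concrete, here's a toy sketch (my own illustration, not any vendor's actual stack): fuse distance estimates from several sensors with an inverse-variance weighted average, so a sensor that degrades, like LIDAR in heavy rain, modeled here as a blown-up variance, simply counts for less.

```python
def fuse(readings):
    """readings: list of (distance_m, variance) tuples -> fused distance.

    Inverse-variance weighting: more uncertain sensors get less weight.
    """
    weights = [1.0 / var for _, var in readings]
    total = sum(weights)
    return sum(w * d for (d, _), w in zip(readings, weights)) / total

# Clear weather: lidar is precise (tiny variance) and dominates.
clear = fuse([(50.0, 0.01),   # lidar
              (52.0, 1.0),    # camera
              (51.0, 4.0)])   # radar

# Heavy rain: lidar variance explodes, camera/radar take over,
# and the bogus lidar reading barely moves the fused estimate.
rain = fuse([(40.0, 25.0),    # lidar, degraded
             (52.0, 1.0),     # camera
             (51.0, 4.0)])    # radar
```

The variances here are made-up numbers for illustration; a real pipeline would use something like a Kalman filter with per-sensor noise models, but the principle is the same: no single sensor has to be trusted unconditionally.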