
This is the LIDAR sensor being used (virtually all SDC companies use this LIDAR, because it's the most advanced option out there for 360-degree 3D coverage):

http://velodynelidar.com/hdl-64e.html (note that this LIDAR is expensive - it costs way more than the car it's mounted on)

Here's the manual for it - note the specs in the back:

http://www.velodynelidar.com/lidar/products/manual/HDL-64E%2...

It has 64 lasers, spread out over about 27 degrees of vertical field of view - about 0.4 degrees between adjacent lasers - from just above horizontal down to an angle of 24 degrees or so below it. Now take a look at where it is mounted on the car, and envision those laser beams fanning out vertically while the whole unit spins, sweeping a stack of cones around the car.
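
To make that geometry concrete, here's a back-of-the-envelope sketch in Python. The numbers are my own assumptions, not Uber's: beams evenly spaced from +2 to -24.8 degrees (per the spec sheet), and a roof mount roughly 1.8 m above the road:

    import math

    SENSOR_HEIGHT_M = 1.8              # assumed mount height, not a published figure
    TOP_DEG, BOTTOM_DEG = 2.0, -24.8   # vertical FOV from the HDL-64E spec
    angles = [TOP_DEG + i * (BOTTOM_DEG - TOP_DEG) / 63 for i in range(64)]

    for a in (angles[0], angles[31], angles[63]):
        if a < 0:
            # a downward beam traces a circle on flat ground at this radius
            radius_m = SENSOR_HEIGHT_M / math.tan(math.radians(-a))
            print(f"{a:6.1f} deg beam sweeps a ground circle ~{radius_m:.1f} m out")
        else:
            print(f"{a:6.1f} deg beam stays above flat ground")

Each downward beam sweeps its own ring around the car (roughly 4 m out for the steepest beam, 9 m for a middle one); the near-horizontal beams are what's left for catching obstacles farther away.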

Now - if you think about it - as the distance from the sensor increases, those beams spread further apart vertically. I'd be willing to bet that at about 200 feet from the car, only a handful of the beams would hit a person and reflect back (rough numbers in the sketch below). Also - take a look at the reflectance data in the spec. Not bad... but imagine you are wearing a fuzzy black jacket on your top half. How much of that laser light comes back now?
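
Here's that bet in rough numbers (again my own toy assumptions: ~0.43 degrees between adjacent beams, a 1.7 m pedestrian, and 200 feet is about 61 m):

    import math

    BEAM_SPACING_DEG = 26.8 / 63    # ~0.43 deg between adjacent beams
    PERSON_HEIGHT_M = 1.7           # assumed pedestrian height

    for dist_m in (10, 30, 61):     # 61 m is roughly 200 feet
        gap_m = dist_m * math.tan(math.radians(BEAM_SPACING_DEG))
        beams = PERSON_HEIGHT_M / gap_m
        print(f"{dist_m:3d} m: ~{gap_m:.2f} m between beams -> "
              f"~{beams:.0f} beams across a standing person")

Run that and at ~200 feet you're down to roughly four beams across the entire person - and a dark, fuzzy jacket could eat some of those few returns too.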

What do you think the point cloud returned to the car is going to look like? Will it look like a human? Hard to say - but if you feed that into a classifier algorithm, there's a real possibility it won't identify the blob as a "human" worth slowing down for. Especially once you add some bags, an unusual gait, plus the bicycle behind the person. All of that uncertainty adds up.
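
As a purely illustrative sketch (this is not Uber's pipeline - the clustering choice and the 30-point threshold are made up), here's how a typical "cluster first, classify later" design can silently drop a sparse blob before any classifier even sees it:

    import numpy as np
    from sklearn.cluster import DBSCAN   # a common clustering choice, assumed here

    MIN_POINTS_FOR_CLASSIFIER = 30       # invented threshold, for illustration only

    def candidate_clusters(points_xyz):
        """Cluster raw LIDAR returns; only dense blobs reach the classifier."""
        labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(points_xyz)
        for label in set(labels) - {-1}:          # -1 is DBSCAN's noise label
            cluster = points_xyz[labels == label]
            if len(cluster) >= MIN_POINTS_FOR_CLASSIFIER:
                yield cluster                     # worth asking "is this a human?"

    # Toy usage: 8 returns stacked like a distant pedestrian hit by a few beams
    sparse_person = np.array([[30.0, 0.0, z] for z in np.linspace(0.2, 1.6, 8)])
    print(list(candidate_clusters(sparse_person)))   # -> [] : blob never classified

The point isn't this specific algorithm - it's that any detection pipeline has some "too few points to bother" floor, and a distant pedestrian in dark clothing can live right around it.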

I am also willing to bet that only the LIDAR was used for collision detection (beyond the radar on the unit). Any cameras - even IR-based ones - would likely be used only for lane keeping and following, plus traffic sign identification, and maybe "rear of vehicle" detection. Ideally they would be used for person/animal identification and classification too - but given the camera sensor, whatever the IR sensor did or didn't see, and the weird lighting conditions, who knows how it would have classified that mix?
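
If that guess about the architecture is right, the braking decision might reduce to something like this (entirely hypothetical logic, just to crystallize the concern):

    def should_brake(lidar_obstacle: bool, radar_obstacle: bool) -> bool:
        # In this hypothetical wiring, cameras feed lane keeping and sign
        # recognition only - they never get a vote on collision avoidance.
        return lidar_obstacle or radar_obstacle

    # A weak LIDAR return plus a radar that saw nothing means no braking,
    # no matter what the cameras captured.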

Lots of variables here - lots of "ifs" too. All we can do is speculate, because we don't have the raw data. Uber would do well to release the entire raw dataset from all the sensors to the community and others to look over and learn from.

Finally - I am not an expert on any of this; my only "qualifications" on this subject are having taken and passed a couple of Udacity MOOCs - specifically the "Self-Driving Car Engineer Nanodegree" program (2016-2017) and their "CS373" course (2012). Both were very enlightening and educational, but can only really be considered an introduction to this kind of tech.
