Indeed! To LIDAR, she was basically standing in the lane, with a bulky bicycle. To visible light, including the driver, who was apparently half asleep or watching the dashboard, she was in shadow until just before the collision.
So yes, LIDAR should have caught this. Easily. So something was clearly misconfigured. And even if the driver had been carefully watching the road, he probably wouldn't have seen her in time.
But I wonder, is there a LIDAR view on the dashboard?
Right, so assuming LIDAR caught her: I'd imagine the algorithm presumed she wouldn't cross the center line until she actually did cross it. I don't know what the algorithm would do after that point.
Presumably the algorithm had a pretty good idea of where the lanes were, and if the LIDAR detected a non-moving object in an adjacent lane and decided it was fine to ignore it because it presumed it was never going to start moving, that's a pretty broken algorithm.
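To make that concrete, here's a toy sketch of why "ignore stationary things that aren't in my lane" is a broken threat filter. This has nothing to do with Uber's actual stack; the object fields, thresholds, and numbers are all made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    lane_offset: int              # 0 = ego lane, 1 = adjacent lane, ...
    speed_mps: float              # currently measured speed
    time_to_reach_path_s: float   # worst-case time to enter the ego path if it starts moving

def broken_filter(obj: TrackedObject) -> bool:
    # Drops anything not already in the ego lane and not already moving.
    return obj.lane_offset == 0 or obj.speed_mps > 0.5

def safer_filter(obj: TrackedObject, horizon_s: float = 5.0) -> bool:
    # Keeps anything that *could* enter the planned path within the horizon,
    # even if it is currently stationary (e.g. a pedestrian waiting to cross).
    return obj.lane_offset == 0 or obj.time_to_reach_path_s <= horizon_s

pedestrian = TrackedObject(lane_offset=1, speed_mps=0.0, time_to_reach_path_s=3.0)
print(broken_filter(pedestrian))  # False -> ignored until she starts moving
print(safer_filter(pedestrian))   # True  -> kept in the threat set
```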
I don't have the link handy, but I was reading a webpage yesterday (related, but not about this crash) which showed Google's self-driving car's "view" of a road scene - it clearly painted different-colored boxes around identified pedestrians, bicycles, and other cars - along with "fences" where it had determined it would need to slow or stop based on all those objects.
Either Uber's gear is _way_ less sophisticated (to the point of being too dangerous to use in public), it was faulty (but being used anyway, either because its self-test is also faulty, or because the driver/company ignored fault warnings) - or _perhaps_ Google's marketing material is faked and _everybody's_ self-driving tech is inadequate?
> Either Uber's gear is _way_ less sophisticated (to the point of being too dangerous to use in public)
I think this is a very good possibility, considering that autonomous vehicles are the goal of the company and they're racing to get there before they run out of investment money. They have a lot of incentive to take shortcuts or outright lie about their progress.
Looks like a Velodyne 64-based laser. It is virtually impossible for those not to be able to see the bicycle well in advance. Uber had a serious issue here. Something like:
1. System was off
2. Point clouds were not being registered correctly (at all!)
3. It was actually in manual mode -- safety driver didn't realize, or didn't react fast enough.
4. Planning module failed
5. Worst outcome in my opinion: point cloud registered correctly, obstacle map generated correctly, system was on, planner spit out a path, but the path took them through the bicycle (see the sketch after this list).
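For that last case, here's a minimal sketch of the kind of last-line sanity check that should make it impossible: reject any planned path that passes too close to an occupied point in the obstacle map. Purely illustrative; the function names and clearance value are assumptions, not anyone's real code:

```python
from math import hypot

def path_is_clear(path, obstacles, clearance_m=1.5):
    """path: list of (x, y) waypoints; obstacles: list of (x, y) detections."""
    for wx, wy in path:
        for ox, oy in obstacles:
            if hypot(wx - ox, wy - oy) < clearance_m:
                return False
    return True

path = [(0.0, float(i)) for i in range(20)]   # straight ahead, 20 m of waypoints
obstacles = [(0.3, 12.0)]                     # detected object sitting in the lane
assert not path_is_clear(path, obstacles)     # this planner output must be rejected
```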
The LIDAR data look pretty noisy, especially for distant objects. Couldn't they have filtered out the pedestrian, thinking she was a bush or something like that?
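That's what I could imagine happening with naive noise filtering: a filter that drops small LIDAR clusters will also drop distant pedestrians, because the number of returns on a person falls off roughly with the square of range. A toy illustration (not Uber's code; every threshold below is invented):

```python
def keep_cluster_naive(num_points: int) -> bool:
    return num_points >= 30            # fixed cutoff: silently drops distant, sparse objects

def keep_cluster_range_aware(num_points: int, range_m: float) -> bool:
    expected = max(5, int(3000 / (range_m ** 2)))   # crude expected-return model
    return num_points >= expected // 2

# A pedestrian at 60 m returning ~15 points survives the second filter but not the first.
print(keep_cluster_naive(15), keep_cluster_range_aware(15, 60.0))
```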
I get your concern, but I would probably reserve the word inadequate. If this is the only situation you have to worry about a self-driving car hitting and killing you in, and it's the only known data point at this time, some may consider that much more than adequate.
A website that "does something weird" when you use a single quote in your password... That _could_ be "the only situation you have to worry about". It is _way_ more often a sign of at least the whole category of SQLi bugs, and likely indicative that the devs are not aware of _any_ of the other categories of errors from the OWASP top 10 lists, and you should soon expect to find XSS, CSRF, insecure deserialisation, and pretty much every other common web security error.
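Concretely, the single-quote analogy: plain string interpolation both breaks on the quote and is injectable, while the parameterized query is fine. A toy sqlite3 example, not from any real site:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", "o'hara"))

pw = "o'hara"
try:
    # Naive string interpolation: the quote breaks the query (and is injectable).
    conn.execute(f"SELECT * FROM users WHERE password = '{pw}'")
except sqlite3.OperationalError as e:
    print("does something weird on a single quote:", e)

# Parameterized query: handles the quote correctly.
print(conn.execute("SELECT name FROM users WHERE password = ?", (pw,)).fetchall())
```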
If you had to bet on it - would you bet this incident is more likely to be indicative of a "person pushing a bicycle in the dark" bug, or that there's a whole category of "person with an object is not reliably recognised as a person" or "two recognised objects (bicycle and person) not in an expected place or moving in an expected fashion for either of them - gets ignored" bug?
And how much do you want to bet it's all being categorised by machine learning, so the people who built it can't even tell which kind of bug it is, or how it got it wrong? So they'll just add a few hundred clips of "people pushing bikes" to the training set and a dozen or so to the testing set and say "we've fixed it!"
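And a dozen test clips proves very little anyway. If the system passes all n = 12 held-out "person pushing a bike" clips, the exact (Clopper-Pearson) 95% lower confidence bound on its true recall for that category is only (0.025)^(1/12) - still consistent with missing roughly one in four such cases:

```python
n = 12
lower_bound = 0.025 ** (1 / n)   # exact lower bound when all n test cases pass
print(f"95% lower bound on recall after {n}/{n} passes: {lower_bound:.3f}")   # ~0.735
```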
If this is the only data point, then Uber self-driving cars are about 50 times more dangerous than average human drivers (see numbers quoted repeatedly elsewhere; Uber has driven about 2 megamiles; the human average is 100 megamiles between fatalities).
If that's your idea of adequate, you'd be safer just vowing to get drunk every time you drive from now on, since a modest BAC increases accident rates, but not by a factor of FIFTY!
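For the record, the arithmetic behind that factor of fifty, using the figures quoted above (one fatality in roughly 2 million Uber AV miles versus a human average of roughly 100 million miles per fatality - the thread's numbers, not independently verified):

```python
uber_miles_per_fatality = 2_000_000
human_miles_per_fatality = 100_000_000
print(human_miles_per_fatality / uber_miles_per_fatality)   # 50.0 -> "about 50 times more dangerous"
```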
I really don't bundle Tesla in with Waymo, Lyft, Toyota, and Uber, which are trying to build ground-up self-driving cars. Are Tesla actively testing self-driving cars on public roads yet? Are their included sensors even up to the task? I didn't think they even had LiDAR?
True, but this seems to be a simple case of reacting to a person who steps in front of the car. Automatic braking technology exists even on cars that aren't considered "self-driving".
It’s that last possibility that’s horrifying above all others. The backlash either way is going to be terrible, but if these cars are just not up to the task at all, and have driven millions of miles on public roads... people will lose their minds. Self-driving tech will be banned for a very long time, public trust will go with it, and I can’t imagine when it would return.
This is going to sound bad, but I hope this is just Uber’s usual criminal incompetence and dishonesty, and not a broader problem with the technology. Of the possible outcomes, that would be the least awful. If it’s just Uber moving fast and killing someone, they’re done (no loss there), but the underlying technology has a future in our lifetimes. If not...
Waymo actively test edge cases like this both in their test environments in the desert and via simulation, they have teams dedicated to coming up with weird edge situations like this (pushed bicycle) where the system does not respond appropriately so that it can be improved. All of these situations are kept and built up into a suite of regression tests. https://www.theatlantic.com/technology/archive/2017/08/insid...
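In spirit, a scenario regression suite looks something like this: every weird edge case that ever failed gets frozen into a named scenario and replayed in simulation on every build. A hypothetical sketch of the idea, not Waymo's actual framework; the scenario name, simulator stub, and clearance threshold are all invented:

```python
import unittest

def simulate(scenario: str) -> float:
    """Stand-in for a simulator run; returns minimum clearance to the pedestrian in metres."""
    # A real stack would replay logged or synthetic sensor data through
    # perception + planning and measure the outcome.
    results = {"pedestrian_pushing_bicycle_crossing_divided_road": 2.4}
    return results.get(scenario, 0.0)

class EdgeCaseRegression(unittest.TestCase):
    def test_pedestrian_pushing_bicycle(self):
        clearance = simulate("pedestrian_pushing_bicycle_crossing_divided_road")
        self.assertGreaterEqual(clearance, 1.5, "vehicle must yield or brake with margin")

if __name__ == "__main__":
    unittest.main()
```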
Not "center line", because this is a divided highway. So she had to cross two lanes from the median, in order to step in front of the Uber. "Human-sized object in the roadway" should have been enough to trigger braking, even if the trajectory wasn't clear.
Anything that is tracking an object moving on the road should be looking at the velocity of the scanned object, as well as keeping track of some sort of deviation from normal. I would think the car should know it's on a two-lane, one-way road, realize an object was moving in one lane with some velocity toward the path of the vehicle, and conclude that perhaps something was not normal.
From the reports of cars running red lights, and now this, I would imagine the level of "risk" they consider acceptable (what it takes for the car to take action to avoid something or to stop) is extremely high.
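The kind of check I mean is not exotic - something on the order of comparing when the object will enter our path with when we will arrive there, and braking if the two overlap. A toy sketch, with all geometry and numbers invented for illustration:

```python
def should_brake(lateral_offset_m, lateral_speed_mps, longitudinal_dist_m, ego_speed_mps, margin_s=1.0):
    if lateral_speed_mps <= 0:                     # not moving toward our path
        return False
    time_to_enter_path = lateral_offset_m / lateral_speed_mps
    time_to_reach_object = longitudinal_dist_m / max(ego_speed_mps, 0.1)
    # Brake if the object will be in our path at about the time we arrive.
    return abs(time_to_enter_path - time_to_reach_object) < margin_s

# Pedestrian one lane (3.5 m) to the side, walking across at 1.4 m/s,
# 30 m ahead of a car doing ~17.8 m/s (40 mph): 2.5 s vs. 1.7 s -> brake.
print(should_brake(3.5, 1.4, 30.0, 17.8))   # True
```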
What would be far worse than a hardware or sensor failure would be to learn that Uber is instead teaching its cabs to fly through the streets with abandon. Instead of having cars that drive like a nice, thoughtful citizen, we'll have a bunch of vehicles zooming through the streets like a pissed-off cabby in Russia.
> who was apparently half asleep or watching the dashboard
It is possible that a screen provided a clearer (somehow enhanced) view of the road, so I'm reserving judgment for now.
Of course using that screen could be a grave error if the screen relied on sensors that missed the victim. But if it appeared to be better than looking out of the windshield then that points to a process problem and not necessarily a safety driver inattention one.
He startles just before the collision, so anything he was watching on the dashboard arguably showed no more than the video that was released. But maybe the video camera had poor sensitivity at low light, and the driver could have seen her sooner, looking out of the windshield.
I'm not so sure that's just before the collision. The driver claimed that he didn't notice the pedestrian until he heard/felt the collision and it's not like the car hit a large object. I'm not convinced that he startled before the car hit the pedestrian.
Probably the LIDAR did catch it. Probably the algorithm (neural network) that takes in LIDAR data and outputs whether there is an object in front failed, or gave a really low probability that fell below the specified threshold. This happens all the time with deep neural networks.
In any case, the technology is to blame, and self-driving should be banned until these issues are resolved.
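That failure mode looks roughly like this: the detector emits a confidence score per candidate object, and anything below a fixed threshold is simply dropped, so a rare-looking object (a pedestrian pushing a bicycle at night) can vanish from the planner's world entirely. A toy illustration, not Uber's code; the labels, scores, and threshold are made up:

```python
DETECTION_THRESHOLD = 0.6   # invented value

detections = [
    {"label": "car",        "score": 0.92},
    {"label": "pedestrian", "score": 0.41},   # unusual pose + low light -> low confidence
]

# Everything below the threshold is discarded before tracking/planning ever sees it.
tracked = [d for d in detections if d["score"] >= DETECTION_THRESHOLD]
print(tracked)   # the pedestrian never reaches the planner, even though LIDAR returned points
```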
How would this have helped at 40MPH? The user would have milliseconds to react and hit the brakes. The point is that the car is self-driving. If a user has to watch a video display and intervene for every edge case it's more dangerous than just driving yourself.
From the released video, if LIDAR was including the entire roadway (all three lanes) there would apparently have been at least four seconds warning.
In production, having a LIDAR display would be pointless. But for testing, it might be useful. But maybe better would be to tell drivers to keep their eyes on the road.