The emergency braking system was likely turned off because it had triggered too many false positives. (Abruptly braking too often also makes you a dangerous driver.)

So, out of necessity, these cars need to be programmed to ignore pedestrians who are unlikely to cross into the path of the vehicle.
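The knob being tuned is essentially a confidence/reaction-time tradeoff. A minimal sketch (pure illustration; the function, signals, and thresholds are all invented, not anything from Uber's actual stack):

    # Hypothetical emergency-braking trigger. Raising min_confidence cuts
    # false-positive hard stops, but it also delays or suppresses braking
    # for real pedestrians -- the exact tradeoff described above.
    def should_emergency_brake(detection_confidence, time_to_collision_s,
                               min_confidence=0.9, max_ttc_s=2.0):
        return (detection_confidence >= min_confidence
                and time_to_collision_s <= max_ttc_s)

Every false-positive incident pressures min_confidence upward, and every increment quietly trades away reaction time on the true positives.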

This is a tough problem because the AI has to identify the human (it seems it did in this case) AND the human's intent. I've seen two recent videos of Teslas abruptly stopping for humans on the side of the road. One was actually a model on a bus stop poster, and the other clearly had no intention of crossing the street and angrily motioned the Tesla on.

Imagine driving by a group of joggers on the side of the road. Who can you pass safely? Who may stumble a bit into your path? Who will prepare to cross but first wait for you to pass by? Who will try to cross in front of your path? The micro-decisions and predictions made are very challenging to get right.
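For a flavor of what even a crude version of that prediction involves, here's a toy crossing-risk score (every feature and constant is invented; real systems use learned models over tracked trajectories, not hand-tuned geometry):

    import math

    def crossing_risk(lateral_offset_m, heading_rad, speed_mps):
        # heading_rad: 0 = moving parallel to the road,
        #              pi/2 = moving straight toward the lane.
        lateral_speed = speed_mps * math.sin(heading_rad)
        if lateral_speed <= 0:
            return 0.0                       # parallel or moving away
        time_to_enter_s = lateral_offset_m / lateral_speed
        return math.exp(-time_to_enter_s)    # sooner entry -> higher risk

    print(crossing_risk(1.5, 0.0, 3.0))  # jogging parallel      -> 0.0
    print(crossing_risk(1.5, 0.3, 3.0))  # drifting toward lane  -> ~0.18
    print(crossing_risk(2.0, 1.2, 2.0))  # turning to cross      -> ~0.34

Notice how much signal this ignores: gaze, posture, a glance over the shoulder. That's why intent prediction ends up a learned-model problem rather than a rule-writing one.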

> The micro-decisions and predictions made are very challenging to get right.

Yeah, they really are. So... maybe they just shouldn't have self-driving cars until they can figure that stuff out?

This is where Waymo uses deep learning models that can predict the future behavior of other road users in real-time. They are hiring: https://waymo.com/joinus/1235933/
This kind of false-positive problem is par for the course in AVs, and it was probably the case for Uber too. But hardcoding an "action suppression" heuristic on top of it is horrifically negligent, IMO. (I work on ML systems for an AV company.)
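For anyone unfamiliar with the term: an action-suppression layer is a hardcoded filter between the planner and the actuators that holds an emergency command back for a fixed window. A minimal sketch of the concept (the names and the one-second window are illustrative, not Uber's actual parameters):

    import time

    class ActionSuppressor:
        # Illustrative hardcoded filter between planner and actuators.
        def __init__(self, suppress_s=1.0):
            self.suppress_s = suppress_s
            self._first_request = None

        def allow_brake(self, brake_requested, now=None):
            now = time.monotonic() if now is None else now
            if not brake_requested:
                self._first_request = None   # hazard gone; reset the timer
                return False
            if self._first_request is None:
                self._first_request = now    # hazard first seen
            # Hold the command until the hazard has persisted for
            # suppress_s, trading reaction time for fewer false stops.
            return now - self._first_request >= self.suppress_s

At 40 mph, a one-second hold means roughly 18 meters traveled before the brakes even begin to engage.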