
I wonder if we're using the same term to describe different things, since I'm not talking about the social stigma of owning an autonomous car.

I'm talking about social in the sense of reading the intentions of other drivers (e.g. joining a busy motorway, or deciding who goes first at a mini roundabout when all lanes have stopped and waited for each other) and other hazards (e.g. pedestrians crossing the road without looking properly because they're distracted by young kids, chatting to friends, or tapping on their phone).

As a driver you develop an intuition for how other drivers will react in different situations. Sometimes that's based on the speed of a vehicle, the angle of the car, or the driving style (e.g. are they bumper to bumper with the car in front?). Sometimes it's based on profiling/prejudice about people who drive certain brands of car (e.g. BMW drivers do tend to be pushier than, say, someone in a Fiat 500 - obviously this isn't always the case, but you might still be more cautious if you see someone approach a busy junction in a powerful car).

Another thing experienced drivers might react to is a series of brief brake lights ahead at roughly the same point: if cars aren't swerving around an obstruction, that might suggest a speed trap.

There have even been times on a motorway when a car has suddenly swerved in front of me, but I was already hovering over the brake just in case, because I was expecting it from a series of subtle cues I'd picked up from their driving style. I knew they were about to change lanes without checking their mirror before they'd even committed to the manoeuvre themselves.

There are so many hints and cues that drivers pick up from other drivers. So much non-verbal communication. And that's the hardest part of training an AI. Sure, you can teach one to react to hazards as they happen, but having it pre-empt hazards before they happen, without the algorithm reacting to every false positive, is a whole other level of engineering. It's something that takes humans literally years of daily driving to get good at, and we come pre-programmed with more sophisticated hazard detection before we even sit behind the steering wheel than AIs currently have.
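To make that false-positive tradeoff concrete, here's a toy sketch of anticipatory hazard detection framed as a thresholding problem. Every cue name, weight, and threshold below is invented for illustration - it's not how any real driving stack works:

    # Toy sketch: combine weak cues that a neighbouring car may cut in.
    # All cues, weights and thresholds are made up for illustration.

    def lane_change_risk(lateral_drift_mps, gap_closing_mps, indicator_on):
        """Crude weighted sum of weak evidence of an imminent lane change."""
        score = 0.0
        score += 0.5 * max(lateral_drift_mps, 0.0)  # drifting towards our lane
        score += 0.3 * max(gap_closing_mps, 0.0)    # closing on the car ahead of them
        score += 0.0 if indicator_on else 0.4       # no signal = less predictable
        return score

    # Lower threshold = earlier warnings but more false positives.
    PRE_BRAKE_THRESHOLD = 0.6

    def should_pre_brake(score):
        # "Hovering over the brake" is a cheap precaution, so it can
        # tolerate a modest threshold; actually braking would need
        # much stronger evidence.
        return score >= PRE_BRAKE_THRESHOLD

    # e.g. drifting 0.8 m/s towards us, closing 1.0 m/s, no indicator:
    # lane_change_risk(0.8, 1.0, False) -> 1.1 -> pre-brake

The point is that each response has a different cost, so each gets a different evidence bar - exactly the kind of cost-sensitive judgement human drivers make implicitly.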

So this is the social aspect I was referring to. And I'm sure autonomous cars will learn this in time too - or at least reach a close enough approximation that they're "good enough" for general-purpose driving.

edit: I should have added that one good thing about AI is that at least its reaction times are quicker. That somewhat reduces the significance of pre-emptively spotting likely hazards.
