
These are problems which are actually improved as more cars move to autonomous driving. A sensible government would mandate that manufacturers upload anonymised data about each car in real time, which would effectively allow for cross-talk between cars. This could enable automatic remedies to the above situations (car X tells car Y "I've got more room behind me, let me reverse"), but also prevent them in the first place: car X might pull over because it knows car Y is traversing that street. It would also help to prevent traffic jams, which would be a nice orthogonal benefit.



I see this pro cross-talk argument all the time... but wouldn't that leave open a big vulnerability for bad actors to send false data between cars and, if they should choose to, control multiple vehicles and cause collisions?


I would hope that the vehicles trust their own sensors before trusting the claims of other cars. The most a car should be able to lie convincingly about is its intentions, since those are unobservable. If car A says "I'm going to let you take this left" and car B tries taking the left and then car A tries to ram it, car B should ideally take evasive action (braking or speeding up), just as it would from a human driver randomly charging an intersection.

tl;dr: don't trust the client (or other drivers).
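A minimal sketch of that trust model, with a made-up track/claim structure (nothing here is a real vehicle API): our own measurements stay authoritative, a peer's broadcast can only add its unobservable intent, and even that is dropped if the peer's claims contradict our sensors.

    # Hypothetical sketch: own sensors are authoritative; peer claims
    # only contribute intent, and only while they stay consistent with
    # what we measure ourselves.
    from dataclasses import dataclass

    @dataclass
    class PeerTrack:          # what OUR sensors measured about the other car
        position: tuple
        speed: float

    @dataclass
    class PeerClaim:          # what the other car broadcast about itself
        position: tuple
        intent: str           # e.g. "yielding_left_turn" -- unobservable

    def assess(track: PeerTrack, claim: PeerClaim, tol_m: float = 2.0) -> dict:
        dx = track.position[0] - claim.position[0]
        dy = track.position[1] - claim.position[1]
        consistent = (dx * dx + dy * dy) ** 0.5 <= tol_m
        return {
            "position": track.position,    # never the claimed position
            "speed": track.speed,
            "peer_intent": claim.intent if consistent else None,
            "keep_evasive_options": True,  # intent is a hint, not a promise
        }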


Those are emergency situations, though. I'm talking specifically about a model for cooperative decision making between AIs, wherein each AI has access to data from both sets of sensors (in the most limited example) and is therefore able to make a superior decision to the one it could make from its own data alone.

Simple scenario to illustrate this: car X enters a narrow bidirectional road which is lined with cars. There is enough room only for one car to pass at a time, safely. Car Y enters from the other end. Several more cars enter behind car Y (we'll call these Y1, Y2, Y3, Y4).

One car must reverse back down the road from the midway point, in order for any cars to pass through.

Car X and its driver cannot see what is behind car Y, but for car Y to reverse it must rely on Y1, Y2, Y3, and Y4 reversing. In order for this to happen, Y4 must first reverse despite not being able to see why it needs to reverse (either through meat or tech sensors).

The optimal solution is that sensor data reviewed in the aggregate by each car leads to a cooperative decision that car X should reverse until it can pull over, and allow the other cars to pass before proceeding.
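A toy sketch of that aggregate decision, with an invented message format (just the queue length each side reports): once both sides share how many cars are stacked behind them, every car can run the same rule and agree on who backs out.

    # Toy sketch: each entrance reports its queue length; the shorter
    # column reverses. Deterministic tie-breaking means every car
    # running this on the same shared data reaches the same answer.
    def who_reverses(behind_x: int, behind_y: int) -> str:
        if behind_x < behind_y:
            return "X's side reverses"
        if behind_y < behind_x:
            return "Y's side reverses"
        return "tie: lowest car ID reverses"

    # The scenario above: nothing behind X, four cars (Y1..Y4) behind Y.
    print(who_reverses(behind_x=0, behind_y=4))  # X's side reverses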

In an emergency situation it's also possible to envisage scenarios in which shared data analysis and cooperative decision making are optimal. For example, consider cars X and Y now headed for a high speed, head-on collision. The right lane of the road (car X's right, car Y's left) is clear and the left lane (car X's left, car Y's right) is a deep trench. A primitive or non-collaborative AI might suggest that both cars swerve in the same direction to avoid collision, which results in a collision of similar magnitude. A better solution than swerving the two cars into each other might be to swerve one into the ditch and one into the clear lane. The optimal solution is likely to swerve only one car and hard stop the other, knowing that the other car is going to change its direction of travel significantly. This can only be done by giving the cars the ability to make decisions collaboratively.
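One way to sketch that collaborative choice (the cost numbers are invented purely for illustration): both cars score joint actions rather than individual ones, and a deterministic ordering guarantees they settle on the same pair.

    # Invented combined-harm scores for the head-on scenario above.
    JOINT_COSTS = {
        # (car_x_action, car_y_action): estimated combined harm
        ("swerve_right", "swerve_left"): 100,  # both take the clear lane: collide anyway
        ("swerve_left",  "swerve_right"): 90,  # both end up in the trench
        ("hard_stop",    "hard_stop"):    70,  # may not shed enough speed in time
        ("swerve_left",  "hard_stop"):    40,  # X in the trench needlessly
        ("swerve_right", "hard_stop"):     5,  # X takes the clear lane, Y stops
        ("hard_stop",    "swerve_left"):   5,  # Y takes the clear lane, X stops
    }

    def joint_plan():
        # Iterating in sorted order makes the choice deterministic, so
        # both cars running the same code pick the same joint action.
        return min(sorted(JOINT_COSTS), key=JOINT_COSTS.get)

    print(joint_plan())  # ('hard_stop', 'swerve_left')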


Those are valid concerns. For the emergency scenario, I have to wonder if it's possible to never trust another car to the point where it can induce a less optimal solution than an independent estimate could produce. For instance, if OtherCar says "Don't worry, I'll swerve into the ditch, just keep driving" and both MyCar and OtherCar keep driving, we both die. So maybe MyCar says "he said he'd swerve, but I'm going to hard brake instead of driving on", because impacting a ditch at speed x is roughly similar to getting hit while stopped at speed x, and that's the worst-case scenario for braking, whereas both cars moving at speed x towards each other is worse.

Not as tight an example as yours, I'm afraid, but humans make cooperative decisions all the time where we use information while remaining suspicious of it and hedging against lies. I think any realistic cooperative tech system with untrusted components needs to stay skeptical as well.
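That hedging can be made concrete as a worst-case (minimax) choice, again with invented numbers: MyCar scores each of its actions against both "OtherCar keeps its promise" and "OtherCar lied", then picks the action whose worst case is least bad.

    # cost[(my_action, other_car_actually_swerves)] -- invented numbers
    COST = {
        ("keep_driving", True):   10,  # promise kept: we pass cleanly
        ("keep_driving", False): 100,  # promise broken: head-on at speed
        ("hard_brake",   True):    5,  # promise kept: harmless stop
        ("hard_brake",   False):  60,  # hit while stopped: bad but survivable
    }

    def hedged_choice() -> str:
        # Minimax: judge each action by its worst case, not by the promise.
        return min(("keep_driving", "hard_brake"),
                   key=lambda act: max(COST[(act, True)], COST[(act, False)]))

    print(hedged_choice())  # hard_brake: worst case 60 beats worst case 100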


I completely understand what you mean and agree it's a valid concern.

I'm probably super naïve, but I think there's a way of digitally signing hardware components so that we can trust sensor data from other cars.
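For what it's worth, the sign/verify flow itself is standard; here's a minimal sketch using Ed25519 via Python's "cryptography" package. In a real car the private key would live in tamper-resistant hardware inside the sensor, and the public key would be certified by the manufacturer; this only shows the mechanics.

    # Minimal sign/verify sketch (Ed25519, "cryptography" package).
    import json
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    sensor_key = Ed25519PrivateKey.generate()   # stand-in for a hardware key

    reading = json.dumps({"car": "X", "pos": [40.7, -74.0], "speed": 12.5},
                         sort_keys=True).encode()
    signature = sensor_key.sign(reading)

    # Receiving car checks the reading against the sensor's public key:
    try:
        sensor_key.public_key().verify(signature, reading)
        print("reading authenticated")
    except InvalidSignature:
        print("rejected: not signed by a certified sensor")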


If cars upload their own data, as well as data about other cars detected in their environment (e.g. the positions of other detected cars), we could identify the suppliers of false or incorrect data by comparing data from multiple cars in each situation. It's not perfect, but it could be a step in the right direction.
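A rough sketch of that cross-check, on invented data: compare each car's self-reported position against where other cars say they saw it, and flag self-reports that disagree with the witness consensus.

    # Invented data/format: self-reports vs. positions seen by others.
    from statistics import median

    def flag_suspects(self_reports, observations, tol_m=3.0):
        suspects = []
        for car, (cx, cy) in self_reports.items():
            seen = observations.get(car, [])
            if len(seen) < 2:                 # need independent witnesses
                continue
            wx = median(x for x, _ in seen)   # consensus of witnesses
            wy = median(y for _, y in seen)
            if ((cx - wx) ** 2 + (cy - wy) ** 2) ** 0.5 > tol_m:
                suspects.append(car)
        return suspects

    print(flag_suspects(
        self_reports={"A": (0, 0), "B": (50, 0)},
        observations={"A": [(0.4, 0.1), (0.2, -0.3)],
                      "B": [(20, 0), (21, 1), (19, -1)]}))  # ['B']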


With this plan, couldn't the attacker just fake n cars, and thus your car would be determined to be the one that is uploading false information?


For one, sending false data != controlling other vehicles.

And on the other hand, there is plenty of opportunity to include encryption with gov/mfr-signed keys, cross-check received data against sensor values, etc.
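A sketch of the key-chain part, assuming Ed25519 and stripping away real-world details like X.509 certificates: a government root key certifies a manufacturer key, which certifies each car's key, and a receiver only needs to trust the root.

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    def raw(priv):  # raw public-key bytes: the thing that gets certified
        return priv.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

    gov = Ed25519PrivateKey.generate()   # trust anchor burned into every car
    mfr = Ed25519PrivateKey.generate()
    car = Ed25519PrivateKey.generate()

    mfr_cert = gov.sign(raw(mfr))        # government vouches for manufacturer
    car_cert = mfr.sign(raw(car))        # manufacturer vouches for this car

    msg = b'{"pos": [40.7, -74.0], "speed": 12.5}'
    sig = car.sign(msg)

    # Receiver walks the chain; any verify() raises InvalidSignature if forged.
    gov.public_key().verify(mfr_cert, raw(mfr))
    mfr.public_key().verify(car_cert, raw(car))
    car.public_key().verify(sig, msg)
    print("accepted: signature chains back to the government root")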

The benefits far outweigh the risks.


Yes. So should we not do it? Maybe the benefits outweigh the risks. Like almost every engineering decision ever, it's a trade off.


It can be signed. The real issue is going to be cars sold with a priority upgrade for important people, so the plebs must wait.


Also traffic would grind to a halt if the power went out on that block.


> I see this pro online banking argument all the time... but wouldn't that leave open a big vulnerability for bad actors to steal data and, if they should choose to, control multiple bank accounts and cause fraud?


It's part of the growing myth that is "self driving cars".

People seem to hand-wave and make up what these supposed future self driving cars will do - and worse, they hand-wave and assert they'll be objectively better at X than humans, without any evidence to back up the assertion.

Self driving cars are made by fallible humans using fallible programming languages and constructs. They can't possibly account for every situation or scenario - but people hand-wave and say they magically will.

Sure, one day you'll be able to sleep in the back seat of your car or read a book while it precisely weaves you between traffic only to navigate you right off a cliff. Or the neighbor's kid with a laser pointer prevents your car from turning into the driveway.

Driver-assisted cars are the real future.


> they'll be objectively better at X than humans, without any evidence to back up the assertion.

Google's road tested self driving car is already safer[1] than a human driving. Suggesting that a computer will be a more reliable processor of data and computer of maths than a human is not something which needs data to back it up. The ability of drivers is so variable and in the aggregate there are so many of them that it's almost self-evident that a self driving car which crosses a very low threshold for entry ("being road legal") will be better than a human.

> They can't possibly account for every situation or scenario - but people hand-wave and say it magically will.

Nobody is saying that they will, any more than people argue that autopilot on a plane will. It's very plain to see that right now, as of this second, there is a self-driving car which is safer than a human driver. It is not yet legal to buy, but that doesn't change the fact that it's safer. It may be that a bug crops up which kills a few people. But that doesn't make it less safe; it makes the cause of death for some of the users different to "human error".

[1]http://bigthink.com/ideafeed/googles-self-driving-car-is-rid...


It doesn't have to be infallible. It only has to be a single order of magnitude better than any human driver, and most people will start using it.

How many other sectors have abandoned human intervention once computers surpassed human performance?


The important question then becomes: is society OK with bugs and shortcomings in software and hardware killing people? (This is based on the assumption that even driverless cars will not be perfect; some people will still die on the road.)

So far, society seems not to be OK with this (as in, we'd rather a person do the killing, even if we think that killing was wrongful).

We aren't OK with autonomous robots having weapons, even though they might be objectively better at guarding prisoners and military bases, or at killing "bad guys" in bank robberies. We freak out when a fatality occurs at an automotive plant, and those robots only pivot in place!

If society is going to agree we're all OK with a bug left by some short-sighted engineer being responsible for people's deaths - then OK. However, I wager people aren't really OK with this, most just haven't really considered this aspect yet.


A lot of the backlash against autonomous weapon systems is fed by the last 50 years of sci-fi movies showing what might happen (however unrealistic). Self driving cars are a different thing, and there isn't really an equivalence.

Sure, there will be legal issues (in a crash, who is responsible: the driver, the manufacturer, or the programmers?), but they will get resolved with time and case law.

The economic advantages of self driving cars are huge (unless you drive for a living, but then progress is what it is). 35,000 people a year die on American roads; an order of magnitude improvement would save ~32,000 lives a year (and that's just accidents resulting in fatalities; many, many more people suffer life-changing injuries). This generation of drivers might not like it, but as the cars get better and better at driving themselves, the next generation will hand over more and more of the responsibility, until a human driving a car manually on the road looks like an anachronism.
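The arithmetic behind that estimate, spelled out (strictly it comes to 31,500; the ~32,000 above is a round-up):

    deaths_per_year = 35_000                # current US road fatalities
    with_autonomy = deaths_per_year / 10    # "order of magnitude" better
    print(deaths_per_year - with_autonomy)  # 31500.0 lives saved per year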

Also, people aren't ever going to be happy with a bug in hardware or software killing someone, but we are currently 'happy' to allow tens of thousands of people to die in car accidents. If the motorcar had been invented in 2000, many people would have wanted to ban it immediately.

"You want to operate a 2500KG metal box at 40mph in proximity to people!? oh hell no!"


There is no reasonable argument for preferring that more people should die as long as the agents of their deaths are the kinds of biological organisms we're used to. What we happen to be already accustomed to has no relevance in determining what we ought to do in the future, except in trivial cases where the different alternatives don't lead to widely distinct numbers of casualties.


Why make it centralized? I'm generally suspicious of "anonymized" datasets (in the abstract), since any usable data is probably enough to de-anonymize someone given a couple of other pieces of data. Forgive me if you didn't mean a centralized system, but I took "upload" and "anonymise" to suggest that.

In the case of cars, some local radio comms (UHF, wifi, or even bluetooth?) is probably sufficient, since there is no reason for a car in New York to care about the opinions of a car in San Francisco. You'd probably even see performance gains under a distributed system, since latency is effectively taken out of the equation (time of flight for local radio being effectively instantaneous).
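A sketch of the distributed version, with an invented broadcast format: the radio's physical range already scopes who hears you, and the receiver simply drops anything outside its relevance radius.

    import math

    RELEVANCE_RADIUS_M = 500.0   # invented cutoff; physics does most of this

    def relevant(my_pos, msg):
        dx = msg["pos"][0] - my_pos[0]
        dy = msg["pos"][1] - my_pos[1]
        return math.hypot(dx, dy) <= RELEVANCE_RADIUS_M

    inbox = [{"car": "Y", "pos": (120.0, 40.0), "intent": "reversing"},
             {"car": "Z", "pos": (9000.0, 0.0), "intent": "turning"}]
    nearby = [m for m in inbox if relevant((0.0, 0.0), m)]
    print([m["car"] for m in nearby])   # ['Y'] -- the far car is ignored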


Don't worry. It's not you, it's an anonymous person who leaves your house every morning and comes back after working in the same place you do. Nobody could ever associate that with you.


Oh good. I feel better now.



