“ if a car detects a sudden obstacle—say, a jackknifed truck—should it hit the truck and kill its own driver, or should it swerve onto a crowded sidewalk and kill pedestrians? A human driver might react randomly (if she has time to react at all), but the response of an autonomous vehicle would have to be programmed ahead of time. ”
I hate this logic. No, it doesn't need to be programmed ahead of time. It should try to avoid crashing. It shouldn't try to solve a philosophical problem. We don't want to require our cars to observe their surroundings and count the things identified as humans in order to minimize expected loss of life. That's difficult and error prone.
If you're going down a street and something jumps in your way, make the program try not to hit anything. Ideally by braking, because you shouldn't be going so fast that you can't stop, given your environment.
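For concreteness, here's a rough back-of-the-envelope sketch in Python of what "not going faster than you can stop" works out to. The friction coefficient and detection latency are illustrative assumptions, not numbers from any real system:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def max_safe_speed(clear_distance_m, friction_coeff=0.7, latency_s=0.5):
    """Largest speed (m/s) at which the car can still stop within the
    distance it can currently see to be clear, assuming constant
    deceleration of friction_coeff * G after a fixed detection latency.

    Solves  v * latency + v^2 / (2 * mu * g) = d  for v.
    """
    mu_g = friction_coeff * G
    a = 1.0 / (2.0 * mu_g)   # coefficient on v^2 in the quadratic
    b = latency_s
    c = -clear_distance_m
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

# e.g. 50 m of clear, dry road ahead -> roughly 23 m/s (~83 km/h)
print(round(max_safe_speed(50.0), 1))
```

Keep speed under that bound and "maximum effort braking" handles a sudden obstacle without any trolley logic.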
I agree with you that this example is so theoretical as to be useless. In addition, focusing on this masks some other more relevant ethical/legal questions.
1. If the car sees a police car flash its lights for it to pull over, should it pull over unconditionally, not allowing the driver to override?
2. If there is an Amber Alert broadcast specifying the car, should it automatically try to alert the authorities to its location and/or flag down the nearest police vehicle?
3. Should a car be able to be programmed to automatically avoid going to places that would violate a restraining order on its owner or occupant?
4. If the owner/occupant experiences an emergency (heart attack, impending childbirth, etc.), should the car have a button that allows it to break traffic laws to try to get to the hospital faster? For example, at 2 AM on a deserted street, do you really need to wait out the full red light if you can see that there are no other vehicles?
We have the capability right now to make cars that enforce speed limits on their drivers.
We do, in fact, make cars with audible / visible / tactile alerts when the driver exceeds the speed limit for the road they are on, but we seem to have no appetite for buying cars that take the final step to actually automatically brake, never mind requiring all cars to do this.
I take this as a very strong indication that overriding driver preferences as in 1-3 will not happen any time soon.
Vehicles with telematics are broadcasting their location anyway, and pretty much all recent vehicles have telematics.
Most restraining orders aren't worded in a way that could easily be encoded beforehand; they could include language restricting autonomous vehicles...
For the emergency situation, if there is a button, it could just interact with the traffic control systems. If there's no traffic at the light, the light could let the car through.
I would not buy a car that acts against my own interests, or allows the government to deprive me of freedom (the first 3). I don't have a meaningful opinion on #4.
Should a car generally prioritize the survival of its occupant or of a pedestrian? Not answering this is the same as answering it with 'surprise me'. And either answer is wrong.
I really don't think so. It's ignored because it essentially never happens. The odds that you'll ever be presented with such a scenario are so low that it's pointless to spend any time on it. The odds that you'll be presented with such a scenario and the obvious best answer is something other than "maximum effort braking" are even lower.
I think it gets traction with self-driving cars because people, stupidly, expect computers to make perfect decisions 100% of the time. This idea runs into a logical inconsistency when presented with a scenario in which there is no perfect decision. Rather than confront the fact that it's unreasonable to expect perfect decisions 100% of the time, people try to come up with a way to declare a perfect decision in a no-win scenario.
> I think it gets traction with self-driving cars because people, stupidly, expect computers to make perfect decisions 100% of the time
I don't think it's that people expect perfect decisions.
You are right that it essentially never happens with humans. But why doesn't it happen with humans? I think it is because if we are driving down a highway at high enough speed for this issue to arise, we probably aren't going to be aware of what is in our swerve path, and even if we are we probably don't have the time to recognize that we have to make a choice, nor the processing power to make such a choice.
Hence, there isn't really any need to consider this issue with human drivers because humans cannot make a decision in such a situation.
Self-driving cars, on the other hand, should have the sensors, the attention, and the processing power to take into account everything to the side of the road in addition to what is on the road. With them, unlike with human drivers, it is actually possible to make a decision in these situations.
I think it is getting traction simply because with self-driving cars, unlike with human driven cars, it is actually a meaningful question. It's meaningful even if you assume that computers don't make perfect decisions--they at least have the time and data to make a decision, unlike humans.
This sort of thing can happen to humans in a way where they have time to make a choice. A stuck throttle with brakes that can't overcome it could get you into a situation like that, for example. You're right that computers would be able to make that kind of decision in a much wider range of scenarios. However, computers will also be much less likely to get into those scenarios in the first place. I'm not at all convinced that the net result is computers encountering situations where they can make a choice more often than humans do. I think both will be so rare that any effort spent on them would be better spent on avoiding crashes altogether.
We spend a hell of a lot of time in court deciding what was right and wrong, and when we should brake or swerve. But we have the category of 'mistake' that the program will not be able to hide behind.
Years down the line, a brand of self-driving cars will market themselves as the safest car you can buy, precisely because they prioritize the occupants of the car.
That would be saying that it will aim for pedestrians. If someone designs a car with that intent, then yeah, it will either be regulated or sued out of existence.
(In this simple case) the vehicle is currently traveling toward a stationary object. It has two sensible courses of action (it could also just continue at full speed, but I'll ignore that): A) attempt to stop before colliding with the object, or B) swerve around it. As stated, under A the driver faces a potentially increased chance of death or injury, and under B a third party does.

Now, defaulting to A makes a statement of priorities: of all the unknowns, the unknowns about the driver (and passengers) and the outcomes that might result from A are judged acceptable relative to the alternative of B. If that were not so, we could just as easily say B is equally acceptable, since we can no more guarantee a reduced risk of injury or death for one party than for the other in either case, and the two scenarios would be interchangeable. But most people are unwilling to weight B as equally acceptable as A and, say, flip a coin (or its electronic equivalent), so it's clear that a judgement call is being made, based on values, about what is to be prioritized in this and equivalent situations. [The problem is that people take this specific instance of a trolley-like problem, dismiss it as an abstract construction, and then dismiss equivalent problems regardless of their real-world applicability.]
And if the purpose of autonomous vehicles is, as is often stated, to reduce the deaths (40,100 in 2017 in the US) and injuries tied to such scenarios and vehicles, then we need at least a tacit understanding of what we are implicitly prioritizing, if nothing else. However, if the point is just to apply an ideology in the form of a technology, until everything in our lives is computerized and considered in terms of computation, or if (as I've talked about before on here) it's about removing the need for God and Man to make the choice, then avoiding such questions is the very (inherently unstated) goal, since we can neither admit to wanting to create the black box nor admit that it is one.
The only way to truly not make the decision is to actually not make it, that is, for the motor vehicle to never be traveling in the first place.
Humans are advised not to swerve around things, as a general rule, so I imagine similar logic works for self-driving cars. You're almost always better off -- for everyone concerned -- to dynamite the brakes and hope for the best. Swerving is a good way to lose control.
In any case, you lost me when you got to "the driver has a potential for an increased chance of death." How exactly do you propose a computer will calculate this? Even today's best supercomputers would take multiple orders of magnitude longer to model the odds than it would require to act on them, and even then it would just be a guess. Hell, we judge car safety today by ramming them into objects repeatedly to see how they perform on average, and it still has only a vague relationship to what might happen to you in an actual collision.
My point is that it is 100% unfeasible for the computer to model these probabilities, and I see no reason to think it will ever be feasible. Even without getting into the weird things like "what if the pedestrian is 99 years old, or a 20 year old pregnant woman?" the complexities are basically infinite.
So the only correct answer I see is for the computer to do exactly what we would expect a competent driver to do today: stop the car as quickly as possible. And assuming it's not overdriving its sensors (something humans are not supposed to do either), then that should work damn near 100% of the time.
>My point is that it is 100% unfeasible for the computer to model these probabilities
I would agree.
But my point is that making the decision, and then trying to say we're not making a decision, is disingenuous. So the question becomes: disingenuous to what end?

We are making statements about what is or isn't permissible. Under ordinary human laws, we (generally) find it impermissible for someone in such a situation to save their own life by ending someone else's, in the same way that if A puts a gun to B's head and says "shoot C or die," B shooting C would be regarded in most jurisdictions as murder, regardless of any part A plays in the scenario. Replicating these laws in machines is perfectly fine. They, like us, have no chance of fully understanding all the consequences of anything either of us experience, so they can probably only operate in a pragmatic, functional fashion.

However, in building a machine that replicates what is permissible in this way, as already applied to humans, we must also admit that we are instantiating in hardware a set of rules that will, in certain circumstances, kill or injure people one way or the other. We do this for the purpose of also preventing such harm where it is impermissible (and perhaps even reducing it overall). Yet it still remains that we must construct the thing that does this; we must enact, in a very precalculated fashion, a predefined, concrete expression of what is permissible and therefore of what is not. And this, I will contend, is the heart of the argument and the reason the debate is as contentious as it is: we very much do not want to admit to that expression. Only by declaring the question unanswerable, and then answering it anyway, can we avoid that admission.
There is no choice. The car cannot calculate the probabilities, which is a prerequisite to being presented with a choice. No question, no choice, no decision. And nobody is going to try too hard to make it feasible, either, because the answer to that hypothetical question would be the same -- occupants and outsiders will both be safer if the car stops moving. So ... just stop the car.
The result of your choice to brake, or 'just stop', is that there will be situations where passengers die to spare real or theoretical pedestrians. As soon as this is established in the media, sales of these cars will plummet, and your decision will have the repercussion of increasing the number of human-caused accidents across the board.
> The car cannot calculate the probabilities
I think we may be talking about different things. Probabilities are the foundation of self-driving algorithms; there is never a 100% right answer, just adjustments and corrections, calculations that lead to percentages that lead to choices. Maybe you are thinking about the older cruise control (like in some BMWs), which tried to anticipate things on the road and would brake faster than a human could; it saved a few bumps and caused a few..
Fundamentally irrelevant. It should prioritize not crashing. If it is in a situation where the only possible outcomes involve crashing, it should try to avoid crashing anyway.
Think about how stupid it is to program trolley problem logic. It suggests you program the car to purposefully crash into something. We will be absolutely fine just trying not to crash.
That means the car is constantly thinking to itself (“should I give up and just choose what to hit instead?”). No thanks.
Aside from a plane dropping out of the sky onto the road directly ahead, would a sufficiently well-implemented self-driving car ever end up in such a scenario?
The most basic rule of driving is to adjust your distance & speed to the surroundings/conditions. You should always be able to come to a complete stop without hitting anything. Obviously we humans suck at taking everything into account, but would the same hold true for self driving cars?
Swerve (with a 60% chance of survival) or brake (with a 40% chance of survival)? This may sound like sci-fi, but it's simple math if you know the distance to the hazard, the coefficient of friction, and the speed. And if you can crunch numbers really fast, these percentages become part of the decision process. One option puts the public at risk and the other puts the driver at risk. Now even if you had no more information and a utilitarian programmer decided to go with the numbers and swerve, swerving off-side will generally endanger the driver and swerving near-side will generally endanger the public.
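For what it's worth, the kinematics really are easy to crunch. A minimal sketch in Python, assuming a single pure maneuver, a guessed friction coefficient, and a made-up lane offset (real planners blend braking and steering, so treat this as illustration only):

```python
G = 9.81  # m/s^2

def can_brake_in_time(speed_ms, distance_m, mu=0.7):
    """Straight-line braking stops before the obstacle if v^2 / (2*mu*g) <= d."""
    return speed_ms ** 2 / (2 * mu * G) <= distance_m

def can_swerve_around(speed_ms, distance_m, lateral_offset_m=2.0, mu=0.7):
    """Pure swerve: the lateral acceleration needed to move sideways by
    lateral_offset_m before reaching the obstacle must fit within the
    friction budget, i.e. 2*w / t^2 <= mu*g with t = d / v."""
    t = distance_m / speed_ms
    return 2 * lateral_offset_m / t ** 2 <= mu * G

v, d = 25.0, 40.0                 # ~90 km/h, obstacle 40 m ahead
print(can_brake_in_time(v, d))    # False: stopping needs ~45.5 m
print(can_swerve_around(v, d))    # True: only ~1.6 m/s^2 of lateral acceleration
```

Which is exactly why the off-side/near-side question can come up at all: at some speed and distance combinations, braking alone physically cannot avoid the impact while a swerve still can.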
These conundrums are irrelevant anyway. Over one million people die every year on the roads. The instant AI becomes better than humans, it's a moral imperative to adopt it, no matter what it does in these rare and contrived circumstances.
If this is your motivation then you will want the car to always default to saving the driver over pedestrian. To do otherwise would discourage adoption.
Not sure I'm so comfortable with this; it will result in a number of people killed who never accepted this risk and might have been safe, in order to save a number of people who accepted the risk in the first place.
And even if a car were, implausibly, in the same situation with the same options, it still of course could choose a response randomly or arbitrarily, based on how the sorting algorithm happened to rank equal outcomes.
There is currently no such thing as reducing to zero the risk of killing yourself or someone else with your car. Braking distance and speed aren't variables you can adjust enough, given people's preferences about how quickly they'd like to arrive at their destination. This will very likely also be the case when autonomous vehicles are deployed. There will be collisions; that is simply an unfortunate fact.
Given that there will be collisions, it seems prudent to have the car try to figure out what's better to collide with: the tree or the child. Given that there will be collisions, it also seems prudent to think carefully about other, more difficult moral dilemmas.
Refusing to code for an eventuality is itself a moral decision; inaction is an action that has an impact on the world.
There is no probability of impending crash for which it is moral to give up trying to not crash and favor choosing who to kill. Probability is never 1.
I agree that it's mostly a theoretical problem. But in the long term, if I imagine a world where all cars are self-driving, there would still be accidents from time to time, caused by things that can't be predicted: a rock falls on the road, an earthquake happens and destroys the road, etc.
This would be rare, but would still kill a few thousand people every year. Wouldn't it be natural then to tweak the algorithm to reduce the death toll? I can imagine the public demanding it, and the software engineers to start writing code that deals with the rare case where an accident is inevitable. It could start very simple (avoid large groups of people), but could get more advanced over time.
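As a toy illustration of what "start very simple" might look like, here's a sketch with entirely made-up maneuver names, counts, and weights; nothing like production planning code:

```python
# Candidate maneuvers once a collision is unavoidable:
# (label, estimated people in the path, estimated impact speed in m/s)
candidates = [
    ("full braking, stay in lane", 1, 8.0),
    ("brake and steer left",       4, 6.0),
    ("brake and steer right",      0, 12.0),
]

def expected_harm(people_in_path, impact_speed_ms):
    # Crude proxy: exposure times impact energy (speed squared).
    # A real system would need a far richer (and more contested) model.
    return people_in_path * impact_speed_ms ** 2

label, people, speed = min(candidates, key=lambda c: expected_harm(c[1], c[2]))
print("chosen maneuver:", label)
```

Even this toy already embeds a value judgement about how to trade the number of people exposed against impact severity.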
Is this whole "question" really just a mental exercise, like the trolley problem? Surely the more salient question is how much senseless death could be prevented by the adoption of self driving technology, isn't it? This 'conundrum' almost seems to be intended to undercut trust in what could surely be a massive improvement to vehicle safety.
> Surely the more salient question is how much senseless death could be prevented by the adoption of self driving technology, isn't it?
Absolutely. However, when answering that question, I don’t think it’s unreasonable to thoughtfully consider this stuff.
Clearly, even flawed autonomous driving— if widely adopted— would save many lives. But I can’t think of many things that would slow that adoption more than a public perception of “killer robots” roaming the streets.
If we seriously want wide adoption, it’ll be hard to avoid addressing the qualitative perceptual difference between a death caused by a human driver and one caused by a machine that we’ve engineered.
Speaking un-ethically, the person in the car likely has fewer rights due to having agreed to some Terms of Service or another, whereas the people on the street likely have not agreed to any such thing. It may be better to kill the person in the car.