There's a false analogy here. Driverless elevators raised concerns about the safety of the elevator cab's occupants, which was primarily a matter of private policy on private property. Essentially any public policy decision was about regulating the use of private property within private spaces, and any citizen could opt out by taking the stairs.
Driverless cars, on the other hand, raise primary concerns about the safety of non-occupants in the public commons, and are therefore chiefly a matter of public policy regulating the use of private property in public spaces. The notion that opting out of public spaces is healthy for citizenship seems antithetical to a meaningful concept of citizenship in a liberal polity.
There are luddite concerns over self-driving cars. But there's an auto-industry counter-tale of "Remembering when tetraethyl lead in gasoline was considered a good thing."
Or the counter-tale of "Remember when having all of the vehicles driven at 80+ mph by drunk and/or sleep-deprived, distracted people filled with road rage while trying to eat, put on makeup, text on their cellphones, and interact with passengers at the same time was considered a good thing?"
People were outraged over the Vietnam War, in which, over 20 years, the U.S. lost 58,209 dead. Since 1930, we've lost roughly 30,000-54,000 people each and every year to vehicular deaths. Even now that cars are a bit safer and deaths are down a bit, the toll is still quite high compared to an event where you expect death, and unlike a war, it doesn't end. If we lost 100+ people per day to plane crashes, or a few thousand people a month to terrorism (or to elevators, for that matter), it would be a huge deal.
I'm surprised there hasn't been more of a push to get humans out from behind the wheel.
This, so hard. I mentioned this in a post just last week. There's a 9/11 every month on US roads, yet we leave the number one safety improvement we could make (computerized traffic), which could probably eradicate 90% of those deaths, to private business: pushing driverless cars.

Now of course, at the end of the day this change has to be driven by businesses, but government can and should help. R&D and prototyping (as with a ton of big inventions, e.g. the internet) could and should be funded to a very large extent by the government. Commercialization should be entirely left to businesses, but even here governments can use regulation to introduce new safety standards and, at the very least, inform drivers of the safest options. You could even go so far as to employ a model akin to that of the many governments which raise taxes on fuel-inefficient cars (e.g. SUVs) and lower them on fuel-efficient cars as an environmental measure, but apply it on the basis of safety instead (driverless cars being cheaper, if they score well on safety, which they should over time).

You can argue that such a move is big government; you can also argue it's simply the government forcing the market to price in externalities that are currently ignored (purely from a financial perspective: road death and injury increase healthcare costs and decrease productivity, and things like congestion and fuel efficiency, where driverless cars should do better, can be priced in too). It'd still be entirely market-driven, but the market would reflect reality better.

Once it picks up, individual states could choose to make certain sections of roads (e.g. one lane) or sections of their cities driverless-only at certain hours of the day, to accelerate the process. And once you go driverless, you'll essentially open up what 'public transportation' really means. In a sharing economy of vehicles which need no driver, anyone's car essentially becomes a tiny bus service. I'd expect the cost of transportation to drop quite significantly.
Absolutely. I think they can do more. The Challenge was relatively significant 10 years ago (mostly because there wasn't much else) and had relatively small prizes attached ($1M for the winner, per year). Not quite the effort we'd expect given the size of the automotive industry, or the damage from the externalities of driving that could largely be solved or improved (loss of life, injury, congestion, fuel efficiency, cost of transportation, etc.). I'm also not aware that they continued after 2007.
Of course, I wouldn't be surprised in the least if all those university teams (e.g. Carnegie Mellon, who came in first, or MIT, who came in fourth) are either employed at, say, Google or Apple working on these things right now, or have provided, or are providing, the research to those who are.
Edit: yeah, Google's driverless car project is headed by the Stanford director whose team won the 2005 DARPA Grand Challenge.
I guess I wasn't clear. My concern is over the journalism. I'm ambivalent about the technology. As a matter of public policy, I see a lot of handwaving instead of data about the externalized risks of self-driving vehicles, and disparagement of people with safety concerns instead of reasoned engagement... e.g. there's a factual past about tetraethyl lead, but not about realized benefits of self-driving vehicles. One is history, the other still fantasy.
I also love how all the self-driving car enthusiasts talk about the computers as though they're infallible. How does a self-driving car deal with a sensor failure? Will people be OK with it? What if I want to take over the car to avoid a wreck, but the car won't let me for some reason?
Yes, these are corner cases, and yes, corner cases are by definition rare. But the majority of people who buy term life insurance in their 20s or 30s are at VERY low risk of dying. So why are they being so obviously stupid? Because it's actually not stupid!
So just because the odds of dying are small, people still buy insurance. I can't see how self-driving cars are terribly different. I could die accidentally by my own hand with a small probability, or I could die at the whim of a computer with an also-small probability (granted, quite possibly smaller).
Over the course of my driving career I've had no accidents and narrowly avoided plenty of them. I also know people who have had several. I am already substantially safer than the worst drivers, and as a result also substantially safer than the average driver, because as far as I can tell, it's a bimodal distribution: either you're an awful driver, or you're above average. Getting the wreck-every-2-years person off the road is a great goal for self-driving cars.
But because it's a bimodal distribution (or so I suspect), getting a self-driving car to be better than the average driver doesn't make it better than most; it's just better than the horrible drivers. I'd be much more inclined to think positively about self-driving cars once they're much better than the good-driver population rather than merely better than the bad-driver population.
The corner cases are extremely important and are only "rare" relative to the overall number of people driving. When you have 100MM cars on the road (not a researched number), that one-in-a-million chance happens 100 times every day!
I've had this same argument at work a few times, and although I fully admit a self-driving car may be safer in the future, you will not be seeing me in the first few generations of these cars. I generally don't consider myself a luddite, but this is one area where I will prefer manual control for quite a while.
I don't think anyone claims any computer systems are 100% safe. However, I think they will be much safer than human drivers. I, for one, will take the one-in-a-million chance of a computer error over the probability of encountering a drunk/fatigued/distracted driver any day.
> However, I think they will be much safer than human drivers.
I think they'll be much safer than the worst human drivers. I doubt they'll be much safer than the best human drivers.
Yes, they won't fatigue, get bored, or change the radio. But they also don't have any insight, just rules and machine learning. That limits their ability to predict.
My biggest worry would be bad repair work on the sensors or people who know the car is buggy but won't take it in for repairs. You can legislate against that, but people only listen if they get caught.
But there are a lot of upsides, too, for driverless cars (no distractions/intoxication/etc.), so I hope we can work things out.
But we can debug computer failures in a way we cannot for driver failures, so I do think we'll see some predictable level of social backlash and fear of the unknown, even if it's statistically irrational. I mean, driving is already one of the biggest intentional risks that we underrate every day and I doubt computers are ever going to be perfect.
I'm totally for self-driving cars, but if I press myself I can think of harder-to-solve problems than bad repair. How about hackers who tinker for fun and adjust all the parameters to the edges of tolerance? A "drive like a race car" mode, or a "James Bond city chase" mode. Or, of course, cheap and easy slow-moving cruise-missile bombs.
Kind of like anonymity on 4chan: what will anonymity in cheap cars do? Send a car full of shit to your ex. Send a car to crash into their house. Use face recognition and phone tracking to run over a specific person.
I'm sure some of those aren't likely, but computer-run cars and drones certainly open new ways for mischief.
Also, what will police do when they can't count on almost everyone carrying a state-issued ID card on them at all times? Dollars to doughnuts (no pun intended), carrying ID will become mandatory as driver's licenses become less common.
Not yet, but wasn't there recently a big national ID program enacted and then repealed? Eventually people will be too weary to push back.
In the US, it's very common for police to "request" identification in routine encounters with pedestrians and fly off the handle if they don't get what they want. People say it's a post-9/11 world, but this happened to me many times in the '90s.
> Not yet, but wasn't there recently a big national ID program enacted and then repealed? Eventually people will be too weary to push back.
The then-government announced a proposal. They were opposed by the other main parties, and voted out at the next election. Prediction is hard, but I don't see it happening.
You don't need to carry a driving license even if you do drive. If you're stopped and don't have your documents then the police can issue a HO/RT1 notice, which requires you to present your license and your MOT and insurance certificates at a police station within seven days. Carrying those documents in your car can save you an errand, but it isn't legally required.
Hackers might be able to do worse, but someone being cheap on repair (either the person or the mechanic) is likely to be a far more common risk (the kind we underestimate).
> "Remember when having all of the vehicles be driven 80+mph by drunk and/or sleep-deprived, distracted people filled with road-rage while trying to eat, put on makeup, text on their cellphone, and interact with passengers at the same time was considered a good thing?"
Sure, as opposed to what?
Not to mention that most people don't usually drive over 80 mph.
Horse carriages ran over people and caused accidents as well. Buses and streetcars also cause accidents.
I'm all for making traffic safer, but people are stupid, children do run in front of cars, etc.
People stick with driving because it means freedom. And not everybody can live near public transport or depend on it to get to work.
A better analogy is anti-lock brakes. When they were first introduced, many drivers were absolutely adamant that no computer could possibly have the nuance and feel to outbrake a human driver on difficult surfaces. Now, it seems utterly obvious that an algorithm sensing wheel slip at 1000 Hz has a better ability to maintain traction than a human driver with reaction times measured in the hundreds of milliseconds.
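To make the comparison concrete, here's a toy sketch of the kind of slip-control loop ABS runs, written purely for illustration (the thresholds and gains are made up; real controllers are far more sophisticated):

    # Toy ABS-style slip controller, called once per millisecond (~1000 Hz).
    # All values are illustrative, not from any real automotive system.

    TARGET_SLIP = 0.15   # peak friction on many surfaces is around 10-20% slip
    STEP = 0.05          # fraction of brake pressure to add/remove per tick

    def slip_ratio(vehicle_speed: float, wheel_speed: float) -> float:
        """Slip = how much slower the wheel surface moves than the car."""
        if vehicle_speed <= 0.1:
            return 0.0
        return (vehicle_speed - wheel_speed) / vehicle_speed

    def abs_tick(vehicle_speed: float, wheel_speed: float, pressure: float) -> float:
        """One 1 ms control step: modulate brake pressure toward target slip."""
        if slip_ratio(vehicle_speed, wheel_speed) > TARGET_SLIP:
            pressure -= STEP   # wheel is locking up: release pressure
        else:
            pressure += STEP   # grip available: reapply pressure
        return min(max(pressure, 0.0), 1.0)

A human pumping the pedal manages a few cycles per second at best; the controller gets a thousand, which is the whole point.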
Self-driving cars won't suddenly arrive on the market fully formed. The technology will be introduced incrementally as a series of driver aids, supplementing the range of driver aids that are already standard (ABS, TC, ESC). We're simply seeing the acceleration of a trend that has been happening for years: computers taking over control from the human driver.
Several manufacturers now offer collision-avoidance systems that can automatically apply the brakes based on radar sensing. Mercedes offer an adaptive cruise control system that can match speed with the car in front and steer through corners to stay in lane.
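The control idea behind those systems is conceptually simple, too. A toy gap-keeping step might look like this (my own illustration with made-up gains, not any manufacturer's implementation):

    # Toy adaptive-cruise-control step: hold a ~2-second gap to the car ahead.
    # Gains and limits are invented, illustrative values.

    TIME_GAP_S = 2.0

    def acc_step(own_speed_ms: float, lead_speed_ms: float, gap_m: float) -> float:
        """Return a desired acceleration (m/s^2), clamped to comfortable limits."""
        desired_gap_m = TIME_GAP_S * own_speed_ms
        gap_error = gap_m - desired_gap_m            # positive: too far back
        speed_error = lead_speed_ms - own_speed_ms   # positive: lead pulling away
        accel = 0.05 * gap_error + 0.5 * speed_error
        return max(min(accel, 1.5), -3.0)            # cap acceleration and braking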
The self-driving car will be preceded by the uncrashable car.
Yeah, it is a false comparison, but it's useful in a way when you consider removing people as the drivers. It's all automated, and safer. The psychological effect is similar, and the safety results should show marked improvements too.
On the other hand, automated trains would be a closer comparison, and I've asked before why we haven't automated more rail lines, or even bus lines (which follow predictable paths), before automating cars. Still, it's very likely automated cars will result in lower accident rates of all kinds. Which, on balance, will be good.
The only problem with driverless elevators was how to prevent passengers from being trapped in closing doors. The problems with driverless cars are a bit more difficult.
The fact that I'm forced to share the road with lumps of meat controlling tonnes of metal, or to opt out of public spaces, seems to me antithetical to a meaningful concept of citizenship in a liberal polity.
I personally am just missing one thing that would give me confidence in self-driving cars: transparency into their algorithms. I don't need to get deep into them, but some general information on how they "think" about accident avoidance would help me.
Just as an example, let's assume there's some debris on the highway. I am sure the self-driving car can avoid it, or stop until there is a clear space to go around it. But what if a human is tailgating? Does the car ignore that? Or does it (facetiously anthropomorphically) think to itself, "Hey, that is a human behind me, tailgating, at 75 mph. Swerving left to avoid this debris would put the human-driven vehicle on a path to collide with it, because the poor sap doesn't have my reaction time. I'd better hit the brakes instead, to communicate the danger."
I know that is an edge case, but edge cases are where accidents happen anyway, so I'd like to know what to expect from self-driven cars.
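To make the question concrete, this is the kind of rule I'd want visibility into. Something like the following, which I've made up entirely; I have no idea whether any vendor's planner looks remotely like this:

    # Hypothetical planner rule: does the car consider the tailgater
    # behind it before choosing a maneuver? Thresholds are invented.

    def choose_maneuver(debris_ahead: bool, rear_gap_s: float,
                        left_lane_clear: bool) -> str:
        """rear_gap_s: time gap to the vehicle behind us, in seconds."""
        if not debris_ahead:
            return "continue"
        if left_lane_clear and rear_gap_s >= 2.0:
            # Nobody close behind to be surprised: swerving is fine.
            return "swerve_left"
        # Tailgater too close: brake gradually instead, so the human
        # behind has time to react and isn't left pointed at the debris.
        return "brake_gently"

Even a high-level statement like "the planner accounts for following traffic before evasive maneuvers" would go a long way.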
Edge cases are where accidents happen because humans suck at them, whereas you can specifically train on them with a computer.
The bigger problem, that you allude to, is that the algorithms that actually work are mostly machine learned and therefore very opaque to human inspection.
One of the big issues with this is how consumers/government will require algorithms to prioritize.
i.e. one can imagine accidents in a certain context wherein a driverless car could prevent X people from dying by killing its own passenger. For example: an axle breaks, you can't turn right anymore, only left, and you're coming up on a right turn. You're supposed to turn right and stick to your lane; you can't, so you either go straight into the other lane, or go left. You happen to be driving an SUV, and there are two small cars in the other lane. If you go straight, you hit both of them, and the camera registers two young children in each and an adult driving. Given the difference in car sizes, you're likely to survive, but also likely to kill at least two kids, perhaps up to six people; you'll definitely destroy two cars and injure all six. Or you take a sharp left, drive in front of them straight off the road, drop off a cliff, and die. In both cases the computer calculates, to a large degree of certainty, that whichever choice is made, it'll go down pretty much like this.

Now, this is a silly edge case, but it's the best I could come up with right away; more plausible ones exist. Anyway, the question becomes: is it okay for cars to be sold that are programmed to prioritize the passenger's life, i.e. killing the maximum number of innocent people who are not you (estimated between two and six), injuring all six, and creating the biggest economic damage and loss of life or livelihood to innocent people (particularly 'innocent' since their car's axle was just fine; yours wasn't)?

Or would a government require the algorithm to take a different priority, one in which the car's software is programmed to sacrifice the car's owner/passenger for the sake of a utilitarian calculation?

I think the answer to what's fair is pretty clear, and that if a computer weren't part of the equation, an ordinary human would likely not have had the mental fortitude to identify the axle/steering issue and all the options and choose the best one; the outcome would be worse than with a driverless car. But I also think it's pretty clear that the 'fair' decision would still be highly controversial and scary to imagine, and something that'll see quite a lot of debate and pockets of resistance in the next few decades.
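Just to show how stark the programming question is, the whole dilemma reduces to a toy cost function; something like this (completely hypothetical, and exactly what I mean by 'someone has to pick the weights'):

    # Toy illustration of the priority question. The entire controversy
    # lives in OCCUPANT_WEIGHT: how much does the manufacturer (or the
    # regulator) value the occupant relative to everyone else?

    OCCUPANT_WEIGHT = 1.0   # <- who gets to set this, and to what?

    def expected_harm(option: dict) -> float:
        return (OCCUPANT_WEIGHT * option["occupant_deaths"]
                + option["third_party_deaths"])

    options = [
        {"name": "go_straight", "occupant_deaths": 0.1, "third_party_deaths": 2.5},
        {"name": "sharp_left",  "occupant_deaths": 0.9, "third_party_deaths": 0.0},
    ]
    best = min(options, key=expected_harm)
    # With OCCUPANT_WEIGHT = 1.0 this picks "sharp_left" and sacrifices
    # the occupant; raise the weight above ~3 and it protects the owner.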
It somewhat annoys me that we could eliminate 99.9% of accidents, yet people still would object to a self-driving car because of contrived fictional situations like the one described above.
A freak situation like the one described above would surely make the news - "Hero driver saves family by giving his life" and/or "Our mom would still be alive if that asshole wasn't so selfish". Yet I don't remember seeing a single story like this, ever. From comment threads on self-driving vehicles, you'd think these moral dilemmas are a daily, if not an hourly, thing in traffic.
Do you allow that situations exist (in the billions of miles driven by the world population every day) in which a driver, whether human or computer, can be faced with a situation where different actions are possible, one of which sacrifices the passenger's life but prevents harm to other nearby cars, while another would harm nearby cars but prevent harm to the car's passenger, and that one of these actions can be 'slam on the brakes'? And that thus there's a trade-off made by your car's software between your life and someone else's?
Even if that trade-off arises from 100 different possible actions, do you allow that situations exist in which someone inevitably gets harmed, and the different decisions distribute that harm either to you or to passengers of other cars?
Whatever that situation may be, do you allow that it can exist, even if it's extremely rare, and that someone has to make a decision on how to set the computer's priorities in such an edge case?
That's the point I'm making. The specific example of such a situation isn't very important (unless you don't acknowledge that ANY situation with such a trade-off decision could ever occur in the over 4,000 billion km driven every year, which I'd say is pretty unlikely).
I would say that in all cases, any decision taken by the computer would probably be miles better than a decision made by a human driver in such a hypothetical situation. Firstly, the computer will have a much better overview of what's going on than any human would. Secondly, a human is most likely to just panic and slam on the brakes as hard as possible.
The problem with hypotheticals is just that: they're hypothetical. They don't exist in the real world. There's never a situation where you have precisely 2 options, one of which kills your passenger, the other of which kills oncoming cars. There's always a multitude of options, with various risk factors associated with each one.
Any source? There isn't much on the web about self driving car software architectures.
You can totally train for those three scenarios (well, in a simulated environment anyways), but whether they have the trust or tech to do that yet, I don't know.
I've been driving for a few weeks with Waze running most of the time and the human generated alerts are incredibly useful.
Accident ahead.
Object on road ahead.
Car stopped on shoulder ahead.
There will be a way for self-driving cars to broadcast these types of alerts for human drivers. I would expect that the self-driving car would attempt to preserve itself (and prevent harm to its occupants) before trying to protect a human driver from harm.
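Something as simple as this would already be useful; the message format here is entirely made up, just to illustrate how little data such an alert actually needs:

    # Hypothetical V2V alert a self-driving car might broadcast, loosely
    # modeled on the Waze-style alerts above. Field names are invented.
    import json, time

    def make_alert(alert_type: str, lat: float, lon: float) -> str:
        return json.dumps({
            "type": alert_type,          # "accident" | "debris" | "stopped_car"
            "lat": lat,
            "lon": lon,
            "timestamp": time.time(),
            "source": "vehicle_sensor",  # vs. human-reported, as in Waze
        })

    # e.g. broadcast over whatever V2V channel ends up standardized:
    msg = make_alert("debris", 37.7749, -122.4194)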
Elevators are enclosed and move on rails. The "driver" of an elevator does not even consider obstacles, and couldn't see them if he or she wanted to because the floor and ceiling aren't transparent.
Driverless trains operate in many cities around the world. They hardly raise an eyebrow. They are on rails, of course, and those rails are on dedicated, closed tracks out of the way of traffic. The trains just have to coordinate to stay out of each other's way, stop if there is an alarm condition (object or person in track area), and correctly stop at the proper spots at the stations.
Then you consider the car hacks, like the recent Jeep one, and bugs like the Toyota ECU issues with unintended acceleration. I don't think the state of software engineering is sufficiently robust to allow it to drive cars.
The state of wetware is also not sufficiently robust to allow it to drive cars. Wetware likes to drink alcohol, distract itself while driving, or take long trips without the sleep it needs.
Of course it's not comparable in the sense that driving a car is different from operating an elevator; that's not what the article aims to address. The point is that at one time we thought we needed to manually operate something out of safety or convenience concerns that we could actually automate more safely and conveniently. Most people had similar notions about cars until a few years ago, and we're now seeing that those notions are similarly untrue.
Convenience is the main point of automation; the only thing convenient about manual operation is that it provides a convenient job for someone, who finds it inconvenient to be replaced.
I don't think there was ever any genuine concern about automation bringing inconvenience (other than maybe some ineffective forms of automation: "strawman automation").
"I mostly like the idea of a machine that lets you just throw in your dirty clothes and punch some buttons, but won't that do away with the sheer convenience of washing clothes by hand?"
Automatic elevators at least have their own distinct movement areas.
Automatic cars have to share their space with (human-driven) cars, crazy nuthead bikers and drunk-as-fuck pedestrians, not to mention cops ignoring all the traffic rules to get a fucking burger from McD and other emergency services.
Quite a difference I'd say, even with today's technology.
Normal drivers, nuthead bikers, drunk pedestrians, and cops can all be reacted to much more swiftly by a computer using radar than by any driver behind the wheel. It's just pride to think you could do better.
I don't think anyone doubts it can react more quickly. The real question is whether it can react more correctly. I think the answer is "soon", for what that's worth, but I also think that until the answer is an unmitigated "yes" it's a valid criticism.
The key is that a human can anticipate a dangerous situation and take mitigating action before a quick reaction is necessary. AI still has far to go before that sort of nuanced behavior can be replicated. Until then self driving cars will be like teenagers learning to drive. Sure they can get themselves from point A to B but I don't want to be near them while they do it.
Disagree entirely. Anticipation is just a function of memory - of having enough training data that you can glean potential outcomes and their probabilities through the attributes that surround them.
You're driving down a crowded highway and you see an obstacle. You're likely to get rear-ended if you hit your brakes, and you can't change lanes. However, you recognize it's just an empty cardboard box. You drive through it with no issues.
...and it turns out the "empty" cardboard box is half filled with metal pipes, which your human eye had no way of detecting in the split second before your car collides with it at 65 mph. Your car screeches, loses control, and swerves into the neighboring lane.
But luckily a following self-driving car has already detected the presence of an obstacle using radar. It detects your car maintaining speed. Within several milliseconds it evaluates the chance of collision as VERY HIGH and starts an evasive maneuver, scratching your car at a speed which is still uncomfortably high but not likely to cause a fatal accident.
Several weeks later, a team of automobile experts reviews everything recorded in the accident and improves the algorithm to further reduce the chance of an accident in Edge Case 783645: the preceding human-driven car plowing into a small road obstacle at 65 mph. Firmware is automatically upgraded over the following months.
The computer could immediately check adjacent lanes, communicate with the car behind, know how much space is on the shoulder, take information from other cars who've driven beside the box already, etc. In a fraction of a second.
AI cars could communicate amongst themselves to forewarn each other. They would instantly know typical driving patterns for any location at particular times and in particular conditions. A human would know that only for their usual routes, at best.
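Even the core arithmetic is trivial for a machine to run continuously. A toy time-to-collision check, with illustrative numbers of my own:

    # Toy time-to-collision (TTC) check, recomputed many times per second.
    # Thresholds and values are illustrative only.

    def time_to_collision(gap_m: float, closing_speed_ms: float) -> float:
        """Seconds until impact if nothing changes; inf if not closing."""
        if closing_speed_ms <= 0:
            return float("inf")
        return gap_m / closing_speed_ms

    ttc = time_to_collision(gap_m=25.0, closing_speed_ms=12.0)  # ~2.1 s
    if ttc < 1.5:
        action = "emergency_brake_or_evade"
    elif ttc < 3.0:
        action = "pre_charge_brakes_and_warn"
    else:
        action = "monitor"

A human takes the better part of a second just to notice; the computer has re-run this hundreds of times in that window.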
Well, whether it acts more correctly becomes a moot point once it acts correctly most of the time; then it's about which acts more safely. There are advantages to having a system that's not allowed to be lax about safety rules.
Instincts are strange. I have been (happily) surprised by myself on multiple occasions by how I've reacted to near-impact situations. Do I think that means cars can't do it? Absolutely not. I do think that it will take a lot of learning to get the same out of them.

On the other hand, do I also think that replacing every car on the road with a self-driving one right now would result in fewer accidents and fatalities? Yes. If it were illegal to drive and all vehicles were required to be autonomous, I 100% believe that would drop the danger of motor vehicles significantly.

Where it gets weird is the concept of fault. Right now, because most motor vehicle accidents are driver error, humans screwing up almost serves as an insurance policy for manufacturers. If fewer people die, but more of them die as a result of manufacturers' software... Yeah, I don't know. The future is going to be weird.
Well, I'd say it's just pride to be so confident in the ability of human engineers to make machines that outperform humans consistently (at least in the next few years, when vendors want to start selling auto-drivers). I don't think anyone's doubting whether radar+computer can react faster than a human, but we may doubt whether it will make the "right" reaction, and whether it might make unwanted decisions in cases we don't think of until they happen. And that's just the "AI" level, not stupid things like your electronic throttle's stack overflowing and locking in the open position[1].
I suppose it depends on what your standard is and what you're comparing it with. The Insurance Institute for Highway Safety says there is about 1 fatality per 100 million vehicle miles traveled in the US.
That still translates into a lot of annual deaths of course (and doesn't capture the many serious injuries not included in that statistic). But humans actually drive a lot of miles with relatively few serious accidents.
I didn't exactly say that. But I think you and others are underestimating our collective skills. We make stupid decisions, get tired, etc., but we manage to recover, work around others' mistakes, and so on. I think people are underestimating how hard it will be to get "the last 10%" of driving skill: the gap between slapping together cruise control + camera + radar + GPS to handle the best-case scenarios, and driving at least as well as the average human in average-case scenarios.
Another thought: what if it takes a generation or longer for self-driving cars to do well with cases like slippery rain or ice? People will stop practicing how to drive at all, until suddenly one day they have to drive in the most dangerous conditions possible. Will the net cost/casualties of accidents still be lower than it is today?
Swifter, yes. Better? I think that depends on the particular circumstance. For example: you look in your rear-view mirror and notice an out-of-control cement truck half a block away heading directly towards you, plowing through vehicles in its path. Would a computer recognize the impending catastrophe and take the initiative, sideswiping a few vehicles in your way in an effort to get out of the situation?
I think it's safe to say that in everyday driving circumstances they are already on par with what a human driver could achieve. It's the extenuating circumstances, where a human can make the call of "my life is more important than property damage or even risking others' lives," where I have my doubts. Those situations are obviously rare, but still quite important.
That may be the rational response under certain morals, but not everybody is rational always or has the same morals. Who is legally responsible for those Y deaths? Or, even if there was no death but the car broke a traffic law, how is that addressed? If the Y deaths are disproportionately a distinct class of people from X (say, pedestrians, construction workers), are they going to be eligible for some class action lawsuit? Or will it deepen class divides as poor pedestrians become bitter about being hit by self-driving cars carrying rich people? Are the "drivers" going to suffer guilt, wondering if there's something they could have done to prevent an accident? Are issues like this going to cause the law to require a human "standby driver" ready to take over, canceling many of the hoped-for advantages of self-driving cars? If drivers have legal or monetary liability, are they going to want to hand that over to a machine built as cheaply as possible and designed by overworked startup engineers? (perhaps the first generations at least.) Will the cars keep audit records to prove "who" was driving, etc.?
Maybe some of this is getting off-topic; I'm just musing on how complex the practical and societal factors could get.
>Are issues like this going to cause the law to require a human "standby driver" ready to take over, canceling many of the hoped-for advantages of self-driving cars?
The law can say whatever it wants, but there's effectively no such thing as a "standby" driver (on timeframes shorter than 30 seconds or so) for a driving system that's operating autonomously. The driver is watching a movie, reading a book, or sleeping. And if the driver isn't allowed to do any of those things, say by the system shutting down if they take their eyes off the road for more than 5 seconds, then the system is largely useless.
In which case people won't pay for it except to the degree that it's essentially an assistive driving safety system.
Mhmmm no.
If you have a machine that is manually operated and kills 1/1000 people due to operator errors, and a machine that is fully automatic but kills 1/10000 people due to software errors or "weird edge cases", it's still the second machine that will never be allowed on the market, because even one instance of it killing anyone will cause it to be fully recalled. It doesn't matter that statistically it kills fewer people than the manually operated one; for automatic machines the acceptable number is 0, and automatic cars are unlikely to achieve that.
Cool bit of history and I kind of see the analogy, but there's a difference between pulling up and down by cables in a single shaft indoors, and powered rolling on roads outdoors anywhere. Even if the first self-driving cars are on fixed paths too, there are a lot more variables.
But technology in general is a lot more capable now too. Self-driving cars are safer than manual elevators were, and they're a lot more useful to boot, as they get you to anywhere you want to go rather than just to a different floor.
The story a few days ago about a bicyclist doing a track stand confusing a Google car, which in turn reverted to performing a sequence of seemingly excessively safe manoeuvres, generated some fascinating responses.
Most particularly an observation that we seem to be stuck in a 'perfect or nothing' mentality - whereas new technologies, even in their own niche, really don't need to solve all problems for all people.
I guess it's because that attitude feels like it directly contradicts the sarcastic adage 'there's never time to do it right, but lots of time to do it over' (and its myriad permutations), which generally encourages us to get it right on the first go so we don't have to waste time revisiting.
Complaining about the 100% applicability of an otherwise fine analogy seems, again, to be missing the point here. Plus, HN readers are not the target audience for this kind of story.
Aside: on a very recent trip to India, spending some time working in a very large, very new office (not retail) building, replete with fully functional lifts, it was fascinating to see each lift had a uniformed attendant that pressed buttons on request.
"The story a few days ago about a bicyclist doing a track stand confusing a google car, which in turn reverted to performing a sequence of seemingly excessively safe manoeuvres, generated some fascinating responses."
Yes, I saw that. I'd like to see a technical response from someone at Google. Their system recognizes bicycles, and treats them as vehicles likely to make unusual maneuvers. So a bicycle starting to move into an intersection, however slowly, is treated as a fence against movement by the car, and the fence length is quite long and angled to allow for sharp turns by the bicycle. You can see this in some of their videos. Someone balancing on a bicycle, with small back and forth movements, will trigger that fence response.
Google is being careful in their development, and conservative in their assumptions about other vehicle behavior. That's good at this stage. Their system would probably classify a child on a bike with training wheels or an old person on a big tricycle the same way as a standing balanced bicycle.
They could certainly program the system to resolve such deadlocks by inching forward and honking, and eventually they probably will. Just as elevators deal with people blocking the door by buzzing and slowly closing the doors at low power.
There's a paper on this: "Go Ahead, Make My Day: Robot conflict resolution by aggressive competition".[1] This has been a practical problem with robot carts which drive around hospital corridors carrying laundry and such. They need to be slightly pushy, or they get stalled trying to get through a busy corridor.
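For what it's worth, I'd guess the escalation logic will look something like this; a rough sketch of my own, not Google's actual code:

    # Hypothetical standoff resolution for a track-standing cyclist:
    # wait, then assert intent gently, like an elevator door does.
    import time

    def resolve_standoff(cyclist_blocking, inch_forward, honk):
        """The three callbacks are assumed hooks into the perception
        and control layers; the timings here are invented."""
        waited = 0.0
        honked = False
        while cyclist_blocking():
            if waited > 5.0 and not honked:
                honk()               # signal intent once, politely
                honked = True
            if waited > 10.0:
                inch_forward(0.2)    # creep 0.2 m at walking pace,
                                     # yielding again if the cyclist moves
            time.sleep(0.5)
            waited += 0.5
        # Path clear: hand control back to the normal planner.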
Probably not. The autonomous vehicles will work something out with each other.
A more interesting possibility is that, since autonomous vehicles have to recognize bad driving, they'll probably be reporting it to a database somewhere. That data can then be sold to insurance companies.
Actually perfect or nothing is a nice standard. Did you know that the elevator was quickly adopted because its inventor[1] went to shows and cut the cable holding up the cab he was standing on? It was because he had a foolproof locking mechanism to stop free-fall, in use ever since.
And it really is foolproof: a quick internet search reveals there has not been a single death from a falling elevator in the history of elevators (EDIT: see replies[2]). (There is one exception I found, which was due to massive structural damage to a building, like a giant explosion, which also damaged the elevator. Check for yourself if you don't believe me.) [Almost] no elevator has ever fallen due to a snapped cable, in the history of elevators. (So even if you're afraid of heights you have no reason to be afraid of elevators; but maybe stop reading this comment here in that case.)
This is why even highly squeamish people, or people afraid of heights, don't mind standing in a locked elevator cab, held up by a cable, with ten, twenty, fifty or a hundred stories of empty space under them while the cable lifts them up and down. It's just a non-issue (due to height, at least; I guess claustrophobia is a separate issue).
And this is because the safety mechanism that keeps elevators from falling is, well, perfect. The inventor of the elevator demonstrated it, again and again, using his own life.
If it weren't the case, at least some people would feel very differently about elevators! As this case shows, a standard of "perfect or nothing" absolutely impacts public perception and fast adoption, and may even mold people's opinion for centuries to come.
The difference is that no one (or nearly no one) dies taking the stairs. If you're going to replace one perfectly safe means of travel with another, the new one should also be perfectly safe.
But cars are anything but perfectly safe. In many parts of the world automobile accidents are a leading cause of death. Even in parts of the world where seatbelt laws and stringent safety requirements have reduced automobile deaths, they're still significant. What's worse, unlike other "preventable" causes of death such as smoking and obesity, automobile deaths disproportionately affect the young.
For all these reasons, I think the standard for driverless cars should be somewhat relaxed. Unfortunately, I'm guessing the "robots killing humans" mentality may still dominate...but if we can get past that I think driverless cars will have a larger impact on the demographics of death than almost anything else in the last 50 years.
According to this[0] nearly a thousand people in the US died from falling down stairs in 2000. I've injured myself on stairs, albeit not badly, but never in an elevator.
My dad is getting old, and whenever we have to take the stairs with him, I definitely fear for his life.
I'm sure more people die taking stairs than die taking elevators. The elevator is always the safer option. Unless there is a fire... and even then it's a toss-up if you are elderly or have mobility problems.
... and even worse, cars do not limit themselves to killing their occupants; they are responsible for huge numbers of pedestrian deaths, many of which happen to people who are not even in the road (deaths involving cars mounting the sidewalk and killing people, or even crashing into buildings and killing people, are a regular occurrence). ><
Most of those deaths are caused by horrendously irresponsible and unsafe behavior by drivers. However awful driverless cars are (and I suppose especially in the beginning they will be pretty primitive), the bar is already set really, really, low.
It seems possible that the falls were not caused by the elevator car dropping from a snapped cable, but rather by the cable (and possibly the pulleys, motors, etc.) falling onto the car, which then (pushed by a much greater momentum) dropped; but they are described here as falls.
The 99% Invisible podcast[1] actually went pretty deep into this topic in episodes 170 and 171, and they were excellent shows. I actually learned about the elevator inventor showing off the safety of his elevators from another amazing podcast, The Memory Palace.
Are there any recorded deaths from self-driving cars at this time, I wonder?
Pre-empting other people's responses of 'not enough data yet' or similar: at the time Otis felt it necessary to demonstrate in this way, there was a perception of insufficient data too. Lack of data is a truism for all new technologies.
Although there would seem to be practical scenarios where autonomous vehicles are much closer than in the general case. I can easily imagine that limited-access highway driving, with a competent/licensed operator not paying attention or actively steering, could be a relatively near-term scenario. Maybe you need radio beacons at construction zones, active cell communication range, and weather within certain parameters. But it's definitely doable, and perhaps with an overall improvement in safety, given that a tired or distracted driver drifting across a lane or plowing into the car ahead at high speed is a major source of accidents.
I'm not convinced that forgoing that while waiting for robo-Uber (which I personally expect to be many decades out) is the optimal path.
> Aside: on a very recent trip to India, spending some time working in a very large, very new office (not retail) building, replete with fully functional lifts, it was fascinating to see each lift had a uniformed attendant that pressed buttons on request.
I think this may be more of a "show off, we're rich and can afford service" attitude. Spoken from the POV of a European who only hears about India when there is another outsourcing, terrorism, gang-rape or caste-based case (like the two poor "untouchable" girls sentenced to be raped by a village court because their brother had married and run off with a higher-caste girl), I think it's just symptomatic of the culture.
It's my understanding that menial labor in India is expected to be cheap and so people wouldn't naturally interpret one extra worker as an extravagant show of wealth.
I also read on Quora that Indian parking attendants will do nothing but press a button to generate your parking pass and hand it to you; another example of pointless labor that doesn't seem very related to flaunting wealth.
Yes, it's almost definitely not showing off - though it may appear that way to outsiders.
My understanding is that, as you observe, with labour being so cheap and a culture that values inclusiveness, this is the obvious course of action. In lots of non-western societies westerners will see inefficiencies and opportunities to replace human labour with fewer and better trained staff, or automated / mechanised systems. But this is to miss the point.
The way I interpret it is as a form of UBI: everyone(ish) has an opportunity for a modicum of income, with an underlying tacit agreement.
The difference between this and UBI is that these people have no time or opportunity to explore and possibly learn how to do something progressive. It's a gross waste of human potential.
I totally agree with the sentiment, but remember that for each of us, as we look at other people(s) there's a breathtaking number of activities that we would consider wasteful.
While I am not a huge fan of cultural relativism, I appreciate that there may be some validity in considering such things through that lens. Though it may just be hopefully healthy self-doubt.
Academic exercise -- what differences would you imagine for transition to a UBI between, say, India with this kind of social norm, the UK or AU (as examples with a half-way decent welfare and some 'work for unemployment benefits' programmes), and the USA or Singapore (again as examples, with extremely modest welfare arrangements)?
In typical geek fashion, this discussion is stuck on matters of safety in driverless cars and how people can be made to trust them. How much better they are, how fascinating the tech is. How driverless cars can replace cars entirely.
But do people actually _want_ driverless cars? Will they _ever_ do?
A driverless car is not a driverless elevator or train. People never drove their own elevators or trains, they were driven by other people. A car is another matter entirely. People enjoy driving their own cars, some because of the control it gives them, others because of the act of driving itself. If this weren't so, people wouldn't choose to drive their own cars in situations where taxis or other forms of public transportation are clearly superior.
I want a self-driving car as much as I want self-drinking beer.
I'm not saying self-driving cars are useless. I can certainly see a future with self-driving buses and taxis. However, for that to happen there are a few human obstacles that must be overcome even if the technology becomes perfect. The same obstacles that currently prevent self-driving trains from becoming more widespread: if you need a human to stand watch against vandalism and to react in emergency situations, he might as well be driving the thing, it's cheaper.
Indeed, there's still a case for car ownership even if one doesn't enjoy driving.
One reason for preferring a car over a bus or taxi is the ability to transport luggage. I frequently find that I have to undertake round trips of 200-300 miles with a car full of bulky and valuable equipment; the destination is often somewhere that is not easily accessible by public transport, or requires changes between bus, train and taxi, even if I could take the luggage via that means.
It might be rather convenient to have the car conduct at least the motorway portion of the driving itself, but access to my own car is pretty much essential in order to do this sort of thing. I suspect that for anyone in business where they must transport goods or tools it will be even more vital.
You would just get a van or truck on demand. That's a huge benefit of on demand, self-driving cars - instead of having one car you try to fit into your every need, you get what you need and when you need it. So, something tiny if it's just you and your partner going to dinner. Or a van if you're taking kids and friends to an event. Or a truck if you have to move a sofa.
I have a sedan. 90% of the time it's just me in the car. Insane.
I am in a similar situation to the OP: I need a larger car on weekends, because I drive 200 miles away for certain events where I need to take loads of stuff with me. But during the week it's mostly me alone in the car, driving to work.
I could rent a truck on weekends, but that would be ungodly expensive, especially since I'm under 25, so every rental company charges literally 2x or 3x for car/van rental to drivers under 25. Then I have to worry about returning it on time, and it has to be in perfect condition or, again, I have to pay for anything that's damaged. If I scratch my own car, no biggie. If I want to stay overnight, it won't cost me another $100 per day of rental. And then the question is: is it really "better" for the environment/society/traffic to make two cars rather than one?
Why would they make two cars rather than one? Most people will be sharing cars in X years. There would be fewer on the road overall. You'll use the truck for a couple of hours, and then, until you need it again, it'll return to the pool of available cars, unless you want to hold it for the day.
In the future, you won't be driving the car so I don't think U25 will be such an issue.
Well, even if I don't drive a car personally, I would still like to own one, to keep it personal. Sure, you could order one and it would pick you up, but then you are sitting in a seat that thousands of other people sat in. You can't leave any of your things in the storage spaces, because tomorrow you might get a different car. There's also the factor of hygiene that puts me off buses and taxis.
And they would make two cars because I would need one vehicle to go to work, and another to take my stuff on the weekend, rather than one vehicle that does both.
True. But you are talking about logic, which seldom matches human behavior. Humans are driven by other matters besides efficiency.
A car is a status symbol. This must be taken into account when advocating solutions that may signal a lower status for their user, be it driverless cars, on-demand cars or something else. Ignoring it is futile, because this, above all else, is what made cars so widespread. No other fact explains more strongly why some people have more than one car.
Take the Segway, a wonderful device with much going for it, except that you looked stupid while riding it. It was perceived as a low-status symbol, and it failed because of that.
Lower prices for on-demand vehicles will mean that car ownership declines. Most people don't care too much about the brand of taxi or plane they take already. Once the price/efficiency mix is right, I will quickly ditch my second car.
I want self driving cars. I could read or work during every commute - that's hundreds of hours regained each year. Eat or socialise or sleep on a road trip. And the car that I need on demand. I hate driving something that's bigger than I need most of the time, just because on rare occasions I need to pick up the kids.
Oh man, this reminds me of the time I met an elevator with an operator. It was in one of the Smithsonian museums - perhaps appropriately the museum of american history? It was only 22 years ago, and I rode the elevator several times just to watch the operator work the strange controls. I wish I'd photographed it.
this reminds me of a truly excellent HOPE X presentation about elevator hacking[0]. one of the most entertaining con panels i've seen, and interesting information on an esoteric topic.
Even after seeing IkmoIkmo's post, I'm standing by my own. It does nothing to prove that the drivered->driverless car transition would be anything close to the drivered->driverless elevator transition. Even historical elevators rarely (if ever) had multiple elevators per shaft, or a shaft that travelled in more than one dimension; that's a non-starter for the analogy this article tries (and in my opinion fails) to present as if it proves anything.
People still do not accept driverless trains or tubes. I remember talking with an AI expert a few years ago: the London Underground had a fully automated driverless prototype in the '70s. And here we are in 2015, with only a toy driverless system in London (the DLR).
I'm not sure how much that's the case. For many years now, Paris lines 1 and 14 have been fully automated. I went to Toulouse last year, and their whole subway system, albeit small, is automated. I'm also pretty sure that the train one takes from Copenhagen's airport to the main train station is fully automated, though I'm not sure about the other lines.
All in all, I think the public acceptance issue can be mitigated.
The change in elevators was from having a professional operator to allowing passengers to push the buttons themselves. Elevators are still "driven" by humans, not robots. The self-driving cars are a whole different ball of wax.
Is this here to sort of suggest that "we got used to driverless elevators, so we'll get used to driverless cars"?
It's perhaps true but a really bad analogy. There's minimal risk of my child being hit and killed by an elevator as a result of a bug in the elevator software.
It's not a technological argument; it's one of human perception. The article talks about how ordinary people like you and me simply didn't view an elevator as something that's safe or convenient without a manual operator, much like cars today. And that perception didn't change automatically; it took time and new ideas which didn't have much to do with technology, really.
As for safety perceptions back then, it's hard to say exactly what they were like. The article implies they were pretty bad; when elevator operators went on strike, the city was apparently badly affected. I can tell you that in the US some 20-30 people die each year in elevators, which is pretty minor given the billions of trips, but I think we can easily imagine elevators were not quite as reliable 100 years ago, when the automatic elevator first existed, and that the accident rate was quite a lot higher.
If your concern is your child being run over the important question is whether there's a higher risk of your child being hit by an autonomous car or a human-driven car.
Doctors were also under a great deal of suspicion 200 years ago or so... the idea for the story of Frankenstein was taken from the fabric of the society of that day... most people did not trust doctors... anything new is under suspicion.
While I don't really disagree with the general point, 200 years ago there was minimal understanding of medical sterilization and no anesthetics, so it's certainly understandable that surgery, at least, was viewed with, shall we say, suspicion. Given that you would probably die or be seriously incapacitated in some way.
My biggest issue with driverless cars is how they'll handle extremes.
You as a driver might choose to hit someone rather than risk hitting a barrier with your wife and baby in the car.
In other cases you might rather drive your car into a barrier, or even off a cliff, than hit a child.
Now, this isn't about morality or legality. You might be morally and legally "wrong" for swerving away from an obstacle more dangerous to yourself and the passengers of your car while increasing the likelihood of harming someone else; you might go to jail for that, be found liable in a civil lawsuit, or just feel like shit for the rest of your life. But it's still your decision to make.
With driverless cars, that's gone. The car really has a choice to make, and in an extreme case where it cannot avoid an accident, somewhere there will be an algorithm dictating whether it should prefer to risk the passengers of the car or the third party it might collide with.
Even if the car can do a risk calculation and choose the lowest-risk result, that still isn't good enough, since it shouldn't be up to the car to make that decision; it cannot account for other factors, such as who is in the car and who you might hit.
Heck, if you as a driver have a choice of swerving left or right to avoid a kid on the road, and there's a $200,000 car to your right and a $20,000 car to your left, you are going to go left every time.
It takes too long for the deliberative mind to make a decision, because thinking through the situation would take ten seconds or more. The way you describe it is not what happens. In actuality the animal brain will make an intuitive decision, and the rational brain will then invent an explanation for that decision and pretend it was going to make that decision all along. By contrast a driverless car will always make deliberative decisions, so if anything the only morally right decision is to let the algorithm decide, because whatever its process it will still reach the right conclusion more often than a monkey brain evolved to avoid lions trying to decide whether to turn the wheel of a car or not.
I would disagree. If there's enough time to react, there's enough time for a choice. That doesn't mean you can have a lengthy internal deliberation, but your mind will take its current biases into account.
There's a lot of stuff you do without realizing it. People, for example, drive 10-20% slower with a child in their car without doing so intentionally, while men tend to speed up with an attractive woman as a passenger. You behave differently and make different choices based on the situation you're in.
I've been involved in several accidents and near-accidents in my life, and in all of them I had enough time to react and make a decision. If you can't react, and it's not a case of someone literally jumping in front of you (meaning you hit them before you even notice them), it means you were driving too fast or were otherwise distracted in the first place.
I don't care about morality; I care about the best outcome for me as I see it. I don't want my car to swerve into oncoming traffic because it calculated a lower risk of serious injury than hitting an idiot on a motorbike who decided to do an illegal U-turn.
That situation is from experience. I couldn't swerve left because I might hit oncoming traffic; if I swerved right I would hit the cars to my right; if I braked hard I would still hit him and most likely cause a chain accident with the car(s) behind me. So I went on, knowing I would most likely hit him. I managed to gain enough speed and stay close enough to the right to miss him; but even if I hadn't, in my opinion that was still the best option, even if it carried the first- or second-highest chance of someone being seriously injured.
Now, I don't remember slowing down and thinking "hmm, can't go left, can't go right, can't brake"; I remember my eyes running from left to right for a couple of milliseconds and then speeding up instinctively. But it was still a decision I made.
Now, if it had been a kid on a sidewalk, I would've swerved right without a doubt, even though I could've been hit by a passing car or driven someone else off the road. And if I had my kid in the car? Well, if mowing down an entire puppy orphanage was the least risky scenario for me, I would've done it in a heartbeat.