1) It's very important to do an apples-to-apples comparison when you're looking at accident rates, and you can't do that just by looking at US averages for all miles traveled. There have been issues with the numbers for Tesla Autopilot before, where Autopilot looked safer than it was because Tesla compared it against the average accident rate for all miles driven, when in reality, for the types of situations where Autopilot was actually used, human drivers had fewer accidents on average (a toy illustration of this is sketched below).
2) Waymo has a low accident rate because its vehicles are operating very conservatively right now. If it could scale up and still operate the same way, that wouldn't be an issue, but its vehicles are already causing problems by getting stuck which suggests that it wouldn't be practical for them to operate a significantly larger number of vehicles the way they are operating right now. To scale up, they would need to drive less conservatively, and in that case they very likely would not be safe in their current state. IMO, this means it isn't really meaningful to say that they're "already safer" than human drivers.
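A toy illustration of the exposure-matching point in 1): the numbers and road-type strata below are entirely made up, but they show how comparing an autonomous system against the all-miles human average can flatter it when it mostly drives the easy miles.

    # Sketch of an exposure-matched ("apples to apples") comparison.
    # All rates and strata are hypothetical; the point is that rates must be
    # compared within the same driving context, weighted by where the
    # autonomous system actually drives.

    # crashes per million miles, by driving context (made-up values)
    human_rates = {"divided_highway": 0.5, "urban_arterial": 3.0, "residential": 2.0}
    av_rates    = {"divided_highway": 0.7, "urban_arterial": 2.5, "residential": 1.5}

    # share of the autonomous system's miles in each context (made-up values)
    av_mileage_share = {"divided_highway": 0.7, "urban_arterial": 0.2, "residential": 0.1}

    # Naive comparison: the human average over ALL miles, mixing in contexts
    # the autonomous system rarely drives in.
    human_all_miles_avg = sum(human_rates.values()) / len(human_rates)

    # Exposure-matched comparison: reweight the human rates by the autonomous
    # system's own mileage mix, so both sides face the same mix of situations.
    human_matched = sum(human_rates[k] * av_mileage_share[k] for k in av_mileage_share)
    av_overall    = sum(av_rates[k] * av_mileage_share[k] for k in av_mileage_share)

    print(f"human, all-miles average:      {human_all_miles_avg:.2f} crashes / M miles")  # 1.83
    print(f"human, matched to AV exposure: {human_matched:.2f}")                          # 1.15
    print(f"AV, on its own mileage mix:    {av_overall:.2f}")                             # 1.14

With these invented numbers the system looks far safer than the all-miles average, but is roughly a wash once matched to its own exposure, which is the kind of gap the Autopilot comparisons ran into.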
This. Once autonomous cars can safely and confidently steer through a crowded IKEA parking lot on a rainy Saturday morning, we can say they are ready.
Since human drivers can do this, what's missing from autonomous systems to perform here as well? More training data? Different optical or radar systems? Assuming our ability to drive isn't some God given gift, there shouldn't be a hard limit on what autonomous cars can do, yes?
They don't have enough data to brute force the problem with machine learning. The dataset available will never be large enough to allow ML autonomous cars to solve every situation a human driver can. Another AI breakthrough is needed.
LLMs are so good because they have almost the entire digitized text output of humanity available for training. They may not always provide factually correct information, but it is dished out in near perfect English (or whichever language you prefer).
Personally I think that if we want to recreate a human driver, full AGI is needed. But, in the right conditions, current autonomous cars will work well. They just need some external helpers, like being geo-fenced to a set of roads which have all the correct signposts and markings.
> The dataset available will never be large enough to allow ML autonomous cars to solve every situation a human driver can.
Not every human driver can solve every situation that the best human driver can solve. Also, the most dangerous situations happen at high speed, which humans aren't great at handling unless we've had a lot of training. The main reason I'm optimistic about autonomous cars is that a system like Waymo's will only get better and better over the years. And newly produced cars will be just as good as "experienced" cars.
The other problem is that even if you have a car that can do 95% of the driving and only needs a human 5% of the time...
...that means the human driver, who would normally get, say, 10k miles of practice every year, will only drive about 500 miles on average.
It really does need to be "car can do all of the driving", or else it will be putting people that have zero practice in those hard situations it can't handle.
Yes, I agree. And regardless of experience, people at some point get too old to drive, but don't always want to accept that fact. Having cars that can do 100% of the driving would be great.
> Also, the most dangerous situations happen at high speed
This is a mantra from speed limit fans. There are a lot of dangerous situations happening at low speed (a bicycle coming from the left at an intersection, children playing).
I am sure they will do this eventually, but it is next level.
They would likely have to set up in a northern location and run their cars there for a while. They would also need to update their simulations to account for snow and icy roads, so more complex driving models and more challenging road detection.
Sometimes there is light snow over the whole road and there are no markings. You just have to position yourself relative to street signs or ditches where you think your lane is.
Also we then get to some more interesting challenges. Certain snow conditions can have rather bad traction like very muddy roads do. And self-driving should be able to navigate these better than regular drivers.
How can the car determine the road condition, especially if it varies (snow here vs ice under the snow in this spot vs that's a drift)? It seems like a lot of this is based on driver experience whereas traction controls today are based on things like differential wheel speed comparison. It won't determine a bad situation until you're in it.
Autonomous vehicles are good at making routine repeatable decisions based on clear data. I'm not sure how we can sense and input this sort of data clearly enough.
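For what it's worth, the differential-wheel-speed style of detection mentioned above is reactive by construction: it only fires once the wheel is already slipping. A minimal sketch (thresholds and function names are made up):

    def slip_ratio(wheel_speed_mps: float, ground_speed_mps: float) -> float:
        """Longitudinal slip: ~0 means pure rolling, ~1 means the wheel spins freely."""
        if ground_speed_mps < 0.1:   # avoid dividing by ~zero at a standstill
            return 0.0
        return (wheel_speed_mps - ground_speed_mps) / ground_speed_mps

    def traction_event(wheel_speed_mps: float, ground_speed_mps: float,
                       threshold: float = 0.15) -> bool:
        """True once slip exceeds the threshold, i.e. only after grip is already lost."""
        return abs(slip_ratio(wheel_speed_mps, ground_speed_mps)) > threshold

    # ground speed 20 m/s, a driven wheel spinning at 24 m/s -> slip ratio 0.2
    print(traction_event(24.0, 20.0))   # True, but only after the fact

Anticipating ice before the wheel slips would need different inputs entirely (temperature, surface appearance, reports from other vehicles), which is the gap being pointed at here.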
Eventually these autonomous vehicles should share driving condition data in real-time-ish. If one car finds a stretch of road slippery, it should basically mark it on the map as a bad location, and then other cars can be careful.
I think weather conditions can also predict road conditions pretty well. And if you combine that with historical data on where things were slippery, I bet you can create a pretty good model of what conditions to expect across a driving environment.
Although in Canada, sometimes all roads in a city are just horrid - dozens of cars in ditches or hitting each other because no one can stop. And maybe in those cases, autonomous vehicles should refuse to drive until it is improved.
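A rough sketch of what that fleet-shared condition map could look like. Everything here is invented for illustration (segment IDs, scores, thresholds), including a crude "refuse to drive" rule for the kind of city-wide mess described above:

    from collections import defaultdict

    class RoadConditionMap:
        def __init__(self, weather_prior: float = 0.0):
            # weather_prior: baseline slipperiness guess from the forecast (0..1)
            self.weather_prior = weather_prior
            self.reports = defaultdict(list)   # segment_id -> list of slip scores (0..1)

        def report(self, segment_id: str, slip_score: float) -> None:
            """A car that detected wheel slip on a segment uploads a score."""
            self.reports[segment_id].append(slip_score)

        def risk(self, segment_id: str) -> float:
            """Blend the weather prior with what the fleet has actually measured."""
            observed = self.reports[segment_id]
            if not observed:
                return self.weather_prior
            return max(self.weather_prior, sum(observed) / len(observed))

        def should_drive(self, route_segments: list, refuse_above: float = 0.8) -> bool:
            """Decline the trip if any segment on the route looks bad enough."""
            return all(self.risk(s) < refuse_above for s in route_segments)

    # One car reports ice on a hill; later cars slow down there or avoid it,
    # and if the whole route looks like that, the fleet just declines trips.
    cond = RoadConditionMap(weather_prior=0.3)
    cond.report("elm_st_hill", 0.9)
    print(cond.risk("elm_st_hill"))                        # 0.9
    print(cond.should_drive(["main_st", "elm_st_hill"]))   # False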
Where I live, the sharing is done at a meta level. I live in the Sierra Nevada mountains of California and we often get experienced snow drivers who get stuck immediately because they aren't used to driving on 25% grades that experience frequent freeze / thaw cycles (and therefore generate ice). The issues here are predictable and don't even require real-time sharing.
When someone new moves into town, someone will typically explain to them to watch for where the sun hits the ground in winter. Where that happens on a steep slope, avoid driving there when there is snow. Before they know to do this, a human will attempt to climb that hill, not make progress because of the ice, back down, and try another route. This is a rural area without much traffic, so conditions might change significantly between "real-time" updates.
So, we have human drivers "zooming out" in a couple of ways... in time/context (observing conditions that lead to certain routes becoming high risk) and in problem resolution (try another route). This is quite different from what is currently being tried algorithmically.
You need more training data and more sensors and completely different rules that are often opposite the rules for good conditions. The things you really need for snow/ice that are hard for computers are anticipation and flexible rules.
I was talking about general-purpose autonomous driving. Special cases of autonomous driving have been in use for a while, like trains (the Yurikamome in Tokyo started operation in 1995), buses driving on designated routes, Waymo cars operating in certain areas, etc.
An interesting, actually useful stat would be: for insurance companies covering Tesla owners, the reported accident rates on that pool of owners before and after Tesla ownership.
I raise this because the pool of people buying a Tesla is different from the full driver pool. There is likely serious sampling bias in this pool of drivers due to demographics, geography, etc.
Insurance seems really weird to me with self driving cars. It seems like the driver being insured is Tesla Motor Co not Bill Smith. Bill just owns the car, he doesn't drive it.
I've noticed a lot of automation legally still requires a warm body to take the blame. Seems like eating your cake and having it too.
Ignore Tesla. Yes Elon is going to keep saying they're selling a "Self driving" car but Elon's lawyers are going to keep telling anybody who gets into an accident they were driving because Elon isn't really selling a self-driving car.
Waymo are driving, there is no warm body. In accidents where a Waymo self-driving taxi struck somebody, that's on Waymo; you're just a passenger, the same as if you were in a New York cab, or an Uber, or if an A320 you're sat in smacks into a post because the pilots took a corner too close. You're not the one who needs insurance, because you're not driving, Waymo is.
Agree that Tesla isn't self driving in the technical sense.
However, given public perception and the number of units... Tesla is what 99% to 99.9% of consumers will interact with in the "self driving" space.
Even Waymo still has some safety drivers per the article linked.
And if we are going to constrain our analysis to self driving pilot programs, then we need to constrain the comparison to the same geofenced area.
Arguably there are not a lot of fatal accidents per 1M miles on the local roads of Phoenix/SanFran by human drivers either.
The third important point about not comparing averages is that there are many human drivers who are outliers: people driving on a suspended license, intoxicated drivers (over half of injured drivers are intoxicated), people who should never have had a license to begin with (either unable to drive safely or in need of more training) thanks to our joke of a testing regime, etc.
I would much rather see comparisons to safe/prudent drivers vs the average. Try to sell me autopilot as being as safe or marginally better than the average driver and I'll never use that shit - the average driver sucks.
I read this as kind of goalpost-movey. If we can, with self driving cars, clip some of those outliers, we will have outsize impact on the total safety picture. Average skill is what we deal with all the time (On average. ;) so better than that ought to be sufficient.
That long tail of outlier clipping works on an individual human basis, too. How many accidents happen because an individual careful driver is, momentarily, inattentive?
"If we can, with self driving cars, clip some of those outliers, we will have outsize impact on the total safety picture."
It depends. I doubt it, because the distribution needs to be considered. Replacing average drivers with an average system yields no benefit. If it's applied to both tails equally, then you haven't moved the average either, just made the pool more homogeneous. You'd have to target the highest-risk groups to make an impact, and many in those groups can't afford or won't adopt the tech - the elderly, low-income (or less-educated, depending on the study), or mostly young males.
The better ways to target only the problematic tail are stricter testing, better education, and possibly permanent interlock devices for DUI offenders.
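A toy calculation of that distribution point, with invented population shares and crash rates: swapping an average system in for average drivers leaves the fleet-wide rate untouched, while replacing the high-risk tail actually moves it.

    drivers = {             # share of drivers, crashes per million miles (made up)
        "high_risk": (0.10, 8.0),   # e.g. impaired or very inexperienced drivers
        "average":   (0.70, 2.0),
        "low_risk":  (0.20, 1.0),
    }
    AV_RATE = 2.0           # assume the autonomous system is merely average

    def fleet_rate(replaced_group=None):
        return sum(share * (AV_RATE if group == replaced_group else rate)
                   for group, (share, rate) in drivers.items())

    print(f"status quo:             {fleet_rate():.2f}")            # 2.40
    print(f"replace average group:  {fleet_rate('average'):.2f}")   # 2.40 - no change
    print(f"replace high-risk tail: {fleet_rate('high_risk'):.2f}") # 1.80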
"How many accidents happen because an individual careful driver is, momentaritly, inanttentive?"
My guess would be probably about as often as autonomous systems screw up in novel environments.
But for these types of analyses you have to take the average because autonomous driving will improve the safety of these “average” drivers quite a bit.
I get what you’re saying though. I’ve been driving for 20 years, have had a single accident (rear ended at a stop light, not really my fault), no moving violations, and have driven about ~300,000 miles total (sounds like a lot but I was a full time lyft driver for a bit).
I’m positive I’m an outlier but I’d also trust myself a lot more than some of the current autonomous vehicles. I’d still assume with AI assistance or partial autonomy, I’d be safer on average though.
If you’re not a reckless male under the age of 25 or a persistently-intoxicated driver, it’s perfectly reasonable to want an autonomous system to be compared to YOUR safe driving profile. Reckless young men and drunks are extreme outliers of safety and they bring the average safety level way down. Being safer than average means nothing. Needs to be safer than an adult driver with clean history and cheap insurance.
Of course it means something. If every reckless drunk <25 year old was replaced by autonomous driving YOUR driving experience would automatically become safer, do you see why?
But that's not the paradigm unfolding here. The reckless drivers will shun self driving because they want to show off and have fun. The drunk driver part could be reduced after sufficient rollout, but we'll have to see how that plays out - the people who always use it would use it when drinking too, but you'll still have people going "I'm fine to drive" and not using it if it's not a habit.
Yes, that is the paradigm unfolding here. That is indeed one of the many critical attractions of self driving vehicles.
>The reckless drivers will shun self driving because they want to show off and have fun.
First, this is your unsourced head canon. People with drinking problems, cognitive impairment (both temporary and permanent), etc seem likely to be a far bigger percentage.
But it doesn't matter, because they won't get the choice, which again is one of the core aspects of self driving vehicles. Right now, human driving in America is a quasi-right because it's currently intrinsically tied to "personal arbitrary time/space mechanized transportation". I say "quasi-right" because while legally it's not formally a codified right or in the Constitution or something, as a practical matter, if most of the population wasn't able to drive, the country would fall apart. And that is recognized by government, both executive/legislative and judicial. Losing driving rights would be a major burden for tens of millions of people and would disrupt their ability to get or retain jobs, in turn causing other societal expense. So traditionally government is quite hesitant to wield that stick, and almost never does so preemptively.
Self-driving though would decouple human driving from someone having their own car and the ability to have it take them where they want, when they want, like right now. That in turn would completely change the political dynamics. Driving would truly become a privilege, and getting a license could become much more like class C licenses or professional emergency responder licenses or the like are. Much, much more serious training and testing, continuing ed requirements as with other professions, and much easier to lose. Because unlike right now, losing one's driver's license won't mean losing the ability to use a car. Anyone who acts recklessly at all, even if there is no damage of any kind, can just preemptively have their license rescinded until, at a minimum, remediation is completed. Because driving will just be a hobby.
Except that’s not the proposition for early adopters. It’s the wealthier, safer driver replacing their driving with a (clearly) unsafe, inferior autonomous system. And the 18 year old drag racers continuing to drive rusted out Civics, because that’s what they can afford. So it’s a net decrease in safety in the immediate term, with the safe drivers expected to be the primary Guinea pigs in the experiment, built on a hope and dream that the tech will actually deliver eventually.
Imagine the following scenario: your hometown is encircled by forest fires, the only way out is a single road. That road sees some abnormal conditions: smoke, vehicles on the side, maybe burning debris on the road.
Would you trust a self driving vehicle to attempt to drive your family into safety?
If not, then you probably got some food for thought about averages.
> To scale up, they would need to drive less conservatively
Can you quantify this? I think that self-driving cars should drive conservatively. Human drivers who are aggressive or risk taking are idiots even if they get to where they are going on time.
Driverless cars should drive like public transport buses (at least around here) / school buses. Conservative, slower than you would like and safe.
They drive more conservatively than the most conservative human driver, mainly because they don’t try at all to cooperate with other vehicles. It’s clearly due to technical reasons, not because they’re being more safe.
For example, if they need to make a lane change for a turn on a congested street, they will wait for someone to let them in rather than “pushing” their way in like a human driver (especially a bus driver) would. When that inevitably doesn’t happen, they are forced to go around the block and try again.
That exact behavior happened during the “Uber” ride I took on a Waymo, and it is possible they have improved since then.
I agree, but the specifics get tricky. It is possible to drive so conservatively that it’s unpredictable and therefore dangerous.
I’ve known some terrible conservative drivers in my time; I was a passenger with someone who stopped abruptly in an intersection despite having a green light because they were afraid someone else was going to run the red (while it’s fair to say the person who rear-ended us was at fault, it was an accident that would not happen with a more normal driver).
Maybe I’m nit picking on the difference between conservative and unsurprising.
Yes, conservative should probably mean unsurprising to some degree. Sudden unexpected stops are not truly conservative. Yeah, I can see how the definition is complex.
It depends how conservative you mean by conservative.
If you compare a traffic light intersection where there is a clear moment at which you should stop or continue with a roundabout where you need to make a judgement call; being overly conservative would mean stopping at the roundabout until there is 0 traffic.
There was a discussion on this in ep 454 of Freakonomics radio.
In the same vein, the average car on US roads is 12 years old. So the average driver is dealing with the safety considerations of a vehicle that isn't a perfectly maintained current model.
Maybe making the point of comparison with professional drivers in fleet cars would be more accurate, but I doubt that data is available.
> If it could scale up and still operate the same way...
Once they scale up, they will be encountering more other Waymo cars on the roads, eliminating a lot of the accidents where someone else crashed into a Waymo or made an unpredictable move. So then they could afford to drive less conservatively.
We are pretty sure that our roads would be much safer if all cars were self-driving, but that cannot happen overnight and there will be an interim phase when robo-cars have to mix with human drivers, which is the hard engineering challenge.
>for the types of situations where autopilot was used, human drivers actually had fewer accidents on average.
Do you have any source for that? As far as I'm aware, there simply isn't enough public data released to actually make an unbiased comparison here. So whichever side of this you're on, you can always twist the data to confirm your point of view. Only when Tesla and all other carmakers begin to hand out all of their data, we will get a useful comparison.
This definitely feels like a very hopeful and intentionally narrow analysis. Like, we're saying that Waymo's accidents were primarily low-speed collisions, but the cars were also only allowed to go 35mph max and drive in off-peak hours when accidents are far less likely to begin with. If we're doing a comparative analysis, then we'd need to compare to humans driving unimpaired during off hours at 35mph at most. And it's also definitely worth considering how predictable and consistent the failures are; a human might miss someone backing out and hit them once because they are distracted. A computer missing the same means that there's a set of conditions that will trigger failure every single time they occur. So if those conditions are repeated heavily in any place, then the likelihood of catastrophic failure increases substantially.
For sure there is some cherry picking going on. And for sure it depends. And for sure we're missing the big numbers to do a meaningful statistic analysis at this point.
What I think is most interesting about your way of looking at this is whether we'll find those sub-domains where autonomous driving makes sense versus where manual driving will remain safer. And what those domains will be.
I still wonder if we end up in a place--perhaps for a significant period--where you end up with AVs on a subset of roads perhaps in a subset of conditions. I don't live in a city so I have different use cases than those who do live in a city may have, but being able to be driven on the highway in most conditions seems like a much easier task than the general case, applies to conditions where an accident can be serious, and would actually be really useful for a lot of people.
Also you can do the same thing horizontally, with more passengers and over greater distances: GoA 3 services date from the 1980s, and GoA 4 services came a bit later.
[Grade of Automation 3 services don't need a human "driver" but they're not safe enough that we can just leave them to be used unattended like an elevator, there's a trained member of staff somewhere, probably helping tourists or checking tickets or something - the GoA 4 services are unattended, like an elevator there are humans but they're far away in a control center and you're safe enough while you wait for a human to come to where you are and fix whatever extraordinary problem the automation couldn't handle e.g. there's no electricity so you can't go anywhere.]
So Waymo is pretty decent - generally, if Waymo were equal to human performance, the number of times it caused an accident would be similar to the number of times other drivers caused accidents it was involved in, but that isn't the case. Human drivers are hitting Waymo cars way more.
"First, other vehicles ran into Waymos 28 times, compared to just four times a Waymo ran into another vehicle (and Waymo says its vehicle got cut off in two of these cases). Second, Waymo was only involved in three or four “serious” crashes, and none of them appear to have been Waymo’s fault."
This suggests that if we were to replace all humans with Waymo, at least in the nice temperate snow-less/ice-less cities they are currently driving in, accidents would possibly be cut by 4x.
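A crude reconstruction of where that ~4x could come from, using just the two counts quoted above. It assumes one party is at fault per two-car collision and comparable exposure on both sides, which is obviously a simplification:

    hit_by_humans   = 28    # other vehicles ran into Waymos
    caused_by_waymo = 4     # Waymo ran into another vehicle

    total = hit_by_humans + caused_by_waymo      # 32 two-car collisions
    waymo_fault_share = caused_by_waymo / total  # 0.125

    # If both vehicles in every interaction drove like a Waymo, each side
    # would contribute at the Waymo rate instead of one at the human rate.
    expected_if_all_waymo = 2 * caused_by_waymo  # 8
    reduction = total / expected_if_all_waymo    # 4.0

    print(f"Waymo at-fault share: {waymo_fault_share:.1%}")                    # 12.5%
    print(f"rough reduction if every car drove like Waymo: {reduction:.1f}x")  # 4.0x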
I’d be suspect about how little the Waymos are at fault. Erratic or unpredictable driving behavior can cause accidents even if it would technically be ruled as the other driver’s “fault”.
Also, replacing all human drivers with Waymos isn’t possible. Waymos can’t do everything drivers need to do and don’t operate in all conditions, in all places, at all times. Until they do, these stats are not really meaningful.
This also doesn't account for non-safety related risks of self-driving cars, relative to human drivers.
For example, it's quite common for an autonomous vehicle to slow down or stop in a way that disrupts traffic. This isn't "unsafe", per se, and in fact it's probably the car's safety protocols functioning correctly.
But (depending how frequently it happens) it's still a strong argument against deploying these things on public roads.
We need vehicles that are not only safer than human drivers, but less disruptive and more competent overall.
Of course there is a limit. But if the self driving vehicles are being a bit slower than human-driven cars, that’s a very small price to pay for the huge safety benefits.
In fact, as a pedestrian and cyclist, I could not care less.
>But if the self driving vehicles are being a bit slower than human-driven cars, that’s a very small price to pay for the huge safety benefits.
The safety benefit doesn't even have to factor in, or only somewhat. Part of the drive for speed with human driven vehicles is that for most of us it represents pure waste in our lives. The two hours I drive to a job isn't some uplifting activity, just robotic drudgery. Yeah I can load up a podcast or something and try to get something out of the time (at the cost of slightly more distraction perhaps!) but that's limited. So of course I'm motivated to have that be as short as possible. Shaving off 15 minutes on a trip made a hundred times a year will add up to weeks of my life back to me, although of course at a higher risk of losing my life entirely perhaps?
Whereas if the robotic aspect was taken care of by a computer instead, I'd be free to make far more use of that time. Or even just sleep, which is still useful. If the car takes an extra half hour, that's not necessarily a problem in the slightest because it's not a wasted half hour. Being conservative will be much easier purely from a self-interested perspective. Avoiding all the other issues is very good as well but I don't think it'd require some self-sacrifice either.
>In fact, as a pedestrian and cyclist, I could not care less.
The issue isn’t that they are too slow, it is that they are too erratic. Slamming the brakes for no reason is unexpected behavior, which makes it more likely that a crash will occur.
> as a pedestrian and cyclist, I could not care less.
I mean, if you don't use something, you don't care about downsides that apply to it. That's not a great argument, unless you also only want to convince other people who don't use cars to not use Waymos.
The article is not asserting that self-driving cars are always better than humans, but simply that in controlled conditions in SF/Phoenix, they seem to be doing a better job than humans. This is already pretty amazing.
Yes, many places are harder to drive in than SF and Phoenix, but they're harder for humans too, and Waymo & co will continue to get better as they get more training data, better sensors, and bugs get fixed.
1976 was the last time it snowed in San Francisco. I wonder if the Waymo software has logic for precipitation/temperature i.e. shut it down. I'm guessing it requires human intervention although hopefully in a timely manner.
Depends on the conditions. Freshly paved interstate with clear lane markers, light traffic, daytime overcast skies with no rain? Maybe. Driving through a construction zone with painted-over lane markers at night, traffic cones knocked into the road, power out, and a police officer directing traffic at an intersection? No.
For mechanical reasons as well. My dad was a resident engineer at the Chrysler automatic transmission plant. He would send early production runs of transmissions to cab companies in San Francisco for stress testing...
If you take snow/ice out of the equation--admittedly a big if--it's probably fair to say that at least many parts of SF are relatively challenging for a person to drive in by US city standards.
There are cities that are not in California, and some of them get snow.
Self-driving cars don't have to drive in snowy conditions; indeed, they don't have to drive in any conditions at all. It'd be nice if they could though.
Ever walk around in a crowd in a foreign country? I mean one quite distant and distinct from your own. You bump into people, cut them off, and get cut off frequently because you have a different rhythm and different patterns than everyone else. It doesn't matter how conscientious you were in crowds back home.
Self-driving cars are the foreigners. In isolation they may be safer and more predictable. But they have driving patterns that "the locals" simply don't. There are bound to be more bumps and cutoffs as these two groups interact, only at high speeds with tons of weight.
It may not matter how conscientious their algorithms are if the locals keep bumping into them or get continually inconvenienced. The locals aren't going anywhere, for better or for worse. FSD needs to drive like a local, only better. That's a much harder nut for FSD to crack.
No because the cars are self-selected to only drive on roads that the companies consider safe to begin with. Self driving tech currently operates on probably less than 1% of American roads, and that's a fraction of roads on the planet.
>Waymo and Cruise have driven a combined total of 8 million driverless miles
For context, the average American drives about 13k miles per year, so this is about as much data as a village of roughly 600 people produces annually.
The actual experiment that would produce relevant data: A substantial mass of driverless cars on a random set of roads under random conditions, including with a high enough concentration that the cars need to interact with each other the way humans need to interact with other drivers. (obviously we know how this would end, which is why nobody attempts it)
Over ten years ago, this article pointed out that "The Ethics of Saving Lives With Autonomous Cars Is Far Murkier Than You Think".[0]
>"Let’s say that autonomous cars slash overall traffic-fatality rates by half. So instead of 32,000 drivers, passengers, and pedestrians killed every year, robotic vehicles save 16,000 lives per year and prevent many more injuries.
>"But here’s the thing. Those 16,000 lives are unlikely to all be the same ones lost in an alternate world without robot cars. When we say autonomous cars can slash fatality rates by half, we really mean that they can save a net total of 16,000 lives a year: for example, saving 20,000 people but still being implicated in 4,000 new deaths."
So they say 1 crash every 60k miles (or about 5 years worth of driving). I really doubt humans have one crash every 5 years on average.
Then they say 1 fatal crash every 100 million miles.... And then make it sound like we have to have self driven cars drive 100 million miles before we can tell if they are safer.
> So they say 1 crash every 60k miles (or about 5 years worth of driving). I really doubt humans have one crash every 5 years on average.
Oh, not humans. Americans. Most developed countries are much safer. America has found a lot of exciting ways to make this worse, whether it's choosing more dangerous technology, not prioritising safety in design, or just obvious stuff like: oops, we made it crucial for everybody to drive, so now we have to let everybody drive even though that's unsafe.
1) You would need self-driven cars to drive over 100 million miles to validate that, assuming that there have been no deaths from full 'self driving' (i.e. Cruise/Waymo rather than Tesla, which isn't level 4) - rough numbers below.
2) Depends what you call a crash - 1 impact per 5 years might not be entirely wrong... According to this link:
> The car insurance industry estimates that the average driver will file a claim for a collision about once every 17.9 years.
This matches me - I've had 1 crash in 15 years of driving which resulted in a claim (someone drove into the side of me), but there have been at least 2 other impacts which did not result in a claim (one person bumped into the back of my car on a motorway but caused no damage, and there was one incident where I accidentally scraped a car while parking and agreed to pay the driver for repainting directly, without going through insurance).
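On point 1), a quick back-of-the-envelope using the statistical "rule of three": if you observe zero events in N miles, the ~95% upper confidence bound on the event rate is roughly 3/N, so matching the human benchmark quoted upthread (about 1 fatal crash per 100 million miles) takes on the order of 300 million fatality-free miles.

    human_fatal_rate = 1 / 100_000_000   # fatal crashes per mile (benchmark quoted upthread)

    # Rule of three: zero events in N miles -> ~95% upper bound on the rate is ~3/N.
    # To claim the self-driving rate is no worse than the human benchmark:
    miles_needed = 3 / human_fatal_rate
    print(f"{miles_needed:,.0f} fatality-free miles needed")   # 300,000,000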
There is a big gap between scraping a shopping cart and a head-on collision. This is why Tesla reports 'miles between accidents resulting in airbag deployment.'
The national average is 600k miles. For FSD (which has over 300M miles driven) it is 3.2M miles between accidents.
while this methodology makes sense, it's hard to get the thought out of my head of the Tesla self driving car intentionally not deploying the airbag because it will bring down its stats
Then get into your head that Tesla engineers use the fact that the entire fleet has video and telemetry data in order to fine tune the use of airbags for the exact opposite purpose: to reduce injury caused by both excessive and insufficient airbag deployment.
Yeah, it sounds like they are using weird averages and applying them to a small fleet of self driving cars and saying they are safer because they have had fewer accidents than the millions of people on the road.
Two things: 1) most crashes are very minor (e.g. getting into or out of a parking space), and 2) most crashes are from people who are some combination of under 20, over 80, drunk, or just atrocious at driving. Most people's crash rate is a lot lower.
I think the big issue with self driving cars isn't that anybody doubts that they are safer in standard situations.
The issue is, that I can totally see a self driving car fuck a rare obscure situation up in the worst way — where even the worst drivers would probably do okay.
E.g. let's say you are in a tunnel, there is a fire, and your car blocks the fire truck. The normal line of thinking would be to throw all existing rules of traffic out of the window and do totally illegal maneuvers that might even damage your own or other people's cars. A self-driving car might be able to do that, but it cannot make the judgement needed to know when to throw all the rules out of the window.
Now in that example the driver (if they exist) could take over, but there are similar examples where there is less time to decide and/or a switch to manual might not be feasible. Think of all the situations where multiple dangers have to be weighed against each other, for example a rock slide where a street is partly blocked (enough for the AI to stop) but rock is still coming down the mountain. Or a situation where a criminal gang blocks the road in order to make you stop and rob you — something that could become an easy game once self-driving is widespread enough.
The problem with self driving cars is that even if you statistically manage to be safer over the typical journey today, people still wanna know that it keeps them safe in very atypical conditions as well. And that as of now is not the case.
>The issue is, that I can totally see a self driving car fuck a rare obscure situation up in the worst way — where even the worst drivers would probably do okay.
So? Unlike with humans, self driving networks of cars can learn from a rare obscure situation, then the entire network can be updated to handle it going forward, forever. Whereas "the worst drivers" (drunk speeder road ragers say) will just cheerfully plow into students in front of their schools yet again.
>The problem with self driving cars is that even if you statistically manage to be safer over the typical journey today, people still wanna know that it keeps them safe in very atypical conditions as well.
Meh. This is just a typical argument from incredulity/unfamiliarity. People have zero issue accepting significant risk given solid benefits and most people's driving most of the time is short and boring. That'll be Good Enough to get the ball rolling, and it'll only ever improve.
>And that as of now is not the case.
Where is the suggestion in the article that the current cars are ready to handle absolutely everything everywhere today? It's about incremental gains and safety in one particular very high population location.
You are missing my point here: Whether the lack of confidence outlined in my post is rational or not wasn't the point of my post at all. The lack of confidence (even if it is irrational) is something that needs to be overcome before the technology can receive wider adoption.
This is not an argument that can easily be wiped away by a class of people who want to believe in new technology (and profit from its potential adoption). After all, from the standpoint of most non-technical people, the past decade has been one where everything they knew got shittier and less reliable while claiming to be "smart". It even got to the point where you need subscriptions for basic features of your damn car, to the benefit of exactly nobody other than the shareholders.
In that climate taking the leap to trust a new technology is something you need to be willing to make. And many people aren't buying the promises. Again, whether they are right or wrong isn't the point here, but some technology is better suited for incremental safety improvements than others. E.g. nobody gives a damn about roombas falling down stairs, but if it is a damn car with their family in it they are more careful about their decision.
There are also psychological aspects that play into this. Given the choice:
A) travel inside a box that has a 10% chance to kill you and your family without you being able to influence the outcome at all
B) travel inside a box that has a 15% chance to kill you and your family, but if it happens you are at fault and you can also prevent it from happening
Most people would choose B just because the agency they have gives them a better feeling. Now granted, when we reduce the probability of A and/or increase the probability of B, there will come a point where most people will flip. In the case of self driving cars this is not about numbers, it is about perception. And the point of my post was that the perception isn't there yet.
Curious about a few things with driverless vehicles: How does insurance work? How are the rates set (high due to ambiguity, for now)? How do you exchange information? How do you clear the roadway while also not fleeing the scene? How does the vehicle know if it makes contact at very low speed (no real shock of impact, but enough to scratch, dent, or run over a foot, etc.)?
WallStreetMillenial on Youtube just had a nice explanation why comparing risk for self driving cars and human drivers is not that simple. At least for Tesla, the answer is "hell no".
Seeing how there are tons of young males in big trucks driving around asserting territorial dominance, I suppose self-driving cars that are not really that good are indeed safer, comparatively speaking.
This question seems very dependent on the country in question. The article seems to be about the US. I suspect it is very different in a country which features well-designed road layouts and driver training, like the UK.
Downvoted because the article makes a really good effort to analyze all the data and adds other considerations. A simple 'no' without arguments does not add anything to the discussion.
> With all that said, it seems that Waymo cars get into serious crashes at a significantly lower rate than human-driven cars.
At the end of the article, the author makes the appeal that self-driving development should continue and increase on public roads because the companies can't develop their products otherwise.
The companies can absolutely develop their products outside of public streets. A few thousand acres and some purpose built infrastructure would do it. If that's not cost effective, well that's their problem.
If you invest (significantly) into a test suite that realistically mirrors the real world, that argument doesn't fly. Think Chaos Monkey squared. But it's neither cheap nor fast. Sure, some rare edge cases are probably impossible to model even with great care. But that's not the stage self driving is at, they struggle with pretty common and predictable problems.
You’re talking out of both sides of your mouth here. Even the greatest test suite on the planet will miss edge cases in the real world. Self driving cars are already 99.999% successful and effectively perfect in simulation.