NHTSA’s full investigation into Tesla’s Autopilot shows 40% crash rate reduction (techcrunch.com)
808 points by fmihaila on Jan 19, 2017 | 314 comments



It's interesting how vague this is. There's an NTSB investigation still pending into a specific Tesla crash.[1] The goals are different. NHTSA asks "do we need to do a recall?" NTSB asks "exactly what, in detail, happened here?" NTSB mostly does air crashes, but occasionally they do an auto crash with unusual properties. Here's the NTSB report for the Midland, TX crash between a train and a parade float.[2] That has detailed measurements of everything. They even brought in a train and a truck to reconstruct the accident positions.

It took a combination of problems to cause that crash. The police lieutenant who had informed the railroad of the parade in previous years had retired, and his replacement didn't do it. The police marshalling the parade let it go through red lights. They were unaware that the traffic light near the railroad crossing was tied in to the crossing gates and signals. That's done to clear traffic from the tracks when a train is approaching before the gates go down. So ignoring the traffic signal took away 10 seconds of warning time. The driver thought the police had taken care of safety issues and was looking backwards at the trailer he was pulling, not sideways along the track. People at the parade were using air horns which sounded like a train horn, so the driver didn't notice the real train horn. That's what an NTSB investigation digs up. Those are worth reading to see how to analyze a failure.

[1] https://www.ntsb.gov/investigations/AccidentReports/Pages/HW...

[2] https://www.ntsb.gov/investigations/AccidentReports/Pages/HA...


Here's a reconstruction video they made to accompany the parade collision report: https://youtu.be/tMsjJWJFBbA


The only thing that matters is "NHTSA notes that crash rates involving Tesla cars have dropped by almost 40 percent since the wide introduction of Autopilot"

Even if autopilot was faulty in some way, if more people were living than dying - does it matter?


Yes. Otherwise, you'd hear about coal pollution all the time, but nuclear meltdowns wouldn't make the news (unless they became much more frequent).

Similarly, 9/11 would be a footnote compared to sketchy pain killer drug trials. The black lives matter movement would focus more on sentencing disparities and for-profit prison conditions than police shootings, etc, etc.

The reason it matters here is that it's presumably easy for Tesla to fix any bugs that are killing people.


You are making comparisons between unrelated domains to make your point, which isn't really fair. Nonetheless, I think what you really want to say is that there is always room for improvement, and we should strive for it even if we've already made massive gains.


Simply put, saving lives should not stop one from saving even more lives, if possible.


This assumes that the crashes avoided by Autopilot are of similar magnitude/deadliness as the crashes facilitated (for lack of a better term) by Autopilot.

That is, Autopilot makes certain kinds of accidents more likely — those in which the human driver is not monitoring the vehicle carefully enough, Autopilot misses something critical, and an accident results. It is possible, though not necessarily the case, that these accidents tend to be more serious than accidents that Autopilot prevents, such as lane-changing bumps or other less-fatal accidents.

To be clear, I'm not saying that evidence has been presented that the type of accidents prevented are less important than the type facilitated. I'm just pointing out that not all accidents are equal, and that there may be correlation/causation that makes the overall calculus more complicated.


Well, yeah. Imagine there's a heart surgery with only a 20% success rate. So every time 100 people go in for that surgery, only 20 survive.

Now, someone makes a robot that has a 100% success rate, but every 100 operations it has a glitch in the software, where it stabs and kills the patient instead. It has been calculated that on average, using the robot kills only 1-2 out of 100 people going in for surgery, compared to 80 out of 100 with the old method. The robot is clearly 40 to 80 times better than the manual procedure, so why not keep using it?

The answer seems to be: because the robot killing people is not an unavoidable outcome. It can be fixed. It's a software problem, a mistake that costs someone their life. Similarly, an auto-driving Tesla may be killing fewer people on average than manual drivers, but that doesn't mean the software problems that cause those deaths don't matter.
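A quick back-of-envelope check of the hypothetical numbers above (none of this is real data, just the thought experiment's figures):

    # Back-of-envelope check of the thought experiment -- not real data.
    patients = 100
    manual_deaths = 80         # 20% success rate for the manual procedure
    robot_deaths = (1, 2)      # "1-2 out of 100" deaths from the glitch

    for d in robot_deaths:
        print(f"robot kills {d}/{patients}: ~{manual_deaths / d:.0f}x better")
    # -> ~80x and ~40x better, versus 80/100 deaths with manual surgery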


Step 1: don't recall the robot surgeon. While it does have a fatal bug, recalling it would mean roughly 78-79 more deaths per 100 patients. Oops.

Step 2: Correct the bug, so the robot stops killing patients.

The Tesla autopilot may be in a similar situation. Even if there are a number of fatality-inducing bugs, disabling it would be even worse. That said, a 40% crash reduction is not enough. I want to see 75%, 90%, and more.


You are omitting the PR angle: "I DON'T CARE THAT FAR MORE PEOPLE WOULD DIE OTHERWISE - BAN KILLER ROBOTS NOW!!!!"


That one is easily dealt with: "Our robot surgeon reduces the fatality rate by 98.5%. Our study of the remaining fatalities suggests we can reduce them further. We hope to reach a near-zero fatality rate within the year."

Don't even speak of the bug. Just state what matters: this stuff is better than what we had before, and it can (and will) get even better. This may prevent talk of killer robots.


This analogy doesn't work because you don't do manual heart surgery on yourself. Whether or not a robot does the surgery, the hospital's butt is on the line.

The biggest difference between an autopilot accident and a traditional accident is fault. Most accidents are driver error. Even if it drives accidents, overall, down (which it will), changing the responsibility for safe travel from mostly the individual to mostly the manufacturer has huge legal repercussions. Not to mention many people's discomfort with no longer being in control of their own safety, even if study after study shows that they shouldn't be.


https://en.wikipedia.org/wiki/Therac-25

The Therac-25 arguably saved way more people than it killed. However... it's one of the most infamous software bugs in history.

It's the same story you came up with above... except it's real, and it got pulled.

They did, however, improve since then.


Yes, it does. If it's faulty in some way, it should be fixed in future versions, and the results of the investigation should be public so that everyone in the industry can learn from it, just like in aviation.

Apart from that, more people living than dying could also imply that certain groups of people, maybe kids or bikers, could be at a higher risk of dying while the majority has a much lower risk, resulting in a net positive. So the acceptance of self-driving systems would also depend on the distribution of the risk.


This is a very simplistic way to look at it. We're in uncharted territory here. Did crash rates go down because driver error went down? Probably. So accidents for which the driver is liable go down a lot, and accidents for which the carmaker is liable go up a little. That last bit is still a very big deal.


Probably? Why do you think it likely that driver error would go down more than 40%?


NHTSA reports that 94% of all auto accidents involve driver error. Autopilot is a feature that reduces the number of decisions drivers make. It would be improbable for the two not to be extremely related.


>>Even if autopilot was faulty in some way, if more people were living than dying - does it matter?

If you let your friend drive you somewhere in your car because you think they're a better driver, is it their fault if they crash?

The reason could be anything - you're drunk and they're not, you're in a really congested, unfamiliar area near where they live, whatever. Believing they're statistically a better driver does not absolve them of liability. Someone is always to blame.


It depends who you ask. I'd bet that the widow of a crash victim would disagree...


Isn't this the same argument that could be used in the context of hospital-borne infections?

Autopilot will reduce the number of fatalities in the same way that going to the hospital when you are sick reduces the number of fatalities. However, some people who use Autopilot will die when they otherwise would not have, just like some people who go to the hospital will die from infection when they otherwise would not have if they hadn't gone.


I agree! It's a net positive, and I'm in favor of it.

But it doesn't mean that people dying from faults "don't matter". Every HAI-caused death is a tragedy, and they certainly matter.

The issue is much less black-and-white than the GP's simplistic view.


Yes, it does seem vague. Isn't the NHTSA investigation meant to be an unbiased root-cause analysis of last year's trailer crash? Instead, this just seems to be a grading report on Tesla's Autopilot compliance.


Section 5.3 pretty much sums up the root cause of this accident: the driver assist requires the driver to be alert, and the driver had 7 seconds to react and didn't.

There's no requirement that driver assist software magically become self-driving software in that situation. Put another way: it's nice if the software can avoid a problem, but not required.

The main remaining concern is that drivers not be confused about what they are required to do; there's discussion about that in this document.


Not really. The NHTSA report says that auto-braking is initiated only when the radar and Mobileye vision systems both report an obstacle. That reflects poor sensor fusion. Tesla took an off-the-shelf radar product from Bosch and an off-the-shelf vision product from Mobileye, ANDed their processed outputs together, and called it a day. They don't seem to have confidence data from each sensor and some means to fuse it. If the radar is reporting "target dead ahead, 3 secs to crash" (radars have range-rate data), and the vision system is reporting "unrecognized situation ahead", that ought to at least trigger a slowdown and alert.

This is probably why Musk pulled radar and vision processing in-house. Combining the data from both sensors is more effective, but more difficult, than combining the results from processing both sensors separately.
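A minimal sketch of the difference (hypothetical confidence values and thresholds; the report doesn't describe the actual decision logic):

    # Hypothetical sensor outputs -- illustrative only, not Tesla's interface.
    radar_confidence = 0.95    # radar: "target dead ahead, 3 secs to crash"
    vision_confidence = 0.30   # vision: "unrecognized situation ahead"

    # AND-gating the processed outputs: act only if BOTH sensors agree.
    brake = radar_confidence > 0.8 and vision_confidence > 0.8
    # -> False: the confident radar track is discarded because vision is unsure.

    # Crude confidence fusion: a strong return from either sensor can at least
    # trigger a graded response instead of all-or-nothing braking.
    fused = max(radar_confidence, 0.5 * vision_confidence)
    if fused > 0.9:
        action = "emergency brake"
    elif fused > 0.6:
        action = "slow down and alert driver"   # what the comment argues for
    else:
        action = "no action"
    print(action)   # "emergency brake" with these numbers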


And that's the problem of "aaaalmoooost self-driving cars, just a few corner cases:" 1. the corner cases are right where the risks are, and 2. the hype of "autopiloted" cars creates an unrealistic expectation of L5 autonomy.


NHTSA is unfortunately highly captured by the industry whereas NTSB has a separate chain of command that ensures more independence.


Very interesting, thanks for posting


After looking at the report it looks like Tesla ran into the same issue we did in the 2007 DARPA Urban Challenge. The trailer was higher than the front facing sensors. We and most other teams had all assumed 'Ground Based Obstacles' meant that any obstacles on the test track would make contact with the ground in the lane of travel. DARPA decided to put a railroad bar across the street and expected cars to back up and do a U-Turn when they encountered it. The bar was too high off the ground for our forward LIDAR to see it so we collided with the bar at nearly full speed.[1] The sad part about this is that when we were drinking after dropping out of the challenge our team leader said something along the lines of 'At least we know no one will ever die now from the mistake we just made.'

[1] https://www.wired.com/2007/10/safety-last-for/


Predictions like that only work out if the lesson is sufficiently broadcast. Clearly, since this is still newsworthy here, that is not the case. (However this back channel is helping.)


If there isn't some compilation or review article along the lines of "all serious failure modes encountered in autonomous vehicles since 2005-ish," there should be.


That's an excellent idea and should be managed centrally (by the NHTSA or similar). Basically a communal regression test suite that all self-driving vehicles have to pass.
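To make that concrete, a shared scenario record might look something like this (an entirely hypothetical schema, just to illustrate the idea):

    # Entirely hypothetical schema for a communal scenario library.
    from dataclasses import dataclass

    @dataclass
    class CrashScenario:
        scenario_id: str       # e.g. a regulator-assigned identifier
        description: str       # what went wrong, in plain language
        sensor_log_uri: str    # raw sensor data for the window before the event
        pass_criterion: str    # what a compliant vehicle must do when replayed

    high_obstacle = CrashScenario(
        scenario_id="EXAMPLE-0001",
        description="Obstacle above the forward sensors' vertical field of view",
        sensor_log_uri="https://example.org/logs/high-obstacle.bag",  # placeholder
        pass_criterion="Vehicle slows and stops, or safely avoids the obstacle",
    )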


Agreed, this is a fantastic idea. All the DARPA documents from all three autonomous-car Grand Challenges should be available. Not sure if they are all released on their website; if not, they're FOIA-able, since most everything DARPA does is public.


The main issue in Tesla's case is that "assist" can't prevent an accident if the driver isn't alert for 7 seconds. Tesla's software is level 2; you were aiming at 3+. As Tesla heads for 3+, sure, they're going to have to solve the problem you (literally) ran into.


I don't understand why the paradigm isn't full volumetric prediction. Is it because people were too tired from solving all the other issues (surely possible at the time of the DARPA challenge, but Tesla has had time and money since), or hardware limits (like scanning a larger area degrading signal quality or making processing too heavy)?


I'm not sure what you mean, precisely. The grand goal IS complete autonomous function, but it's wise to approach this with caution. Legal and moral grey areas abound, new technology brings new technophobia, definite "completeness" would require rigorous proof, and we're JUST getting autonomous vehicles into the wild. But the kicker is that if you actually care about the safety of the public, then you have to care about whether or not the public will adopt the new technology that makes them safer, and to do that you have to introduce it slowly and respond to criticism.

Other companies are being even more cautious; Ford isn't putting autonomy on the road until 2021. In the meantime Tesla can sell products as Level 2, and perhaps should do so until roads are as safe as commercial aircraft.


I know and I don't care. If they want public approval they can run demonstration, and probably 392584 other marketing techniques to generate social interest. Crappy solutions suck; period.


Which one seems more likely? "We could absolutely do that, but meh...just ship it already, who cares" or "doing that would require a major redesign, more sensors and more gigaflops"? (Okay, in the current hw/sw development paradigm, the first one doesn't sound as unlikely as it ought to)


For a company like Tesla, I dearly hope they don't think this way. If so, it would mean everything they talked about was marketing drivel (the safety of the car frame, etc.). So far Musk has seemed honest with most of his design claims. Unlike lone SDV hackers on GitHub, for instance.


Making sure mistakes only happen once is why I think all companies working on self-driving cars should have to publicly supply their sensor data for a fixed window before any collision or successful crash avoidance. The changes they make to their code remain their own trade secret, but building up an extensive library of test cases means everyone's software will get safer faster.

So far NHTSA has encouraged such data sharing but they haven't outright mandated it.


Tesla comes off extremely well in this report. For one thing, the 40% statistic cited in the headline appears to be well supported by the NHTSA report (section 5.4) and actually manages to frame the incident in a very positive light:

ODI analyzed mileage and airbag deployment data supplied by Tesla for all MY 2014 through 2016 Model S and 2016 Model X vehicles equipped with the Autopilot Technology Package, either installed in the vehicle when sold or through an OTA update, to calculate crash rates by miles travelled prior to and after Autopilot installation. Figure 11 shows the rates calculated by ODI for airbag deployment crashes in the subject Tesla vehicles before and after Autosteer installation. The data show that the Tesla vehicles crash rate dropped by almost 40 percent after Autosteer installation.

I had hoped to see more information about this specific incident. For instance, any data on whether the driver had his hands on the wheel, what steps the car had taken to prompt his attention, etc. But that doesn't seem to be included.


Just funny that the 40% from Autosteer seems to exactly match the general AEB safety improvement rate also mentioned in the report...

    IIHS research shows that AEB systems meeting the commitment would reduce rear-end crashes[emphasis added] by 40 percent. IIHS estimates that by 2025 – the earliest NHTSA believes it could realistically implement a regulatory requirement for AEB – the commitment will prevent 28,000 crashes and 12,000 injuries.


Here is some of that IIHS research [1] from Jan 2016 that gives lots of raw crash numbers broken down by manufacturer, type of system (AEB/FCW), injuries involved, etc. Really informative.

Some excerpts from "Effectiveness of Forward Collision Warning Systems with and without Autonomous Emergency Braking in Reducing Police-Reported Crash Rates", January 2016:

"FCW alone and FCW with AEB reduced rear-end striking crash involvement rates by 23% and 39%, respectively. "

"Among the 15,802 injury crash involvements in these states, the percentage of injury crash involvements that were rear-end striking crashes was larger among vehicles without front crash prevention (15%) than among vehicles with FCW alone (12%) or FCW with AEB (9%). Only 4% of rear- end injury crashes involved fatalities or serious (A-level) injuries."

"Approximately 700,000 U.S. police-reported rear-end crashes in 2013 and 300,000 injuries in such crashes could have been prevented if all vehicles were equipped with FCW with AEB that performs similarly as it did for study vehicles."

[1] http://orfe.princeton.edu/~alaink/SmartDrivingCars/Papers/II...


For those unaware, "AEB" stands for "Autonomous Emergency Braking". See: https://en.wikipedia.org/wiki/Collision_avoidance_system

Did Tesla cars have AEB prior to Autopilot installation? If not, then this suggests the 40% reduction in crashes may simply be due to the installation of an AEB system. What effect Autopilot's other features may have would remain uncertain.


Yes, Tesla cars had AEB since March 2015. Autopilot (with autosteer) came out in October 2015. Presumably Tesla's AEB gets improved a little with every firmware release.


Thanks. But the NHTSA report looks at data starting with "MY 2014" (i.e. calendar year 2013) cars.

If AEB did lead to a 40% reduction in crash rates for Tesla cars, as it did for other car models, I suspect that moving the dividing line five months later wouldn't change the figures much: you would still have a lot more crashes prior to AEB and fewer after.


Tesla was growing rapidly in that era, and the sensors weren't even available at the start of calendar year 2013. All cars shipped with sensors starting in October 2014. It'd be nice if the NHTSA report quantified this a bit, but they didn't. I don't think your suspicion is correct: some of the AEB benefit is in the earlier figure, and the number of airbag deployments is not small.


> Tesla was growing rapidly in that era, and the sensors weren't even available at the start of calendar year 2013. All cars shipped with sensors starting in October 2014.

I don't follow the statement about sensor availability. That doesn't seem to change the fact that all the miles driven "after Autosteer" benefited from AEB, while at most a small fraction of the miles driven "before Autosteer" would have had AEB available.

Given that we know AEB systems do reduce frontal collision rates by 40% for all cars, as the NHTSA report stresses, that implies we cannot attribute the reduction in crashes "after Autosteer" to Autosteer alone.

(Which matters because articles such as the one posted are claiming a cause-and-effect relationship between the introduction of Tesla Autopilot and a reduction in crashes, but if a significant portion or even all of the reduction is due to AEB systems that other cars have also started to adopt then we're mistaking correlation for cause.)


I don't believe they counted cars without the sensors, either before or after. Your statement about "at most a small fraction" has no evidence.

Finally, who said the reduction in crashes is Autosteer alone? Not me. It appears to be a combination of AEB and autosteer. That's what "some of the AEB benefit is in the earlier figure" means.

(Edit: note that the above comment was edited without marking the edit. See below.)


> I don't believe they counted cars without the sensors, either before or after. Your statement about "at most a small fraction" has no evidence.

Thanks for explaining. The NHTSA report says:

> ODI analyzed mileage and airbag deployment data supplied by Tesla for all MY 2014 through 2016 Model S and 2016 Model X vehicles equipped with the Autopilot Technology Package, either installed in the vehicle when sold or through an OTA update, to calculate crash rates by miles travelled prior to[21] and after Autopilot installation.

> [21]: Approximately one-third of the subject vehicles accumulated mileage prior to Autopilot installation.

So yes, cars which never had Autopilot installed were not counted, but they obviously did count cars from calendar year 2013 which later had Autopilot installed or else they wouldn't have said "MY 2014".

I don't think it's safe to say that the bulk of the miles driven "before Autosteer" happened between March and October 2015 on cars which had AEB systems available.

> Finally, who said the reduction in crashes is Autosteer alone? Not me. It appears to be a combination of AEB and autosteer. That's what "some of the AEB benefit is in the earlier figure" means.

It's implied by Elon Musk's tweet and all the press articles I've seen that Autopilot/Autosteer is responsible for the 40% reduction in crashes, as I mentioned in an edit to my previous comment.


WTH. I didn't say "it's safe to say that the bulk of the miles driven "before Autosteer" happened between March and October 2015".

If you want to debate Elon Musk's tweets and not what I'm saying, respond to Elon. If you respond to me, please respond to what I'm saying. You've totally wasted my time, and all the readers of this thread.

(Edit: Oh, and thanks for not marking your edit an edit.)


> WTH. I didn't say "it's safe to say that the bulk of the miles driven "before Autosteer" happened between March and October 2015".

I didn't say you did. I suppose you could interpret "I don't think it's safe to say X" as referring to your speech, but I read it as "I don't think X".

> If you want to debate Elon Musk's tweets and not what I'm saying, respond to Elon. If you respond to me, please respond to what I'm saying. You've totally wasted my time, and all the readers of this thread.

Well, I never asked you to debate it. It's relevant because the TechCrunch article that we're discussing does imply that Autosteer or Autopilot is principally responsible for the 40% reduction in crashes, and in my first comment I expressed my doubts about the claim.

I would agree that "a combination of AEB and autosteer" is possibly the real reason behind the reduction in crash rate, but I would point out that while we already have strong evidence that AEB alone can lead to a large reduction in crash rate we cannot prove or disprove that Autosteer's impact is comparable.


Tesla model years match the year of production. 2014 Teslas were built in 2014.

Tesla rolled out AEB in March 2015 as a software update for all cars with the appropriate hardware. All Teslas produced starting in mid-September 2014 have the appropriate hardware, so several thousand MY 2014 Teslas have AEB as of early 2015.


I don't think it suggests it so much as it can be inferred from looking at both sets of data.


Exactly - this is a significant weakness of the study.


Yea, it really doesn't seem like autosteer would avoid that many crashes. All that autosteer does is keep you in your lane on a marked highway. Human errors that autosteer would correct are probably relatively rare, things like distractedly drifting into another lane or falling asleep at the wheel. AEB, on the other hand, helps avoid the most common type of crash, rear end collisions.


Autosteer can also change lanes, and would avoid crashes caused by missing a vehicle in a blind spot. Seems particularly useful on a busy highway, with vehicles in adjacent lanes leaving far too little following distance between each other.


If the car can detect something in the adjacent lane, then it'll give you a warning if you try to change lanes into it manually too.


My car has that - there are too many false positives for it to be useful. I most commonly get the warning when there are two left-turn lanes and I'm in the right one (because I need to turn right at the next road). Sure, there is a car to my left, and if it doesn't turn left I'll hit it, but the other car would be at fault in that case.

I will admit that there have been a couple of times it saw the car I was about to merge into before I did. It is not clear whether I would have seen the car a moment later anyway.


Mine has never had a false positive. I've yet to accidentally merge into anybody with it, so I don't know how it behaves in the situation it's supposed to handle.


"Number of crashes" is not as important as "crashes weighted by severity". "Distracted driver" is the number 1 root cause of accidents, and "failure to stay in lane" is the number 1 proximate cause of fatal accidents in most states.

http://www.businessinsider.com/the-cause-of-the-most-fatal-c...


OK but the cited 40% figure is just for crash rates, not for severity weighted crash rates.


A stylistic note: putting quotes in code format like that makes them unreadable on mobile browsers. It's a single-line side-scrolling element with three words in frame at a time that I can only see after lifting my thumb.


Are the base size and confidence levels reported anywhere? I get professionally suspicious when reports publish percentage-figure comparisons of rare events.


What events do you consider rare? Car crashes?


Presumably they meant airbag deployment crashes in Teslas.

We can estimate the absolute number from Tesla's press release in December stating that 1.3 billion miles have been driven on Autopilot, while the NHTSA report states the accidents they count occur 0.8 times per million miles.

So in that case it's data from about 1000 crashes in the Autopilot case, and more in the non-Autopilot case. Which tells us that the statistics should be good enough (anything else would be a major scandal).
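A back-of-envelope version of that estimate (using only the two figures cited above; the standard-error line assumes crashes are Poisson-distributed):

    # Estimate from the cited figures: 1.3B Autopilot miles (Tesla press
    # release) and 0.8 airbag-deployment crashes per million miles (NHTSA).
    autopilot_miles = 1.3e9
    rate_per_million = 0.8

    crashes = autopilot_miles / 1e6 * rate_per_million
    print(f"~{crashes:.0f} crashes")                      # ~1040

    # For a Poisson count N, the relative standard error is 1/sqrt(N).
    print(f"relative std. error ~{crashes ** -0.5:.1%}")  # ~3%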

However, that is not what they are comparing! As the NHTSA report states in footnote 21, they're not actually comparing Autopilot miles to normal driving. They are comparing crash rates in vehicles before and after Autopilot was installed in those vehicles. That muddies the waters quite a bit in my opinion, especially when they don't tell us how significant this decrease is. I'm not saying it's wrong, but I'm also not very confident that it is a causal effect, as opposed to just a correlation.

Here is a PDF link to the NHTSA report, since TFA only links to a Scribd (thus unusable) version.

https://static.nhtsa.gov/odi/inv/2016/INCLA-PE16007-7876.PDF


So they're comparing miles driven manually before a certain date to miles driven manually and automatically after a certain date? It certainly implies that Autopilot is safer, but why not separate it properly (Autopilot vs manual) and give us far more accurate information?

It just seems...odd.


Comparing all miles both before and after neatly sidesteps the problem that the autopilot only works on relatively good roads.


Even so, Tesla should be able to tell whether or not you're driving on a highway with all of the data they collect. Why not compare that data to the Autopilot data? This report wasn't made by Tesla, it was made by the NHTSA. So why not do a more meaningful comparison?

It's weird.


Crashes hard enough to trigger the airbag, but still not so hard that the car would not get an Autopilot update later in its life. The "before autosteer" column does not include miles or crashes of Teslas that did not survive until OTA availability.


Actually you can argue that car crashes are rare. 1.2 deaths per 100 million miles is rare. (I'm not able to find statistics for accidents without deaths, but we can extrapolate)

A lot of people drive a lot of miles which is why there are a lot of crashes. So you can argue either side of this depending on how you want to twist the numbers.


Auto crashes are the leading cause of death in some age ranges. You have to decide what you want to normalize to... other kinds of death is a good choice.


They published that the driver took absolutely no action, and that he had 7 seconds to take some action (7 seconds from the time the truck started to turn across his path until his car struck it). The last action he had taken was to enable the cruise control at 74mph 2 minutes before the crash.


Yes, from previous reports it was already pretty clear that the driver was asleep at the wheel (literally or otherwise). The car is not supposed to allow this; it's supposed to check for hands-on-steering-wheel and other indicators of driver attention. It would have been nice to find out how that system broke down in this case.


It doesn't require continuous engagement. You can drive with hands off the wheel for several minutes at a time in many situations.


Besides, it's perfectly possible to fall asleep but with your hands still touching (even gripping!) the steering wheel.


That's also a solved problem - Mercedes cars have "attention assist" which can figure out how attentive you are by measuring how you drive. If you are literally falling asleep at the wheel, the system will get very annoying and insist that you stop for some rest.

Of course, whatever algorithm they are using won't work if the car is driving itself; you can't judge the reaction time if all the driver is doing is holding the wheel.
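For flavor, a toy version of that kind of behavioral check might look like this (purely illustrative; Mercedes has not published their actual algorithm):

    # Toy drowsiness heuristic in the spirit of "attention assist" systems:
    # drowsy drivers make fewer small corrections and occasional large jerks,
    # so the spread of steering-angle changes grows. Purely illustrative.
    import statistics

    def looks_drowsy(steering_angles, threshold=2.0):
        """Flag high variance in steering corrections as possible drowsiness."""
        deltas = [b - a for a, b in zip(steering_angles, steering_angles[1:])]
        return statistics.pstdev(deltas) > threshold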


One of the approaches used on long-distance buses is "track head position and angle." If you're "looking" into your lap or into the ceiling while driving, that's a fairly reliable signal that something is wrong, never mind how you're holding the steering wheel.


The technology is almost certainly there to very aggressively monitor "eyes on the road" and quickly blare alarms if they aren't. It's probably fair to say that would be a very unpopular auto feature.


Didn't they make the check more aggressive after the latest accident? Or am I just misremembering speculation?


They've been tweaking it pretty regularly since the beginning. The initial release had no timed events at all, and only asked the driver for hands on the wheel when the system's confidence dropped below a certain level. Then they added some timers as well (I think before this crash) and have been tweaking the situations and durations. Right now, I believe at highway speeds it's either 1 or 3 minutes depending on whether you're following a car or not. (The system has more confidence when it has another car to track, so it's more lenient there.)
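As a toy model of the policy described above (the durations and the confidence behavior are just this recollection, not Tesla's actual code):

    # Toy model of the nag policy described above -- based on the commenter's
    # recollection, not on Tesla's actual implementation.
    def hands_off_timeout_seconds(confidence, tracking_lead_car):
        """How long the driver may keep hands off the wheel before a nag."""
        if confidence < 0.5:        # low confidence: ask for hands right away
            return 0
        if tracking_lead_car:       # more lenient with another car to track
            return 3 * 60
        return 1 * 60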


The car should immediately pull over and shut down if it detects any activity on the driver's Twitter or Facebook feeds: Social Media Strike Out!


...then fashion a start-up based on "Ownership" of the vehicle and code and sell devices to Tesla owners to over-ride any part of the system they don't like, then profit!


Then sell the customers list to law enforcement to make side money! What a fantastic time to live in SV!


>Then sell the customers list to law enforcement to make side money! What a fantastic time to live in SV!

implying that law enforcement won't work with homeland security to subpoena your data at cost under a gag order :)


Sell the list of those using social media in the car to existing advertisers too. Extra high levels of engagement!


Then post a joke thread about it on HN. Maximum Reddit!

[Seriously, though. I don't want to downvote everyone in a chain but there's probably a more HN/SNR-friendly way to comment on potential abuses of automaker access to social media feeds.]


Are you telling me my car should have pulled over on the highway and potentially caused an accident when my wife used my phone last week?


  immediately pull over and shut down
That could be more dangerous than allowing the trip to proceed in many situations (e.g. high-speed, narrow roadway). Fellow drivers assume a certain degree of predictability of other drivers' behavior when no surprise hazards are present.


Would require you to link your social media accounts to your car, so it's easily avoided.


Entice drivers to link to their social media accounts by offering features like automatically tweeting a photo of anyone you honk your horn at, and updating an online leader board of top speeds driven along every section of road.


>top speeds driven along every section of road

If your goal is to make things safer I don't think this will help.


Yeah, when I worked for TomTom and suggested that idea, they weren't very enthusiastic about it.

And my other suggestion about a TomTomagotchi that you have to regularly drive around to different places to keep happy didn't go over very well, either. Maybe a petroleum company would be more receptive to that idea, though.


The Tamagotchi idea is sort of like Waze. You get little "candies" the more you drive. That one worked out for them.


Oh, I think the car would be able to learn your social media accounts after a while, just by looking for any geotagged posts that tend to match the car's position.


How many driving seats does this car have?!


What if you have none?


"The only winning move is not to play the game." - Wargames


Ford is doing something similar[1], by trying to take the best photos for you while driving, presumably to keep you from using your phone.

It's such an odd patent; perhaps it's a high-quality signal about their ability to execute on self-driving.

[1] https://techcrunch.com/2017/01/18/ford-patents-a-car-camera-...


I didn't read the report, but is this measurement done in an A/B-testing way? There is usually some data skew/bias when comparisons are done on different time scales.


Same cars pre/post OTA. There was a ~6-month period where AP hardware was out in the wild but not used.


Confounding factors. For example, crashes can be more likely after sunset or during inclement weather so seasonality can play a role. Or maybe the drivers who had Tesla longer got used to it.


Could this be accounted for if the sample period was large enough, i.e., over 12 months to cover seasonality and a few months for time of day?


I suspect OTA update was applied simultaneously rather than staggered over the course of the year, so probably not. You can control for seasonality using some other data as a proxy, but the more adjustments you make, the more potential there is for introducing errors. Randomized test/control split is almost always better than before/after comparison.
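To see what a before/after comparison involves statistically, here's a minimal sketch with made-up counts (assumes crash counts are Poisson; uses scipy's exact binomial test):

    # Before/after rate comparison with made-up numbers, only to illustrate
    # why a randomized split is cleaner. Assumes Poisson-distributed crashes.
    from scipy import stats

    before = {"crashes": 400, "miles": 300e6}   # hypothetical pre-OTA exposure
    after = {"crashes": 760, "miles": 950e6}    # hypothetical post-OTA exposure

    for name, d in (("before", before), ("after", after)):
        print(name, d["crashes"] / d["miles"] * 1e6, "crashes per million miles")

    # Conditional exact test: given the total count, is the before/after split
    # consistent with the exposure (miles) ratio?
    p_exposure = before["miles"] / (before["miles"] + after["miles"])
    test = stats.binomtest(before["crashes"],
                           before["crashes"] + after["crashes"],
                           p_exposure)
    print("p-value:", test.pvalue)
    # A tiny p-value says the rates differ, but not WHY: seasonality, fleet
    # composition, or driver experience could all confound a before/after split.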


This particular one, maybe, but there might be others: the user base pre-OTA and post-OTA (number-wise) could be different, for example.

I think there is a good SaaS business in A/B testing robot/car software OTA. I would imagine some metrics would be: 1. car speed, 2. battery consumption per mile, 3. user comfort metrics (if it's possible to measure, maybe heart rate, body temperature, etc. - I can imagine that if the robots were driving badly, it'd be visible in heart rate or something like that).

The list probably goes on. It would be interesting to see how much battery consumption changed after an OTA, for example.


A lot of people are asking what was in the October 2015 Autopilot 7.0 update. That was the introduction of autosteer. Automatic emergency braking (AEB) was initially released in 6.2, see https://www.tesla.com/sites/default/files/tesla_model_s_soft...

6.2 was announced in March, 2015.


> data supplied by Tesla


Where else could it possibly come from? Tesla is the only one collecting this data.


bingo


I really would like to know the absolute numbers for this reduction. It's really questionable whether this would hold up with millions of Teslas on the road in the hands of less wealthy (probably better educated) drivers.


> wealthy (probably better educated) drivers.

You mean drivers like the ones behind the wheel of all the BMWs and Audis that cut me off with no turn signals, speed, make aggressive lane changes, tailgate, and generally act like giant douches because a simple traffic ticket is nothing to their pocketbook while their time == money?

Those people are going to be a hell of a lot safer behind the wheel of an autonomous (level 4) vehicle where they can be on the phone and their laptop as the vehicle obeys the speed limits and safe following distance.


Anecdotally I have a lot more near-incidents with 4WDs and little hatchback getabouts. In my experience the 4WDs have been distracted or drifting outside their lane, and the little getabouts have been tailgating and driving aggressively. Having "P" plates (1) also raises the chances massively.

Could just be the numbers of those kinds of cars in my area though.

Generally I feel that a douche can drive any type of car, and I try to drive in fear like everyone else on the road is a drunk prison escapee.

(1) provisional licence holders - new drivers (and also people who have lost their licence) have to display "P" plates for a year here (Australia).


I actually agree that douches drive any kind of car. I singled out BMWs and Audis because the prior post was claiming that highly educated/wealthy drivers would be 'better'.

I've seen jalopies driven like someone has been up on speed for 4 days straight.

Or the drivers in cheap sensible cars that are terrified of merging onto the freeway faster than 40 mph.

And the distracted drivers who realize they're about to miss their exit so they hammer on the brakes and put on their turn signal and completely fail to match speeds.

Ricers with after market exhaust are another kind of car that usually drives way too fast and aggressive compared to how their lowered suspension actually performs.

The people in road tanks who seem to split the difference between terrified and aggressive and expect you to just get out of their way because they're bigger.

I'm probably missing entire swaths of shitty drivers that I can't think of right now because my blood sugar is a bit low.


I find that drivers who fail to keep right except when passing are the leading cause of 'aggressive' and 'unusual' passing conditions.


Because the aggressive drivers behind them aren't responsible for their own actions in rage passing left lane hogs?

Somehow I manage to consistently and safely undertake people who are going slow in the left lane. It's not that hard.


> because a simple traffic ticket is nothing to their pocketbook

Finland has a system where a traffic ticket is a certain percentage of your monthly income. For example, running a red light is typically fined for 25% of your monthly income.


He said less wealthy, better educated drivers.


Pretty certain he meant it the way I quoted it.


Ah, it makes sense now. I did ask him to clarify, but when reading your reply I parsed it in my brain as him meaning fewer wealthy (better educated) drivers, as opposed to less wealthy meaning not as wealthy. Thanks for your help.


I read it as less wealthy meaning not as wealthy as well. Funny how ambiguous some sentences can be.


Well it's really that he used the wrong word. "Less wealthy drivers" can't correctly refer to the number of drivers.


Stop being so self-righteous. Do you own a BMW or Audi? I do, and I'm totally polite driving mine. Also, when you have a car that handles and responds better, you tend to drive it harder and forget that everyone else is in some inferior product.


> when you have a car that handles and responds better you tend to drive it harder and you forget that everyone else is in some inferior product.

You can buy traction control, ABS, better suspension, steering, better tires, etc. None of that will make the idiot behind the wheel any better, although it may lead to overconfidence.

Thanks for making my point for me though.


Wealthy motorists drive more recklessly: http://www.pnas.org/content/109/11/4086.short


I agree. Here is a more direct study review proving your point. https://wheels.blogs.nytimes.com/2013/08/12/the-rich-drive-d...


Makes sense, but it's still interesting how the result would change when applied to the general population. Would VW Beetle drivers have the same reduction?


Jerks drive Mercs


I think you pointed to the wrong link. That's a study about ethics, not safety.


If you read the abstract you will see that the studies involve driver behavior.

"Results Studies 1 and 2. Our first two studies were naturalistic field studies, and examined whether upper-class individuals behave more unethically than lower-class individuals while driving. In study 1, we investigated whether upper-class drivers were more likely to cut off other vehicles at a busy four-way intersection with stop signs on all sides. As vehicles are reliable indicators of a person's social rank and wealth (15), we used observers’ codes of vehicle status (make, age, and appearance) to index drivers’ social class. Observers stood near the intersection, coded the status of approaching vehicles, and recorded whether the driver cut off other vehicles by crossing the intersection before waiting their turn, a behavior that defies the California Vehicle Code. In the present study, 12.4% of drivers cut in front of other vehicles. A binary logistic regression indicated that upper-class drivers were the most likely to cut off other vehicles at the intersection, even when controlling for time of day, driver's perceived sex and age, and amount of traffic, b = 0.36, SE b = 0.18, P < 0.05"


Did they account for bias in their observers?


The rates dropped by 40%. The drivers remained the same. Why do you think a different driver demographic will not show a similar improvement?


Do you think people's ability to behave/drive responsibly vs. recklessly could be at all linked to their demographic profile? I am talking statistically, not looking for anecdotes...


Yes, it's very likely that different demographic profiles have different driving behaviors.

But given that they've shown this 40% crash rate reduction in their current demographic, it's reasonable to expect to see a similar reduction in other demographics (maybe not of the same magnitude, but at least in the same direction).


If you believe a certain profile of people is more accident-prone, maybe people who fit this profile will benefit even more from using Autopilot?


Varying insurance rates based on age and gender seem to indicate that, yes, demographics play a role in driving responsibility.

A Tesla is not a cheap car. I kind of suspect the average Tesla driver is more responsible than the average driver. It may be the case that the average Tesla driver is very well off and treats the cars as expensive toys, though, so it's tough to say without numbers. If I had to bet right now, I'd bet on the more responsible side.


I suspect the average Tesla driver is more irresponsible than the average driver, simply because they're rich and are used to getting their way.

If you study rich drivers I have no doubt that you would find that they cut people off more, overtake in more dangerous situations, use their indicators less, drive over the speed limit more, proceed through amber signals later, run red lights more often, and generally show less regard for the rules than other drivers.

This behaviour will also correlate to other areas of life where richer people simply care less for the rules. They're more likely to use social contacts or money (e.g.: "lawyer up") to get themselves out of any trouble they find themselves in.

Teslas are not cheap cars. Given that money is (social evidence of) power, the people buying Teslas (and other expensive cars in general) are likely to expect other drivers to respect their status as powerful people.


Credit rating is highly correlated with wealth, and is also highly correlated with reduced accident rates. This is evidenced by me personally, as I now get much better auto insurance rates with an 800+ credit rating than I did when I was in the 600s right out of college.

I would be willing to bet money that the insurance actuaries that actually calculate risk disagree with your extremely prejudiced opinion.


> Credit rating is highly correlated with wealth, and is also highly correlated with reduced accident rates.

You're assuming that accident rates are directly correlated with claim rates, however. A wealthy individual is less likely to claim for a minor accident, preferring to pay out of pocket for small repairs rather than take a big hit on their premiums. Someone less wealthy may not have that option.

There are other reasons why wealth may reduce your risk to insurers that are unrelated to safe driving. You're less likely to leave a car parked insecurely in a dodgy part of town, for example.

Finally, "drivers of luxury brand cars" certainly aren't the same set as "wealthy people with good credit records". Plenty of lower net worth people can still manage a lease on a BMW or Mercedes, or buy a used one.

Drivers of luxury cars cause more accidents, insurers say: http://www.telegraph.co.uk/finance/personalfinance/insurance...


If you disagree, commenting is probably more appropriate than downvoting.

I'm still waiting to hear how your personal opinion about wealthy drivers is more accurate than auto insurance actuaries that calculate risk for a living.


> If you disagree, commenting is probably more appropriate than downvoting.

FWIW, in case you were unaware, Hacker News doesn't allow commenters to downvote direct replies. So whoever might've downvoted you wasn't the parent, and someone corrected it with an upvote or undown (your comment isn't gray as of this writing).


Do the people you are describing with such harsh prejudice have more accidents? That's the real question.


In my own opinion:

This list is not strongly related to risk /of/ accident, but is correlated with severity (a small change is likely to have a small correlation with severity).

  * over the speed limit
This list is highly subjective and extremely situation dependent.

  * overtake in (more) dangerous situations
  * through amber signals later
(IMO: Most lights should have longer yellow signals, a mechanism for telling fresh and stale apart should also be standard)

All of these are very bad things that everyone should avoid doing, and signals should be given starting 5-10 seconds before the action.

  * use indicators less
  * cut people off
  * run red lights


There have been a few studies that show generally wealthier drivers do behave more recklessly/selfishly. https://wheels.blogs.nytimes.com/2013/08/12/the-rich-drive-d...


Isn't this even better for Tesla then? People that supposedly already drive carefully now have fewer accidents.


It seems like age is a rather large driver of involvement in crashes (younger/newer drivers end up being more crash prone) according to here: https://www.aaafoundation.org/sites/default/files/2012OlderD... (page 5)

So the current driver profile is possibly safer than the overall population, since it's unlikely a large percentage of young drivers own a $100k car. It seems like the effect of widespread Autopilot could be greater if it included this cohort.


Car insurance, and their actuarial models, are based entirely on this being true.


Maybe other drivers are really good, or maybe this class of driver is really bad?


35,000 people die from car accidents every single year in the USA alone. Other drivers are not "really good".


Personally I believe most people are really terrible drivers. That being said, there were 210 million drivers in the US in 2009, and there are probably a lot more today. 35,000 deaths is just ~0.017% of drivers per year. That fact makes me have to question just how bad people really are, despite my own personal opinion. Or maybe cars are just really safe. A better statistic might be the number of accidents.

In 2015, 2.44 million people were injured in car accidents. That's a much more telling figure of just how bad most drivers are. Apparently our cars are just really safe.


I believe most people are inconsistent drivers. Usually they do a reasonable job, but some fraction of the time they're worried about a sick kid, or a job interview, or a recent breakup, or whatever, and they aren't paying attention.

There are some truly consistently horrible drivers. But in general most people do OK most of the time. I think the real challenge is cutting off that really bad end of the distribution, which autopilot seems to be doing.


Those same drivers are, unfortunately, also likely to be those least amenable to actually using it. Some people I know have expressed disgust toward the idea of autonomous driving; coincidentally, almost all are horrible drivers that I wouldn't want to be with as a passenger.


I had an econ professor assert that as cars got safer there were more accidents, because people trusted the car to protect them more... Logically, to reduce accidents due to carelessness, we should put a 12" sharpened spike right in the middle of the steering wheel, pointing at the driver's chest. If all accidents were deadly, wouldn't drivers then exercise the utmost care?


There was a coach bus crash in the Alps a few years ago, and the investigation found that one of the factors contributing to the crash was that the driver had only a few years of experience, and had only ever driven modern, top-of-the-line buses. The problem with those is that they drive as easily as a small hatchback, despite being 20 meters long and weighing 17 tonnes. The engines are powerful, the transmissions are usually automatic, the brakes are strong - so you can easily forget that you are driving a 17-tonne fridge, and no matter how good your brakes are, you can't brake continuously on a 15km mountain descent, because the whole system will fail. The bus was equipped with an engine retarder and a magnetic retarder, but like I said, on a modern bus that drives this easily, the driver just never had any need to use them. I think the report concluded that a driver with more experience driving older vehicles, where you couldn't rely on assisted disc brakes, where all you had was a set of brake drums and you just had to know how to use the two other braking systems, would have saved the bus.

Similarly, airplane pilots spend so much time in autopilot mode that they forget how to operate the plane in emergency situations.


It's a great joke, but a poor strategy. Remember the end game: minimize harm. The universal solutions that remove human error have a better long-run potential to minimize harm, since they can exceed the effectiveness of even the most attentive and cautious human driver.


10 accidents where everybody walks away is much better than 1 accident where someone dies.

20 years ago they redid the traffic pattern where I used to live into something many people found really confusing and claimed felt dangerous. A study 2 years later found that, while it was true that accidents on that part of the road had increased, the number of injuries due to accidents had plummeted. A follow-up study a few years later found that accidents had dropped back to around their original level and serious injuries were still basically at zero.


The problem with this theory is that the rate of fatalities has gone sharply down as cars have become safer. There may well be some degree of compensation where people drive in a more dangerous manner, but the added safety far more than compensates for it.


The average driver goes about 10 years between crashes, which doesn't seem all that good.

Another factor is the extent to which bad driving doesn't result in an accident, either because of infrastructure, other drivers reacting, or just a lack of cars. For example, most red light runners don't cause a crash because stoplights usually have an all-red period to compensate, there often aren't other cars to crash into, and other cars might brake to avoid crashing into the culprit.

Many crashes require two bad drivers to misbehave at about the same time. For example, I couldn't tell you how many times I've had somebody change lanes into me, but it hasn't (yet) resulted in a crash because I've seen it and gotten out of the way before they hit.


They're not really safe when compared to other forms of transportation.


Other forms of transportation have better trained pilots. :)


Precisely. Which is what self-driving cars aim to give cars.


So...better driving training, then, yes?


Probably as the technology scales up to millions of people using it, the numbers will actually be even better for safety. There is a network effect that compounds the safety factor - as more self-driving cars are on the road, there are fewer rogue human drivers. So with millions of Teslas driving around there will be that many fewer people causing accidents by human error.


Wait... are you saying that people that are less wealthy are better educated? If so, I need to see some sources. That sounds preposterous.


no. he needs more parens to make the order of operations clear. also, love the standard internet response to data: "the data is not perfect, or is incomplete, thus I will hold steadfast to my intuition"

love you, internet.


It's very reasonable, when presented with data that doesn't pass a sniff check, to mentally bitbucket that data. Unfortunately, lots of people's noses are miscalibrated, and what's worse, we're not sure which ones.


I also found that confusing. I think in this case "less wealthy" is within the context of Tesla owners, which is basically upper middle class(?), which sort of makes sense. At least that's how I read it.


No, I just took education as one possible factor that might be different between Tesla and Camry or Hummer drivers. Tesla owners have a specific selection bias; it's just hard to describe what that bias actually is, so I took a stab in a direction to get the discussion going.


Assuming by education you mean formal schooling: Consider that wealth is accumulated over time. While outliers certainly exist, the older you get, the more wealthy you are apt to be, at least to the point when you retire and start to use those reserves to live on. Also consider that in 1970s, the time when the boomers started coming of age, having 'high school or more' was reserved for just 50% of the population. Only a mere 25% for the oldest segment of the population. It has steadily climbed to more than 80% for the youngest adults. But they, statistically, haven't had time to amass a large amount of wealth yet.


They said it was a 40 percent drop after the roll out of auto-pilot features. That means it dropped 40 percent with the same driver base. Please read the article before posting comments.


So "better" drivers saw 40% reduction; doesn't that indicate that worse drivers will see even greater reduction in accidents?


Are you assuming the more wealthy owners have lower levels of education? Or have I misinterpreted your statement? I would have assumed Tesla owners would fall into the more educated category. No idea which assumption is correct; just wondering about your reasoning.

For my part, I assume that due to the higher cost of a Tesla, better educated people are opting for them over the alternatives because of the environmental benefits.


I would have assumed due to the high cost of a Tesla that the owners would tend to be older, and thus a generation educated in much fewer numbers. Today's youth are much more educated, but they are still young, which hasn't given them much time to build wealth.

I mean, who do you think is more likely to buy a Tesla: A 20 year old fresh college graduate, or his 'uneducated' father who has been saving and investing for 30 years and just sold his lifelong home in San Francisco? My bet is on the latter. The boomers in particular are considered the wealthiest generation ever, but not the most educated generation ever.

I'm rather fascinated by how many are questioning the education thing here, though. Coming from a farming community, where every (older) farmer I know is worth multiple millions simply by virtue of having purchased farmland when they were young, it's difficult to see how education had any impact on that wealth. Some of them do have educations, some don't. It doesn't seem to make any difference.


I evidently misunderstood the original comment, when OP said less wealthy (better educated) he meant that there would be fewer wealthy (and better educated) owners. I don't agree with the sentiment and think OP didn't really explain his position well either.


You don't agree that wealthy people tend to be older? Even though time is the easiest way to increase wealth (compound interest is your friend). Or you don't agree that older people are less educated? Even though the data suggests otherwise.


I didn't say that, and I am unsure how you inferred it from what I said.

OP linked wealth with being better educated; he also implied that better educated and wealthy meant they would be better drivers. However, as others have pointed out, there are studies showing that wealthier individuals make worse drivers (the studies don't appear to factor in education level, and wealth is determined by brand of vehicle).

I disagreed with OPs sentiment that fewer wealthy (and therefore worse educated - in OPs opinion) drivers would result in more crashes.


> OP linked wealth with being better educated

Doesn't that contradict what you said before: "Are you assuming the more wealthy owners have lower levels of education?" Although I think it is quite reasonable to assume that the wealthy do have less education, statistically speaking, given the nature of wealth and the more recent focus on educating the populace.

Heck, when I graduated high school in 2000, only 64% of us did graduate. The graduation rate for high school, less than two decades later, is now over 80%. That's substantial growth over what is a fairly short period of time, all things considered. And the rate gets worse the further back you go.

> I disagreed with OPs sentiment that fewer wealthy (and therefore worse educated - in OPs opinion) drivers would result in more crashes.

That was my misunderstanding. I thought you meant that you did not agree with the sentiment that wealthy people could be less educated. Thanks for the clarification.


For those who don't want to signup to Scribd just to download a publicly available PDF: https://static.nhtsa.gov/odi/inv/2016/INCLA-PE16007-7876.PDF


Thank you! I'm completely unwilling to register for that site just to download a PDF.


Oh, hey, will you look at that.

The imperfect, incomplete, beta, level 2 self driving cars that were supposed to be the "dangerous" area of self driving are ALREADY better than human drivers.

Can we stop the politics and deploy all the real self driving cars to the road immediately, since the government has proven that even the shitty variety is safer than humans?


Level 3 is the real danger area. That's the level where the driver is able to do other things, but must be prepared to retake control at any time. Those two ideas are basically contradictory: if you're not already involved with driving, it takes too long to get back into it to do anything useful.

Tesla's system is level 2, meaning that the driver must (as in, is supposed to) remain engaged and aware of the driving task at all times, although the car will handle common cases on its own.

I believe the plan for everybody is to go straight from level 2 to level 4, since it looks like level 3 is just not going to work.

In any case, we can't stop the politics and deploy all the real self driving cars to the road immediately, because there aren't any real self driving cars to deploy yet. The technology is advancing extremely quickly and it's getting close, but politics is not the only obstacle.

If and when we have a real, production-worthy self-driving system that's being blocked by politics for no good reason, we can revisit this. I get the sense that, at least in most places, the politics will be pretty easy once the technology gets there. Getting a lot fewer registered voters killed has a way of swaying legislators. Especially in the US, where it's a state-by-state decision, a few states will want to get a jump on it and then the rest will face immense pressure to follow.


There is zero chance I'd stay awake behind the wheel of a level 2 or 3 vehicle.


I'm in the same boat as you here. Level 2 and 3 are unsafe for me as an individual. When I drive, actively focusing on the road and driving keeps me awake. When I'm a passenger, the lack of mental concentration plus the gentle rocking motion of a moving car knocks me out within ten minutes.

The current Tesla autopilot is obviously working well for many people, but for myself I have to wait until level 4. Anything more hands on than that is inherently unsafe because of my habits.


What kind of "CARTCHA" challenges and games might be fun to play, to prove to your car that you were paying attention to the road and traffic, yet not distract you from being ready to take over driving?

Computer vision and speech recognition could be useful for proving the driver's awake, as well as driving the car itself.

https://en.wikipedia.org/wiki/Car_numberplate_game


The challenge of driving the car manually, without any hyped autopilot.


This. There's nothing else because any other task requires diverting mental focus away from the act of constantly assessing the changing environment around your car. That by definition leads to slower reaction times.


It's the reverse for my wife. She starts driving and within minutes gets tired. She'll switch to passenger and be wide awake for hours.


That's why these level 2 systems are supposed to verify that the driver maintains contact with the wheel. Not that you can't necessarily get around some of those restrictions, but the system is supposed to warn you if it detects you aren't paying attention and eventually it's supposed to pull the vehicle over safely so if you fall asleep it won't be dangerous.

In addition, not having to spend as much energy just keeping the car on the road may help to prevent getting tired. Then again, maybe it won't have any impact.


I've done a few thousand miles with Tesla's Autopilot and I find it to be much easier to stay awake with it than without. Constantly making little adjustments is pretty tiring. Paying attention to the road and other cars is enough to keep my attention. Your experience may, of course, vary.


I don't know how long your durations on the wheel are - but microsleep[1] is notoriously hard to self-diagnose, even for drivers of non-self-driving cars.

1. https://en.wikipedia.org/wiki/Microsleep


It goes along with overall fatigue, right? I feel a lot less tired after a lot of Autopilot driving than after a similar amount of non-Autopilot driving.

As for durations at the wheel, I did a Virginia to Wisconsin round-trip over the holidays, and have done other trips of similar lengths in the past. Of course, I'm not spending more than about three hours at the wheel at a time, since the car needs to charge and I have various physiological needs.


I avoid using even cruise control because I know I am a worse driver when it's engaged. I firmly believe I'd be a danger to myself and others for the first 30 seconds if a level 2 or 3 car had to hand control back to me.

Modulating my speed keeps me engaged with driving and I use it as the trigger to cycle through my responsibilities. I check ahead / speed, adjust throttle, check rearview mirror, check left mirror, check right mirror, repeat. When the throttle step is gone I find myself losing situational focus very quickly. I've always thought it was because it was the only action that involved actual physical engagement; all the rest are just eye movement.

I just bought a new car that alerts about possible icy conditions when it's below 4 degrees C, but it only does so after the car has been running for several minutes. By the time it goes off I'm already driving and I mistake it for an engine warning or other catastrophic failure. Every time. I live in Canada; of course it's icy in the winter. Toddlers could have told you that. This is an absurdly dangerous thing to have the car do, because when the beep goes off all my attention is on processing the alert, not on the pedestrian I'm about to run over. I have the same basic fear about getting control of a level 2 or 3 car. All my focus will be on why the car is handing me control, not on the actual task of driving.


You need audiobooks or podcasts. Heck, I love taking a conference call while on my commute (handsfree of course) - 80% of it is in the left lane of an interstate.


The left lane is only for passing, and if you are passing >80% of the cars on the road, you are a reckless driver.


Incorrect; the left lane is for passing only in certain states, and mainly on roads with only 2 lanes of traffic in each direction.

>The Uniform Vehicle Code states: Upon all roadways any vehicle proceeding at less than the normal speed of traffic at the time and place and under the conditions then existing shall be driven in the right-hand lane then available for traffic ...

>All states allow drivers to use the left lane (when there is more than one in the same direction) to pass. Most states restrict use of the left lane by slow-moving traffic that is not passing. The table below describes the law in effect in each state.

http://www.mit.edu/~jfc/right.html


> if you are passing >80% of the cars on the road, you are a reckless driver.

Or you are driving at the speed limit on a 2x2 highway used as a trucking corridor. (For example: most of I-5.)


Right. In California trucks have a lower speed limit than cars, which results in what feel like pretty dangerous situations.

The right lane is pretty much for trucks going 55.

The left lane is a combination of:

* Trucks going 60-65 to pass a truck in the right lane

* 'Typical' cars who are going 75ish

* Fast cars who are going 80-90

So there is a huge disparity of speed in that lane -- and that disparity seems like it must be a huge factor in many highway accidents.

It always feels really dangerous. I wonder if it is safer than just having the trucks have the same speed limit as the cars.


The speed disparity can be a huge issue. Drive over the Donner Pass sometime or any major Western highway with lots of trucks and mountainous spots (i.e. most Western highways at some points) and the different speed combinations can be pretty nerve wracking to navigate.


Nope. Not all Interstates are 4+ lanes of traffic. Many states simply require slower traffic to use the right hand lanes.

http://www.mit.edu/~jfc/right.html


The only level 3 cars that I think are about to hit the road are Volvos, and they are supposed to pull over and park if you're asleep when things are tricky. https://www.youtube.com/watch?v=asKvI8ybJ5U&feature=youtu.be...


Does Level 3 require instant attention?

I thought it allowed the car to impose some parameters (such as road type or weather). If my car can handle the hours-long uninterrupted highway sections of a road trip, yet still require manual driving on city streets, that is still a huge value proposition for time & safety.


It doesn't require instant attention, no. But it's hard to envision a scenario where the car needs attention at some point in the future, not particularly soon, but can't drive itself. Maybe I'm drawing the line at the wrong point, though. If a car can drive itself but may occasionally need to pull over and ask the driver to tell it which way to go, at the driver's leisure, is that level 3 or level 4? I thought 4, but I'm having trouble finding a clear answer.


Level 4: "Such a design anticipates that the driver will provide destination or navigation input, but is not expected to be available for control at any time during the trip. This includes both occupied and unoccupied vehicles." (emphasis added)

In my mind, if a level 4 vehicle is unable to reach a given valid destination without assistance, that would be considered a failure.

In reality, macro-scale rerouting around obstacles that the self-driving system does not know how to overcome is not a particular challenge, GPS systems do this easily. The definition doesn't say it has to take the most efficient route. ;)


>In reality, macro-scale rerouting around obstacles that the self-driving system does not know how to overcome is not a particular challenge, GPS systems do this easily. The definition doesn't say it has to take the most efficient route. ;)

Oh, that's going to go over really well. Say you're drunk, and want to go home in your self-driving car. However, there's a blind left turn that your car knows it can't safely navigate, so it happily routes you a 15-minute drive through back roads for a trip that should take 5. People are going to love that.

(I've heard that UPS does route planning with a system that discourages left turns, even sometimes doing three rights to avoid some dangerous lefts.)


I've heard the same thing about UPS, but the way I heard it, it's only because left turns can be so much slower. A route that leans more heavily on right turns will usually be faster overall.
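For what it's worth, that kind of left-turn aversion is easy to express in an ordinary shortest-path search. A minimal sketch in Python (not UPS's actual system; the penalty value and all names are made up): run Dijkstra over (node, incoming heading) states and surcharge left turns.

    import heapq, itertools

    LEFT_TURN_PENALTY = 120.0   # assumed surcharge, e.g. expected seconds of delay/risk

    def turn_cost(prev_heading, next_heading):
        # headings: 0=N, 1=E, 2=S, 3=W; e.g. going from E(1) to N(0) is a left turn
        if prev_heading is None:
            return 0.0
        return LEFT_TURN_PENALTY if (prev_heading - next_heading) % 4 == 1 else 0.0

    def route_cost(graph, start, goal):
        # graph: {node: [(neighbor, heading_of_that_leg, seconds), ...]}
        tie = itertools.count()
        pq = [(0.0, next(tie), start, None)]   # (cost, tiebreak, node, heading arrived on)
        best = {}
        while pq:
            cost, _, node, heading = heapq.heappop(pq)
            if node == goal:
                return cost
            for nbr, nh, secs in graph.get(node, []):
                c = cost + secs + turn_cost(heading, nh)
                if c < best.get((nbr, nh), float("inf")):
                    best[(nbr, nh)] = c
                    heapq.heappush(pq, (c, next(tie), nbr, nh))
        return float("inf")

    # A -> B -> D needs a left turn; A -> C -> D gets there with a right turn.
    g = {"A": [("B", 1, 60.0), ("C", 0, 60.0)],
         "B": [("D", 0, 60.0)],
         "C": [("D", 1, 60.0)]}
    print(route_cost(g, "A", "D"))   # 120.0: the right-turn route beats the 240.0 left-turn one

The nice property is that one penalty number can fold in both the delay and the risk, so "three rights instead of a left" falls out automatically whenever the detour is cheaper than the surcharge.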


my (completely uninformed) understanding was that level 3 was more like "level 4 in perfect conditions". So it would need to pull over, or warn the driver to take over, if it starts raining or snowing or the road (or its lines) is uneven or ill-defined. Or perhaps that it's 100% self driving in all conditions on a highway, but still needs manual driving for "surface roads".

Many of those scenarios would allow some time for the driver to take over, and would most likely be able to pull over if the driver can't or won't for some reason.


The original NHTSA levels had this very clearly at level 3, but the SAE's levels seem kind of ambiguous as to whether that case would be level 3 or 4. I feel like it's more in the spirit of level 4 than level 3, though, as level 3 requires "prompt response" from the user to a request for intervention, while level 4 has the car still doing something sensible even if the user doesn't respond.


Right, it's that "prompt" which seems critical here. If there is no constraint on how long the driver can take to respond then there isn't an issue, but if there's any requirement for how quickly they need to get back in the game, it seems doomed to failure.


Let's say that the car is entering a fog or other weather condition, which it realizes is beginning to obstruct its visibility or control (e.g. snow, heavy rain, wind). After visibility or control impairment has crossed a warning threshold, but before putting safety at risk, the car can alert the driver and begin the process of handing over manual control.


I think the idea is that at Level 4, the car can drive the way humans drive. If it's unsafe for the car to drive autonomously, it's unsafe for humans, too.


The politics will hopefully be solved soon. A few red states that are anti-government and pro-innovation (like Arizona or something) will make them legal, and then we will start collecting irrefutable data that they are 10 times safer.


I think you are missing a /s there ...

However, Arizona would be a beautiful place to have fully autonomous cars.

The snow bunnies make driving there a nightmare.


Maintaining safety is easier than navigation when you need to deal with all situations. Consider parking in an open field or responding to a traffic cop redirecting traffic. A car may simply stop and require a driver to respond in such situations without any real safety risk.


Wouldn't Level 3 be like Otto? (Automated, No human attention required on Highways; Human driving otherwise)

The truck would know well in advance which exit it would take and could give the driver a heads-up some minutes before they need to take the wheel.


As I understand it, level 3 could still require human intervention at unforeseen times (e.g. weather, etc.).


That conclusion is not supported by the data in the report. A supportable conclusion based on this report would be that Tesla automation with auto-steer produces 40% fewer air bag deployments than Tesla automation without auto-steer.


Actually we can't even go that far.

>22 The crash rates are for all miles travelled before and after Autopilot installation and are not limited to actual Autopilot use.


Which is exactly how you would want the analysis to be done.

Because of course there were no autopilot miles in the before bucket you would be skewing the analysis if you only looked at autopilot miles in the after bucket.

What we know is that after installing the autopilot feature, accidents with airbag deployments dropped 40%. Not just during so-called "safe" autopilot miles, but over all miles driven.

Now it's plausible that autopilot makes you a better driver both while it's on and off, but I'd say it's just exceedingly common to rear-end people due to distracted driving.


Except there are "autopilot" miles in the before bucket. They are comparing before and after Autosteer so Autosteer is in the after bucket only. TACC however is in both buckets, aka this is supposed to be a comparison showing the increase in safety when going from level1 to level2 systems but both buckets have tons of human driven miles mixed in.


Level 1 = cruise control. All cars in the entire world are level 1 autonomous these days.


It's not just "cruise control", it's "adaptive cruise control". From the report PDF (emphasis theirs):

the driving mode-specific execution by a driver assistance system of either steering or acceleration/deceleration using information about the driving environment and with the expectation that the human driver perform all remaining aspects of the dynamic driving task


You'd be surprised how many cars are still sold without cruise control.


Level 1 in the context of this study is specifically TACC.


That's like saying "This medical procedure doesn't save lives, it just results in 40% fewer incidents of sudden heart failure".


You should really read the report. It's comparing Tesla TACC to Tesla TACC+Autosteer. Aka the safety increase when going from Level1 to Level2. It doesn't have anything to do with being safer than human drivers.


I think it is reasonable to assume that Tesla TACC is not less safe than Tesla non-TACC, therefore the conclusion is also relevant to comparisons between Tesla level 0 (entirely human-driven, no assist) vs. Tesla level 2.


Why would it be reasonable to assume that?


Because traffic-aware cruise control has existed for years from other car manufacturers, and I have seen no data showing that this feature decreases safety.


Perhaps you should be looking for data that shows that traffic-aware cruise control increases or matches safety of human drivers then? Seems strange to just assume one side of the argument if there's no proof either way.


I can rephrase my original comment according to your terms: there is no evidence that TACC reduces safety. That's equivalent to what I said. Do you have evidence to the contrary?

Moreover, if experience showed that TACC reduces safety, NHTSA and its counterparts in other countries would have noticed and put the brakes on it. If you don't think this is a reasonable assumption, so be it.


Edited to respond to edit (perhaps my original comment was a bit brash as well).

The problem here is that you're misspecifying the null hypothesis. We have had human drivers for 100 years. New automated systems are starting to come into the fray. We're trying to figure out if they are safer than humans. The null in this case is that they are not. That's what we assume until proven otherwise.


> We're trying to figure out if they are safer than humans.

No, we're not. We're trying to figure out if TACC has any impact either way. Accordingly, the null hypothesis is that they are no more and no less safe than human drivers alone. You need data to show the direction of the change. You just picked a direction (i.e., towards worse) that was more intuitive to you.


Fair enough. I'd still say that invalidates your original comment.

>I think it is reasonable to assume that Tesla TACC is not less safe than Tesla non-TACC, therefore the conclusion is also relevant to comparisons between Tesla level 0 (entirely human-driven, no assist) vs. Tesla level 2.

Is it really reasonable to assume it's not less safe in this case? Neither of us can say either way, seems like it would be impossible to assume either way.


> Is it really reasonable to assume it's not less safe in this case?

Yes, because this assumption is consistent with the null hypothesis. The negation of this statement is that it is strictly less safe, which is not consistent with the null hypothesis, and for which there is no evidence.


Okay so let me follow. We're fine assuming it's not less safe. So then we must be fine assuming it's not more safe.

These two lead us to your null which is that it is as safe.

When a new unproven technology comes out does it seem correct to automatically assume that it is as safe as the current standard? Would you not expect some proof?


> When a new unproven technology comes out does it seem correct to automatically assume that it is as safe as the current standard? Would you not expect some proof?

In the absence of any data or given priors whatsoever, we should give the three possibilities (less safe, as safe, more safe) equal weight. This means that there is a 33% chance it's less safe, which is too large to ignore. That's why we want more proof when we deploy new tech, introduce a new drug, etc.

But note that specifically in the context of TACC as it stands today, we are not in that situation. There has been plenty of data accumulated already, which constitutes the proof you are looking for (i.e., the data we do have doesn't show a decrease in safety.) That's why today it is reasonable to assume that TACC is not less safe than no TACC.


> There has been plenty of data accumulated already. That's why today it is reasonable to assume that TACC is not less safe.

Haha, well we could have totally sidestepped this whole thing then! Can you show me the data?


Level 1 is Human drivers, level 2 is computer "lane assist" and "automated cruise control".

So yes, it is literally human Vs computer driven safety.


Level 1 is Driver Assistance. Level 0 is No Automation. It's literally a graphic in the report... Someone kindly posted the pdf link if you'd like to read it.


Do you actually believe that cruise control is less safe, to any non-negligible degree, than human driving?

You are bringing it up, but I don't think you are doing this for any reason other than to be contrarian.

Also, ALL cars have cruise control right now. So the 40 percent STILL applies to all these cars.

The correct way to phrase the study is that "99 percent of cars are level 1 autonomous cars. For this 99 percent, going from level 1 to level 2 would make them 40% safer"


> You are bringing it up, but I don't think you are doing this for any reason other than to be contrarian.

You shouldn't assume intentions. A ton of people here are misreading this study (or should I say not reading it).

> Also, ALL cars have cruise control right now. So the 40 percent STILL applies to all these cars. The correct way to frase the study is that "99 percent of cars are level 1 autonomous cars. For this 99 percent, going from level 1 to level 2 would make them 40 % safer"

The comparison here is only on Tesla so accordingly it's only on TACC vs TACC+Autosteer. TACC is fundamentally different than the type of cruise control that is in basically all cars.


Or saying "just because this drug cocktail appears to reduce incidents of heart failure relative to the weaker drug for patients with a specific condition doesn't mean we should scream at regulators to abandon all further attempts at regulation and start selling all potent heart-related drugs under development over the counter"


The "shitty variety" that doesn't have to deal with all the stuff the current state-of-the-art pure self-driving cars don't actually do yet like drive in imperfect road conditions and adapt to unusual circumstances because it's just a braking assistant and cruise control system with humans behind the wheel?


I think the primary issue many of us have is the name. The name implies far more capability than the system delivers. I have seen clear-day demonstrations but am eagerly awaiting Tesla showing us a night-time rainy video; I have seen references to such testing having been done.


Here is video of the automatic braking feature triggering at night during light rain: https://www.youtube.com/watch?v=9X-5fKzmy38

The name is good for technical people, not so much for lay people who don't know that autopilots are more like cruise control than something that completely flies the plane (they have been getting better though). The 40% reduction in air-bag activation seems to indicate the name is good enough, while it's probably not optimal.


Autopilot on planes still requires there to be human pilots for edge cases. I think everyone understands this, because everyone knows that there are still pilots on planes.


I would guess emergency braking is the big difference. The only real stopper there is cost, there are multiple different implementations in vehicles on the road right now.


The real safety payoff will be when manual driving is banned on public roads. We will have to fight the "freedom is doing whatever I want" people who are going to defend the right to manually drive on public roads.


Or just the not-rich people who have old, not-autodriving cars? First-world problems I guess.


The non-rich will adopt self driving cars faster than anyone. They will use these cars as a service and not need to own one or pay for insurance. It will be rich people with their sports cars and fancy motorcycles fighting us on this.


On the other hand, I am easily 3 times safer than the average driver, so self driving vehicles would endanger me.


Even if that was true, the fact that all the other idiots are in self driving cars that are not crashing into you makes you safer.


All the bad drivers would be 40% less likely to crash into you. And since you are a 3x driver, the only risk you have while driving is all the bad drivers.


Dunning-Kruger effect on display?


Not necessarily, making other drivers safer makes them less likely to collide with you.

Depending on how all the numbers would work out, making other drivers safer could well make you safer too. It might not, but it could.


Or perhaps it will make you 40% more safe than you already are?


The 40% number isn't very informative. The report has multiple notes about it:

ODI analyzed data from crashes of Tesla Model S and Model X vehicles involving airbag deployments that occurred while operating in, or within 15 seconds of transitioning from, Autopilot mode. Some crashes involved impacts from other vehicles striking the Tesla from various directions with little to no warning to the Tesla driver.

ODI analyzed mileage and airbag deployment data supplied by Tesla for all MY 2014 through 2016 Model S and 2016 Model X vehicles equipped with the Autopilot Technology Package, either installed in the vehicle when sold or through an OTA update, to calculate crash rates by miles travelled prior to[21] and after Autopilot installation.[22] Figure 11 shows the rates calculated by ODI for airbag deployment crashes in the subject Tesla vehicles before and after Autosteer installation. The data show that the Tesla vehicles crash rate dropped by almost 40 percent after Autosteer installation.

21 Approximately one-third of the subject vehicles accumulated mileage prior to Autopilot installation.

22 The crash rates are for all miles travelled before and after Autopilot installation and are not limited to actual Autopilot use.

So the actual rates of crashes for Teslas using Autopilot vs Teslas not using Autopilot aren't reported.


Reading it, I think it is essentially a 'before October 2015' vs 'after October 2015' comparison of overall accident rates, given the caveats there. So the -40% here should be a lower bound, since it includes a time period with 0% Autopilot usage (before October 2015) and a time period with an unknown but <100% Autopilot usage (after October 2015). If people after October 2015 used Autopilot for 50% of their mileage, then you'd expect the true effect of Autopilot to be more like -80%.

This strikes me as suspiciously high: what fraction of airbag-worth accidents are supposed to happen on the highways where one can use Autopilot at all? So maybe there's some other trend going on which is driving this decrease.
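For concreteness, here's that back-of-the-envelope as a sketch (the 50% Autopilot share is hypothetical, and it assumes non-Autopilot miles still crash at the pre-update rate):

    r_before = 1.3   # airbag crashes per million miles (from the report)
    r_after = 0.8
    f_ap = 0.5       # ASSUMED fraction of post-update miles driven on Autopilot

    # r_after = f_ap * r_ap + (1 - f_ap) * r_before, solved for r_ap:
    r_ap = (r_after - (1 - f_ap) * r_before) / f_ap
    print(r_ap)                 # ~0.3 crashes per million miles
    print(1 - r_ap / r_before)  # ~0.77, i.e. close to the -80% figure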


This accident reduction is driven not by the auto-driving part of the package, but by the emergency response part. The autonomous braking and evasive steering that occurs when an accident is imminent is what is preventing accidents.


Also, airbag activation isn't a perfect proxy for accident rate. Passenger safety is only a subset of vehicle safety.


I'm not sure that it matters as much as you imply.

Even if the only benefit of Tesla's autosteer is that it makes you more focused in the parts that are actually dangerous, a 40% improvement is still a 40% improvement.


That really looks like a terribly small dataset on the "before autosteer" side. According to footnote 21, we are looking exclusively at the sum of miles Teslas had on the clock when someone paid for the post-factory activation. Due to the growth of Tesla sales, that set is likely biased quite a bit towards younger cars with few miles at time of update availability. Tens of millions of miles might sound impressive for a statistical base, but tens of airbag activations do not.

At least they did not compare "all Teslas" vs "Teslas with Autopilot Technology Package checkbox checked", since that would select for drivers who do a lot of miles in boring long distance driving. That kind of driving happens to be the one where any car has an "airbag mileage" far above average, so it would be just natural that they have a lower crash rate, autosteer or not.

But a part of that effect might still be present in the numbers if customers are less likely to pay for a post-factory update than while ordering the car (which I do suspect). If this holds, then the subset of drivers who activated post factory (with miles already on the clock, so that this part of their car usage appears in the "before autosteer" column) will have, on average, a bigger desire for autosteer (and more naturally safe long distance cruising in their driving patterns) than the average of those who got the car with the option factory-activated (and who thus appear only in the "after autosteer" column).

Edit: oh, and here's the really big one: the "before autosteer" number only includes cars that were upgraded. How is this important? All airbag activations that happened in an event that messed up the car beyond repair cannot appear in the "before update" dataset, because nobody spends thousands of dollars on an autopilot update for a car wreck.

[edit 2]: This should actually make the "before" number much better than the "after" number, so I really wonder what's going on here.
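A toy simulation of that last selection effect (every number here is invented; it only demonstrates the direction of the bias):

    import random

    # Toy model: every car has the SAME true crash rate, but cars wrecked
    # before the upgrade never buy it, so their miles and crashes drop out
    # of the "before autosteer" bucket entirely.
    random.seed(0)
    RATE = 1.3e-6                      # true airbag crashes per mile (assumed)
    N_CARS, MILES_EACH = 50_000, 20_000
    P_TOTALED = 0.5                    # assumed share of crashes that total the car

    crashes = miles = 0
    for _ in range(N_CARS):
        crashed = random.random() < RATE * MILES_EACH
        if crashed and random.random() < P_TOTALED:
            continue                   # car never upgraded: excluded from the dataset
        miles += MILES_EACH
        crashes += crashed

    print(crashes / miles * 1e6)       # ~0.66: well below the true 1.3, from selection alone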


Waiting for the headline: "Human Fails to Prevent Accident, Outraged Public Calls for Banning of all Human Drivers"

The obsession with perfection in self-driving cars is misplaced, they just need to be demonstrably better than humans.

This is obviously the future.


At the very least from a legal standpoint that's not true, because with a human driver it's far easier to define fault (which in turn allows the issue to be resolved) than if two autopilots happen to collide. Who pays for what then?

From a safety point of view, I'm more inclined to agree, however I'm not entirely convinced that we've passed the novelty stage of it yet, where drivers will remain vigilant because it's a new and exciting feature and that mentally primes them to keep focused. It wouldn't surprise me if, as it becomes more mainstream, drivers begin to be less vigilant about staying focused as society as a whole begins to trust it more. That's when everything could go bad.

Nevertheless, you're absolutely right that this is the future no matter what so we need to design it as best we can to resolve all of the issues it may give rise to.


There are all sorts of cases in real life now where defining fault is not at all easy. Children and animals are two examples of 'autonomous' individuals whose liability may lie with another party (parent/owner), but also may not, depending on the circumstances.

Additionally, who's at 'fault' really doesn't matter when in most cases the damages are fully outsourced to insurance providers. In the case of autonomous driving, this insurance burden is shifted up a level to the automaker, but this cost will be passed on to the car's owner in some way, either in the cost of the car or a subscription to autonomous driving.

However, if autonomous driving is safer than non-autonomous driving, premiums will go down, probably dramatically. Insurance companies can much more easily monitor the quality of autonomous driving for various automakers/cars, either by the testing specification that they meet or by the real-world statistics. Are autonomous Teslas in fewer accidents than autonomous Volvos? Cheaper premiums for Tesla.

This is exactly the situation you want to be in. Premiums for individuals are a blunt instrument... it's hard to change many individuals' behavior, the rewards may be low for the individual (save $3 on your premium!), and the penalties can be pretty random (you sped that one time so you're a high risk driver... or are there just more police in your city?). It's way better to incentivize at a higher level, like the automaker.

The only case where automakers would truly be at risk is something they should control, and that is if they pulled a VW and committed fraud on their autonomous driving tests, or were negligent in overlooking scenarios or providing updates.


Or an alternate headline: "Tesla confirms they will always say it's the customer's fault!"


Can anyone who is in the industry comment on how Autopilot performs in poor weather (i.e. flash floods, thunderstorms, snowstorms, etc.)?

All I can find from the article about weather was in section 3.1:

> The manual includes several additional warnings related to system limitations, use near pedestrians and cyclists, and use on winding roads with sharp curves or with slippery surfaces or poor weather conditions. The system does not prevent operation on any road types.


Not sure about Tesla's, but many others are completely disabled or say specifically not to use them in poor weather conditions.

You have to look at what they're using for this: cameras with computer vision, and radar. Radar has reflection problems like all hell at ground level, and computer vision using cameras is generally piss poor (hence the videos of Teslas hitting reflective objects).

It's a miracle that they can do this well using it IMO, even without adverse weather.


It's not recommended, but does apparently work in a pinch: https://electrek.co/2016/12/28/tesla-autopilot-snow/


Haven't tried it in heavy snow yet, but it performs pretty well in heavy rain. The software is constantly updated, and as a user, you really feel it getting better.


How well does anyone drive through a flash flood?


It depends. The Tesla system is reliant on good contrast between the road stripes and the road surface. With grey skies and rain, that contrast can go down to a level where the lines cannot be reliably discerned, and AutoSteer is not available.


This is really good news. A major worry with driverless cars has been that companies would be harshly punished for accidents, even if there was a dramatic reduction in crashes overall.


They managed to weasel themselves out of a similar problem when they rebranded vehicular manslaughter as jaywalking, shifting the blame to the victim. I'm sure there's no problem here that a bit of marketing and societal engineering can't solve.


I think there's sometimes a lot of marketing and hand waving in this type of argument ("crashes go down with autopilot"). Most car accidents are caused by drunk or old people, and they drag the average up. If you tell me a Tesla autopilot beats a drunk guy, it won't surprise anyone. Now, as a non-drunk, young(ish) but experienced and careful driver, my statistics don't look anything like the average; they look a lot better. You have to convince this demographic, not beat the averages, otherwise it's not rational for me to buy the feature. I'm guessing this goal post is a lot harder.


I'm confused. You think young(ish) people need to be convinced to buy semi-autonomous vehicles? If they can afford it, why wouldn't they buy it? People love things that make their life more convenient.

In my experience, it's the older people that are somehow scared of the concept of a self-driving car and would need to be convinced.


Because of safety. I wouldn't buy an autonomous car that has an accident rate worse than my demographic's. A car that has an accident rate better than average is not good enough. I hope that explains my point.


I suppose the best comparison would be to compare non-Tesla owners of the same demographic. Are accident rates correlated to SES? Do people who buy luxury sedans crash less?

Just a guess: probably.

Still, I know I drive stupidly sometimes, and autopilot might help.


Yes exactly, it would be interesting to see fair comparisons of people of the exact same profile (both driver and car) tesla vs others. I don't like that so many people seem happy enough to compare autopilot against the total average of accidents in the road and say "they save lives". When you are going to buy the car you want to know if it's going to be better than you.


For your reference, a level 4 automated car will look something like this:

https://www.youtube.com/watch?v=jhUX8qWFGc4

Imagine you are too fucked up to drive. Your car will be able to pick you up. Do you need Pepto Bismol too? Your car will pick it up from a drive-through, billed through your license plate. I'd give it roughly 15-20 years to take place.


So I'm far less likely to crash if I use this, and I have something to blame if I do. Everybody wins! (Except the engineers).


According to the full report, the truck was visible for at least 7 seconds prior to the crash - another article here:

http://arstechnica.com/cars/2017/01/after-fatal-tesla-crash-...

Strangely enough, giving more people autopilot would probably be better than letting people drive. I think Tesla picked the right time to enable it, since the cross-over point at which autopilot becomes better than humans in general use cases has been reached.

Call it a beta if you want, but it's a pretty damn promising beta.


I don't always like Gladwell, but his piece on the Ford Pinto and the NHTSA philosophy towards auto safety more generally is quite worth the read [1]. I hadn't considered the intersection of this and self-driving car tech, but I wonder if NHTSA will basically take the position that as long as self-driving tech saves lives overall, a few "bugs" where the car kills the driver are an acceptable trade-off.

[1] http://www.newyorker.com/magazine/2015/05/04/the-engineers-l...


The 40% figure is meaningless unless the absolute numbers are reported. How do we know if this difference is statistically significant?


It was 1.3 crashes per million miles. It is now 0.8 crashes per million miles.

Teslas have driven >= 3 billion miles


So 3,000 million miles times the difference of 0.5 crashes per million miles, or about 1,500 crashes avoided.

:O
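On the statistical significance question upthread: with the report's rates and an assumed mileage split (the report doesn't publish the split, so the exposures below are placeholders), a standard Poisson rate-ratio test looks like this sketch:

    import math

    miles_before = 100.0   # millions of miles before Autosteer (ASSUMED)
    miles_after = 200.0    # millions of miles after (ASSUMED)

    c_before = 1.3 * miles_before   # implied crash counts: 130
    c_after = 0.8 * miles_after     # 160

    # Wald test on the log of the rate ratio; variance ~ 1/count1 + 1/count2
    z = math.log(1.3 / 0.8) / math.sqrt(1.0 / c_before + 1.0 / c_after)
    print(round(z, 2))   # ~4.11 with these exposures; |z| > 1.96 is significant at the 5% level

With anything like these exposures the difference is comfortably significant, but the real answer depends on the actual mileage figures, which is exactly why the absolute numbers matter.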


What other car company can even recover the airbag deployment rate per mile?


I believe that GM's OnStar also collects that data.


I only want to use open-source code for something that my life depends on. That way it is open to anyone to inspect, so one can independently determine if the code is behaving as desired.


That is a pretty remarkable report. It essentially holds Tesla up as an exemplar of the standard other car makers will be expected to achieve.


This means that autopilot must be engaged at least 40% of the time (Amdahl's law!). Tesla owners, is that realistic?
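Spelling out that bound (a sketch; it treats the entire 40% drop as coming from Autopilot-engaged miles, which footnote 22 of the report does not actually guarantee):

    overall_drop = engaged_share * drop_while_engaged
    0.40 = engaged_share * drop_while_engaged <= engaged_share * 1.0
    => engaged_share >= 40% of miles, even if Autopilot miles had ZERO crashes
    e.g. an 80% reduction while engaged would need 0.40 / 0.80 = 50% of miles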


Does anybody know what the "Population" field indicates at the top of the report?


Impressive. 2/5 reduction is a lot of lives saved.


I've railed on about the safety issues of autopilot before and how I'm not entirely comfortable with the pace they've developed compared to the considerations of human-machine interfaces and driver attentiveness, particularly given my (moderate) exposure to these sort of problems in other industries. Thus I'm particularly interested in that section of the article.

What I found interesting is that figure 10 shows that as you jack up the independence of the machine, the level of driver distraction accordingly increases. Adaptive Cruise Control (ACC) shows a significantly higher percentage of shorter-duration off-road glances than Limited-Ability Autonomous Driving Systems (LAADS). Additionally, countermeasures help to alleviate some but not all of that increase in distraction. Importantly, this is coupled with the point that the duration in which drivers have to react to most impending hazards is under 3 seconds. This may seem obvious but it's a critical set of data to help objectively demonstrate the risks involved with losing or even reducing alertness.

It goes on to say that Tesla has addressed the risks of mode confusion, distraction, etc. and has implemented solutions to address this unreasonable risk, which they in turn define as abuse that is reasonably foreseeable. In this, they're talking about the reasonably foreseeable risk of, e.g., the driver not understanding whether they're in autopilot or not. It goes on to mention that Tesla has also changed its driver monitoring strategy to promote driver attention, which I take to mean detecting hands on the steering wheel.

Either way, Tesla's main approach to dealing with driver alertness is testing for hands on the steering wheel. My concern is that this doesn't consider the alertness of the driver to their surroundings, particularly other vehicles that may be approaching them, or the process of anticipating hazards (approaching an intersection where there's a blind corner and adjusting focus to pay more attention to what may come from it, for example). I don't see how Tesla's countermeasures address this.

The physical act of manually driving causes drivers to maintain alertness not only to where they're going, but also situational alertness to what's around their vehicle. Specifically, it's the process of random actions, requiring taking input, making a decision, and executing the appropriate action, that maintains this alertness. If the driver isn't having to make those random decisions and take action then their alertness drops. Autopilot, even with hands on the wheel, eliminates much of that random decision-making and reacting.

When you drive, you mentally note the vehicle over your shoulder that is in the lane next to you, and subconsciously consider that they may do something insane. You consider those blind corners as you approach them and that vehicles may spontaneously appear out from them. You see a truck on the road which is approaching a bend and give it a wider berth because its centre throw may cause it to cut the corner into your lane slightly. These are all tasks that you do, that you may not do as well or at all when autopilot is steering, because you are not as engaged with the driving process.

Critically, I don't see how ensuring hands are on the steering wheel causes these alertness tasks to continue as frequently as manual steering. The driver may be in the physical location to quickly take over, but they may not be in the mental location to do so. This is the major issue I have with the rapid autopilot development based on my experience in related areas where maintaining situational alertness proved to be very difficult when the person was engaged with only a limited scope of requirements to prove their conscious presence.

I feel like the report doesn't really drill in to this as much as it needs to. It begins to touch on it around Figure 10 but sort of hand-waves it away, saying 'Tesla considered it discharged their responsibility to make sure drivers stay focused by implementing countermeasures', but I believe it's more nuanced than that. It investigates the extent to which Tesla's system is good at ensuring drivers are physically present (that is, their hands aren't on the passenger seat making breakfast) but it doesn't really look at the mental presence that delivers situational alertness.

That mental alertness is the major sticking point for me. I don't really have a solution beyond "drive manually", which isn't reasonable, because this technology is here to stay and will continue to grow, but it's why I've always been bearish about the rapid pace of rollout of these driverless technologies, particularly when advertised as 'beta'. As I've said before, no amount of disclaimer and 'hey, you should do this' really changes how drivers behave once the equipment is placed in their hands.


Great. There is no doubt that driver assists cut down on crashes. But what Tesla has on the road is far from a total eyes-closed autopilot. That is an inflection point with this tech that nobody has dared to test on the public road. I remain unconvinced pending those trials.

Also, I still haven't seen any autodrive handle off-road driving, such as boarding a car ferry, or navigate a construction zone manned by an inattentive flag person.


Haven't there been a lot of tests of self-driving technology on public roads? Tesla has released a couple of videos of theirs, and Google has put a couple million miles on theirs. What hasn't happened yet is a production-ready, customer-owned vehicle on public roads.


Not unsupervised. Unleashing a robot into school zones without a human in the loop is still considered too dangerous legally if not physically.


That's pretty much the rule about all robots operating in proximity to humans (except perhaps elevators, escalators, automatic doors and toy robots).


> But what tesla has on the road is far from a total eyes-closed autopilot.

It doesn't matter that it is not full autopilot. What matters is that whatever they have right now, as currently deployed in the real world, produced the numbers behind NHTSA's conclusions.


What's your point?


The dream remains a dream. Being able to call your car like Batman did the Batmobile, a car able to navigate unsupervised, is still a decade, perhaps decades, away.


Oh jeez. So what if it's 10 years away? Even if it's 20 years away it's a piss drop in terms of time. Do you remember what the Internet was like in 1994? I do. It was shit. There wasn't even Google. Forget about looking things up on your mobile; there wasn't even mobile, never mind Internet on mobile. It's barely 20 years and the world has _completely_ changed because of it. Self-driving cars are barely at the 1994 Internet stage. It's more like the BBS stage with 9600 baud modems.

Just sit back and chill. It's going to happen.


A small correction, but there were mobiles. They were huge (most people used them attached to the car), and their weight was measured in kg, but they existed :)


And they were very powerful. The distances between cell and tower were an order of magnitude greater than in today's networks. If they were still supported, mounted carphones would still have a niche market.


>It's going to happen.

You cannot compare every arbitrary thing to how the Internet has progressed and just win the argument. That is just wishful thinking.


Ok sure, but I don't care about that as much as I care about a drastic reduction in danger on the road. This is a big step forward for that.


I don't completely trust the NHTSA and I'm skeptical about auto-piloting cars, but I accept that more and more of those will be on the roads. I will never ride in a vehicle that lacks an override mechanism.

In general, I think we are moving way too fast towards these self-driving vehicles because certain factions want to try and replace long and short haul truckers with robotic systems that are cheaper and damn the consequences.


I don't really know why this is surprising. Computers are already better than humans at most tasks that involve a limited set of behaviors and they have infinitely better response time than humans (and continue to get better). How could anyone think that a report like this was going to end up any differently?


You might want to read about the Therac-25.


It's okay to withhold judgement and to be skeptical until seeing actual data. Especially for rapidly developing technologies like self-driving cars.


I didn't say it wasn't ok to be skeptical, just that I wasn't surprised by the results. Teslas have been on the road for over a year with AutoPilot and there haven't been any accidents caused by the system. Why anyone is surprised that AutoPilot is safer than a human driver is what I'm not understanding. Based on the votes, though, apparently computers are evil and shouldn't be trusted and we should all be amazed when they don't just kill us.


You said you didn't understand why it was surprising, which is a different thing than not being surprised.

Computers are good at lots of things like playing chess, and still completely incapable at lots of things that involve common sense. Expecting autonomous cars to be better than human drivers eventually is one thing, but it is certainly not obvious that we are there yet.


The history of self driving cars shows that it is not an easy task. We've been working on this for over 15 years. I was on a self driving car team during 2008-2010. We won multiple intercollegiate competitions. At that time I was certain it would be 20+ years before we had something that could drive on the roads. The progress has been spectacular.


No, programming the self-driving car is not an easy task. A car that's already been programmed, though, is definitely going to be safer than a human driver and driving for the computer is an easy task.


> definitely going to be safer than a human driver

That's just ridiculous. It is going to be safer in some cases and less safe in others, and obviously it depends on the system and the driving conditions. To say that a computer is necessarily a better driver is just science fiction until we have solid data showing that to be the case.


> We've been working on this for over 15 years.

Over 25 years, actually - CMU's ALVINN (1989) was one of the first full-sized vehicle implementations:

http://repository.cmu.edu/cgi/viewcontent.cgi?article=2874&c...

It was preceded by an E-MAXX based RC car system called DAVE (DARPA Autonomous Vehicle).

CMU also worked on and investigated other systems, going back to 1984 or so:

http://www.cs.cmu.edu/afs/cs/project/alv/www/

They had an interesting system called the Terragator, from at least 1983; I can't find a paper about this system, though I know they exist. There is a video, however:

https://www.youtube.com/watch?v=beGRM9ZmWlI

One might consider the Stanford Cart as an earlier example (I believe it used computer vision techniques - and was really slow at computing the "next move"):

https://web.stanford.edu/~learnest/cart.htm

(side note: As part of SAIL, the Cart was instrumental in paving the way for Thrun's Stanley - which won the DARPA Grand Challenge, and ultimately led to Google's efforts in self-driving cars)

ALVINN was unique in that it used a (simple compared to today's CNNs and deep learning) neural network (and a WARP systolic array):

https://en.wikipedia.org/wiki/WARP_(systolic_array)

...coupled to a camera to digitize and model/learn the road features for driving, while "watching" the human driver (early form of "behavioral cloning" - but it wasn't termed as such in the various papers you can find about ALVINN).

Interestingly, during the ML Class in 2011 (which I participated in), one of the students - after we were shown several times how ALVINN worked - decided to replicate the system using "modern hardware":

http://blog.davidsingleton.org/nnrccar/

We've really come a long way...
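For the curious, the core ALVINN idea fits in a few lines with modern tools. A rough behavioral-cloning sketch in PyTorch (everything here is illustrative; the real ALVINN used a 30x32 "retina", 29 hidden units, and 45 steering-direction output units rather than a single regressed angle):

    import torch
    import torch.nn as nn

    # Learn steering from (image, angle) pairs recorded while a human
    # drives; data collection and loading are left out of this sketch.
    model = nn.Sequential(
        nn.Flatten(),
        nn.Linear(30 * 32, 29), nn.Sigmoid(),  # roughly ALVINN-sized hidden layer
        nn.Linear(29, 1),                      # simplified: one steering angle
    )
    loss_fn = nn.MSELoss()
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)

    def train_step(images, angles):
        # images: (batch, 1, 30, 32) grayscale; angles: (batch, 1)
        opt.zero_grad()
        loss = loss_fn(model(images), angles)
        loss.backward()
        opt.step()
        return loss.item()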



