Two people killed in fiery Tesla crash with no one driving (click2houston.com)
384 points by bdcravens on April 18, 2021 | 736 comments



Some people are quoting the recent Tesla safety report [1] as evidence that Autopilot is on average much safer than a human driver. This is a classic case of Simpson's Paradox [2].

At first glance it seems that Autopilot is 4x safer than driving without any safety features (1 accident every 4.19 million miles vs. every 0.978 million miles). However, the data used to compute these stats differ in two important ways:

1. Autopilot cannot always be activated. This means that in some particularly difficult situations, the driver needs to drive themselves. These are generally the more dangerous situations.

2. If a driver disengages Autopilot to avoid an accident and engages it again straight away on a 10 mile drive, then you will have 9.99 miles driven on Autopilot without an accident. The statistic misses the cases where the human driver intervened to avoid an accident.

This means that we are comparing the same measure (accidents) on different datasets and therefore in different conditions. This is dangerous, because it may lead us to wrong and often opposite conclusions (see Simpson's Paradox [2]).

I'm not saying that Autopilot isn't safer than a human driver, given that the driver is at the steering wheel and alert, but that this data doesn't lead to that conclusion. If the driver is not sitting in the driver's seat, then it is certainly much more dangerous.
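
To make the selection effect concrete, here is a minimal Python sketch. Every number in it is an invented assumption for illustration, not a figure from Tesla's report: Autopilot gets the easy miles, the human gets the short difficult stretches, and the per-mile crash rates for the two modes come out wildly different even though nothing about either "driver" changed.

```python
# Toy model of the selection effect described above; all rates and mileages
# below are made-up illustrative assumptions, not Tesla data.
TRIPS = 1_000_000
TRIP_MILES = 10.0
HARD_MILES_PER_TRIP = 0.1      # assumed short difficult stretch per trip
CRASH_RISK_EASY = 1e-6         # assumed crashes per "easy" mile
CRASH_RISK_HARD = 5e-3         # assumed crashes per "hard" mile

easy_miles = TRIPS * (TRIP_MILES - HARD_MILES_PER_TRIP)  # driven on Autopilot
hard_miles = TRIPS * HARD_MILES_PER_TRIP                 # human takes over

autopilot_crashes = easy_miles * CRASH_RISK_EASY
manual_crashes = hard_miles * CRASH_RISK_HARD

print(f"Autopilot engaged: one crash per {easy_miles / autopilot_crashes:,.0f} miles")
print(f"Human driving:     one crash per {hard_miles / manual_crashes:,.0f} miles")
# The gap reflects *which* miles each mode was handed, not which "driver" is
# safer: reshuffle the hard miles and the per-mile rates change completely.
```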

[1] https://www.tesla.com/VehicleSafetyReport [2] https://en.wikipedia.org/wiki/Simpson%27s_paradox


Just for the record, people who study the problem space concerning traffic safety have disavowed the word "accident" because it all too often dismisses the preventable root causes that can be learned from here.

context:

* https://laist.com/2020/01/03/car_crash_accident_traffic_viol...

* https://usa.streetsblog.org/2016/04/04/associated-press-caut...

* https://chi.streetsblog.org/2021/04/05/laspatas-ordinance-wo...

It'd be nice if folks here would be mindful of the role language plays. Here's also a preemptive "intention doesn't matter" because the first post I share covers that in the section "The Semantics of Intention", where it argues that the decisions have already been made in both the designs of our streets and in the choices people make behind the wheel, and those have known and changeable outcomes.

last edit I swear, but a good catchphrase I've seen recently that I'll be pinching is "Accidents happen, crashes don’t have to."


> Here's also a preemptive "intention doesn't matter" because the first post I share covers that in the section "The Semantics of Intention", where it argues that the decisions have already been made in both the designs of our streets and in the choices people make behind the wheel, and those have known and changeable outcomes.

The argument made here isn't very good. "Traffic violence" implies intention to do violence. Intention to speed is indeed intention to incur the risk of harm, but that risk is in general quite low, whereas 'traffic violence' strongly implies direct intention to cause harm. Intention to take risks that have harm as a potential consequence is not equivalent to intending harm. It may be true that 'accident' cuts too strongly in the other direction, but the correct term is clearly closer to 'accident' than it is to 'violence'.


Every crash that is not deliberate is an accident. Even if it stems from negligence (drunk driving), the actual crash is accidental. Ascribing intention to every mistake, turning every tragic death into a homicide, ends in dark places. We may hate the person whose mistake causes a crash. We don't make them a murderer.


> Every crash that is not deliberate is an accident.

I don't agree, probably because we have different meanings and implications around the word "accident".

For me at least, there's some subtle semantics surrounding the word "accident" which are at the very least unhelpful in the context of traffic incidents.

There are lots of incidents which are not "accidents", they're the result of a driver choosing to do something, or choosing to not to do something.

A crash shouldn't be described as "They accidentally ran a red light", or "They accidentally went too fast into a curve", or "They accidentally failed to notice the pedestrian".

In those scenarios, "intention" doesn't matter, and the driver fucked up and caused the crash and should be held accountable/responsible. There's no wriggle room there for reducing the responsibility by saying "accidents happen, it wasn't my fault..." There was no "accident", there was a fuckup on someone's part.

Motorcyclists here in Australia have a term "SMIDSY" - which stands for "Sorry mate, I didn't see you", which is pretty much guaranteed to be the first thing out of a driver's mouth after they've just driven straight into you. It's not that they didn't "see you", it's that they didn't bother looking, or that they looked and didn't take any notice. Those are not "accidents". They are fuckups.


>> In those scenarios, "intention" doesn't matter, and the driver fucked up and caused the crash and should be held accountable/responsible. There's no wriggle room there for reducing the responsibility by saying "accidents happen, it wasn't my fault..." There was no "accident", there was a fuckup on someone's part.

Intention absolutely always matters. People often do drive through lights unintentionally. If that is a proximate cause of an accident they are punished for negligence, not murder. Even intentionally driving through a light is a form of negligence. The only intentional crashes happen where people use the vehicle as a weapon[1]. If you deliberately run someone down you are a murderer, whether you broke any traffic laws while doing so is irrelevant. If you kill your wife by deliberately crashing into her, the cops aren't going to write you a speeding ticket while they read you your rights.

[1] There are also rare cases where crashes are deliberately caused by non-drivers. A crash resulting from someone icing a road, throwing a brick at a car from an overpass, or tampering with a vehicle are deliberate crashes even though the driver(s) involved are not at fault. But if the road is icy because of something like a broken water line, it is very possible for nobody to be responsible: a truly blameless accident.


> People often do drive through lights unintentionally. If that is a proximate cause of an accident they are punished for negligence, not murder.

Oh sure. I'm not proposing people who fuck up be treated as murderers. It's just that in my head, saying it's "an accident" is minimising the blame. If you drive through a light unintentionally, that's totally your fault. You should have and could have avoided doing that by paying attention. You're right that it's negligence. I'm less sure than you are that it deserves to be called "an accident". It seems to me way too close to the schoolyard bully claiming "I was just waving my arm around, he put his face in front of my fist."

> But if the road is icy because of something like a broken water line, it is very possible for nobody to be responsible: a truly blameless accident.

I don't live in a place where it freezes, so I'm kinda not the right person to have an opinion here - but I would have thought that if you live/drive somewhere that ice or black ice happens, then there's at least some "driving to the conditions" argument to be made that the driver was negligent in that case?

I guess bottom line there is that to me, negligence is not covered under the term "accident". If the incident should have been avoided by a driver not being negligent, then it wasn't an accident.

(But I'm not a linguist, and am questioning whether that's a widely held opinion. Thinking about it, it's likely because I'm a long-term motorcycle rider. I cannot afford to have "accidents", because they hurt _way_ more than people driving cars having "accidents". I consider myself at fault for any situation _I_ could have avoided by "being less negligent", and perhaps unfairly impose that burden on other drivers as well. But I will none the less get angry when they say "Sorry. It was just an accident" while I'm the one lying bleeding on the road because they were negligent...)


Fellow biker here, and I think you're just stating an arbitrary line.

The first negligent action could be climbing on the bike.

I think I'm safer slowing and rolling through a stop sign in my truck than I am getting on the bike in the first place.

I realize it's different-- causing a threat for others and all. But I think the point stands-- you're just debating where negligence starts.

I tend to agree with the GP. It's an accident anytime it wasn't intentional, even if it's a preventable accident.


> People often do drive through lights unintentionally

Generally this is in the context of driving too fast.

Speeding is a choice. You are either intentionally disregarding your speedometer, or the designers of the street have made it too easy to get to higher rates of speed rather than apply calming designs to slow traffic down (another intentional choice).

The intention is baked in.


> People often do drive through lights unintentionally. If that is a proximate cause of an accident they are punished for negligence, not murder.

No, traffic violations are generally strict liability, intent, or mental state more generally, does not matter.

Sure, if you intend to kill and do, then you will also be guilty of murder, but that is a separate and additional issue.


Liability is not criminality.



If we are going down this route of the semantics of road incidents, then let's commit fully and use the law as it is where I live. Any traffic incident here always involves two people who failed to comply with traffic laws.

A pedestrian crossing a street is required to look for cars and avoid the road if there is a risk of a collision. It is in the law, even if the crossing sign is green. No person has a right to cross a road or to drive a car over a crossing. Both individuals are only allowed to do their thing if it can be done without causing an accident.

The same is true for traffic lights. Green does not mean you have a right to drive through, but rather means that drivers are allowed to drive through if doing so is possible without causing a traffic incident. Basically every single action on the road under any circumstance is qualified under the condition that it won't cause an accident.

Naturally no pedestrian is going to be charged for being hit by a car unless there are some truly extraordinary circumstances, but the intent of the law is to make people understand that traffic is about mutual communication and responsibility.


> Any traffic incident here always involve two people who failed to comply with traffic laws.

Are you sure about that? It would mean that it's technically illegal to cross a road unless you're absolutely certain that there can't be a speeding driver around the next corner who's gonna get to you faster than you can react. Or that you aren't allowed to cross the street at a green light even when all the cars are waiting because you can't actually be certain that a car isn't gonna start driving suddenly and run you over. Or that someone tailgating you is somehow caused in part by you.

It's pretty ridiculous to claim that both sides failed to comply with the laws in every incident.


Yes, 100%, it is illegal to cross the road unless you can be reasonably certain that doing so won't cause an accident.

Look at it from the perspective of the lawmaker. The goal is zero traffic accidents, "Vision Zero" as it is called (https://en.wikipedia.org/wiki/Vision_Zero). If pedestrians think that they have a given right to cross the road, then there is a risk that people won't apply common sense but rather just step out onto the road without looking or thinking.

Similarly, the driver of the car is always guilty of negligence, regardless of whether the pedestrian is at fault. This includes if the pedestrian walks against a red light. The fact that the other party is also at fault does not diminish the driver's obligation not to hit a pedestrian. Regardless of the traffic light, the driver is not allowed to drive over the crossing unless they are reasonably certain they will not cause a traffic incident.

In the extreme, if both driver and pedestrian wanted to be 100% guaranteed not to cause an accident, then both would simply stand still. In practice both hopefully act with common sense and work together to avoid an accident, which happens to align with the goal of those who wrote the law.


>> Any traffic incident here always involve two people who failed to comply with traffic laws.

> Are you sure about that?

There are at least single vehicle incidents which do not involve two people. Overcooking a corner and spearing off into the scenery is just one person fucking up.

But I agree with the thrust of the GPs argument, in any two (or more) vehicle incident, there was probably one party who “caused” it, and another party who should have avoided it. Not _every_ incident, but I’d argue that it applies to the vast majority of multiple vehicle incidents.


Just so you're aware, there is a crime known as negligent homicide (although it's usually treated as manslaughter instead of murder).

When you drive at speed (above ~25-30 mph for someone not protected by a giant steel cocoon, I think ~50 mph for someone who is), you are now in control of a loaded weapon that will kill or seriously injure anyone against whom it goes off. If you are unable, or unwilling, to handle a loaded weapon with the requisite amount of care to avoid negligence, then perhaps you are not deserving of the privilege of handling that weapon.


I am aware. It was on the test.

>> negligent homicide (although it's usually treated as manslaughter instead of murder)

Not usually. It can never be treated as murder. If it was negligent then the killing lacked intent per se.

>> If you are unable, or unwilling, to handle a loaded weapon with the requisite amount of care to avoid negligence

Good luck selling that to the AARP. People have medical incidents behind the wheel every day. A heart attack/stroke/seizure can easily result in a crash, including deaths. We don't lock people up for that unless they were negligent, unless they had some reason to know it would happen. Every one of us may suffer an aortic dissection or brain aneurysm at any moment. That's why large commercial aircraft have two pilots. Driving while human is not negligence.


Criminal negligence applies when you could and should have taken steps to prevent the incident but failed to do so. Having a heart attack while driving is not negligence, unless you are somebody who has a medical condition that makes a random heart attack while driving a likely foreseeable outcome (say, angina).

That said, many car crashes are probably caused by negligent actions anyways. Someone who is speeding through a stop sign without stopping is negligent in doing so, especially because the kind of people who do that once tend to do it all the time.


To further your position and to loop it back to the original point about crash vs accident: even in the case of an AARP member who had a heart attack that leads to a collision, accident is not the right word. There is nothing accidental about it. We know that building infrastructure that requires driving into old age will result in deadly collisions, yet we continue to choose to invest to grow this infrastructure. The resulting incident is a crash or collision, not an accident. A good book on the subject is Normal Accidents by Charles Perrow.


> Every crash that is not deliberate is an accident.

If you were to define the word "accident" as "everything not deliberate", then, sure ok. But that's not how the word is typically used.

There are vanishingly few real accidents in traffic. Nearly all crashes are also not intentional.

The vast middle is crashes due to driver incompetence or negligence. Not intentional, but totally avoidable if the driver had done their job (pay attention, be lucid, have suitable skills).

People like to call things "accidents" because it shifts blame to destiny. But reality is that nearly all crashes could have been avoided by the driver.


> risk is in general quite low

Let's phrase it differently. Suppose you have a button. If you press it you can save a total of just under 5 days over your lifetime of driving, but there's a ~1.5% chance somebody instantly dies. Do you press the button?

I think most people make that choice because they don't understand how dangerous their actions are, not because they think that's an acceptable return on investment (and in that sense, you're probably right that "violence" isn't quite the right word), but the fact remains that we have a system where pedestrians can't feel safe crossing the road at a crosswalk at a stop sign in front of a stopped car, and almost all pedestrian deaths would be prevented by just not breaking traffic laws or otherwise driving recklessly. I definitely think "accident" cuts too strongly in the other direction.
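
For what it's worth, a crude back-of-envelope (my own rough inputs, not necessarily the parent's) shows where a lifetime risk figure of roughly this order of magnitude can come from:

```python
# Rough, hedged estimate of the lifetime odds that a given driver's driving
# is involved in a road death. All inputs are approximate assumptions.
US_ROAD_DEATHS_PER_YEAR = 36_000       # roughly the recent US figure
US_LICENSED_DRIVERS = 230_000_000      # roughly
YEARS_OF_DRIVING = 60                  # assumed driving lifetime

deaths_per_driver_year = US_ROAD_DEATHS_PER_YEAR / US_LICENSED_DRIVERS
lifetime_odds = deaths_per_driver_year * YEARS_OF_DRIVING
print(f"~{lifetime_odds:.1%} lifetime chance with these inputs")  # ~0.9%
```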

A brief reminder just in case anyone reading doesn't know yet and will change their driving behavior positively in response:

- Blowing through stop signs kills people (if you don't have enough visibility, stop first and _then_ pull forward)

- Speeding drastically increases the chance of killing pedestrians you hit; please at least slow down in residential areas?

- Tailgating kills people

- Texting while driving kills people


Putting others at risk is harm. The amount of harm was larger than expected, but that doesn't excuse the intent to cause harm any more than someone dying during a beating is excused because they didn't mean it.

Reckless Endangerment for example is inherently a criminal offense even if no one was injured.


No, intent matters. Intending to kill someone with a car is punished far more severely than killing someone because you were speeding and lost control.


That's irrelevant. While attempting to kill someone is considered a worse offense, they're both harm.


To be more clear, the amount of harm intended is the only difference. Manslaughter is still illegal even if the penalty for murder is higher. The difference is simply in the degree of harm intended, not in the lack of intent to cause harm. If someone dies because, say, you happened to have parked a normal car in a normal parking space, then that's perfectly fine, but kill someone because you were speeding and that's your fault.


You can't have an accident without a risk of accident first.

Risk and harm aren't always inherently crimes. For example a cosmetic surgeon harms people and exposes them to unnecessary risks. In contact sports people willingly subject themselves to risks of serious injury. Same when people drive on public roads. The degree of risk is critical, hence the "reckless" requirement for the crime.


The critical difference is that in sport and cosmetic surgery the other party willingly puts themselves at a specific level of risk. It's not accepted for a cosmetic surgeon to increase risk by randomly performing a second unrelated surgery while they're under without telling the patient. Similarly, nobody agrees to have people do 200 MPH on a public road; in fact all speeding is breaking the law, the only thing that changes is the penalties getting more severe.


It seems you missed both my points.

1. People do indeed willingly put themselves at risk by driving on a public road, which was in the same list with the cosmetic surgeon. And doing so also puts others at risk.

2. The degree of risk is what matters. Ergo while both obeying the speed limit and driving 200 mph (assuming it was possible) put others at risk, the latter is considered too high a degree of risk.

It is not a fundamental choice between risk and no risk, but between unacceptably-high risk and acceptable risk.


Speeding is by definition both an unacceptable risk and illegal.

Even driving the speed limit is only acceptable in ideal conditions. Driving at those speeds is illegal in rain, fog, snow, etc as it puts others at what is considered an unacceptable risk.

You and many others may decide to ignore public safety and the law, but that doesn't somehow make it safe. While you personally may feel homicide is a reasonable trade off for convenience, don't assume everyone who happens to be near a public street, and whose life you're willing to sacrifice, agrees with you. The occasional person killed in their own home might have considered it a reasonable trade off, but we can't ask them. EX: https://www.wfft.com/content/news/Woman-in-Fort-Wayne-killed...


Another miss. Now you have it backwards. I am not defending speeding (good lord). I am pointing out that legal behavior is risky too. Therefore your point about risk equating to harm is useless.


Plenty of things are acceptable harm. Exhaling CO2 in the same room with somebody is actually physically harmful, but nobody generally worries that breathing becomes slightly more difficult when you're around. The bounds for acceptable harm seem invisible, but they definitely exist.

Therefore some levels of risk being acceptable is in no way a counter argument.


I agree with everything you said except the last statement, which is an assertion, not logic.

Since you agree that one cannot avoid creating risk, what is the logical value of introducing the idea that risk is "harm"? You can just as well talk of unacceptable risks, rather than "unacceptable harms". Further, you apparently agree that all "accidents" result from parties taking a risk, whether small acceptable risks (more rarely, no doubt) or by larger unacceptable risks. So what point are you left making about the existence of "accidents"?


The last statement was countering your train of logic, it wasn't support for the idea on its own.

I am not introducing the idea that risk is harm, I am acknowledging that society has agreed that risk is harm most clearly in the case of Endangerment laws. Therefore that must extend to all levels of risk not simply the most extreme cases. Therefore someone deciding to put others at risk must as generally agreed by society be seen as willingly doing harm to others.

As to "accidents": the outcome may have been undesirable to the person at fault, but speeding, failing to maintain proper distance, failing to maintain your car, driving in unsafe conditions, driving when you're incapable of maintaining control, etc., are hardly accidental. They're the result of deliberate risk-taking with others' lives. When you ante up the health and safety of pedestrians, you can't then say their death was anything but your fault.

That said, sure, there might be a few hundred actual accidental deaths in the US from manufacturing defects or undiagnosed medical conditions. But calling it an accident when a drunk driver hits someone by going down the highway in the wrong direction is completely meaningless; at that point just call it a bad thing and move on.


You can't counter my train of logic by using the same points I used to support it, at least not without providing something more. Both "high risk" and "low risk" behaviors are qualitatively the same; they create a risk. They only differ quantitatively in the amount. And while you have been running for several posts with the idea that a high risk behavior is a sure cause of an accident, most are also generally very unlikely to cause an accident in absolute terms (that's usually why people take such risks, after all), making them very low risk compared to actual intentional violence (the point of the poster you responded to). Drawing a line for legal/illegal risks just allows us to punish people for taking risks we don't want them to take. It is still meaningful to call it an accident to identify these qualitative similarities.


That’s why I specifically said low levels of harm are acceptable.

Your argument boils down to saying it's ok to play Russian Roulette with unwilling people if there are X chambers in the gun, but not ok if there are X-1 chambers in the gun. I am saying it's never ok to do so, but there is a polite agreement where people ignore low levels of harm based on a host of factors.

This is consistent with events that have already happened and events that have yet to happen. You can reasonably argue the risk was low for events that didn't happen, as in "the building was strong enough; see, it's still standing." In that case, speeding cameras should be legally different from a cop pulling someone over. The cop is stopping you from speeding, but the camera doesn't.

On the other hand, if risk is inherently a harm, then past or future harm is irrelevant. Which is how things are treated: you can't argue the outcome when you have put others at risk.


> The word "accident" suggests nothing could have been done to predict or prevent the collision.

I mean, that's clearly rubbish, isn't it? When I accidentally shut my thumb in the door, I use the word accident to indicate I didn't do it intentionally. It doesn't minimise the fact that I was a blithering idiot and could have avoided a broken thumb with the minimum of attention.


The language is definitely a bit tricky: think about how many accidents are caused by driving aggressively or choosing to use a phone while driving. Nobody chose the accident but it was a direct, easily predicted consequence of a deliberate choice and wouldn’t have happened if they had followed the law. That seems to be rather different from you hitting your finger with a hammer, unless perhaps you were trying to check Facebook at the same time like the average commuter.


To me it's just a matter of stakes. No one will die if you close your door without taking care. On a road that's a realistic threat so people should account for it in how they act. For me that moves it from accident to negligence (in an everyday language sense, not legalese).


The traffic people who avoid the word are the people who study commute patterns and write policy papers about how reducing the number of vehicles reduces accidents (I have a PhD for that one). The people who design crumple zones, who decide traffic light brightness, who build the brakes that prevent crashes and the seatbelts that make them survivable ... the people who represent 99% of your safety still call them accidents.


I happen to agree (I've had this exact debate before), but to a lot of people it's a loaded term.


Wow, how unhelpful. I don't know the acceptable way to call this out, but how about focusing on substance instead of just bringing pedantic language stuff into the mix. I see this happening a lot, when people can't find a real way to contribute, they start debating the position of commas or whether it should be called inquiry or enquiry or whatever. It distracts from real debate, and maybe gets you some attention (lots of bureaucratic leader types love this sort of thing) but it's really wasting everybody's time.

(Edit: the irony isn't lost on me of providing a low value comment that doesn't contribute to the discussion in response to one I accuse of something similar. But I've seen so much time wasted, and so many people getting ahead and in some cases basically building a career on engaging with these kinds of language things instead of doing any actual work, that I wanted to bring it up)


His comment added a lot more to the conversation than your comment (or mine).

Sometimes pedantism is important and useful. In this case I have no problems imagining that we could reframe our understanding of how to design traffic systems by reframing the language we use to talk about how those systems fail.

Not that I necessarily agree, but I don't think you can dismiss the argument by waving your hands and saying "pedantism is bad".


Emergency vehicle driver training (which is acceptable in some states in place of a CDL, and also covers other rules) used to be called "EVAP" (Emergency Vehicle Accident Prevention).

It's now called EVIP (Emergency Vehicle Incident Prevention).


A friend of mine has a saying

"There are no accidents, there are only fuckups. Maybe getting hit by an asteroid would be an accident - but hitting or getting hit by another vehicle, or driving into the scenery, is _always_ a fuckup on somebody's part, not an accident."

(Though I'll bet "people who study the problem space concerning traffic safety" have a more socially/professionally acceptable word for "fuckup")


That's a stupid phrase. I can think of many cases where there was not a fuckup yet vehicles collide. Example: someone has some kind of acute medical issue happen (maybe a seizure) and crashes. Or maybe there is black ice that you can't see. Maybe a truck flings a tiny rock it couldn't see in time into the windshield of a car behind it. It is really easy to come up with examples like this.


> Or maybe there is black ice that you cant see

This deserves expanding upon. You’d rightly assume people who live in cold places know to look for ice on the road.

The only traffic accident my ex was involved in was also the only time I ever saw ice on the road in Lisbon, in 30 years of living there. We saw a pileup, she barely touched the brakes, and we entered a skid. Ended up bumping against a car in the pileup (at pretty low speed so it was harmless). Still — freak accidents do happen.


> I can think of many cases where there was not a fuckup yet vehicles collide.

And yet the examples are extreme cases, which, while they do happen, are very rarely the cause of crashes.

The vast majority of crashes are not because someone had a seizure, but because they weren't paying attention or were incompetent.


Right, but if you actually read the parent comment you will notice that "_always_" was used and even emphasized, along with "there are no accidents", and the only example provided of a genuine accident is the non-car-crash example of an asteroid. My examples are much more plausible, and the list goes on. I came up with those examples in like 5 seconds. The point is that there are lots of cases where it truly was an accident.

> "There are no accidents, there are only fuckups. Maybe getting hit by an asteroid would be an accident - but hitting or getting hit by another vehicle, or driving into the scenery, is _always_ a fuckup on somebody's part, not an accident."


The NTSB openly refers to the transportation events it investigates as accidents. The entire purpose of the agency is to issue recommendations based on the root cause of transportation accidents, to prevent the same thing from happening in the future. Any foreign transportation safety agency counterpart in the world uses the exact same definition. Whoever the "people who study the problem space" are, it sounds like they are trying to twist language to suit their own personal beliefs and not really studying the problem at all.

https://www.avweb.com/aviation-news/ntsb-accident-investigat...



That line in the movie has always stuck with me. It was almost too good, since it was so correct and insightful that there was no real comedy and it took me out of the movie.


Interesting - I never thought about this aspect! This crash was of course 100% preventable by... driving.


> crash was of course 100% preventable by... driving.

By following the guidance in the manufacturer's owner's manual that every new car is supplied with, yes.


The word "accident," as it pertains to traffic collisions, is actually just translated to "collision" in my head. In no way does my brain understand it to mean that "this was done unintentionally"; it instead basically acts as a homonym of the word that carries any implication about intention.


> Just for the record, people who study the problem space concerning traffic safety have disavowed the word "accident" because it all too often dismisses the preventable root causes that can be learned from here

Nah, it's just a line from law enforcement and prosecutors who want to feed more people to the justice system. More convictions means more revenue and career advancement.


People who study safety are not synonymous with law enforcement and prosecutors.

Edit: having previously worked with safety officers in aerospace who take this exact stance on definitions, I can say they are neither law enforcement nor very much concerned with putting people into the criminal justice system. Their concern is mainly to understand the systemic root causes of accidents in order to prevent them from recurring.


The marketing and messaging around auto-pilot simultaneously argues that auto-pilot is safer than a human driver but blames the driver when there is an accident.


Heads I win, tails you lose. What's so difficult to understand? /s


Autopilot in a plane can make things significantly safer by reducing cognitive load for the pilot. However the plane autopilot will in no way avoid a collision. Pilots are still primarily at fault if the plane crashes.


Teslas aren't planes, though, so how does the etymology of the word "autopilot" help here?


Um.

> Autopilot is such a misleading term.

>> The functionality is almost identical to the only other time we regularly use "autopilot", in airplanes.

>>> Yeah but like, who cares about etymology and stuff? Misleading af.


Ok, I'll try in other words: Statistically speaking, about zero persons know how the autopilot in a plane works (me included), while they do know the word autopilot. Therefore, they can't infer the limitations of Teslas autopilot from a plane's autopilot.


I seriously don't understand this disconnect. You know the word autopilot because it is a technology in airplanes. That is the only reason you know of the word.

Statistically speaking, 100% of people know that 1. Airplanes can have autopilot 2. Passenger jets still have multiple pilots in the cockpit, even with autopilot.

You don't need to know the intricacies of how autopilot functions to recognize the significance of those two facts (which I'm sure you knew) and apply the same to Tesla.


The etymology doesn't help.

It was an intentionally misleading word for Tesla to choose.


A human and Autopilot working together is safer than just a human driving. Autopilot by itself is currently less safe than just a human driving (which is why it's still level 2). There's no mixed messaging.


> A human and Autopilot working together is safer than just a human driving

This is not my understanding from colleagues who studied the human factors of what is now called level 2/3 automation many years ago. Partial automation fell into an "uncanny valley" in which the autopilot was good enough most of the time that it lulled most human participants into a false sense of security and caused more (often simulated) accidents than a human driving alone.

Since then I've seen some evidence [1] that with enough experience using an L2 system, operators can increase situational awareness. But overall I wouldn't be surprised if humans with level 2+/3 systems end up causing more fatalities than human operators would alone. That's why I'm relieved to see automakers committing [2] to skipping level 3 entirely.

[1] https://www.iihs.org/api/datastoredocument/bibliography/2220

[2] https://driverless.wonderhowto.com/news/waymo-was-right-why-...


This is absolutely correct. And related to the issue of situational awareness, Tesla Autopilot has utterly failed at the basic design systems concept of "foreseeable misuse."

Having worked in the driver monitoring space, it pains me to see a half-baked, black box system like Autopilot deployed without a driver camera. Steering wheel and seat sensors are not up to the task of making sure the driver is attentive. Don't even get me started on "FSD," which proposes to work in harmony with the human driver in far more complex scenarios.


There's no mixed messaging?

The driver is there just for regulatory purposes, all cars self driving in 2016!, cross country summon in 2017, coast to coast autonomous drive in 2018, Tesla self driving taxis in 2019, FSD making teslas worth 250k$ in 2020! Etc etc

There are a lot of statements by Elon Musk


Those are all "coming soon". Tesla and Elon Musk are 100% clear that today, you still need to be an attentive driver while using Autopilot.


All those years are in the past.


But those were never guaranteed dates, just very poor/optimistic predictions


"I'll sell you this box. I know it is empty today, but I assure you, tomorrow it will contain a lump of gold!"

On the next day: "I'll sell you this box. I know it's empty today, but tomorrow..."


They were, Musk always used that phrase "not a question mark".


> A human and Autopilot working together is safer than just a human driving.

I am not so sure. The data from Tesla is always comparing apples and oranges and I have not seen a good third-party analysis confirming this hypothesis.


The problem is these are not independent. Autopilot can lead to inattentiveness or other things that come from the sense you are now being assisted. So it boils down to a question similar to “is one driver at skill level X better or worse than two co-drivers at skill level Y+Z” where Y is less than or, unlikely, equal to X and Z is currently known to be less than X.


I have read the criticism of how the Autopilot miles aren't apples-to-apples comparisons with national averages many times. However, this cherry-picks a single number from the safety report and ignores the other reported statistics. If the explanation for why Autopilot miles were so much safer than non-Autopilot miles is that people turn it off in dangerous situations — and thus equal or greater numbers of crashes were occurring for Autopilot users overall compared to the national average, they were just occurring when Autopilot was off — the crash rate without Autopilot engaged would have to be higher than the national average. Otherwise, where would the crashes go?

However, it isn't. The crash rate with Autopilot off (but with other safety features on) is about 4x better than the national average. And with all safety features turned off, it's still 2x better.

I don't think you can explain away the high safety record of Autopilot by claiming the crashes are concentrated in the non-Autopilot miles, because they aren't. While Autopilot miles are safer than non-Autopilot miles, non-Autopilot miles are no more dangerous than the national average (and in fact are less dangerous).

Autopilot+human is considerably safer than human alone.
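
One way to sanity-check this is to blend the per-mode rates quoted above into an overall fleet rate. A rough sketch (the Autopilot mileage share isn't published, so the share below is an assumed input; the other figures are the ratios cited in this thread):

```python
# Hedged consistency check using the ratios cited in this thread. The Autopilot
# mileage share is NOT published; AP_SHARE is an assumed illustrative input.
MILES_PER_CRASH_AP = 4.19e6        # Autopilot engaged (from the safety report)
MILES_PER_CRASH_NATIONAL = 0.5e6   # rough national average implied above
MILES_PER_CRASH_MANUAL = 4 * MILES_PER_CRASH_NATIONAL  # "about 4x better"

AP_SHARE = 0.3                     # assumed fraction of fleet miles on Autopilot

fleet_rate = AP_SHARE / MILES_PER_CRASH_AP + (1 - AP_SHARE) / MILES_PER_CRASH_MANUAL
print(f"Fleet overall: one crash per {1 / fleet_rate / 1e6:.2f}M miles")
print(f"National avg:  one crash per {MILES_PER_CRASH_NATIONAL / 1e6:.2f}M miles")

# For the fleet overall to be merely *as bad as* the national average, the
# manual miles would have to be this bad:
required_manual = (1 - AP_SHARE) / (1 / MILES_PER_CRASH_NATIONAL
                                    - AP_SHARE / MILES_PER_CRASH_AP)
print(f"Manual rate needed to hide the crashes: one per {required_manual / 1e6:.2f}M miles")
```

With these inputs the manual miles would have to be substantially worse than the national average for the crashes to be "hiding" there, which is the opposite of what the report shows.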


Even if what you argue is true, it doesn't follow from this report. Why is the accident rate of Tesla with Autopilot and all safety features off 2x better than the national average? Because there is a difference in the demographics - Tesla drivers are probably younger and more enthusiastic about driving than the average driver.

Now, if you do the same statistics on the same demographics for all non-Tesla cars, you could actually get fewer accidents than Tesla - here is where the hidden accidents went. Again, I don't have the data about this and I don't claim it is true, but without knowing this, you cannot make the conclusion you are making either.

Otherwise I agree with you - I also believe that Autopilot+human is safer than just human. Unfortunately, the usual way that people interpret these numbers is that Autopilot is safer than human...


I agree that the demographic skew probably accounts for some of the difference. Probably also that Teslas need less maintenance (esp brakes, due to regenerative braking), so are less likely to fail for mechanical reasons — although I don't think most crashes are due to mechanical failure, it should show up to some degree in the stats.

I think the argument that the Autopilot numbers are essentially fake because the true crashes are concentrated in the Autopilot-off scenarios is hard to make a case for though, given the stats on Autopilot-off driving being so comparatively good. You would need incredibly good demographic skew to account for that if the crash rate is concentrated — you don't need to just equal the average after correcting for demographic skew, you need to be considerably worse than it. So while it's not a perfect metric, I would be much more surprised if Autopilot+human was more dangerous than human alone.

I 100% agree with you that this is only an argument for Autopilot+human though. Current Autopilot without humans, at least anecdotally (I have a Model 3 with Autopilot), does not seem safe. However, I think the concern among some that Autopilot is unsafe as it currently is typically operated — i.e. with a human in the loop — is largely contradicted by the evidence.

My personal anecdote is that I feel much less fatigued by driving with Autopilot, especially on longer drives. It's imperfect, but it actually generally helps improve my alertness because I don't have to constantly fiddle cruise control settings based on which car is in front of me or make micro wheel adjustments to stay centered in a lane; I usually take over proactively whenever there looks like a sketchy situation is coming up like multi-lane merges with trucks for example. And when those situations happen, I'm able to stay more alert and focused because I haven't been spending my energy on the simple stuff that Autopilot is good at, so I think I end up being safer overall even when it's disengaged. I notice a pretty large cognitive difference — which was unexpected for me when I first got it, because I thought I probably wouldn't use or like Autopilot, and initially was quite mistrustful of it.

Obviously this is just a personal anecdote, and not data! But what data we have, while imperfect, seems to support that conclusion much more than it supports the opposite.


Expanding upon your personal anecdote: Is there scientific research on this matter? (Measuring alertness/fatigue on non-assisted vs assisted driving) It could be valuable.

Personally, I think driving is nearly always a waste of my time, so I avoid it when possible. Plus, I don't think of myself as a very good driver. Reading your anecdote made me think about how I feel after a long drive vs a long train ride. I cannot put a finger on it, but fatigue from constant required adjustments when driving /might/ be a factor.

More likely: I like how I can spend my free time when riding a train vs driving a car -- which is somewhat limited to passive listening: radio/music/audiobook/podcast/etc.


> Tesla drivers are probably younger

Don't younger (hence less experienced) drivers generally have more accidents? If this is true, isn't it more evidence that Tesla's safety features are helpful?


I think "younger" here is meant more as "not old." 16 year olds are less safe drivers, yes, but on account of the price they're not going to be a big part of Tesla's demographic.

Since there's no affordable model, and they're a newfangled gadget with strange refueling infrastructure and a giant touchscreen for a console, Tesla owners probably skew toward middle aged. So they'll have fewer drivers in the less safe age ranges at both ends of the spectrum.


That depends on how you define younger. Not many teenagers can afford a Tesla though so in this case younger probably means mid 30s to early 40s. That largely removes very inexperienced drivers and the elderly.


Risk by age decreases from 16-25, bottoming out from 30 to 40, before increasing again. 30-40 is, likely, a huge part of the Tesla demographic.


Your last paragraph is the most important one. Autopilot is driver assistance, and it shouldn't be a surprise that it helps. But these results compare human + computer vs human, and do not in any way indicate that the computer alone is better than a human, let alone better than a human + computer, which should be the benchmark.


I agree these numbers only argue for human+computer vs human, and not computer vs human.

I'm curious why you think the benchmark should be computer vs human, though. Autopilot is very clearly a human+computer system; it states you need to be alert, and it forces you to keep your hands on the wheel and apply wheel torque occasionally to make sure you're actually paying attention. Why would Tesla benchmark something they don't ship (and how could they even do that)? The question for the general public, and for Tesla owners, is whether the current system is safe. It appears to be.


These stats are often quoted by Musk and Tesla to suggest that driverless cars are here and safer than human drivers, and that the only thing preventing them is regulators. They are never quoted to imply that driver assistance makes driving safer, which I believe they would.

So, one has to compare computer vs human. In fact, more than that. One cannot compare modern technology to one from the previous century. So one must compare computer to the best passive driver assistance that one can develop for humans. So Tesla must compare a driverless solution to their own driver assistance solutions aiding drivers, and not the "average car on the road"


> These stats are often quoted by Musk and Tesla to suggest that driverless cars are here and safer than human drivers, and that the only thing preventing them is regulators.

That's surprising, I hadn't seen that. Could you link to an example?


This video from 2016 (https://www.tesla.com/videos/autopilot-self-driving-hardware...) saying "the driver is there just for legal reasons, the car is driving itself"

This page (https://www.tesla.com/support/full-self-driving-computer) says "Will help us enable a new level of autonomy with regulators approval"

And many many more for Elon Musk's Twitter and various appearances.


Yeah, and in the part of that second link that directly addresses the question:

> *Will the FSD Computer make my car fully autonomous?*

> Not yet. All Tesla cars require active driver supervision and are not autonomous. With the FSD Computer, we expect to achieve a new level of autonomy as we gain billions of miles of experience using our features. The activation and use of these features are dependent on achieving reliability far in excess of human drivers, as well as regulatory approval, which may take longer in some jurisdictions.

That clearly states that there is still a technical challenge to overcome which is prior to any regulatory issues.


When has Tesla said that driverless cars are here?


This video from 2016 (https://www.tesla.com/videos/autopilot-self-driving-hardware...) saying "the driver is there just for legal reasons, the car is driving itself"

This page (https://www.tesla.com/support/full-self-driving-computer) says "Will help us enable a new level of autonomy with regulators approval"

And many many more for Elon Musk's Twitter and various appearances.


> However, it isn't. The crash rate with Autopilot off (but with other safety features on) is about 4x better than the national average. And with all safety features turned off, it's still 2x better.

You still can't figure that out from Tesla's stats. It'd have to be "compared to the same roads in the same conditions". Tesla only knows where its vehicles have been driven, not every vehicle on the road. Let's be honest, this stat is just marketing.


The total crash rate in Tesla cars is not necessarily less than that of, say, Prius cars.

Comparing Tesla cars' crash rate with that of the overall population is dishonest:

1. the drivers are a biased population

2. the age of the cars is biased


It is not "dishonest." Toyota, AFAIK, does not publish these numbers; comparing to the national average is just the best you can do. Publishing the numbers without any comparison would be silly; what does it mean to know Tesla's accidents per mile if you not only don't know it for any other manufacturer, you also don't even know what the national average is?

And while I couldn't find numbers for Prius specifically, it seems that hybrid cars are actually on average more dangerous than other cars, so I would be surprised if Tesla were not handily besting the Toyota Prius given Tesla's safety record: https://www.thecarconnection.com/news/1022235_hybrid-drivers...

Yes, there may be biases in driver population that make Tesla owners slightly more or less likely to crash. However, I think it is a very large stretch to claim that this would result in the fairly astoundingly different safety numbers.

As for the age of the car: car age is mostly a statistical factor due to safety systems in newer cars. (It is also important in terms of deaths due to safety standards like crumple zones and airbags, but we are talking about a count of accidents, not deaths; if a crumple zone has been used, it is an accident.) Tesla publishes the statistics both with safety features on (4x better than national average), and the numbers for if they have been disabled which is still 2x better.

I think if the claim that the crashes are concentrated in the non-Autopilot miles were true, and that Autopilot+human is more dangerous than human alone, it would be very hard to understand how the crash rate was still 2x better than the national average with safety features disabled and Autopilot off.


> It is not "dishonest." Toyota, AFAIK, does not publish these numbers; comparing to the national average is just the best you can do.

When you know, unequivocally, that you are missing huge swathes of information, and drawing all manner of conclusions and inferences not supported by those statistics, it's not "just doing the best you can", it's "being disingenuous and misleading with numbers".


The real world is not perfect. People have to make the best decisions they can with the data available.

I think it's better to publish the numbers that are available. Tesla can't publish Toyota's numbers, because they don't have them. I don't think they're at fault for comparing against the only benchmark available, and I think it's better to have that comparison than not. Many, many writers have claimed that Autopilot is inherently unsafe and believed it would cause massive numbers of crashes compared to traditional cars. The data shows that not to be the case.


If you make your decisions based on partial information, knowing that large parts of the information is missing, and then pretend that it's not, you are not making the best decision. The best decision must take into account that information is indeed missing.


Really? I think I remember reading that accidents in newer cars are more rare. How does anybody know that? Can we not at least compare to similar aged cars?


Can we not at least compare to similar aged cars?

I would love that. Do you know where to find that data though? I don't think it is published anywhere, which is why it's hard to use as a benchmark.


The quoted statistics on either side are not helpful here. See:

>> Driving to Safety

>> How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability?

>> Key Findings

>> Autonomous vehicles would have to be driven hundreds of millions of miles and sometimes hundreds of billions of miles to demonstrate their reliability in terms of fatalities and injuries.

>> Under even aggressive testing assumptions, existing fleets would take tens and sometimes hundreds of years to drive these miles — an impossible proposition if the aim is to demonstrate their performance prior to releasing them on the roads for consumer use.

>> Therefore, at least for fatalities and injuries, test-driving alone cannot provide sufficient evidence for demonstrating autonomous vehicle safety.

https://www.rand.org/pubs/research_reports/RR1478.html

Note also that Tesla's numbers are reported after several years of Tesla cars with Autopilot already being driven on public roads. Whatever the numbers say now, when Autopilot was first released there was no evidence of it being safer than human-driven cars, only wishful thinking and marketing concerns.
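
To give a feel for the scale the RAND report is talking about, here's a quick, hedged sketch using the "rule of three" (observe zero events in n trials and the 95% upper confidence bound on the rate is roughly 3/n) and the approximate US fatality rate of ~1.1 deaths per 100 million vehicle miles:

```python
# Rule-of-three sketch: miles of fatality-free driving needed before the 95%
# upper bound on the fatality rate drops below the human benchmark. The
# benchmark below is the approximate US average, an assumption on my part.
HUMAN_FATALITY_RATE = 1.1e-8   # deaths per vehicle mile (approximate)

miles_needed = 3 / HUMAN_FATALITY_RATE
print(f"~{miles_needed / 1e6:.0f} million fatality-free miles just to show parity")
# Demonstrating a modest *improvement* with statistical significance takes far
# more, since fatal crashes then have to be observed and compared in both
# groups; that is where the "billions of miles" figures come from.
```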


Please correct for demographics. The average Telsa owner does not include poor people driving beaters with bad brakes, so there's a heck of a lot of self selection going on that is probably skewing the statistics.


As I like to point out to people when they quote this self driving statistic, student drivers have the best driving record out there. No fines, no accidents. Yet nobody would ever confuse a student driver for a good driver even if they are probably better than current self driving tech.


Why do you say that? Student drivers certainly do get into accidents, despite the fact that some driver's ed cars allow the instructor to take partial control. When my partner was in a program, the student driving the car they were in rear-ended another car.

Maybe you mean that it doesn't go on their driving record, but is that really true? The one reference I could quickly find of this happening says that the student was issued an infraction: https://www.upi.com/Odd_News/2018/04/04/Student-driver-crash...


You're missing the point: if you don't have a good sample size, then you don't have good data.

Data which also excludes situations (i.e. autopilot throwing control back to the human and counting that time as "autopilot not in use") is bad data.


I think you replied to the wrong comment, I'm talking about student drivers, not Teslas.


You're getting really into the weeds on a broader point. Suppose we give a new driver their license - but they've never driven before. Technically they have a perfect driving record. Even 1 or 2 hours in on road, still a perfect driving record.

Most student drivers will, in fact, have completely perfect driving records. That accidents happen is irrelevant - just think - those stats for the first couple of months probably look spectacular compared to the normal population.

The original comparison is all about this use of a biased dataset to draw an invalid conclusion. Except with student drivers we know that actually we shouldn't trust that conclusion, because in practice they have an insufficient amount of experience and have not likely dealt with many challenging road situations.

With Tesla, it's the same sort of problem.


> That accidents happen is irrelevant - just think - those stats for the first couple of months probably look spectacular compared to the normal population.

This is exactly the point I'm disagreeing on. I have no reason to think that student drivers do better (in, say, number of crashes per thousand hours driven) than other drivers. In fact they probably do worse. The person I replied to suggested that student drivers all have perfect records, and I'm genuinely perplexed about how that could possibly be the case... my question is genuine - I simply don't understand what they meant.

> student drivers have the best driving record out there. No fines, no accidents.


> I'm genuinely perplexed about how that could possibly be the case

I live in Europe where student drivers must be accompanied at all times while driving by a professional instructor with secondary controls at their disposal. There's no "just have an adult with a driver license dozing off in the passenger seat and you're good to go".

The instructor holds a lot of responsibility, they are responsible for everything that happens in/with the car, so they make sure to be very conservative with that brake pedal and instructions. "No accidents" may have been a small exaggeration, surely a couple of them will eventually have one. But statistically students are by far safer than regular drivers because their mistakes rarely if ever turn into an incident. The special conditions (constant supervision, lower speeds, controlled route, etc.) make sure of this.

But this just makes my point in a way my comment above couldn't: when you're lacking data even reality can be genuinely perplexing.

Tesla's statistics are misleading because this serves them. Comparing the number of accidents between a fleet of modern cars and one where the average age is 12 years, excluding city driving because nobody does that anyway, and not counting every driver-made adjustment as a failure of the AP, is specifically meant to give the wrong impression.

Any car can drive itself on the highway if you just tweak the steering wheel once in a while to keep it on the road. That's what, 0.1s every 10s? But saying it drives itself 99% of the time is misleading at best.

I'm sure driver assists help reduce accidents and they're the way to the future. But Tesla's "conclusion" that AP is safer than a human driver based on their misleading statistics is a flat out marketing lie.


> I live in Europe where student drivers must be accompanied at all times while driving by a professional instructor with secondary controls at their disposal.

I don't know if you're familiar with Asterix comics, but this is one of those "All of Gaul?" type situations. Belgium allows parents to teach children on a probationary license. Nowadays, the requirement is the parent has held a license for 8 years and hasn't been in an accident in I believe the last 3. The student driver also isn't allowed to drive between 10pm and 6am during weekends and the nights preceding and following official holidays.

It also allows people to drive unaccompanied for up to 18 months with a probationary license before they take their official test - which they fail once on average. During Covid, those 18 months might have become 24 or more, because test centres were closed.


Mea culpa, I wasn't aware of countries in Europe where this is allowed and I generalized because I'd rather not share specifics. Is this a years or decades old thing?

Maybe it was just my bias making me assume it's the kind of field where you'd want professional training in a car with dual controls, not a regular car with an instructor who might have never driven after getting their license 8 years ago, or one that has an abysmal driving record but took a break for 3 years and it's "clean". And even a great driver would have difficulties avoiding an accident when the controls are on the other side of the car.


In Belgium? This system's been around since the late 90s at least.


That's a pretty weak argument regardless of your stance on self driving cars. Student driver records aren't meaningful because we don't have enough data to make a judgement. We have lots of data on self driving cars. There are other ways to cast doubt on self driving car records but this isn't one of them.


Tesla's self driving cars are students that are under constant supervision of their teachers.


This shouldn't be downvoted to light grey. The analogy is excellent.

I presume this comment is talking about student drivers where the teacher has an override steering wheel. So the student gets into accidents at a lower than average rate because every time they get close to doing so the teacher takes over.


1) Most student hours aren't in override steering wheel cars

2) For this analogy to make sense, it needs to be an average driver, not a driving teacher

3) Unless you're claiming AI and human drivers are uniquely suited to solving different types of driving, if the effect you're claiming is true, you would expect the crash rate for human/teacher driving hours to be much worse than average because they miss out on all of the "easy" miles. So far, no evidence of this.


That's just a bad analogy though. If a student driver accumulates a million miles of driving with no accidents, they're probably a fine driver even if the teacher was in there the whole time. Conversely, you're not a safer driver if a driving instructor decides to sit in the back seat of your car tomorrow.


Not if the student gives control back to the teacher every time there is a problem the student can't handle. The real metric is "dangerous/complicated driving situations mastered" and "miles driven" is only a stand-in. A bad one if the student can deselect the few miles which had the dangerous bits.


If we're really pushing this analogy, however, then you would expect the teacher to have a much higher accident rate than the average driver, because you're claiming the teacher only does the hard stuff or, at the very least, misses out on the majority of the easy stuff (assuming, for comparability, that the teacher is an average-competence driver).

Specifically, if you're claiming only 99/100 miles are easy and have no chance of crashes, then if a human only drives for the 1/100 miles that are hard, they should have a 100x higher crash rate than the human that drives all 100. They should probably have an even worse crash rate because of the switching cost of suddenly taking control, unless you want to make the weird argument that suddenly taking control of an autopilot car is safer.

The tesla report says autopilot experiences a crash every 4 million miles. With autopilot disengaged, it's every 2 million miles. The baseline national average is every 0.5 million miles.

I can't find the perfect statistics, but one study suggests uneducated people are 4x more likely to die in a car crash, so let's give some generous rounding and say, normalized to wealth, Tesla drivers not actively using autopilot are at comparable levels to the average driver (1 per 2 million miles).

Unless Tesla drivers are phenomenal emergency handlers, it's difficult to explain how the non-autopilot crash rate could be so low, while also claiming Tesla is hiding the true crash rate of its autopilot features by pushing difficult miles to human drivers, because the human drivers are seeing normal crash rates on what you claim is a much more difficult set of miles.

It's possible (probable) that the autopilot would experience a higher crash rate if it were not allowed to call in a human. But to ask generally if autopilot is reducing the total number of accidents that the drivers would experience otherwise, I'd say 'probably'.
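Back-of-the-envelope, for anyone who wants to check the arithmetic (the rates are the ones quoted above; the 99/100 easy/hard split and the wealth adjustment are assumptions from this subthread, so treat it as a sketch):

    # Figures quoted above: miles per crash
    miles_per_crash_ap       = 4_000_000   # AP engaged (Tesla report)
    miles_per_crash_manual   = 2_000_000   # Tesla, AP disengaged
    miles_per_crash_national = 500_000     # national average

    hard_fraction = 0.01   # the claimed 99/100 easy/hard split (assumption)
    # If all crashes happened in the hard 1% of miles and humans drove only those,
    # they'd see a crash roughly every:
    implied_miles_per_crash = miles_per_crash_national * hard_fraction   # 5,000 miles

    # Observed Tesla manual rate is nowhere near that bad, which is the point above.
    print(miles_per_crash_manual, int(implied_miles_per_crash))   # 2000000 vs 5000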


It means they are probably fine to drive the courses they have been driving. However, we know with self-driving that it isn't necessarily representative of what they will be asked to drive. It is also not a particularly relevant comparison to human drivers, unless we normalize a bit for road/driving conditions.


You are artificially adding a detail - that the student only drives limited courses - to imply there might be conditions unfamiliar to a self-driving car where we shouldn't trust its performance.

That is a better point, but also just a different point from the one originally made.


Do you mean that any individual student driver is likely to have a perfect record because they haven't driven much? That seems like it wouldn't apply here, because Teslas have, in fact, driven a large number of miles.

Or are you claiming that supervised student drivers are much safer because they, like self driving cars today, are supervised?

Maybe I'm totally missing the point of your analogy, but it seems like it doesn't clarify much.


A student driver ran into my neighbors yard on our non-busy street.


There is also another problem with only trying to be better than the average driver. If your system is only slightly better than average, that means basically 1 out of 2 people are better than your software.

Autonomous driving should be multiple sigmas better than average, especially given the reaction times that are possible for a computer vs. a human.

If it's only as good as average, a large number of drivers would be safer driving themselves.

Basically it should be as good as, if not better than, the most capable drivers. Average is way too low a bar to aim for.


The reaction time is actually an interesting question. Reactions in humans which do not require thought/planning can be quite quick, and the human vision system is /very/ quick, especially for movement. How fast is the vision system in a Tesla? Not only frequency, what's the latency from vision change to decision? My guess is the Tesla is faster, but by less than an order of magnitude. I would not be surprised that it's slower for some cases. But I really don't know.


The Tesla Safety Report is so misleading:

1. The accident rate does not take into account the driver's age, credit score and prior safety record, or the age / value of the car.

2. Most people only turn on autopilot when driving is easy (e.g. on a highway).


Sorry, a non-American here. By "credit score" are you referring to the financial credit score or some sort of "points system" for drivers? If the former, then why would it be important to include it?


In some states auto insurance companies are using credit score because there is a correlation between insurance claims and credit score [1]. I guess you could establish a "crash-free" vs. insurance-claim correlation even more easily.

[1] https://www.forbes.com/advisor/car-insurance/auto-insurance-...


Not some states but 94% of the states, according to your link.


Looks like that article may be out of date, according to experian [1] 4 states completely disallow using credit scores to set auto insurance rates, and 3 more restrict it. However, this is still significantly more than "some states"

[1] https://www.experian.com/blogs/ask-experian/which-states-pro...


Credit score is correlated with personality traits such as conscientiousness, risk-taking, etc, which in turn influence driving safety


While this may be true I always assumed the strongest reason for including this in insurance quotations was the lower risk of fraudulent claims from those with stronger finances.


Does any car company give a detailed normalized report like you're asking for in 1?

edit: by which I mean, if Tesla autopilot gets into more accidents than rich white yuppies but fewer than the national average, it's not entirely obvious to me whether the conclusion is that rich white yuppies shouldn't use autopilot or that autopilot isn't safe enough. It also suggests it's very useful for poor minorities.

Location and local driving conditions are the only real differentiator where this might make a difference on decision making. Those are going to be correlated with the demographics of the person driving them, but are weak proxies at best.


Car company? Probably not, but they'd be the wrong organisations to ask.

Insurance companies certainly would know a lot more detail.


Do any other car companies run a comprehensive surveillance system on all of their vehicles?


I'm not a car expert, but I think yes, it's pretty common in newer cars.


Yes. OnStar is the most well known, but most of the other major automakers have similar capabilities in at least part of their range.


Does the safety report account for vehicle age and price? Because I imagine there's a difference in accident-free miles if you were to compare a new Mercedes-Benz S-Class to a 15-year-old Camry.


No it doesn’t, that’s one of the main criticisms along with comparing highway miles to city miles.


And Tesla owner demographics (presumably mostly affluent + older) with "everyone".


Volvo xc90 had no fatal accident from 2002 to 2018. Beat that for starters.

https://www.expressandstar.com/news/motors/2018/04/17/no-fat...


What about the people that the XC90 rams into?


That famous XC90 was modified by Uber, and Volvo's safety features were disabled. Literally the only XC90s that have killed people are the ones driven by artificial neural networks.

https://www.bloomberg.com/news/articles/2018-03-26/uber-disa...


> I'm not saying that Autopilot isn't safer than a human driver

I'm saying that Autopilot isn't safer than a human driver. The fatal accidents involving Autopilot so far, mostly crashing into stationary objects, were easily avoidable by a marginally competent driver. Tesla is playing with fire.


There are definitely accidents with Autopilot that could have been avoided by humans, but we need to compare those against the cases where autopilot prevents accidents that are unlikely to be avoided by humans, like this one: https://youtu.be/bUhFfunT2ds?t=116

It's pretty clear that human paying attention + autopilot is safer than either: 1) human only 2) autopilot only.


I would agree, if the Autopilot didn't need a significant amount of human attention to avoid a simple, potentially deadly crash. Your equation should be rewritten to:

human paying attention to driving + human preventing Autopilot from crashing + Autopilot

I seriously doubt this divided attention produces a safer system with today's technology.


> but we need to compare those against the cases where autopilot prevents accidents that are unlikely to be avoided by humans

Right, but do any examples of this exist, ever?

Your link is clearly not an example. Any aware driver would've done the same.

I'm sceptical there can ever be a scenario where Tesla autopilot can outdo an aware, conscious driver.


Some of the examples in the video are actually pretty impressive. I'm not sure I would have seen some of these cars coming. But they only show that a human driver with autopilot assist is potentially better than a human alone, i.e. a human driver with autopilot backup. I would be surprised if that were not better. But the way autopilot is supposed to be used is the other way around: autopilot with human backup. And I would argue that the "with human backup" often has a "because of law, not necessity" undertone.


I totally agree with your argument.

But playing the role of the devil's advocate here, one might argue that the major benefit of autopilots is that data accumulates from every accident that does happen, so that the same accidents don't happen again in the future.

When comparing accidents of manual vs automated driving, manual cases don't have any learning effect (let alone communication of it and availability of that to other human drivers). Automated driving, on the other hand, has the theoretical benefit, if the data is openly shared, that the edge cases causing untrained-for accidents go asymptotically to zero over time.

But in order to achieve that there must be a law that enforces the use of the same dataset, and that defines this dataset as public domain so that all autopilots can learn from that knowledge.

This was actually my initial hope for OpenAI. And oh boi have I been disappointed there. Nothing changed when it comes to reproducibility of training or availability of datasets, and most research at OpenAI seems to be spent on proprietary concepts.


> manual cases don't have any learning effect

Not true. The traffic and safety regulations are in part based by analysis and lessons learned from crashes. The infrastructure, e.g. road profile, speed limit, signage, etc. benefits from the same.


So we should simply make more bumpers on the side and limit speeds to 10km/h to prevent crashes?

I was not talking about changing environment constraints. I was talking about changing the perception and measuring of inputs in human drivers.


Environment constraints and performance of human drivers are not independent. Your argument suggested that they are.


The problem of people overestimating the capability of the car or just losing their attention when Autopilot is engaged could easily wipe out whatever wins you do get.


Valid stat comparison will be average accident rate of all cars with autopilot vs same of all cars without, and with $30k - $50k current market value. This will equalise many things.

I, personally, wont trust some autopilot scripts, at least in this decade.


I don't want to detail the conversation too much, or distract from the excellent points you have made... But how is this simpson's paradox?

Simpson's paradox is easier to understand geometrically.

https://en.m.wikipedia.org/wiki/File:Simpson_paradox_vectors...

L1 and L2 in the diagram have smaller slopes than B1 and B2, and yet their sum has a higher slope. It's not hard to characterize when this happens. So the canonical example is that a drug might be more effective against all sub-cases (e.g. mild, severe illness) and yet appear less effective overall.

You, on the other hand, seem to be describing selection bias.


Simpson's paradox can be thought of as selection bias as well. To make the Simpson's paradox clear, consider the following simplified scenario: autopilot has more accidents per city mile, and more accidents per highway mile, but because highway miles have fewer accidents, and autopilot is tested on a much higher proportion of highway miles, on average it has fewer accidents per mile overall.
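A toy illustration of that simplified scenario, with made-up numbers just to show the reversal (nothing here is real data):

    # (crashes, miles) per setting -- invented so autopilot is worse in BOTH
    ap  = {"city": (10, 1_000_000),   "highway": (110, 99_000_000)}
    hum = {"city": (400, 50_000_000), "highway": (50, 50_000_000)}

    rate = lambda c, m: c / m
    assert rate(*ap["city"])    > rate(*hum["city"])      # AP worse in the city
    assert rate(*ap["highway"]) > rate(*hum["highway"])   # AP worse on the highway

    overall = lambda d: sum(c for c, m in d.values()) / sum(m for c, m in d.values())
    assert overall(ap) < overall(hum)   # ...yet AP looks better overall (mostly highway miles)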


To avoid #2, Tesla specifically counts any accidents within 5 minutes after autopilot disconnect as an autopilot accident.


Five seconds.

"To ensure our statistics are conservative, we count any crash in which Autopilot was deactivated within 5 seconds before a crash, and we count all crashes in which the crash alert indicated an airbag or other active restraint deployed."

At the bottom of:

https://www.tesla.com/VehicleSafetyReport


But they'll still release press releases saying "The vehicle had warned the driver of inattentiveness and to keep his hands on the wheel"... and oh-so-conveniently ignore "... once, fourteen minutes before the accident" (which, knowing their system now, means that was the last warning, and the attention system hadn't been tripped between then and the accident).


That's an interesting problem. The right answer mostly depends on the distribution of crashes at time t since deactivating autopilot. I would personally guess the relevance of autopilot fades to near 0 once you're 30 seconds since deactivation for 99.9% of crashes.

5 feels a little too aggressive, but would probably capture the majority of the true positives. I would have picked 10-15 seconds based on my gut.
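A minimal sketch of how the window choice changes what gets attributed to Autopilot, assuming you had per-crash times since disengagement (the times below are invented):

    # Seconds between AP disengagement and crash; None = AP was never engaged
    secs_since_disengage = [1, 2, 3, 4, 8, 12, 40, 300, None]

    def attributed_to_ap(window_s):
        return sum(1 for t in secs_since_disengage
                   if t is not None and t <= window_s)

    for window in (5, 10, 15, 30):
        print(window, attributed_to_ap(window))   # 5->4, 10->5, 15->6, 30->6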


That depends. If you're taking over from autopilot after several hours of passively resting behind the wheel, perhaps it will take you more than 30 seconds to accustom yourself to the task.


What situation could you possibly be in where it's autopilot's fault but takes more than 30 seconds to cause a problem AND it was necessary for you to take control?


Car steers onto opposite lane on interstate at night/no traffic?


You're not wrong, but to my knowledge, nothing like that has ever happened, and it would have been very newsworthy if it had, even absent of fatalities.


Does that really avoid #2? My understanding of that situation was this:

1. The driver senses an impending accident or dangerous situation, so they disengage autopilot.

2. The driver personally maneuvers the car so as to avoid any accident or crash.

3. The driver re-engages autopilot afterwards.

In this scenario, there is no accident, so there's nothing for Tesla to count either way. The idea is that there could have been an accident if not for human intervention. Unless Tesla counts every disengagement as a potential accident, I don't really see how they could account for this.


You need to look at the whole system. The end result (of autopilot + human) is no accident.

If the human prevents 99% of autopilot could-have-been accidents, and as a result, 10 people die per X miles driven whereas through purely human driving 20 people die, then driving with autopilot is safer.

Unless you're trying to answer "is autopilot ready for L5", this is the right metric to look at.
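In other words, judging the combined system (all numbers hypothetical, reusing the 99% figure above):

    would_be_ap_accidents = 1000     # accidents AP alone would have caused over some mileage
    human_catch_rate      = 0.99     # share the supervising human prevents
    combined_accidents    = would_be_ap_accidents * (1 - human_catch_rate)   # ~10

    human_only_accidents  = 20       # purely human driving over the same mileage

    print(combined_accidents < human_only_accidents)   # True: the combined system wins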


> If the human prevents 99% of autopilot could-have-been accidents, and as a result, 10 people die per X miles driven whereas through purely human driving 20 people die, then driving with autopilot is safer.

No, because correlation isn't causation.

In particular, it's plausible that autopilot only gets used in situations where it's easy to drive and accidents are less likely. This would erase any hope of assessing autopilot safety by looking at simple statistics like the ones you mention.


What they of course should do is count any manual intervention as a possible autopilot accident.

When I say possible, what I mean is they should go back, run the sensor data through the system, and see what autopilot would have wanted to do in the time that the human took over.
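Conceptually something like this (hypothetical function names, not anything Tesla has said they do):

    # For every manual takeover, replay the logged sensor data through the planner
    # and ask whether the maneuver it would have made was safe.
    def count_possible_ap_accidents(interventions, replay_planner, is_safe):
        possible = 0
        for event in interventions:
            planned = replay_planner(event["sensor_log"])    # what AP wanted to do
            if not is_safe(planned, event["scene"]):         # would that have crashed?
                possible += 1                                # count as a possible AP accident
        return possible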


There are a couple reasons why your criteria would get almost entirely false positives.

First: Most Tesla owners disengage autopilot by tapping the brakes or turning the wheel. This is typically more well-known and more convenient than the official way to disengage autopilot (push right stalk up, which if you do twice can shift to neutral).

Second: Tesla autopilot drives like someone taking a driving test. It accelerates slowly, signals well before changing lanes, makes sure there is plenty of room in front, etc. In my experience, the vast majority of interventions are to drive more aggressively, not to avoid a collision. I think maybe twice I've disengaged to prevent a collision. In those cases it was to avoid debris in the road, not another vehicle. (The debris was unlikely to damage the car, but better safe than sorry.)


> the vast majority of interventions are to drive more aggressively, not to avoid a collision

If it's to avoid a collision then the autopilot would have crashed, and it should be deemed an autopilot accident.


Those interventions are to get somewhere faster, not to avoid a collision. If anything, such interventions tend to increase the risk of collision, not decrease it. Training autopilot to behave more like humans in those situations would make it less safe, not more.


If the intervention was not to avoid a collision, they review the footage and find that autopilot would have done something safe, and therefore it is not deemed an autopilot accident.


That's a bit speculative, since your actions will affect the actions of others, but I agree that if it were done correctly it would give the best picture of autopilot safety.


Can you explain how that avoids it? Not sure I understand.


It avoids a variant of point 2: the case where the driver disengages the autopilot to avoid the crash and fails. It avoids chalking that crash up to human error. It does not address the initial point you made, that the human's accident avoidance removes the crash (and thus the statistic) from the N miles of autopilot usage before it is disengaged.


It's really really annoying that even smart people parrot 'self-driving cars are safer than human ones'.

Self-driving cars are orders of magnitude more dangerous than human drivers. It's absurd to say otherwise, and to do so requires a level of stupidity that can only be explained by dogma.


Can't we easily avoid these pitfalls by just comparing accidents/km on autopilot-equipped cars vs others? And disregard whether or not the autopilot is engaged.


This is a tangent about the Simpson's Paradox that may or may not be relevant to Autopilot. Specifically about the archetypal example given in the Wikipedia article, "UC Berkeley gender bias":

Even if the deeper data analysis showed a "small but statistically significant bias in favor of women" in terms of admission rates in individual departments, it doesn't prove that there isn't another kind of bias behind the overall admission rates (44% for men, 35% for women). Specifically, why doesn't the university rebalance department sizes, so that all departments are similarly competitive? It would result in the overall male and female rates converging. It would also make a lot of sense from a supply and demand perspective. It is entirely possible that there was no urgency or desire to do so because of bias on the part of administrators, who were mostly male.

Might the quickness to dismiss the issue as a Simpson Paradox reflect another bias?


I believe autopilot's safety features were disabled so these statistics are meaningless. I'm not talking about simply forcing it into autopilot, but disabling autopilot's ability to control the vehicle's acceleration. The reason I think this is the case is due largely to the speed the vehicle was traveling which is... unlikely under autopilot which limits speeds to 5 MPH over the speed limit.

If you are pushing on the gas pedal, the car can only steer and has no control over speed.

This weird sort of hybrid riding where the car is controlling the steering and the driver is controlling the speed puts the car in an untenable situation. It is a driver with no brakes and no control over the gas pedal.

Maybe Tesla should disable this mode entirely. Tesla (very reasonably) limits speeds to 5 MPH over the speed limit when you are in autosteer mode, so lots of people like the ability to bypass that cap. Personally, I very much like being able to push the speed when it's reasonably safe to do so. If you are operating the system as designed, it's no less safe than cruise control.


Tesla does not limit the speed to 5 mph over the speed limit.


It's not clear exactly what the rules it uses are, but there are definitely some roads where the set speed is limited to 0 above the speed limit when autopilot is engaged.


This is consistent with my experience:

> - On highways: Autosteer and Traffic Aware Cruise Control have speed limits of 90 mph.

> - Off highways: Autosteer is no longer restricted to 35 mph. Autosteer has a speed limit of 5 mph faster than the detected speed limit, unless you’ve specified a lower speed limit offset.

> - If Model S does not detect a limit, then the maximum speed the vehicle can drive is 45 mph.

https://insideevs.com/news/332446/tesla-autopilot-update-bum...

We know it wasn't a divided highway. Even if they were on a 2 lane highway the speed limit would have been limited to 60MPH. They wrapped the car around a tree, destroyed the integrity of the battery, and both passengers were disabled enough they couldn't escape the car.

The car doesn't peg the accelerator under autopilot so even getting to 60MPH in "A couple hundred yards" seems unlikely unless there was someone applying the gas pedal.

I suppose the alternative explanation is there was a malfunction which caused uncontrolled acceleration and ejected the driver into the back seat?


Ahh, that explains it, my "speed limit offset" is set to zero.

One could speculate that maybe the guy 'driving' from the passenger seat tried to plant his boot on the brake and got the accelerator instead?


In my Model Y, there is no way the passenger could reach across to put his foot on the brake or gas unless they were straddling the console.



The linked article didn't say what model it was other than the fact that it was a 2019. Even so, that console only looks marginally easier to get your foot over. Definitely not a maneuver you could pull off in the heat of the moment.


When auto steer is engaged, it limits the speed to 5MPH over the speed limit unless you are holding the gas pedal down. If you hold the gas pedal down, the automatic braking and speed controls are not active. At that point, the car isn't in control.

If you are on the highway, it is different. But this car was in a residential area.


> Authorities tried to contact Tesla for advice on putting out the fire; it’s not clear whether they received any response.

This will become a massive issue in the years to come unless we find a way not only to drastically reduce the number of crashes but also massively improve reliability.

High voltage battery fires are probably the worst kind of fire a regular emergency responder would have to deal with, between the hard to put out fire and the risk of electric shock. It also causes some massive damage to the surroundings (the actual road surface, surrounding cars, or any garage unfortunate to house the car at that time).

Today very few emergency responders are even trained to properly deal with such a fire, and it's a topic really lagging behind everywhere compared to the rate EVs are popping up on the streets.


There's a public set of first responder guides with detailed diagrams for every make and model available. While I'm not sure they have a hotline, Tesla has always tried very hard to provide accurate information how to douse flames, which cables to cut to render the HV system disabled, and how to take care to ensure the car doesn't reignite. See: https://www.tesla.com/firstresponders (e.g., a specific guide: https://www.tesla.com/sites/default/files/downloads/2016_Mod...)


Interesting information! "...can take approximately 3,000 gallons (11,356 liters) of water, applied directly to the battery, to fully extinguish and cool down a battery fire"


>3,000 gallons

In a lab. With rats.

In Houston. With People:

   Herman said it took firefighters nearly four hours and more than 30,000 gallons of water to extinguish the fire. [1]
[1] https://www.khou.com/mobile/article/news/local/tesla-spring-...


That is more than an entire fire engine's water capacity for the battery alone. This means any tesla car fires on a highway or other places away from available hydrants will need a tender or multiple engines.


Or you know... just let it burn


11 cubic meters seems like a fairly small amount of water for a car fire. That's less than the capacity of a typical water truck. What, "applied directly to the battery," means raises questions in my mind. Is this a calculation based on the exothermic potential of the entire battery pack?


As someone who has actually put out numerous car fires, I would say that this estimate wildly depends on the time at which the fire occurs for ICE engines. These days most ICE-based cars are pretty easy to deal with, and if you arrive early enough to the scene you may even be able to put the fire out with as little as one 11L compressed air foam backpack. In fact good car models can keep the spread of the fire contained to just the bonnet for quite a long time. I suspect with electric cars the challenge is that the flammable material is directly beneath the passengers and adding water may actually make the situation worse initially. I highly doubt a single fire engine will be able to carry enough water to combat the fire. At least electric cars are better than CNG cars, which are a one-way ticket to permanent retirement for firefighters.


Dumb question ... but do "inflatable swimmingpools" exist for firefighters? Something that you can roll up compactly but unrolls around e.g. a car and forms a more or less watertight seal so you can literally drown the battery in water?


An inflatable swimming pool relies on the watertight bottom to hold shape. Without a sealed bottom you're building a flood barrier. You need something that both molds to the shape of the surface, and is heavy enough to provide the downward force to prevent water from going under it. Inflatable flood barriers exist and are filled with water.


I guess you wouldn't really need it to be extremely watertight, just tight enough that the hose puts in more water than leaks out. Seems like it would pay for itself with 30k gallons needed.


What is it with US tech companies and refusing to provide basic information like this? How does that even benefit them?!? With GPUs and radios there's the nebulous "it helps protect trade secrets" bullshit but I can't imagine how refusing to tell a firefighter how to extinguish your burning crap does that.



Tesla's and other electric cars have been around for a decade now. It's completely reasonable to expect fire departments to have trained their staff to deal with EV crashes.


My municipality gives the fire chief a $900 stipend each year. That's it. The fire department is 100% voluntary.

In contrast, 60+% of the budget goes towards making sure police officers can make over $200,000 a year with overtime.


This is the conversation that needs to be had. Re-allocation of city resources towards non-violent, non-militarized civil society groups. Unfortunately, in our soundbite driven world, we got "Defund the police", which kills the conversation dead.


Fire service is a divided/conquered profession.

The paid fire and ems people hate the volunteer departments, the fire-only departments turn their nose down at EMS, and the police are good at swooping in and taking over stuff like paramedics.

Volunteer departments are often big political power bases too. In some states, that results in volunteer departments getting lavish firehouses and fancy gear. I live next door to a city firehouse, their “new” pumper is an 8 year old, $800k (new) truck that saw 70-80 calls a month. Its new owner doubled the mileage in 90 days.


> I live next door to a city firehouse, their “new” pumper is an 8 year old, $800k (new) truck that saw 70-80 calls a month. Its new owner doubled the mileage in 90 days.

What is this meant to imply? They bought an old truck, gently used, and then used it a lot more...

Is the new mileage because they need to use it? Is that bad? Is 800k a lot for a fire truck?


I believe they're implying that a volunteer department had a very high end truck and didn't really use it much, and it was later inherited by a non-volunteer department.


The dysfunction and stupidity is only bounded by the populace's tolerance for it and money to indulge in it.

Poor cities and rural counties might have underfunded services but they don't generally have the dysfunction you're describing because the money to support the dysfunction simply isn't there.


And which should never have been misunderstood.

When we say "Governor X defunded public schools," no one takes that to mean that they removed every single penny from the schools.



Some opinion article from a social justice warrior doesn't indicate that the vast majority of people who say "defund the police" are saying "abolish the police." That kind of black and white thinking undermines the conversation.


What really undermines the conversation is twisting the meaning of words, saying what you (supposedly) don’t really mean, and then blaming people for interpreting them correctly.

Here are more examples:

https://www.theatlantic.com/ideas/archive/2020/07/how-i-beca...

https://thetyee.ca/Culture/2021/02/12/We-Actually-Mean-Aboli...

https://www.vanityfair.com/culture/2020/08/the-abolition-mov...


Again, if you're actively searching for abolitionists, you can find them, but the vast majority of the use of the term, from a count of the first 20 articles on any search engine of "What does defund the police mean," means reduce funding.

And, indeed, your first article actually uses the term exactly as we're using it.

While the author is arguing for abolition, and the first and only time she uses the word "defund" it is to say

> Defunding the police is one step on a broad stairway toward abolition. Cities can reduce the size and scope of police...


Likewise using that pejorative.


Serious question: how do you interpret "Starve the beast"?[1]

1. https://en.wikipedia.org/wiki/Starve_the_beast


The author does not speak for the Black Lives Matter movement, nor for the vast majority of people who use that phrase. The use of "we" in the title is egoising.

A quick search of "what does 'Defund the Police' mean" will find that the vast majority of proponents, and of articles written about them in almost all newspapers and magazines, interpret it as reducing the budget of the police and redirecting that budget into non-violent community aid.



Even Obama didn't like it: https://www.theguardian.com/us-news/2020/dec/02/barack-obama...

Above article refers to this long interview: https://www.vanityfair.com/news/2020/12/obama-urges-activist...

And I agree with him, if you have to add ", and by this phrase I mean a b c d e f" or "please search the Internet and click a good URL (not Breitbart, but also not Mariame Kaba) to see what we mean by this phrase", do you think people wouldn't just think you're an extremist after the snappy phrase and stop listening?

The self-sabotage (and then people like you defending the phrase) is so mind-bogglingly dumb. Do you want change, or do you want to alienate people?

But eh, on the topic of alienating people, I've seen enough "online activists" snap at supporting voices that I thought "I wouldn't be surprised if that supporter stopped sympathizing with them.". Then you have supporters turn to quiet "Fuck it, I'll just shut my mouth" or worse, be opponents.


One of the things that has never failed to amaze me is why are police and fire departments locally run in the US vs. run by the state with federal checks and balances. It's almost a given that smaller places either cannot afford the right training and equipment OR they'll turn corrupt with the monopoly on violence such services give.


I think it certainly raises questions about the extent to which it's reasonable for public utilities to pick up the slack for negative externalities caused by profitable companies. Of course it's a hard thing to price because of course fossil-fuel vehicles have a laundry list of negative externalities of their own.


I generally agree, companies should pay for their externalities.

The efficient way to do that is to tax companies and use the tax dollars to fund public services like fire departments. Expecting Tesla to send in their own firefighters when a Tesla catches fire would be ridiculous. Public services are good, and the method for funding them is well established.

If we want to have additional levies for safety regarding lithium batteries, hopefully we are making sure to do the same for oil too...


The problem is the equipment to fight a petrol / diesel fire is already covered by most fire departments. Oil fires are common in commercial and industrial areas. Levying those users of petrochemicals would likely result in a fairly insignificant cost per user, such that the bureaucracy cost to collect would end up costing more. Thus it makes sense to pay for this from general government funds. The same if we extended it to wood and paper.

Large Lithium fires are fairly uncommon right now, they don't behave like most other classes of fires that firefighters are dealing with. Personal device fires are more common, but total heat / damage is less and the fire is frequently able to be controlled outside of confined areas (aircraft).

With a relatively low volume of large lithium battery packs, and the difficulty in containing them, it could make sense for a targeted levy to cover the cost. Lithium car fires are currently dealt with by trying to isolate the burning vehicle and overwhelming the fire with water (removing Oxygen and temp), however this is very inefficient. I have heard of a former fire chief arguing that to deal with the growth of large battery packs will require either massively increasing the number of appliances (fire engines) or defaulting fire appliances to using special foams as opposed to water. The foams will increase the cost to fight ALL fires, as many engines typically have to be ready pre-mixed before dispatch.

It has also been suggested that density limits be set, and that battery packs be designed to include fire suppression, with this made mandatory to allow vehicles on the road. This would create massive packaging issues for every EV company and add significant cost while reducing performance. All in all I think we need to consider whether we charge the EV / household battery pack companies for this or we all bear it in general taxes for the environmental benefits. I think bearing it out of general funds is completely reasonable, however I wonder if we are going to get safer batteries if we pass on the cost and let the rate go down as the number and severity of fires decrease. There might be some marginal improvements that quickly get implemented in that case.


Would a good solution be to test battery designs in crash tests and tax more flammable designs more? Try and incentivize companies to come up with innovative fire suppression designs


Does it? This is the same issue it's always been: things catch on fire sometimes, we've decided that the best people to handle this are firemen, paid for by the local government.


But what previous thing is most analogous to batteries on cars? Cellphones and hoverboards are so much tinier. Maybe this isn't the same issue specifically, and some scale of government can require a tax to fund e.g. more education or training.


I'm confused why something needs to be equivalent to batteries in cars?

Do you know what's not equivalent to an electric car? A fertilizer plant catching on fire. And you guessed it, that's also handled by the fire department.


Does it? That sounds like the kind of thing that gets some sort of higher tier involvement. Is there not higher jurisdiction for managing larger fires that require more than the local crew?


Lots of stuff... hazmats, grain elevators, etc.

It’s not unreasonable for firemen to put out fires. It’s just a novel path that requires training.


>But what previous thing is most analogous to batteries on cars?

Large tanks of volatile, flammable, potentially-explosive liquid in proximity to ignition sources in the car?


Maybe cars powered by literally thousands of explosions per second?


I freely admit this is a pedantic point, but it's extraordinarily doubtful that any road car reaches 2000 ignitions per second (the threshold for "thousands").

In a 4-stroke engine each cylinder ignites once every two revolutions, so even a 12-cylinder engine wouldn't reach 2000 ignitions per second until 20,000 RPM.

A more typical case - say, a 6 cylinder engine hitting the redline at 7,000RPM is experiencing 350 ignitions per second - so, hundreds rather than thousands.
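The arithmetic, for anyone who wants to plug in their own engine:

    def ignitions_per_second(cylinders, rpm):
        # 4-stroke: each cylinder fires once every two revolutions
        return cylinders * rpm / 60 / 2

    print(ignitions_per_second(6, 7_000))    # 350.0  -- the redline example above
    print(ignitions_per_second(12, 20_000))  # 2000.0 -- the "thousands" threshold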


Thanks.


So batteries on cars is analogous to cars? I mean, could be. Maybe the EV battery is just like a gasoline tank, but something suggests that there are important differences.


Another reminder that we have a hybrid socialist free-market system, without even mentioning Tesla subsidies or how much of the science was publicly funded or "borrowed" from history without payment to the past.


Good question, that said it's not as unnatural as when gas became ubiquitous.

Now that there's a global shared effort, making sure everybody is able to work safely around these issues should be mandatory.


I think you may be overestimating the resources fire departments have. Many are woefully underfunded as is, particularly the ones that are 100% volunteer. There are probably dozens of things (equipment and training) that departments could spend money on that would benefit the community they serve more than training for ev accidents.


How are 3 out of 3 replies discussing what the fire department should do from a training and policy perspective, with nobody commenting on the science/engineering of dealing with an EV fire?


Privatize the gains and socialize the negative externalities.


EVs have represented a relatively small fleet of new and mostly high end cars so far. Which means that they haven't posed much of a problem so far. The budgets for most fire departments are pretty limited and they focus on priorities. EVs are hardly a priority for most of them even now as the fleet is growing exponentially and perhaps more critically, it's aging thus increasing the risk of fires.


You can train all you want but unless you have a burning EV right in front of you, all training is only theoretical. Practice makes perfect and no fire department will set a Li-Ion car battery on fire just to demonstrate what happens.


This is really not good safety culture. I'm sure it's not the same experience when a pilot practices emergency procedures in a simulator vs in a real emergency, but it's still helpful.

Your point is maybe that the simulation is bad. But I think it can be helpful nonetheless. Even if the simulation is "you have a battery fire in front of you. Tell me what you would do?"


> This is really not good safety culture. I'm sure it's not the same experience when a pilot practices emergency procedures in a simulator vs in a real emergency, but it's still helpful.

If this is really a priority, then make it a priority. Start paying firefighters like cops are paid, and fund the fire department like the police department is funded.

In many places, fire departments are entirely voluntary while police departments are funded to the tune of tens to hundreds of millions of dollars each year.

In several places I've lived, the only way fire departments would get that training is if someone donated a bunch of EV batteries to be destroyed by them.


> no fire department will set a Li-Ion car battery on fire just to demonstrate what happens

Why not? That's exactly what they do with ICBs and houses to train here.


Up to a point - but knowing you don't pump water onto a battery fire should not be an issue.


There's been comments here for hours linking to multiple official sources that say exactly to pump water onto the battery fire.

There's not that much lithium and there is lots and lots of heat.


That's literally what's needed to stop thermal runaway.


Any fire department employee should know the basic classes of fire and what is and isn't used on them.

Any company H&S rep will know this.


What's the best way to actually put them out?


The official Model X first responders guide suggests 3,000 gallons of water (https://www.tesla.com/sites/default/files/downloads/2016_Mod...).


I mentioned this elsewhere, but our fire engine is only 2000 gallons, the tender is 3000. That means it needs an entire tender's worth of water for the battery alone.

A tesla car fire on a highway or rural area will immediately require far more apparatus support than normal.


Anecdotally from a good friend of mine who's a fire chief at a local station, when they finally extinguish electric vehicle fires they have to park the burned car at least 50 metres away from other vehicles in their holding yards, because of the risk of re-ignition. They often put out an EV fire and then have the thing catch on fire again days later.


This is standard practice for EVs in the towing industry. Nobody parks a crashed one near anything they care about.


A dump truck of sand.

It's basically the _only_ way to put them out. I doubt the average firetruck will have tools to put out a recently started EV fire for a couple of decades.


I'm afraid to even ask this question, because of the environmental implications, but are there chemical alternatives to sand?


Crystals of silicon dioxide. Heard they're easy to mine from these places called the beach.


You know beaches have been known to get stolen and sand is a finite resource on the shore? Having a dump truck with sand waiting at each fire station doesn't seem like a solution.


How about desert sand? I understand it's too fine to be useful in the construction business, unsure about the composition though.


> What's the best way to actually put them out?

Submerge them in a container filled with water, [1].

[1] https://www.carscoops.com/2019/03/firefighters-dropped-a-bur...


What if the lithium reacts with the water??


Sorry someone downvoted you without explaining it’s an alloy, not pure lithium:

https://chemistry.stackexchange.com/questions/52154/lithium-...


Thank you for the explanation


Quickly connect a bunch of fans to them, which will then blow the flames out or drain the batteries. Whichever comes first.


This sounds like horrible advice, flames don't "blow out"


I’m pretty sure they were being sarcastic. There is no practical way to get close to a fiery auto crash and attach a bunch of fans to a currently on fire battery.


There's the unfortunate middle area where the air makes the fire rage even hotter, like a blast furnace.


makes me wonder two things:

- what about draining the cells through the main charging port?

- would that worsen the situation?


Thermal runaway in Lithium-ion battery packs is one reason that I don't ever want an EV parked inside my garage. These fires are hard to put out.


I'm not really seeing a scenario where an out-of-control car fire has worse results in a garage for gasoline vs. lithium ion.

In both cases it's an absolutely massive thermal conflagration, with no hope of putting it out using anything a homeowner has on hand, and it will proceed so rapidly that the house is going to be totaled by the time the fire department shows up.

I have to figure this is just bias toward the familiar. I will grant you that I don't refill an ICE car inside my own garage, so maybe charging genuinely makes the electric car more dangerous. But it probably doesn't; in fact, I would guess in both cases the biggest risk is something like leaving rags soaked in linseed oil around and getting spontaneous combustion.


This is a good discussion on StackExchange about the fire risk of Li-Ion batteries:

https://electronics.stackexchange.com/questions/230155/why-i...


Many newly built homes have automatic sprinklers - capable of containing and localizing the damage from fires.

A sprinkler system running on regular water supply pressure is not going to put out a lithium ion fire.


This is not common by any stretch. I have never seen a residential home with automatic sprinklers, only commercial real estate and apartments. The cost is prohibitive, and in many climates (where freezing is a norm) would be useless in garages.


The cost is absolutely not prohibitive, it's about $1-2/sq ft in a new home or $5-6/sq ft if you're retrofitting an old home[1]. For a new 2000 sq ft house that would be around $4,000. Probably <1% of the overall house value given recent housing prices.

[1] https://www.bobvila.com/articles/465-residential-sprinkler-s...


This would depend on house prices. Where I live, you can buy a 2000 sq ft house for $300K or less. That's 1.3% increase in price to add sprinklers, and I would wager that 99% of homeowners would prefer not to spend $4K on this.


California is a big EV market. Sprinklers are required in new construction in CA.

https://www.nahb.org/-/media/NAHB/advocacy/docs/top-prioriti...


Your link shows that 48 out of 50 states don't require sprinklers. I would guess that less than 5% of SFR in the US have sprinklers.


>Your link shows that 48 out of 50 states don't require sprinklers.

Partial mandate: NY, Massachusetts. Full mandate: CA, Maryland.

Those are 2 big states in there. By population the four are ~73 million, or a fifth of America.


Massachusetts only requires them in townhomes and houses over 14K square feet. In other words mansions. NY only requires them in three story buildings, otherwise it's up to the buyer. So these are nonsense arguments. Considering how many existing houses are exempt from all of this, I would estimate that less than 10% of the homes in CA have them, and less than 2% in these other four states.


All post-2018 construction in my area (including single family residential) is required to have automatic sprinklers.


This is true.

I mean, it's also true that water will not put out a gasoline fire, only spread it around and make it worse.

But what you said is true also.


I haven't heard of any ICE engines spontaneously combusting. I have heard of many lithium batteries, primarily in phones, combusting. We're better at keeping gasoline safe than charged electrical energy.


They are fairly common. 233,300 fires and 329 deaths per year in the US according to http://www.nfpa.org/news-and-research/fire-statistics-and-re...


That covers the number of times vehicles have caught fire. It doesn't reference the cause (e.g. spontaneous combustion vs. as a result of a crash).


The NFPA has data on that too. Fire causes are:

- 47% mechanical

- 21% electrical

- 7% intentional

- 6% exposure

- 4% crash, overturn, run-over

- 2% smoking

So crashes are a relatively rare cause. They also break it down by the area the fire started:

- 63% engine area, running gear, or wheel area

- 11% passenger/operator area

- 5% cargo/trunk

- 3% exterior

- 2% fuel tank or line

Interpolating between those numbers, you might guess that between 20% and 40% of fires could be described as "engine spontaneously combusting." That would be 45,000 - 90,000 per year in the US.
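The same estimate in numbers (233,300 is the NFPA figure cited upthread; the 20-40% band is my guess above):

    annual_vehicle_fires = 233_300
    low, high = 0.20, 0.40   # guessed share that are "engine spontaneously combusting"
    print(round(annual_vehicle_fires * low), round(annual_vehicle_fires * high))
    # 46660 93320 -- roughly the 45,000-90,000 range above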

So despite you never having heard of it, it does happen.

[0] https://www.nfpa.org/vehiclefires, numbers above from report table 8 & 9


Thanks for finding that data. That's interesting. I'm amazed that that happens, because absent people bringing it up when talking about Tesla I've never really heard that before.

Most interesting is that the first part of an ICE to catch on fire is most frequently the electrical wiring. And even if it is a flammable liquid (the second most likely cause), it's only a 42% chance that liquid was gasoline!

Thanks again, it's interesting reading.


Likely the reason you’ve never heard of car fires before is that they are so common that they are not news.


I personally have had the car I was driving catch fire spontaneously. Once I was inside, once nearby. Funny thing, same car, different causes.


I'm glad you got out safely. Judging by all the replies it's far more likely than I guessed. I've been lucky.


I had a friend whose block heater shorted out while the car was in his garage. He was alerted by a smoke detector, went out in his PJ's, opened the garage door, and pushed the car outside, whereupon it burst into a pillar of flame.

He noticed significant tingling in his feet while pushing the car out. This was in the days before GFCIs. He's lucky he wasn't electrocuted.


Cars have stopped catching on fire nearly as often since GFCIs were introduced. My guess is that is coincidence.


When I was a kid in the early 90s our family car caught fire while parked in the driveway and turned off.

It'd been parked for an hour or two, and the trigger was something electrical in the engine cavity.


FWIW, a friend's Aston Martin V8 Vantage spontaneously combusted in front of his house a few years ago. Car was a total loss and took the insurance and manufacturer months to sort out who was liable for damages (fortunately, just replacement of the car - no other cars were parked nearby, and friend kept it in the street).


Electrical fires regularly kick off gasoline fires in ICE vehicles.


You've never heard of any ICE car spontaneously combusting? You must not follow any car recalls, tons of cars have been recalled in the past due to fire risks.

https://www.caranddriver.com/news/a35782883/kia-cadenza-spor...

https://www.autosafety.org/ford-ignition-switch-fires/

https://www.mlive.com/news/2021/03/2021-ram-truck-owner-enco...

https://www.industryweek.com/operations/safety/article/21963...

https://www.auto123.com/en/news/gm-recall-risk-of-fire/61360...

ICE vehicles can have strange failure modes, leaking flammable fluids into places where they shouldn't go and where it's really hot. Sometimes this gets mixed with a spark from a bad electrical connector, sometimes it drips on something hot.


My friends house burned down due to a car fire inside his garage. The engine didn't spontaneously combust (any more than the Tesla power-ask in the article), but having a car in a garage does pose some risk.


I don't believe EVs are any more likely to catch fire than gasoline cars. Either way, if a car catches fire in your garage you are in for a bad time.

It's definitely good to spread the awareness that many fire extinguishers are not suitable to put out lithium fires, though.


> fire extinguishers are not suitable to put out lithium fires

Only Class D fire extinguishers can put out lithium metal fires. But lithium-ion batteries do not contain lithium metal. Lithium metal batteries do, but no EV uses lithium metal batteries because they're too dangerous. Class B fire extinguishers work just fine on lithium-ion battery fires. The difficulty with lithium-ion EV fires is that they tend to re-light, but water is still the tool of choice for putting them out.


Today I learned!


Everybody else keeps trying to tell you ICE cars store more energy, or could spontaneously combust just as easily, and they're not wrong.

But when I'm filling up my car with a jerry can, I'm literally holding it. I know to have some sense of caution, and I can see spills. I do not leave it to slowly trickle fill overnight, like one would with an EV; not only am I not there in that scenario, but odds are I'm not even awake.

The odds are minuscule, and you're more likely to die in many other ways. But the fix is so easy - assuming it's not going to hit -30, just keep it outside. And the risk (probability multiplied by chances for it to happen) is going to get so great when everyone has an EV, that I can at least see your point.

I don't agree, and were I able to afford a house with a garage and a Tesla to park in it, I probably would. Doubly so as it hits -50 with wind chill here. But to dismiss your concerns outright doesn't feel quite right to me.


Well, you're not going to park a Tesla outside because you need to charge it overnight. And many people have to park their ICE cars in a garage not because it gets too cold, but because they live in a place (like San Francisco) where cars out on the street will all get broken into, vandalized, or stripped.


But when I'm filling up my car with a jerry can, I'm literally holding it. I know to have some sense of caution, and I can see spills.

This is true, but you can't see the static electrical charge on the can that's going to discharge with a spark when it touches the filler tube...


Probably not a major concern in your garage. These battery packs are built with fairly well-isolated cells, so even if one fails it should not result in a domino effect.

The reason you see issues in cases like this is because the wreck totally decimated that isolation with mechanical force and so you get runaway effects.


Many promising alternatives to lithium ion batteries are being experimented with now, so saying you don't ever want an EV parked inside your garage based on problems with lithium ion batteries may not be entirely reasonable.


I interpreted it as "I would never want an EV with the current tech in my garage". Considering he probably keeps his ICE car with a 12-volt lead-acid battery in his garage.


that seems like a pretty nitpicky response. I mean contextually it's clear what was meant


Gasoline has way more thermal energy. Yes, it's easier to put out, but it gives you way less time before things really get out of hand.


Gasoline is not what makes a car fire bad. It's everything else of the car which is almost identical between ICE and EV.


I have literally never heard of a Tesla bursting into flames while parked in a garage. Have you?

I mean that’s fine if this is the hill you want to die on, but right now I think it’s just as likely your gas car fireball explodes like a Hollywood movie while parked in your garage.


> I have literally never heard of a Tesla bursting into flames while parked in a garage.

Maybe a quick search would help you find the answer for yourself before making such a comment.

See for instance this article compiling a couple examples: https://www.thedrive.com/news/28420/parked-teslas-keep-catch...

Some of the occurrences are while the vehicle is being charged, some are simply when the car is parked.



aside from running into fire trucks, isn't that their primary claim to fame? https://futurism.com/the-byte/tesla-fire-shanghai-parking-ga...


I've also never heard of people burning alive in a car without a driver.


if solid state batteries materialize it will be such a boon


“They just get too used to it. That tends to be more of an issue. It’s not a lack of understanding of what Autopilot can do. It’s [drivers] thinking they know more about Autopilot than they do.”

Hmm, that seems like a rather stark contradiction.

Elon I think has some character flaws and when people are dying it's not the time to be defensive. I'm one of the least naturally empathetic people I know and yet I wouldn't be talking about anything other than condolences.

Finally, he can't continue to defend the term 'autopilot' - in public communications, you're talking to the masses, the lowest common denominator, and the 'laziest mode' of even high-functioning people - you gotta use words that will shape outcomes. 'Autopilot' is just a bad choice - he needs to change it.


I agree. People here will cite the aviation term not referring to autonomous flying, but if you ask a regular person in the street, they think that Teslas are self-driving. This is a dangerous belief that is held by a lot of people and needs to change. However, Tesla knows this adds an intriguing cachet to the brand, so they seem reluctant to downplay it.


It doesn't matter what a "regular person in the street" thinks, from a safety standpoint. What matters is what Tesla owners think (i.e. the people driving the cars). Are they fooled by the term? Search around on TMC, /r/teslamotors, or any other owners group, and you'll find it's pretty universally understood.

Additionally, the car reminds you of its limitations every time you use it. And it disables if it doesn't detect torque on the steering wheel, or weight in your seat, or the seatbelt clipped in. I understand how someone could be misled by marketing early in the buying process. But by the time you get to operating the vehicle, it is borderline impossible to use Autopilot and still believe it requires no human attention.


Not sure that these guys are in the reddit demographic... but regardless, Tesla intends their cars to be a mass-market product. Peer pressure is a thing, and many of the people who buy these cars are showing them off to their friends who are eager to see the "autopilot" in action. When people are given conflicting information, they'll believe the story they want to believe. ("that fine print is just stuff the lawyers put in there") While drivers have the ultimate responsibility for sure, the situation this perception has created is foreseeable.

https://abc7ny.com/amp/tesla-crash-houston-fatal-car-autopil...

> two men who were found dead inside the car had dropped off their wives at a nearby home and told them they were going to take the 2019 Tesla S class for a test ride.

> The man, ages 59 and 69, had been talking about the features on the car before they left.


Disables if it doesn't detect torque on the steering wheel

Are you sure?

What if the driver had a heart attack? Wouldn't the responsible thing for the car to do would be to slow down, steer off the road and park? And then notify the authorities?


idk about notification but otherwise that's what it does, it slows down and stops :D


The big difference between car drivers w/ "autopilot" and airline pilots with autopilot is that the airline pilots have massively increased training requirements, regular re-certification, and physical fitness requirements.

Even if plane vs automobile autopilots are equivalent in functionality, the difference in operator training is separated by multiple orders of magnitude. Not to mention the frequent presence of a second pilot in commercial airliners and a dedicated ATC crew on the ground to monitor their situation.


Do you feel the same way about Ford's "Copilot 360"? Surely everyone agrees that a "copilot" is more capable than an "autopilot"...


Only one car manufacturer's CEO and fanboys speak about how technologically advanced it is and how driverless cars have been here since 2016.


i mean the word copilot does imply that there is another pilot


Exactly, copilot implies to me that I can get up and go use the restroom while the "copilot" is driving for me.


> 'Autopilot' is just a bad choice

It depends on what your goals are. It is a widely known term, associated with Tesla and seen favorably. Sure, it is supremely misleading but in this case that seems like a feature.

If Tesla's goal was safety they wouldn't have shipped this feature in the first place. Instead they are aiming for luxury.


He's defensive because the news media is talking about a product that can't be operated in the way the news media is describing. It's not physically possible. The car can't be operated with no one in the driver's seat. Autopilot shuts off automatically.


And yet there are plenty of videos like this... https://youtu.be/VS5zQKXHdpM?t=93

My guess would be Tesla makes the driver detection relatively simple to defeat compared to what they could enforce if they wanted.


Yeah in that case they had to put a weight on the seat, keep the seatbelt buckled, and also keep applying force to the wheel. At some point you just have to give up as the threat model becomes too big.


For anyone else wondering why this is notable:

“Harris County Precinct 4 Constable Mark Herman told KPRC 2 that the investigation showed “no one was driving” the fully-electric 2019 Tesla when the accident happened. There was a person in the passenger seat of the front of the car and in the rear passenger seat of the car.”


I’m sure the Tesla crowd will jump in to say this is driver error because Tesla has a disclaimer that the feature requires active driver supervision, but the problem has always been the marketing that causes users to overestimate the system’s capabilities. Musk says that Autopilot is safer than human-driven miles, but this is apples and oranges, since the human-driven miles include city driving and all models of car, which have much higher rates of accidents than luxury cars like Teslas.

Full Self Driving’s marketing has been criminal. Tesla is trying to solve self-driving without lidar, which its competitors are using. Waymo is way ahead of Tesla, but Tesla creates the illusion of being ahead by releasing features that are clearly extremely dangerous.

This video is a little over the top but highlights the abuses of FSD marketing better than anything else I’ve seen: https://twitter.com/FinanceLancelot/status/13752898727562731...


> Tesla has a disclaimer that the feature requires active driver supervision

Tesla has also had, since 2017, a prominent page on the website that says "The driver is only in the seat for legal purposes. The vehicle doesn't need it." (Ironic, given the circumstances of this accident).


It's really funny how their marketing and legal departments seem to say conflicting things almost as a prank.


Link?


Source?


https://www.tesla.com/autopilot

The actual wording:

> The person in the driver's seat is only there for legal reasons. He is not doing anything. The car is driving itself.

although it looks like they may have yanked that as of today. However, plenty of sources:

https://www.tesla.com/videos/autopilot-self-driving-hardware...

https://www.reddit.com/r/RealTesla/comments/mth6ti/the_perso...

https://www.reddit.com/r/RealTesla/comments/dtjzm9/the_perso...


I went looking through the Wayback Machine's archive of that page to see if I could find a version of the text with those exact words. In none of the archives could I find a version; the biggest change to the text is in the description of regulatory status. Prior to ~March 2019, the text here used to read:

> Please note that Self-Driving functionality is dependent upon extensive software validation and regulatory approval, which may vary widely by jurisdiction. It is not possible to know exactly when each element of the functionality described above will be available, as this is highly dependent on local regulatory approval. Please note also that using a self-driving Tesla for car sharing and ride hailing for friends and family is fine, but doing so for revenue purposes will only be permissible on the Tesla Network, details of which will be released next year.

(Hilariously, the "details of which will be released next year" was retained on the page for three years before being dropped.) The current text instead makes it more clear (although not by much) that the actual full-self driving capability of the car doesn't actually exist yet.

While those exact words may not be present in the text, the general whiplash theme of "this car can 100% drive itself!" and "you are totally required to supervise the car at all times, it cannot drive itself" was and is still present.
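For anyone who wants to repeat that digging, here's a minimal sketch (mine, not from the comment above) of listing archived snapshots of the Autopilot page via the Wayback Machine's public CDX API; the date range and query parameters are assumptions you would adjust yourself.

    import requests

    # Wayback Machine CDX API (public endpoint); the parameters below are assumptions.
    CDX = "http://web.archive.org/cdx/search/cdx"
    params = {
        "url": "tesla.com/autopilot",
        "output": "json",
        "from": "2017",
        "to": "2021",
        "filter": "statuscode:200",
        "collapse": "digest",  # skip snapshots whose captured content did not change
    }

    rows = requests.get(CDX, params=params, timeout=30).json()
    header, snapshots = rows[0], rows[1:]
    for snap in snapshots:
        record = dict(zip(header, snap))
        print(record["timestamp"],
              f"https://web.archive.org/web/{record['timestamp']}/{record['original']}")

Each printed URL is a snapshot you can then diff by hand against the current page.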


The revised words were absolutely present until today. Look at HN search, Reddit search - there are dozens of mentions of this with those exact words. They're also in the linked video.


To the OP's point, the most recent snapshot before today, from April 5, does not show the quote you're stating:

https://web.archive.org/web/20210405091504/https://www.tesla...

The closest I can find to it is:

"The system is designed to be able to conduct short and long distance trips with no action required by the person in the driver’s seat."

I did find what you are referring to, however.

There is a video ( https://vimeo.com/192179726 ) embedded below "Future of Driving" section of the autopilot page. When you hit play the first message presented is:

"The person in the driver's seat is only there for legal reasons. He is not doing anything. The car is driving itself"

That video with that statement is still on the autopilot page as of me writing this.

While a subtle nuance, I believe that wording is specific to the video demonstration ie. "what you are watching is completely automated, but if we filmed it without a person in the seat we would get in trouble" instead of "anyone can run Tesla without a driver, we just < wink wink > tell you to be in the driver's seat for legal reasons."


Thanks. It’s unnerving that such a dangerously misleading statement can persist.


I’m sure the Tesla crowd will jump in to say this is driver error

Did I read the parent post incorrectly? There was no driver in the seat.


It's a little awkward when as of today (and has been for years), Tesla on its website has wording around FSD saying "The driver is only in the seat for legal purposes. The vehicle doesn't require it."


"The currently enabled features require active driver supervision and do not make the vehicle autonomous. The activation and use of these features are dependent on achieving reliability far in excess of human drivers as demonstrated by billions of miles of experience, as well as regulatory approval, which may take longer in some jurisdictions."[0]

"Currently neither Autopilot nor FSD Capability is an autonomous system, and currently no comprising feature, whether singularly or collectively, is autonomous or makes our vehicles autonomous"[1]

[0] https://www.tesla.com/support/autopilot

[1] p30 of https://www.plainsite.org/documents/242a2g/california-dmv-te...


That text is for a technology demo of full self-driving, not for the production car.

Every sales page related to Autopilot has visible warnings regarding driver attention and does not suggest you can sit in the back seat in any form...


They might have been using the orange hack to make Tesla think someone was driving (orange wedged in the steering wheel causes enough weight/pressure for the sensor to think it's a hand)


Suppose they had survived and killed someone else. I think the law would find at least one of them to be the driver.

That's the notion of driver that is meant by "driver error", i.e. the person who started the car.


That would be the error. You're supposed to be seated in the driver's seat and alert.


> I’m sure the Tesla crowd will jump in to say this is driver error

There was literally no one in the driver seat.


Aside:

>Musk says that autopilot is safer than human driven miles

Is there a breakdown for income with road safety? The best I could find is that poorer countries have higher road deaths / km, but I couldn't find any data on road safety within a country.


Here’s a report linking vehicle age to accident outcome and showing a correlation; I would assume higher incomes are associated with newer vehicles. I only took a quick look, so I'm not sure if they compare likelihood of an accident with vehicle age. https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/...


Have you ever driven a Tesla with full self driving?

If you engage self driving the car pops up with a giant alert to keep your hands on the wheel and the car doesn’t allow you to use full self driving unless you wiggle the steering wheel every few minutes. There is no mystery as to what is safe.

The driver... wasn’t in the seat. Yes I would say that’s driver error.

Or maybe... lack of driver error.


They should probably remove the text off their website that says:

> The person in the driver's seat is only there for legal reasons. He is not doing anything. The car is driving itself.

That probably gives some owners the impression that the driver is only a legal formality and climbing into the back seat is a safe stunt to pull.


Where does it say that?


It is the opening text of the video here:

https://www.tesla.com/autopilot


[flagged]


"Eschew flamebait. Avoid unrelated controversies and generic tangents."

https://news.ycombinator.com/newsguidelines.html


[flagged]


It's fine to point things out, but on HN please do so in a way that is (a) informative and (b) avoids inflammatory rhetoric.

Even assuming you have a good underlying point, the comment you posted was internet flamebait as well as pointing way off topic. We're trying to avoid that kind of thing here, not just because it's below the desired quality threshold but more importantly because it evokes worse from others. If you think of the value of a comment as the expected value of the subthread it will lead to, this may make more sense.

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...


FYI the title of this post is misleading as it leaves out the key part of the original title, can we get it set to the actual article’s?


I've imported the title from https://news.ycombinator.com/item?id=26854994 since it's a bit more neutral.


Name one short seller who has been backed by the oil industry. Tesla is worth 3 Exxons, 3 Chevrons, or 10 BPs. This is a tired excuse that gets the motivations of short sellers backward. They do not short a company and then find damaging information, they find damaging information and then short the company. None of that excuses Tesla’s complete disregard for human life, particularly here by not having the advanced driver monitoring capabilities that their competitors have and instead relying on easily bypassed measures like steering wheel torque that will clearly lead to predictable abuse. Elon Musk has never condemned the social media stunts where people recklessly test self driving by getting out of the driver’s seat, and even commented on the porn video without condemning the behavior it encourages.


Did you see that video? It's clearly a propaganda video with various obvious lies, biases and cherry-picking. It contains false statements like "Elon musk is not an engineer, he is a scam artist". It's very clearly produced by oil industry or someone who benefits from delaying electric cars. It's professionally made to have an emotional effect. It's very similar in style to all those anti-vax propaganda videos.


The CEO of Waymo just resigned so I doubt they’re way ahead.


Tesla is only ahead on body count.


not sure why you're being downvoted.

Autopilot is not an AI, but it is marketed as such; Tesla carries full responsibility for these deaths, as well as for false and misleading advertising.

Waymo may not be much further away in terms of technology, but at least they don't broadcast irresponsible claims as loud as Tesla does.


I don't see how autopilot was involved here. This was a tiny cul de sac (the address is in the article). Autopilot simply won't achieve the kind of speeds needed to cause that collision.

Frankly I agree with other posters here that the most likely scenario is that there was a human driver in the seemingly-undamaged driver's seat who fled the scene. Absent that, you'd have to play games with launch mode or some kind of device to press the accelerator. You just can't do this with autopilot as I see it.


>> Frankly I agree with other posters here that the most likely scenario is that there was a human driver in the seemingly-undamaged drivers' seat who fled the scene.

Do we really need to imagine a third person, who left behind no evidence of his or her existence, to explain this accident?


It would be far from the first time a car was found mysteriously crashed without a driver or with a passenger who swears up and down that they don't know who was driving.

When this happens to a non-Tesla (and it does happen) the usual assumption is a drunk driver. Why is that so hard to imagine here?


Because there is no evidence of such a third person.

If there was a third person, why not a fourth, and a fifth, and a sixth? Or a dozen, or two? It would also be far from the first time that a troupe of clowns fit in a too-small car.


And what evidence usually exists of a third person in a non-Tesla missing-driver crash?

Look, all I'm saying is that if this was literally any other car, we would all see this evidence and say "Yup, looks like a drunk driver fled the scene of an accident". It would be an obvious conclusion. We wouldn't be scratching our heads saying "Well, gee whiz, maybe there's some other explanation we haven't thought of".

This is not the first time police have found a crashed car with no driver. The only difference this time is that it's a Tesla.


The difference is that there were two people in the car, one of which was the driver (and owner) of the car.

But you are right to suggest that the police are themselves jumping to conclusions. Just because the accident happened in a Tesla doesn't mean that self-driving was somehow involved.

I don't disagree about that. I disagree about the necessity of assuming a third person, when the two present suffice to have caused the accident and when there is nothing to indicate that a third person was involved.

Finally, assuming a third person introduces a new question: who was it?

Do you know?


The evidence of the third person is that otherwise you've got to explain how the people in the car tricked the autopilot into driving at crashable speeds in a tiny cul-de-sac, where normally it wouldn't even engage, even apart from tricking it that there was a driver in the seat.

At this point, it's an Occam's Razor question, though different people will argue about which was simplest.

If there had been a third person who fled the scene, though, you would expect any physical evidence to have been left behind after that fire.


>> The evidence of the third person is that otherwise you've got to explain how the people in the car tricked the autopilot into driving at crashable speeds in a tiny cul-de-sac, where normally it wouldn't even engage, even apart from tricking it that there was a driver in the seat.

But that's not "evidence". That's just an unanswered question.

As I said recently in another comment, if you find yourself at the point where you're thinking "I can't think of a better explanation", that's not the time to say "that must be the answer"; that's the time to think harder until you find a better explanation.

In this case, if you can't find a better explanation than "there must have been a third person that we don't know was there", then it's time to sit down and think harder about how the accident could have really happened.

Because if you start assuming things we have no evidence for, you might as well assume invisible space leprechauns, for all the good it will do.


> you've got to explain how the people in the car tricked the autopilot into driving at crashable speeds

Or the autopilot did that by itself. I don't see why we need to assume any human trickery.


We need a human driver or some other mechanism for the car to have propelled itself into the tree. A human driver seems the most likely solution but others are of course possible.


There were two people in the car. That's plenty to crash the car. The question is "how" but for that we don't need a third person, especially when we have no evidence of such a third person.


Fire can burn away obvious and forensic evidence


If I understand correctly the reason the OP has to suspect a third person in the driver's seat is because the driver's seat appeared undamaged?


I don’t see anything in that photo that looks undamaged.


To be honest, I don't either, but if I understand correctly the OP thinks there must have been a third person because the driver's seat was "seemingly undamaged".


"Harris County Precinct 4 Constable Mark Herman told ABC News the two men who were found dead inside the car had dropped off their wives at a nearby home and told them they were going to take the 2019 Tesla S class for a test ride.

"The man, ages 59 and 69, had been talking about the features on the car before they left."

https://www.yahoo.com/gma/investigators-looking-explosive-te...


If so, the investigators on the scene are wrong. From the article:

“Herman said authorities believe no one else was in the car and that it burst into flames immediately. He said it he believes it wasn’t being driven by a human. Harris County Constable Precinct 4 deputies said the vehicle was traveling at a high speed when it failed to negotiate a cul-de-sac turn, ran off the road and hit the tree.”


Again, that scenario (taking a high speed turn in a cul-de-sac) just doesn't correspond to any known accessible autopilot behavior. Given the choice between "initial investigation was wrong" and "heretofore unseen high speed residential driving by autopilot" (also "non-autopilot driving from passenger seat" probably needs to be in the list), I know which way Occam points.

To be glib: if you could get the car to drive itself at 60mph+ on a residential street, that shit would be all over youtube.


Occam's Razor says: "entities should not be multiplied without necessity", but it's your explanation that is "multiplying entities", specifically the occupants of the car and certainly without necessity: two car occupants suffice to cause an accident.


If you look at it on a map, it is very odd. The address where happened is maybe 200 meters into the cul-de-sac, which requires a hard right turn to enter. The crash site isn't very close to the right turn. And the street itself isn't long, maybe 300 meters total.

Trying to imagine how it was going fast enough to cause this, well after a right turn, with nobody in the driver's seat...isn't easy. Occam's razor is hard to apply, because I don't see a simple explanation.


I don't think Occam's razor is hard to apply: there's no evidence of a third person in the car, there's no reason to assume a third person in the car.

Occam's razor cuts out unlikely explanations. It doesn't help you find a likely explanation, but at least it helps you avoid wasting time in considering unlikely ones.

Also, the modern interpretation of Occam's Razor, that "the simplest explanation is usually the right one" (see wikipedia) doesn't mean that the most likely explanation should be simple. For example, it can equivalently be restated as "the least complex explanation is usually the right one", which clarifies that the most likely explanation need not be simple, just less complex than others.


Occam's razor can be used to rule out "more complicated" explanations for the same evidence.

The problem is that I haven't seen any explanation for how else the car got up to that speed. Without an actual competing explanation, you can't use Occam's razor to rule out unlikely explanations.


There is a very simple explanation: the car crashed because of the actions of the two people we know were in the car. Any explanation that requires a third person is more complex than that.

What were the actions of the two people in the car that caused the crash? That we can't know yet because we don't have enough information. So in that case, Billy Occam would say "sit tight and wait until you know more".


"they did something and it caused them to crash" isn't really an explanation...so don't go ruling out other valid explanations as "too complicated" till you have enough information to create a proper alternative.

Otherwise you are mis-applying occam's razor. Occam's razor isn't a rule of logic, it is a heuristic to help you navigate complicated epistemic situations.


I think what you're saying is that if you don't have a good explanation, then anything goes: you're free to imagine anything you want. Until "you have enough information to create a proper alternative" you can "multiply entities" to your heart's content - and make any kind of assumption you like.

Is that what you mean?


In saying that, you are using Occam's Razor, which is solely intended for distinguishing between two theories of identical explanatory value.

There are plenty of other epistemic tools to evaluate the likelihood of various incomplete explanations with different levels of explanatory power.

Edit: Occam's Razor is like a tie breaker after you have brought out all your other epistemic tools and failed to break the tie. You are nowhere near that point.


>> In saying that you are using Occam's Razor which is soley intended for distinguish two theories of identical explanatory value.

Is it? The Razor says "entities should not be multiplied without necessity" (see wikipedia). Assuming a third person is "multiplying entities without necessity". Any explanation that assumes a third person is "multiplying entities without necessity". So it should be cut by the Razor, meaning we don't need to consider it. It doesn't matter if there is no better explanation yet. Any explanation that doesn't assume a third person in the car will always be better than any explanation that assumes a third person in the car. So the Razor can indeed help us distinguish between likely explanations even when we haven't yet formulated those explanations.

Also, I'm curious- do you really think that if we don't have a good explanation then we're free to imagine anything we want? That's an obvious error of reasoning that you would have tried to avoid if you were aware of it, yet it really seems to me that this is what you're doing. Would you like to go over that for a bit?

Edit: I've edited this comment repeatedly to make it less contentious, as the HN guidelines advise. I suggest we refrain from discussion of technicalities and avoid veering off into technical language, otherwise we'll just make this conversation even more tedious than it already is.


I explained to you what kind of epistemic tool the Razor is. There are plenty of other tools that let you make arguments about the likelihood that there was an additional person present. You can't make that argument with that Razor.

If you would like a more detailed understanding of why the Razor is limited to this use, you'll have to put more effort into learning epistemology than perusing wikipedia. It is an interesting topic and worth your time and attention.

The short version is that this is the only way to use the Razor that increases the reliability of your epistemic process.


See, when I said that we should refrain from discussion of technicalities, the reason was to avoid the tactic you're employing now, of trying to "win" the conversation by saying I don't understand the Razor etc. This is an underhanded tactic that does not honour you and demeans me as your interlocutor.

You suggest I lack a detailed understanding of why the Razor is limited to a particular use. I pointed to wikipedia because it's a resource that is easy to access. According to wikipedia, then, the Razor says:

"Entities should not be multiplied beyond necessity".

Do you disagree that this is the Razor?


I wasn't trying to win, I was trying to offer you an honest suggestion.

You do lack an understanding of Occam's Razor and I encourage you to go learn more about it. It simply cannot be used to rule out explanations like you are saying. It only provides a heuristic preference between explanations that make identical predictions.

Occam's Razor is an epistemic guideline that was originally formulated in the terms you specify, it is however a concept that has been widely discussed and refined beyond that original formulation.

If you aren't interested in going past reading on wikipedia, the article there does touch on the Razor's limitations even if it doesn't delve deeply into the reasoning behind them.


Even if you stretch Occam's Razor past its limit and use it to establish preference between two theories that make different predictions, but only about phenomena that you personally can't practically check (i.e. a case such as this one), then you are still only establishing a preference for the "simpler" explanation and cannot actually rule out the other explanation without checking the difference in predictions (i.e. an investigation into the crash).


So, to clarify, you disagree that "entities should not be multiplied beyond necessity" is the Razor?


Yeah, sure. He means that "3rd person fled the scene" is on par with "Space Aliens".


Ok, I'll bite. What happened, then, that's any simpler than a 3rd person? Because I have no idea what happened.


Bite what? I'm not proposing any explanations. I'm pointing out that it doesn't make sense to assume a third person, for whom we have no evidence, was in the car.


My understanding is that you use Occam's razor on a list of possible answers. A third person isn't less likely unless there is some more likely choice. I don't know what that would be.

Edit: You're sort of glossing over that "nobody in the driver's seat" is unusual by itself. All explanations I can think of for how this happened are unlikely.


You don't need to have a "list" of explanations. Any explanation would either assume there were more than two people in the car (why stop at three?) or it would assume there were no more than two people in the car. Any explanation that assumed there were more than two people in the car would "multiply entities" more than any explanation that assumed there were only two people in the car, so the Razor would always cut out the former in favour of the latter.


>> Edit: You're sort of glossing over that "nobody in the driver's seat" is unusual by itself. All explanations I can think of for how this happened are unlikely.

That means you can't think of an explanation for it, not that there isn't one.

Again as I said in other comments, if you find yourself thinking "I can't think of a better explanation", that's a sign you need to sit down and think of a better explanation.


You can't describe how one explanation doesn't fit with Occam's definition of simpler, without saying what the other explanation is.

That's the whole point of Occam's Razor: "simpler" (or any other adjective we might use there) requires a comparison.


The Razor, as originally stated by William of Occam, is: "entities should not be multiplied without necessity" (see wikipedia) [1].

Clearly, assuming a third person was in the car is "multiplying entities without necessity". Note also that the original formulation of the Razor makes no mention of hypotheses, competing or otherwise, or of any requirement for comparisons.

Even going by the more modern interpretation of (stated informally) "prefer the simplest hypothesis", a comparison can be made "on the fly": any hypothesis that needs a third person in the car is less simple than a hypothesis that doesn't need a third person in the car. So we can prune away the entire branch of the hypothesis search tree that begins with "suppose there was a third person in the car" without wasting any time considering those hypotheses, even if we don't yet have any competing hypotheses.

And, btw, we sure do: there were two people in the car and they caused the crash. That's a competing hypothesis. I don't know why people keep saying "there's no competing hypothesis". That you don't like that hypothesis makes no difference.

You'll notice also that the more modern interpretation ends up agreeing with the original: any hypothesis that needs a third person in the car is "multiplying entities without necessity" (the entities being the people in the car), whereas any hypothesis that doesn't need a third person is leaving the number of entities (at least people in the car) well enough alone.

_____________

[1] No, please, really do see wikipedia. I think it will help clarify much confusion about what the Razor is. For instance, there are five or six different formulations of the Razor, each with its own, multiple, interpretations. But I can't see any one that "requires a comparison". That sounds to me more like a Calvinball rule. If you want to say I'm "using it wrong", then start by clarifying which formulation and what interpretation of it you are using, as I have done throughout this thread.


> (see wikipedia)

> Note also that the original formulation of the Razor makes no mention of hypotheses, competing or otherwise, or of any requirement for comparisons.

Well, the Wikipedia article states, in the very first line:

> ...or more simply, the simplest explanation is usually the right one.

and further down it defines what a "razor" even is:

> The term razor refers to distinguishing between two hypotheses either by "shaving away" unnecessary assumptions or cutting apart two similar conclusions.

So both of these do seem to directly relate to comparisons. It's not simply a misunderstanding of the term to say that Occam's Razor involves comparisons of competing hypotheses.

> any hypothesis that needs a third person in the car is less simple than a hypothesis that doesn't need a third person in the car.

The trouble is you need to define what you mean by "entities." Obviously this does not need to literally refer to bodies. It means facets and complications of the hypothesis.

So we need to decide which hypothesis requires more entities: one that supposes the possibility of a third person, or one that supposes the possibility of people tricking an autopilot to do what no one believes it ought to be able to do.

The point is, we need something beyond what we have been told. If it were simply "drunk man crashes into tree," we need no further explanations, no further entities. That story explains itself. But if the initial story isn't by itself sufficient -- "autopilot drove itself at high speed in a narrow cul-de-sac with no one in the driver's seat, against all programming and prior experience of Tesla cars" -- then we must hypothesize additional entities. Wonky programming. Hacked system. 160 lb weight on the driver's seat and fake hands on the steering wheel. Third person.

I don't know that the third person hypothesis is, in fact, simpler. But I don't think you can say that Occam's Razor suggests that that should not be a rational hypothesis. I don't know how many additional entities would be required to get a Tesla to do what was suggested in the article.


Yes, wikipedia gives different interpretations of the Razor that, as you say, "directly relate to comparisons". None of them requires a comparison. I mean, you won't find that rule anywhere: "to use the Razor, you must make a comparison, otherwise you can't use the Razor". That is Calvinball.

>> The trouble is you need to define what you mean by "entities."

I think this is splitting hairs. A person in the car is clearly an "entity". Assuming a third person in the car is "multiplying entities".

>> Wonky programming. Hacked system. 160 lb weight on the driver's seat and fake hands on the steering wheel. Third person.

... or a 160 lb bag of stones. Or three giant rabbits. Or a pair of labradors. Or... etc.

We really don't need to assume any of this. The two people in the car sufficed to have caused an accident. Assuming more people, or anything else, is multiplying entities without necessity.


> That is Calvinball.

Stop with the condescension. The wikipedia article literally defines "razor" as "distinguishing between two hypotheses." If you disagree with that, edit the article, but don't tell me to read the article ("No, please, really do") and then dispute it.

> The two people in the car sufficed to have caused an accident.

You keep saying that. I don't know it's true. Do you know it's true?

Like others have said, this would go against every prior experience with Teslas, and our understanding of how the system works. It would be the first Tesla, ever, to have gotten into an accident on a narrow residential road with no one in the driver's seat.

You need something else in the explanation, even just "the system actually works in some other way." That is also multiplying entities. That was my point, which you missed.

You can't simply state, ipso facto, that the description given "sufficed to have caused an accident," when numerous people have disagreed with that.


There is a misunderstanding I'd like to clear up. By saying "Calvinball" I am not being condescending. I am calling you out for trying to invalidate my entire line of reasoning by introducing a new rule to Occam's Razor, that it must be used in a comparison (between two hypotheses). This is a rule that is not there in the original formulation of "entities should not be multiplied beyond necessity". It is also not there in subsequent interpretations, including the interpretation you quote from wikipedia. It's a rule that you came up with, and it is a rule that you came up with for the sole purpose of helping you win the internet argument. So it's a Calvinball rule.

Besides which, as I said before, there do exist many hypotheses that we could be making, and the Razor is useful in pruning many of them away before we have to waste our time formulating them and exploring them in detail. For example, all the hypotheses that start with "Suppose there was a third person in the car" need not be considered, because "there was a third person in the car" is "multiplying entities without necessity", and so hypotheses that make such an assumption are less likely to be true than hypotheses that do not.

The misunderstanding I'm pointing out and that I advised you to read wikipedia to clear up, is that you seem to understand the Razor as requiring at least two competing hypotheses to be fully formed to the point that they could be written down, perhaps even in a formal language. There is no such requirement. That's your Calvinball rule, made up on the spot and for the sole purpose of winning the internet argument. Or, perhaps, only for the purpose of forcing me to waste my time by considering all the unnecessary hypotheses that start with "suppose there was a third person in the car".


This is so frustrating, because you're not seeing my point that, without a simpler hypothesis, it doesn't make any sense to dismiss hypotheses on the basis of the Razor alone.

And, again, from what everyone understands of how cars and Teslas work, the initial description doesn't have enough elements to be a complete story.

This is honestly what this whole conversation sounds like to me: Suppose we find a baby stuck high up in a tree.

Me: "How do you suppose it got there?

You: "No idea. All I know is that there's a baby in a tree. The baby got there."

"Perhaps someone put him there?"

"Occam's Razor states you shouldn't multiply entities unnecessarily. Another person would be another entity."

"Ok... I mean, maybe there was a flood and-"

"And maybe a flying pig put him there! What don't you get about Occam's Razor???"

"So how do you think he got there?"

"I have no idea."

"Ok... So, I think that someone putting him there is the most likely -"

"How do you not understand that Occam's Razor doesn't require comparisons!"

"WTF?"

At this point, I don't give a damn about what Mr Occam said. This nitpicking over the original formulation is completely irrelevant. All I've been trying to think from the beginning is what the most likely explanation is.

A third person may well NOT be the most likely, or simplest explanation, but since you haven't provided one this conversation has been pretty useless.

Occam's Razor says you shouldn't multiply entities unnecessarily. That last word, "unnecessarily," I realize in retrospect, is the one I've been focussing on this entire time, while you've been talking about "entities."

If the initial story isn't sufficient -- and the fact that this would be the first Tesla ever to drive at high speed in a narrow cul-de-sac with no one in the driver's seat suggests it isn't -- then we MUST multiply entities. Bugs, trickery, GPS malfunction, whatever. We can then argue about which entities are reasonable and which aren't, if we like, but that's separate from the fact that we need to multiply them.

(And, indeed, it is this word "unnecessarily" in the original formulation that does require simplER hypotheses, which is why that wikipedia article that you told me to read all the way through is chock-full of reference to "simpler" or "competing hypotheses." Without another explanation, you have no idea if the additional entities are necessary or not. All explanations require at least one "entity," except perhaps the Big Bang. The particulars of the case determine how many entities are necessary and sufficient.)


I am seeing your point. You say that there needs to be a comparison between at least two hypotheses and then the Razor will help us choose one if it's simpler than the other.

There are many competing hypotheses to choose from in this case. They include all the hypotheses that assume there was a third person in the car; and all the hypotheses that do not assume there was a third person in the car. Any hypothesis that does not assume there was a third person in the car is "simplER" than any hypothesis that assumes there was a third person in the car. Assuming there was a third person in the car is to multiply entities beyond necessity, i.e. as you say "unnecessarily".

You can check my comments in this thread to verify that I've said this many times. The comments that wonder about what "entity" means are yours, not mine.


> Any hypothesis that does not assume there was a third person in the car is "simplER" than any hypothesis that assumes there was a third person in the car.

Nonsense.

A hypothesis that there was a third person in the car is simpler than a hypothesis that there were only two people in the car and an advanced animatronic ice sculpture in the driver's seat which melted after the crash. (I remember my minute-mystery solutions.)

A hypothesis that there was a third person in the car is simpler than a hypothesis that there was a rare combination of a bug in the software, a GPS malfunction, a solar flare, and a paint spill that looked like a painted lane marker.

There are an infinite number of hypotheses that don't contain a third person that are more complex than the hypothesis that there was a third person.

> You say that there needs to be a comparison between at least two hypotheses and then the Razor will help us choose one if it's simpler than the other.

If you think that was my point, you didn't read my comment above at all. That was not my point. It was that, IF the known facts of a story aren't sufficient cause to result in its conclusion, then there is required to be at least one more cause than has been explained.


>> There are an infinite number of hypotheses that don't contain a third person that are more complex than the hypothesis that there was a third person.

Yes! You're right, and I'm very excited now because you seem to understand how it works.

Note that all those hypotheses that you bring up also "multiply entities beyond necessity" - so they are "more complex" than any hypotheses that don't, and we don't need to examine them, we can just prune them out without even having to state them.

So to correct what I've been saying above that was indeed too general, "any hypothesis that does not assume there was something else in the car is simpler than any hypothesis that assumes there was something else in the car" ("something else" meaning "something besides what was actually found in the car"). Please correct me again if you think I'm still wrong.

I think we're getting to something we can agree on now, yes?


Yes, but you haven't accepted the central premise of all my posts yet, which is that entities should only not be multiplied "without necessity."

If the initial known facts of a story aren't sufficient to describe an event (a baby in a tree, a car doing something it never has before) then the actual cause must involve additional entities, whether a person or a solar flare.


I agree it doesn’t look like there’s much ramp-up; I’d want to know why the people actually there sound so confident there was no 3rd party. Here’s the approximate location of the crash: https://goo.gl/maps/4Rk3DPdtnnRQuGd69


> https://goo.gl/maps/4Rk3DPdtnnRQuGd69

Do they hold contests for the most complicated roof building, in this area?


"Village Of Carlton Woods neighborhood is located in SPRING (77382 zip code) in MONTGOMERY county. Woodlands - Village Of Carlton Woods has 437 single family properties with a median build year of 2006 and a median size of 6,371 Sqft., these home values range between $789 - $3045 K" https://www.har.com/pricetrends/woodlands---village-of-carlt...

So, yes, sort of.


It seems it's not just limited to that area :)

https://mcmansionhell.com/


The WSJ article says they are almost 99.9% sure there was no driver. https://www.wsj.com/articles/fatal-tesla-crash-in-texas-beli...


They are pulling that number out of someone’s ass.


You're saying that Autopilot cannot be blamed for this failure, because you take it as an axiom that Autopilot doesn't cause this kind of failure? That's just an empty tautology.

Failures are failures, you can't just assume they don't happen on such a complex system.


Autopilot has not been observed to behave like this. That's not an "axiom", it's "evidence". I'm not saying it can't happen, I'm saying that the data we have suggests alternatives as more likely.


Doesn't Tesla record everything? I imagine there should be records somewhere that would resolve such questions.


From the article:

KPRC 2 reporter Deven Clarke spoke to one man’s brother-in-law who said he was taking the car out for a spin with his best friend, so there were just two in the vehicle.

It happened entirely within the cul-de-sac, and there were multiple witnesses.

Tesla Autopilot has killed two more people. (The entire rest of the self-driving industry combined has caused one pedestrian fatality, at night.)


I found this notable as well:

> Authorities said they used 32,000 gallons of water over four hours to extinguish the flames because the vehicle’s batteries kept reigniting. At one point, Herman said, deputies had to call Tesla to ask them how to put out the fire in the battery.


Fire departments are going to need to learn how to put out battery fires and have the relevant equipment available. If they are in the profession of putting out fires, at some point it's on them to know how to not waste 32,000 gallons of water as these vehicles become more common.


"Use lots of water" is actually the best practice for extinguishing these kinds of fires.

First of all, there are actually parts of the car that are on fire (plastics, fabrics, etc.) and may spread fire to the surrounding environment. You need to extinguish those.

Secondly, the battery system is not "on fire" in the classical sense. It's undergoing a self-sustaining thermal runaway. You pour as much water as you can on it to remove heat and break the chain reaction.
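As a rough back-of-the-envelope sketch (my own assumed numbers, not from the article or this comment): with a ~100 kWh pack and the 32,000 gallons of water reported, the water can absorb vastly more heat than the pack stores electrically, which is the point - the job is sustained cooling, not a one-time knockdown.

    # Back-of-the-envelope comparison; pack size and water volume are assumptions.
    PACK_KWH = 100                    # assumed usable pack size
    pack_joules = PACK_KWH * 3.6e6    # 1 kWh = 3.6 MJ -> ~0.36 GJ stored electrically

    GALLON_KG = 3.785                 # mass of one US gallon of water, kg
    water_kg = 32_000 * GALLON_KG     # ~121,000 kg of water reportedly used

    C_WATER = 4184                    # J/(kg*K), specific heat of liquid water
    L_VAP = 2.26e6                    # J/kg, latent heat of vaporization

    # Heat absorbed warming ~25 C water to 100 C and boiling all of it off:
    absorb_joules = water_kg * (C_WATER * 75 + L_VAP)

    print(f"pack energy       ~{pack_joules / 1e9:.2f} GJ")
    print(f"water can absorb  ~{absorb_joules / 1e9:.0f} GJ")

The gap is so large because the water isn't there to soak up the pack's energy once; it's there to hold a damaged pack below runaway temperature for hours while it keeps trying to re-ignite (and the runaway releases chemical energy beyond the stored charge, so this is only a rough comparison).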


> You pour as much water as you can on it to remove heat and break the chain reaction

...For long enough to remove the immediate danger of the fire to the surrounding people. Such a damaged fully charged battery will probably undergo thermal runaway and reignite repeatedly as soon as it stops being cooled. The best bet is to just keep the fire under control and not let it expand while it burns itself out.


I had a safety course and we were supposed to pour salt on the batteries. The extinguisher tubes were yellow instead of red.


The water wasn't wasted. It was used to put out the fire.

We're talking about $50 worth of water. Negligible compared to the overall costs.


I think it's closer to $250 judging by https://www.midlandtexas.gov/505/Current-Water-and-Sewer-Rat... (the first set of prices for water in Texas I could find, presumably representative of scarcity etc.)

That assumes household consumption. It could be as high as $420 if the highest marginal rate applies.

Still fairly inconsequential overall.
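For what it's worth, here's a tiny sketch of that estimate with purely hypothetical tiered rates (I haven't pulled the actual Houston-area schedule; only the shape of the calculation matters):

    GALLONS_USED = 32_000

    def tiered_cost(gallons, tiers):
        """tiers: list of (block size in gallons, or None for open-ended; $ per 1,000 gallons)."""
        cost, remaining = 0.0, gallons
        for block, rate in tiers:
            take = remaining if block is None else min(remaining, block)
            cost += take / 1000 * rate
            remaining -= take
            if remaining <= 0:
                break
        return cost

    # Hypothetical rates: first 10k gallons at $5 per 1,000, the rest at $13 per 1,000.
    print(f"${tiered_cost(GALLONS_USED, [(10_000, 5.00), (None, 13.00)]):.2f}")  # $336.00

Whatever the real tiers are, it lands in the low hundreds of dollars, which is the point being made.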


How do you put them out?


From their own manual:

> FIREFIGHTING
>
> USE WATER TO FIGHT A HIGH VOLTAGE BATTERY FIRE. If the battery catches fire, is exposed to high heat, or is generating heat or gases, use large amounts of water to cool the battery. It can take approximately 3,000 gallons (11,356 liters) of water, applied directly to the battery, to fully extinguish and cool down a battery fire; always establish or request an additional water supply. If water is not immediately available, use dry chemicals, CO2, foam, or another typical fire-extinguishing agent to fight the fire until water is available.
>
> Apply water directly to the battery. If safety permits, lift or tilt the vehicle for more direct access to the battery. Apply water inside the battery ONLY if a natural opening (such as a vent or opening from a collision) already exists. Do not open the battery for the purpose of cooling it. Extinguish small fires that do not involve the high voltage battery using typical vehicle firefighting procedures.
>
> During overhaul, do not make contact with any high voltage components. Always use insulated tools for overhaul. Heat and flames can compromise airbag inflators, stored gas inflation cylinders, gas struts, and other components which can result in an unexpected explosion. Perform an adequate knock down before entering a hot zone.
>
> Battery fires can take up to 24 hours to extinguish. Consider allowing the battery to burn while protecting exposures. After all fire and smoke has visibly subsided, a thermal imaging camera can be used to actively measure the temperature of the high voltage battery and monitor the trend of heating or cooling. There must not be fire, smoke, or heating present in the high voltage battery for at least one hour before the vehicle can be released to second responders (such as law enforcement, vehicle transporters, etc.). The battery must be completely cooled before releasing the vehicle to second responders or otherwise leaving the incident. Always advise second responders that there is a risk of battery re-ignition. Second responders may choose to drain excess water out of the vehicle by tilting or repositioning it. This operation can assist in mitigating possible re-ignition.
>
> Due to potential re-ignition, a Model S that has been involved in a submersion, fire, or a collision that has compromised the high voltage battery should be stored in an open area at least 50 ft (15 m) from any exposure.
>
> Warning: When fire is involved, consider the entire vehicle energized. Always wear full PPE, including a SCBA.


Water, in general, isn't specific enough. Hot water definitely won't help. It's low temperature, high thermal capacity / high latent heat of vaporization extinguishants that are best. The FAA has a whole protocol and training materials on how to put out Li-ion fires.


The latent heat of vaporization of water is so high that the temperature of liquid water can’t possibly make a big difference (specific heat is 4.18 kJ/kg/K, latent heat of vaporization 2,260 kJ/kg).


I meant latent heat of fusion. HNing too tired.


I’ve never seen a hot water fire truck. Are they common in your area?

eta: Even hot water still has a high latent heat of vaporization.


Lol. It's chilled water/ice water vs. ambient because water doesn't stop NMC fires directly.


German cities are in the middle of equipping all fire departments with containers that they can flood with water to submerge EVs in. The idea is to let the EV burn, cool it with water, and then tow it under supervision of a fire truck to a fire station to put the EV into water for 1-2 days. Compared to ICE vehicles it's very complicated, and it ties up a fire truck over a span of multiple hours instead of 1 or 2.



Tesla has a relatively comprehensive set of first responder guides on their website. They are all PDFs, which might not help people in the field, but they seem like a good thing to print out and throw into a binder in every truck. Honestly, if I were Tesla I would print, bind, and ship these to every fire dept in the country.

https://www.tesla.com/firstresponders


I mean the manual pretty much instructs to do exactly as it was done here:

> USE WATER TO FIGHT A HIGH VOLTAGE BATTERY FIRE. If the battery catches fire, is exposed to high heat, or is generating heat or gases, use large amounts of water to cool the battery. It can take approximately 3,000 gallons (11,356 liters) of water, applied directly to the battery, to fully extinguish and cool down a battery fire; always establish or request an additional water supply. If water is not immediately available, use dry chemicals, CO2, foam, or another typical fire-extinguishing agent to fight the fire until water is available.

> Battery fires can take up to 24 hours to extinguish. Consider allowing the battery to burn while protecting exposures.


I have no idea. I'm also not a firefighter. As electric cars become more common, we'll need to have a solution that is not "spend 4 hours trying and then call the manufacturer."


Li-ion NMC and similar battery chemistries shouldn't be used in safety-critical applications or near human occupancy.


Elon said he wanted to put rockets on them in the future... I wonder how many more hours we would need to put out such a fire hazard?


He said cold gas thrusters aka compressed air. Not rockets exactly.


It's not water that puts it out, but low temperature to stop thermal runaway like the FAA advocates for Li-ion battery fires. <0 C salt water would be best.


Maybe fire departments will need to start dispatching liquid nitrogen tankers to Tesla fires now.


Liquid helium. ;)


The unsatisfying response in this case might just be: some fires can't be put out. At least, not quickly enough to matter for anyone inside the car. The priority in cases like this might just be to remove occupants from inside the car if possible.

According to Tesla's information, battery fires can be extinguished by using a lot of water. So, it is possible to do with an ordinary fire truck (and self-contained breathing apparatuses so firefighters can get near a burning car without breathing the fumes), but it might actually be a rare case where putting the fire out is consequential to the outcome for the occupants: I'd imagine in most cases they either got out in time or it was already too late by the time putting the fire out is a plausible option. Maybe I'm wrong about that, though.


Isn't removing occupants of any burning enclosure (be it a car, building, boat, ...) always the first priority? I can't imagine any case where a fire department would pour water on a burning car without first getting people out of it.


I would assume that they'd try to get people out first.

My comment is more addressed to people such as myself whose natural inclination is to regard a burning car as a bad situation that the fire department aught to be able to quickly remedy, and to suggest that most of the time putting out the fire may be a low priority.

I could imagine the fire department dousing a car if it was already too hot to get people out of safely, but I'm not a fire fighter and I don't know what they'd normally do in that situation.


Well how do you put out a battery fire? What new materials does every single fire station in America need to acquire? What new skills/training do local firefighters need? Who's going to pay for all this?


Ice water, or something even colder. Water itself isn't what puts out a Li-ion fire, it's ending thermal runaway.

The people will pay for it because it's necessary and times have changed.


Why would 0c water be better in a meaningful way compared to 25c? Most cooling is from the steam phase change, right?


Ice, dry ice, liquid nitrogen would be even better. It's not about water putting out the fire directly, it's getting the Li-ion battery below thermal runaway to stop the self-immolation.


Maybe because it's 25C cooler than 25C water. Takes more energy to convert to steam. I guess.


My point is:

Water: a 1 C change is 4184 J/kg (1 kcal/kg)

Water to steam 2260 kJ/kg.

So 25c cooler is 25 * 4.184 kJ extra, which is:

25 * 4184 / (2260k + 75 * 4184) = 104600 / 2573800 => 4% more energy than 25c to steam.

And then you need a refrigerator to cool tons of water, instead of just carrying more water.
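
A minimal sketch of the same arithmetic in Python (assuming every kilogram of water is heated to 100 C and fully vaporized; the constants are standard textbook values, nothing specific to battery fires):

    # Extra heat absorbed by 0 C water vs 25 C water, per kg,
    # assuming it is heated to 100 C and fully converted to steam.
    SPECIFIC_HEAT = 4184         # J/(kg*K) for liquid water
    LATENT_HEAT_VAP = 2_260_000  # J/kg, vaporization at 100 C

    def heat_absorbed(start_temp_c):
        """Total heat (J) absorbed per kg from start_temp_c to steam at 100 C."""
        return SPECIFIC_HEAT * (100 - start_temp_c) + LATENT_HEAT_VAP

    warm = heat_absorbed(25)  # ~2.57 MJ/kg
    cold = heat_absorbed(0)   # ~2.68 MJ/kg
    print(f"extra absorption from chilling: {(cold - warm) / warm:.1%}")  # ~4.1%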


No wonder my steam shower is the most power-hungry appliance in my place at 10 kW.


Hah, hacker news...


Or they will attempt to pass on the cost to EV manufacturers and/or owners.


How novel. I wonder when they’ll pass on the cost of smog, climate change, emissions, etc to ICE owners ;)


They do, in the form of taxes on gasoline. (Whether the government uses that tax revenue to address those issues is another question)


At least at the federal level the gas taxes haven't risen in about 25 years... [1]

[1]https://www.npr.org/2018/10/05/654670146/its-been-25-years-s...

I understand why they're not raising it, though.


Vehicle registration fees for NMC EVs.


They really should


Maybe they can learn some techniques, but car manufacturers must also make the cars more ready for those kinds of situations. It won't be the first time more "dangerous" vehicles are created and technical solutions have to be found (think of the liquefied petroleum gas design, which had to have a safety valve).


I would design the battery elements with little robotic legs, so that when thermal runaway occurs, they would disassemble and run away, dispersing from the hot elements to cool down individually. </me mode="overengineering">


Brings a whole new meaning to thermal runaway


Yes, perhaps run into a house or school. Or orphanage.


lol or to your local tire storage facility.. oil refinery... nuclear power plant... lumber yard with thousands of cut trees waiting for processing... paper mill... there’s lots of good options to pick from


Alternatively, perhaps this troublesome aspect of their design should be changed before they become more common?


What do you propose? Damaged batteries short and burn, it's the way it works. The energy needs to go somewhere. Damaged fuel tanks leak fuel much (MUCH) faster and burn much hotter and more dangerously. Do you demand that that be fixed before we allow ICE engines to become more common?

This whole "we couldn't put out the fire" nonsense is click bait. Battery fires burn longer, and that's important to know and requires different techniques to manage. But objectively they are safer than gas fires. Period. There is no serious debate on that point.


I think Tesla's car design is, for the most part, reasonably safe (their driver assist technology is debatable, but that's a side issue). However, there are design trade-offs they made to favor range and power over safety and weight. For instance, the cells they use are substantially more dangerous than lithium iron phosphate (LFP) cells. The latter have lower energy density, so they aren't the best to use in a range-optimized car. (I believe Tesla is now buying LFP cells from CATL for some of their model 3s. Not sure if those are just for the China market or if it's available elsewhere.)

I don't know if they could have done more with their batteries to prevent damage from causing thermal runaway. Maybe they could isolate modules from each other better, or use the built-in liquid cooling system in clever ways. (Maybe use the coolant to boil a reservoir of water, so all the excess heat gets used to make steam?) Maybe add heat shielding between the battery and the rest of the car. Maybe design the battery to detach from the car if it gets too hot.

At any rate, I don't think cars can be made to be perfectly safe, and a modern Tesla is reasonable, especially compared to a gas-powered car. However, the possibility always remains of improving upon Tesla's current design along every desirable vehicle attribute, including safety.
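
For a rough sense of that density trade-off, here's a small sketch with illustrative cell-level energy densities (ballpark public figures, not Tesla's or CATL's actual specs):

    # Approximate cell mass needed for a given pack capacity, two chemistries.
    # Energy densities below are illustrative ballpark values.
    PACK_KWH = 75
    DENSITY_WH_PER_KG = {"NMC/NCA": 250, "LFP": 165}

    for chem, wh_per_kg in DENSITY_WH_PER_KG.items():
        cell_mass_kg = PACK_KWH * 1000 / wh_per_kg
        print(f"{chem}: ~{cell_mass_kg:.0f} kg of cells for a {PACK_KWH} kWh pack")
    # NMC/NCA: ~300 kg, LFP: ~455 kg of cells -- roughly 150 kg extra for the
    # safer chemistry, which is the range/weight penalty being traded against.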


I propose that Tesla may need to rethink "the way it works" rather then chalk it up to the price of doing business.


You think Tesla needs to rethink the fact that... batteries store a lot of energy? I really think you're missing the point.

Going full-on didactic on you: Vehicles have huge energy requirements to move them around. To meet those requirements they need to store energy on-board in some manner. This creates known failure modes where damage to the vehicle releases that energy in an uncontrolled way. That's bad. But it's completely unavoidable given the constraints of the system.

You seem to want Tesla to do the impossible and invent batteries that don't burn. Which seems ridiculous, given e.g. Ford's nearly-century-long failure to invent gasoline that doesn't burn.

The question you should be asking is "Are battery fires safer than gasoline fires?". And... duh. Yes, they are. And it's not even close.


Can you share the data that shows that car battery fires are "safer" than car petrol tank fires?


No no no, logic works the other way around: you are the one claiming that this is a new and more dangerous technology. If you want to do that, you are the one who needs to bring evidence.

I'm simply arguing from first principles: car batteries store less energy than fuel tanks and release that energy slower and over a longer period of time in a fire. Ergo, they are safer for pretty obvious reasons.
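
To put rough numbers on that first-principles argument (a back-of-envelope sketch; the tank size, pack size, and energy density are illustrative assumptions):

    # Chemical energy stored in a full fuel tank vs a large EV battery pack.
    GASOLINE_MJ_PER_LITER = 34.2   # approximate energy density of gasoline
    TANK_LITERS = 57               # ~15 US gallons, an assumed tank size
    PACK_KWH = 100                 # an assumed large EV pack

    tank_mj = GASOLINE_MJ_PER_LITER * TANK_LITERS  # ~1950 MJ
    pack_mj = PACK_KWH * 3.6                       # 1 kWh = 3.6 MJ -> 360 MJ
    print(f"tank ~{tank_mj:.0f} MJ vs pack ~{pack_mj:.0f} MJ "
          f"({tank_mj / pack_mj:.1f}x more in the tank)")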


Most car fires do not actually involve the entire gasoline tank catching alight; as the gas tank is well-protected and not particularly near sources of ignition. An engine bay fire is much more common.

If gasoline is spilled, foam works well to extinguish it and keep it from igniting.

Once a gasoline fire is out and cool, it is going to stay that way.

Lithium-ion battery fires are self-sustaining thermal runaways. You cannot put out such a battery fire by smothering it; it does not need oxygen from the air. All you can do is try to keep it cool by running water on it.

Even after such a fire seems cool, it can reignite unexpectedly.

The actual gross volume of energy is not necessarily what makes fighting a fire dangerous or not; it's the unpredictability. Firefighters may be more worried about compressed gas-strut explosions (from hatches, hoods etc) than they are about the gas tank exploding.


> You think Tesla needs to rethink the fact that... batteries store a lot of energy?

Just as the environmental externalities of fossil fuels need to be internalized, so do the public safety externalities of the kinds of batteries in use, here. If its going to be a “price of doing business”, the right parties ought to pay the price.

Then, whether or not it is sensible to mitigate in manufacturing will be handled by the properly-aligned incentives.


But what are the externalities?! You and the upthread poster aren't elucidating any. All we have at hand is the linked article, which is... a fire. Batteries burn, in a safer and more controlled way than gasoline.

How is that a bad thing? What are you asking for battery or car manufacturers to do that they aren't already doing simply by replacing existing more-dangerous technology?


Where are you getting this bit about batteries being safer from? So far you haven't supported this claim in any way.


Reducing the battery capacity would make things better. We don't need such massive battery capacity for daily use but we buy them to support rare use cases.


Yet another absolutely deadly device we are introducing to our neighbourhoods because people hate public transport so much. This is insane.


Because what powers public transport isn't dangerous at all either...


LOL, what? Just wait until you find out about gasoline.


Some "authorities" they are. Took me about 5 seconds to find the answer. Not one person thought to Google it?


Maybe in a situation like that, going by the first search engine result isn’t the best idea?


I would expect professionals to have this information in cache. I’ve been driving a production BEV for ten years, this isn’t exactly new-fangled.


Better than going with the first result out of your head (pour water on it) which is what they did.


Which turns out to be the right answer.


The first result for my query says "copious amounts of water are recommended as the best means to extinguish a high voltage vehicle fire." What did you find?



I'm guessing those strategies work well for phones, but maybe not for cars, which contain absolutely enormous amounts of battery and are possibly wrapped around a tree at the time of the fire.

Here's what FEMA has to say (https://www.usfa.fema.gov/training/coffee_break/061819.html)

Secure a large, continuous and sustainable water supply — one or more fire hydrants or multiple water tenders. Use a large volume of water such as master stream, 2 1/2-inch or multiple 1 3/4-inch fire lines to suppress and cool the fire and the battery.


Right. A fire extinguisher is a "first aid" response to a fire. The kind of advice that's relevant to fire extinguisher usage is predicated on a small fire; general advice is that any fire larger than a small trash can is too big to fight with an extinguisher.

The fire department plays from a different rule book.


So it seems pretty possible that they followed the standard procedure and it didn't work?


A lithium battery fire is considered a class B fire, so pouring water on it is not standard procedure. They should have known that in the first place, but if they had googled it, they would have found out very quickly anyway.


Tesla recommends spraying the battery with copious amounts of water. Read all about it:

https://www.tesla.com/firstresponders

Lithium-ion is different than lithium…

(my other comment is a question because I haven't seen any information about what procedures they did follow)


My point is:

1. firefighters should know this already

2. if they didn't they could have googled it like you just did instead of having to wait around to get in touch with someone from Tesla


Yes, my point is that a possible explanation for using 32,000 gallons and calling Tesla is that spraying 3,000 gallons on the battery did not successfully extinguish it.


What you are overlooking is that the correct procedure as documented by Tesla, which is what the firefighters had been doing for four hours, did not stop the fire.


Sounds like they did already know this and that's why they did it. If they googled it, they might have got the wrong answer like you did. So your criticisms aren't valid.


Lithium-Ion batteries are class C fires: https://www.usfa.fema.gov/training/coffee_break/061819.html

Water is used on electric car fires, because you need to cool the thing down and water is the best for that.



I’m confused how this is possible. I have a 2020 Tesla and you have to turn the wheel every couple of minutes while Autopilot is on otherwise it will just turn off after beeping a ton.


Most of Tesla’s ass-covering mechanisms can be circumvented easily. The steering wheel nag in my Model X is easily fooled by any kind of asymmetric weight on the wheel. There are other mechanisms, like seat belt and seat weight monitoring, but these can be easily circumvented as well.

To Tesla’s credit, they have reportedly started using gaze tracking on cars equipped with a cabin-facing camera, which is much harder to circumvent. Literally every other automaker selling driver-assist systems to the public uses gaze tracking.


If you're actively circumventing your car's safety features to use it in out-of-spec ways it's hard to argue that an accident is anything but your own fault.

It's like pulling high negative G in a Cessna 172 and then blaming the manufacturer when the wings fall off.


It is the user's fault,

but to be honest, when left to its own devices the car drove at high speed into a tree. It's not a subtle obstacle. If the car can't figure out what is going on in this scenario, that's not a good look.

Driving autonomously is hard, but not running into static obstacles seems like one of the more basic things.


If you turn off autopilot and put a brick on the accelerator the Tesla may also drive into a tree! Or any car made in the last 100 years for that matter.


> when left to its own devices the car drove at high speed into a tree.

Was it "left to its own devices" though?

You can't put the car in drive from the back seat.

Autosteer is limited to 5MPH over the speed limit unless you are on a divided highway.

The car seems to have accelerated faster than it normally accelerates under autopilot.

Something doesn't add up here.


Was it confirmed that autopilot was on? Hopefully it was wirelessly reporting its black box info since it was burned so badly.


What if the Cessna salespeople and marketing were all doublespeak about high G flying (allegedly the salespeople on the floor go way beyond doublespeak sometimes)? Yeah, the pilot would be to blame, but in that case Cessna would also be to blame.


If you actively circumvented multiple mechanisms that prevented you from making those maneuvers, then yes, you bear full responsibility.


> What if the Cessna salespeople and marketing was all doublespeak about high G flying

The driver deliberately set out to sabotage mechanisms which are there to protect you.

A more apt comparison would be if the pilot disabled the stall warning on the aircraft. Then attempted to do low flying maneuvers at the threshold of the aircraft's stall speed.

If someone deliberately disables safety features, they hold the bag.


You have to actively ignore multiple safety warnings and deliberately bypass failsafes.

The big thing I have to wonder about is how on earth did they get the car going so fast on residential streets? Autopilot is hard-coded to limit you to 5MPH over the speed limit. I know you can push the gas down with a brick or something stupid like that, but doing that disables the car's ability to slow down or stop automatically which is a pretty fundamental part of what Autopilot does and how it operates.

So many people look at this as if the driver pushed a simple bypass button to get around a safety feature, but getting a Tesla going fast enough to wrap itself around a tree on a residential street is not simple at all.


Tools to fool Tesla's minimal hands-on detection have been available for years. Amazon even sells them[1]. Anyone buying them should have their license suspended as a precautionary measure.

[1] https://www.amazon.com/QCKJ-Steering-Autopilot-Assisted-Auxi...


It is perfectly possible to nudge the steering wheel from the passenger side.


Tesla has not thought of incorporating a sensor to ensure that someone is actually behind the wheel and not at the side of the wheel? I would find that unlikely.


Well, you'd be mistaken. They only use the torque sensor in the steering wheel. There've been news articles with Tesla engineers asking for a better system, and Musk's explicit response was that technology like eye tracking would be ineffective.


AutoPilot will also disable if you unbuckle your seatbelt or lift your butt off the seat.


It's easy to fool, and it appears these two folks were doing so deliberately.


Totally agree with you. I own a Tesla and I am definitely not a big fan of how they communicate or treat their customers (customer service is just horrible). But in the present case, this is not a Tesla issue, just an issue of two irresponsible people. The car would have beeped thousands of times before crashing (at least very loud beeps for not applying force on the steering wheel, and other loud beeps because the car was going off road and hitting a tree - obstacle detection).


Yeah, it seems to me that this sort of safety measure only has to be strong enough to force people to go out of their way to circumvent it: anyone who circumvents it has ipso facto displayed malicious intent and should be liable for any damage caused.


I was thinking the same.. the driver had to be up to some kind of shenanigans. The front end damage from the tree didn't look too bad. Maybe the driver was drunk and fled or got in the back seat to avoid arrest?


>or got in the back seat to avoid arrest

and then purposely burned to death to really sell the story? that's dedication!


Right.. there could be a few scenarios where it could have happened, but I highly doubt that he would have gone to the back seat if the car was engulfed in flames. E.g. he was really drunk, the fire didn't start until later, he buckled himself in to wait for the cops, dozed off and succumbed to smoke inhalation. You really don't think one of the first things a drunk Tesla driver who was in an accident would think of would be to blame the automated driver?


Reach over and turn the wheel from the passenger side?

This one sounds like an ID10T error.


How the heck was the car driving itself with nobody in the drivers seat? Surely that's a minimal requirement the car must pass before it engages autopilot.


It's not physically possible. The car can't be operated with no one in the driver's seat. Autopilot shuts off automatically. Further, it was in a cul-de-sac, some place autopilot doesn't operate.


I've been wondering about that too.. could have sworn that was the case. The front end damage from the tree looks minimal, have been wondering if the driver was drunk, the car crashed, he didn't think it was going to burst into flames, so he got in the back seat and buckled in to try to avoid getting arrested. Or maybe they were screwing around and the driver changed seats, then it crashed?


That seems fairly plausible, the driver got into the backseat hoping that it could be claimed they weren't actually driving.


Also:

> Authorities said they used 32,000 gallons of water to extinguish the flames because the vehicle’s batteries kept reigniting. At one point, Herman said, deputies had to call Tesla to ask them how to put out the fire in the battery.



lol:

> Tesla’s recommendation is to fight a Tesla Energy Product fire defensively. The fire crew should maintain a safe distance and allow the battery to burn itself out.


It's not physically possible. The car can't be operated with no one in the driver's seat. Autopilot shuts off automatically.


Rapture.


New information released says it was a 2019 Model S and the 2 men were talking about the car’s features with their wives before they took it for a test drive. Police believe the vehicle was on autopilot and traveling at a high rate of speed at the time of the crash.

https://abc7ny.com/amp/tesla-crash-houston-fatal-car-autopil...


Some details in the article:

1) The police stated "no one was in the driver seat at the time of impact" of the Model S

2) Two people died in the subsequent fire from the crash into a tree

3) "it took firefighters nearly four hours and more than 32,000 gallons of water to extinguish the fire."

4) "At one point, crews had to call Tesla to ask how to put the fire out, Herman said."

5) "The owner, he said, backed out of the driveway, and then may have hopped in the back seat only to crash a few hundred yards down the road. He said the owner was found in the back seat upright."

Another link with some additional details: https://www.khou.com/article/news/local/tesla-spring-crash-f...


Is there a particular reason why they had to fight the fire instead of waiting for it to put itself out? IIRC this is what Tesla itself recommends if there is no danger of spreading to nearby objects.


People are commenting on this story without the benefit of all of the necessary facts.

1) Autopilot will not activate without lane lines on the road

2) FSD will not activate without lane lines either

3) The car was not equipped with FSD software

4) There were no lane lines on the road where this happened

https://twitter.com/WholeMarsBlog/status/1383855271710056460


So what's the conclusion here? Who needs to do the double check? The coroner, because the event did not happen and thus the passengers are alive and stationary due to the lack of a driver, or the fire department, because there was a driver in the driver's seat and they missed it?


To me, there's lots of possibilities. Driver couldn't open the door after the crash, so moved to either the passenger or back seat to try and escape...but failed. Or, third person was driving, fled the scene on foot. Both kind of unlikely. But how likely is it that they circumvented the seat belt, put a weight on the seat, and kept torque on the wheel, and then engaged autopilot?

Whatever happened here is going to be something weird and unusual.


Of course a full investigation is needed but it seems like a good case to use Occam's razor.

People with Teslas often use defeat devices to overcome the measures against unsafe use; there are many videos on YouTube of people doing just that. It's commonly known that people attach small weights etc. in order to do that, pretty basic stuff. Also, software often has bugs, and Tesla is known to release software under the "not ready for daily use, use at own risk" pretext. There are also previous known crashes due to the failure of Autopilot, as well as videos showing failures of Autopilot where crashes were avoided thanks to the human driver sitting in the driver's seat.

The other alternative is that these people acted unreasonably, crashed the car, failed to exit but changed seats before perishing, and the professionals who responded and investigated the situation couldn't tell that.

Which one is more likely to have happened?


>Which one is more likely to have happened?

No idea. "Professionals couldn't tell in the immediate aftermath" isn't that odd. Especially given the state of the vehicle.


Nitpick: there's a difference between "will not activate" and "will deactivate if it was already active". (For the record, my opinion is that this was entirely the driver's fault and not Tesla's fault at all.)


Autopilot also supposedly wants the seat belt buckled and weight on the seat, right?


Article says it’s unconfirmed whether the car was in auto-drive. Part of me (without any knowledge) thinks someone was showing off the auto-drive and turned it off accidentally. But more details will come out I hope.

One thing in particular sticks out as concerning: the fire service did not know how to deal with the fire.

That’s not something specific to Tesla; Tesla does not make all battery-powered cars. The fire service should know how to suppress electric-vehicle battery fires.


I once unhooked my belt to take off my jacket while on autopilot. It immediately started screaming at me, disabled autopilot and started slowing down gradually.

I've also heard it uses the seat sensor to do the same. Unless they've found a way to bypass multiple safety features, then the car wasn't in autopilot.


Funny how it turned a disengaged safety belt (something endangering the occupant) into something endangering people not in that particular vehicle.


I'm not sure what you mean. I was dangerous to myself, yes. So then the car pulled over and started to gradually stop on the shoulder. And then it would not let me re-enable the autopilot because I couldn't be trusted.

Not sure where endangering other people comes in. If there was someone standing on the shoulder of the highway it would avoid them obviously.


Cars on the side of the road, or slowing down for no apparent reason, are a hazard to other traffic.


It stops applying power to the wheels automatically if you unbuckle or lift yourself off the seat, once AutoPilot goes into the “Take over immediately” state.

Of course the human who is actually driving can re-apply power at any time.

The car will not pull over unless you leave it in the full-on alarm state for a significant amount of time. The alarm is pretty loud. It’s not a state a driver would leave the car in unless they were incapacitated or doing it intentionally.


So what's a better alternative? Autopilot continuing to drive with the driver's seat belt unbuckled, or have the car slow down slowly to a stop?

Braking hard for no apparent reason is dangerous; slowing down gradually to a stop is not.


What's the safer alternative?


Not having this feature in the first place. Either that, or do it right.


Autopilot immediately switches off if it doesn’t sense pressure in the seat, which would result in tons of beeping and the car slowing down and moving to the side of the road.


Tested this. Unbuckling turns it off, but lifting off seat doesn't trigger anything. Obviously didn't try actually not being in seat completely. More here: https://twitter.com/Singularitarian/status/13841868748104417...


Lithium-ion fires are hard to extinguish, especially with thermal runaway. There are flame-retardant products that can extinguish lithium-ion fires, and Class D extinguishers can be used.

I would guess the fire crews that responded were not equipped with this type of extinguisher.


Lithium-ion fires are not Class D. Class D is for lithium metal fires, and lithium-ion batteries don't contain lithium metal.

To put out a Tesla fire, you just use water. Lots and lots of water. Which is exactly what these guys did.


> “[Investigators] are 100-percent certain that no one was in the driver seat driving that vehicle at the time of impact,” Harris County Precinct 4 Constable Mark Herman said. “They are positive.”

This would only be possible if they were using the autopilot feature.

> the fire service did not know how to deal with the fire.

Tesla's advice is "let it burn":

> Tesla’s guidance suggests it’s better to let the fire burn out than continuing to try to put it out.


> This would only be possible if they were using the autopilot feature.

I thought autopilot had a safety feature to prevent no-driver operation, though there is a “SmartSummon” feature intended for parking lots which does not (but requires continuous press of a fob button.) So, there’s no way this should be possible, absent a major malfunction to even allow self-driving in the reported condition.


According to multiple Tesla owners in another thread, Autopilot will continue to drive as long as the seatbelt is buckled. There doesn't need to be weight in the driver's seat.


Reminds me of a separate incident, where the fire department ended up dunking the Tesla in a container of water to prevent re-ignition: https://electrek.co/2019/06/01/tesla-fire-supercharger/


There are a bunch of more pertinent things to say, but I was struck by just how thoroughly demolished that car looked, and by the fact that it took 32k gallons of water to put the fire out because the battery kept re-igniting. I'm no expert on what typical high-speed crashes look like, but this seems ... problematic.
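
For scale, a quick back-of-envelope on the reported figures (assuming the roughly four-hour duration mentioned in the article; this says nothing about peak flow, only the average):

    # Average flow rate implied by the reported water usage.
    GALLONS_USED = 32_000
    HOURS = 4                  # reported duration, approximate
    LITERS_PER_GALLON = 3.785

    avg_gpm = GALLONS_USED / (HOURS * 60)
    print(f"~{avg_gpm:.0f} gal/min average "
          f"(~{avg_gpm * LITERS_PER_GALLON:.0f} L/min)")
    # ~133 gal/min -- on the order of what a single handline delivers, so the
    # striking part is the duration, not the instantaneous flow.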


These are good points. I am not qualified in this area, but it would be interesting to hear if 32k gallons of water is a lot to extinguish a total wreck fire. Is water the best thing to use for these battery fires? (Hint: Is a foam product more suitable, but more expensive?) Also: Do the batteries in hybrid cars have similar issues?


Something in this story doesn’t add up. There is no implementation of AutoPilot on Tesla cars that doesn’t require intervention by the driver every 15 seconds. Perhaps the driver undid their seat belt and reached behind to get something and was then thrown from his seat elsewhere?


TFA doesn't agree:

> The company’s cars only check that attention with a sensor that measures torque in the steering wheel, though, leaving room for misuse

So they could have held the steering wheel from the passenger seat.

> Tesla CEO Elon Musk has rejected calls from Tesla engineers to add better safety monitoring when a vehicle is in Autopilot, such as eye-tracking cameras or additional sensors on the steering wheel, saying the tech is “ineffective.”

In my country there was recently an incident where a Tesla on Autopilot crashed with the driver fast asleep behind the wheel. I'd say regulation making it mandatory to install foolproof safety tech ensuring the driver is actively observing the traffic is needed, and fast. And this tech is trivial compared to even semi-autonomous driving, no matter what Musk is claiming.


> So they could have held the steering wheel from the passanger seat.

We don’t have the diagnostic information that indicates AP was engaged. So, which is more likely: driver error and no seatbelt, or driver sitting in the passenger seat? I contend the latter is extremely unlikely, since there would be basically no benefit to doing this given the requirements of AP currently.

It’s possible the driver was pulling a stunt. In which case, the deaths are attributable to deliberate misuse, no different than “ghost riding the whip”


I live in this area, which is highly wooded and full of curvy roads. It would be a cinch to crash into a tree here in 2 or 3 seconds. It wouldn't take 15.


When automation does a 95% job, sometimes it isn't worthwhile using it because of the overrides required for the extra 5%. If you require full concentration while using driving assist, it might actually be easier to just drive the car regularly or you'll struggle to maintain that ability to intervene immediately when required.


When you deliberately rig your car so the driver seat can be empty, yeah those extra 5% suddenly become a very tall mountain.

Sane people can work with the software so they complement each other. People make mistakes, the software makes mistakes. Both together make fewer mistakes.

If you start watching shows or playing games on your phone, or sleeping, that won't happen.


Let's say your car has a problem with a slip road in a particular bit of road you are about to hit in about 30 mins. If on autopilot, it will start to take the slip road, but confuse the hard shoulder with a lane, crashing into the barrier. If you have spent the last 30 minutes trying to stay awake because you have practically no input into the driving, you might not be alert enough to avert the accident. If you've been driving, not only would you be alert enough, but you also wouldn't have been in that situation.


So basically like a rich parent supporting their children often leads to them being unsuccessful in life, become drug addicts and so on. I know, bit of a jump in argument there. But that's where my train of thought autopilot brought me.


Yes, that's probably a good analogy. I think the problem is the current systems require being able to take over the controls immediately. If this is required on more than an insignificant number of occasions, with potentially fatal outcomes, it may be an unreasonable thing to ask of a human who hasn't been driving recently, is not engaged with the road situation, and is likely to be distracted / not concentrating / getting sleepy.


Maybe, but let's be clear that right now investigators think this driver was in the backseat. These are interesting conjectures but not at all related to the crash at hand.


Oh sure, if that's true, then this case doesn't really show anything. But, I was talking more generally about the term 'autopilot' and its benefits. My example used a 2018 crash where the autopilot half took a slip road [1]

[1] https://www.carthrottle.com/post/tesla-defends-autopilot-aga...


Two men ... nobody driving.

This is a "hold my beer and watch this" accident.


Feels like a stupid reenactment of this Simpsons scene from two decades ago: https://www.youtube.com/watch?v=29ToNp1MY3c


On the face of it, it appears we're putting waaaay too much trust in technologies that aren't anywhere near the levels of autonomy we expect them to have.

Whether that's an issue of the relevant car companies not communicating with their customers properly; or their customers being ignorant - or worse still, wilfully negligent, is probably something for the courts to decide.

Either way, it seems to me an extended period of reflection over what we as human beings are currently doing to one another with these new-fangled driving technologies, is needed.


"Authorities said they used 32,000 gallons of water over four hours to extinguish the flames because the vehicle’s batteries kept reigniting. At one point, Herman said, deputies had to call Tesla to ask them how to put out the fire in the battery."

Wow. Have there been other electric car fires which are so difficult to put out?


I was under the impression they are all this difficult to put out.

But maybe it depends on how many cells ruptured. Hitting a tree dead center is going to take out a lot of cells I guess.


This is even worse than the deadly autopilot: EVs are much more likely to burn and the fire is much harder to extinguish, but no one who tries to sell you an EV will tell you "hey, you are more likely to die by being burned alive!"


> Wow. Have there been other electric car fires which are so difficult to put out?

(In)famously the electric super car that Richard Hammond crashed.

The Rimac Concept One is all-electric. Hammond's car was left destroyed and took five days to fully extinguish.

https://www.thesun.co.uk/motors/5092851/richard-hammond-cras...


There have been many situations like this, including cars reigniting days later. Also a few cases of cars randomly catching fire while parked: https://www.businessinsider.com/why-tesla-cars-catch-on-fire...


>The owner, he said, backed out of the driveway, and then may have hopped in the back seat only to crash a few hundred yards down the road. He said the owner was found in the back seat upright.

How is it possible that the car doesn't stop immediately as soon as there's no one in the driver's seat?


It's the age of stupid smart vehicles. My favorite is a 2018 Panamera heading into the sea[0] with no one in the driver's seat during a test.

[0] RU - https://youtu.be/nKMk3q7bHjA?t=564


Wow, you would think smart cars would be smart enough to detect if someone is in the driver's seat.

In the case of the article's Tesla, they must have put some weight on the driver's seat, or the owner somehow hacked the weight sensor.


From [1]:

Bloomberg asked Urmson about Tesla's Autopilot technology—and particularly Elon Musk's claim that Tesla vehicles will soon be capable of operating as driverless taxis.

“It’s just not going to happen,” Urmson said. “It’s technically very impressive what they’ve done, but we were doing better in 2010.”

That's a reference to Urmson's time at Google. Google started recruiting DARPA Grand Challenge veterans around 2009. Within a couple of years, Google's engineers had built a basic self-driving car that was capable of navigating a variety of roads around the San Francisco Bay Area.

A couple of years later, Google started letting employees use experimental self-driving cars for highway commutes—an application much like today's Autopilot. Google considered licensing this technology to automakers for freeway driving. But the technology required active driver supervision. Urmson and other Google engineers decided there was too great a risk that drivers would become overly reliant on the technology and fail to monitor it adequately, leading to unnecessary deaths.

[1] https://arstechnica.com/cars/2021/04/the-largest-independent...


Does failing to monitor adequately include jumping in the back seat? There's inattentiveness and reckless endangerment, I think the driver here should be charged tbh.


I think the point is that the driver was fully confident that the car would do a much better job at driving compared to them.


Is it reckless endangerment when Elon constantly tells people to do this, says they're gonna be robotaxis later this year, says Autopilot is safer than a human driver, and calls it "autopilot"?


Safer doesn't mean invincible. Even if there are arguments to be had on the data, zero people are saying rig your car to drive with no one in the front seat and have a great time.


Elon tells people to jump on the back seat? Is that what you are claiming?


He intimates constantly that the driver is only required for regulation purposes and that they don't actually verify you're paying attention or in the driver's seat, and calls it "full self driving"...


No, he doesn't. The car doesn't let you do this either.


What's really odd is that it happened about midway onto a very short ( < 300 meters) dead-end, cul-de-sac street where you have to take a hard right turn to enter. The address of the place is in the story...look at it on a map. Really odd.


I'm not a car safety specialist or anything but I think someone should have been driving.


Did the car start driving with nobody in the driver's seat? Or did someone move out of the driver's seat after it started?


The car won't even go into Drive without weight in the driver seat. These people had to put a lot of effort into making this happen.


I live in this neighborhood and it's easy to see how you could get a Tesla up to high speed and quickly encounter a tight turn. The entire area is full of that kind of layout. It's all long roads with curves around trees.


I think it was more people doing stupid things than the environment, since no one was driving the car. I wonder how old the people in the car were.

Also they need to stop calling this thing autopilot.


From Elon Musk on Twitter:

* Data logs recovered so far show Autopilot was not enabled & this car did not purchase FSD.

* Moreover, standard Autopilot would require lane lines to turn on, which this street did not have.


dang or other moderators, can someone add the official title to this post? It’s unclear from the submission’s title that no one was driving the car, which is the only thing that makes this post notable.


I live in this area and I definitely think the terrain here was a contributing factor. It would not take 15 seconds of hands off the wheel to have an accident. It would take 2-3 seconds.

The Woodlands is not your typical residential area. It was carved out of two old lumber mills with the intent of keeping as many trees as possible, hence lots of curves, small roads, and trees alongside them. At the time it was built (by an oil wildcatter who was worried about urbanization's impact on his family), The Woodlands was the largest master planned community in the world. https://en.wikipedia.org/wiki/The_Woodlands%2C_Texas

Also, a recent video made after the crash shows Autopilot can be activated without lane markings: https://news.ycombinator.com/item?id=26870417

Finally, local police here reported that the couple's spouses said the two men, aged 58 and 68, were talking about the Autopilot before they left on the drive and were excited about trying the features...


Maybe Tesla can help design some portable wall/dam system that could be setup around vehicles to capture water used in a pool rather than have it all immediately lost to drainage.


I wonder how many more people have to die before Tesla actually fixes it. It's either self-driving or not; there is no such thing as "supervised driving" (that's called driving). I also wonder if those people saying "actually, this link over here says self-driving is safer..." will say the same if a family member dies because the "self driving" mode didn't work properly.


Fix what? There wasn't a driver in the driver's seat?


the self driving part? SELF driving? They could also stop advertising the SELF driving features so people don't think of it as a thing?


My first thought on seeing the tree in the right front quarter of the car and the fact that the driver's seat area is uncrushed, is that there was a third person, who departed the scene. I'm guessing whatever data the car collected to tell one way or the other perished along with the electronics. But don't these cars also upload telemetry all the time?


From the article

>The owner, he said, backed out of the driveway, and then may have hopped in the back seat only to crash a few hundred yards down the road. He said the owner was found in the back seat upright.

My first thought was they were making a YouTube video


Yup, that or just showing off a sort of autopilot variation of ghost ride the whip to his buddy. Really sad and dumb reason to die.


My car gets irritated when I let go of the steering wheel for a few seconds. I've never dared to see how it reacts to a continued violation, but presumably it could disengage the gas or engage the brakes.

Doesn't Tesla have some kind of seat or steering wheel sensor to prevent this? Could the owner have defeated the checks somehow?


They do have a steering wheel sensor. And yes, it can be defeated.

Per the article this happened on a cul de sac in a residential neighborhood. My question is how on earth they managed to get the vehicle to a fatal speed in that environment (seriously: Google the address, there's almost no runway available). It seems likely they were playing with launch mode and not autopilot, or perhaps manually messing with the accelerator from the passenger seat.

This definitely doesn't smell like an "autopilot" failure, though we'll have to wait for details.

(In fact as others are pointing out based on the location of the damage: it actually seems not unlikely that there was a human driver who fled the scene.)


Well Teslas are known for their incredible acceleration, and then if the occupants weren’t wearing seatbelts, even a relatively low speed crash can cause severe injury.


Tesla autopilot doesn't exploit that, though. The question is how they launched the car, and how they managed to launch it into a tree. You just can't do that with autopilot to my eyes.


How fast would they need to be going for this to be fatal? I could definitely see autopilot missing this turn and crashing into the tree, and it’s possible they were trapped inside rather than killed on impact.


The whole front corner wrapped around the tree. That's a highway-style crash, so figure 50+ mph I'm guessing? I have a hard time imagining the front passenger survived the impact given the way the cockpit collapsed, but maybe the back passenger was trapped.


Software can have bugs...


>My car gets irritated that I let go of the steering wheel for a few seconds.

My passengers get irritated.


Two big questions for me are:

- Will this change their decision to remove radar?

- How did the driver move to the passenger seat (or, idk, exit the vehicle!) with Autopilot still going? That should be impossible. The car should have weight sensors for the seatbelt on the passenger side; maybe it needs one on the driver's side too!


Race cars (and tanks and submarines - closed areas with a high chance of fire and/or catastrophic consequences of fire) have automated fire suppression systems. I wonder if EVs need to have one too. Right now, EV fires in accidents bring back memories of 196x movies where ICE cars would burst into flames in accidents. It was solved for ICE cars and needs to be solved for EVs too. Looking at those accidents, specifically at the damage from impact and thus the related speed, it seems that the fire killed the people, not the impact itself (I myself have been in an accident in a regular ICE car where the front of the car got smashed with us inside; being seatbelted, of course, we got only bruises, and if the car had burst into flames the outcome would have been completely different).


Hanging out in a Tesla online community, I've noticed there are two types of Tesla fans (and investors even).

The first are those that believe that EVs are the future and love the cars Tesla is producing.

The other are robotaxi/FSD evangelists and believe Tesla will usher in the age of autonomous cars sooner rather than later. These people genuinely believe we'll have fully road legal autonomous cars (as in, requiring zero human input) by 2025 and that human driven cars will be outright banned in many developed countries by 2035 or even 2030. Some of them actually want Tesla to stop selling cars to the public in favor of stocking them for the robotaxi fleet which they are absolutely sure will be widely deployed in a matter of a few years.


Regarding the latter group, this video[1] that was archived by an HN user[2] is pretty telling.

[1] https://troll.tv/videos/watch/54bc7bd0-8691-4359-aa7d-dc5148...

[2] https://news.ycombinator.com/item?id=26810351


It's really not that telling. Tesla short-sellers really will stop at nothing to try to recoup their bet.


Someone on HN has put together a fairly detailed timeline of Elon's comments on FSD:

https://news.ycombinator.com/item?id=26519357


There are two types of people: those who divide people into groups and those that don't.

Seriously though: there are many more kinds of Tesla fans. There are also those who hope the company succeeds but wish that the self-driving feature had been postponed until it actually worked, because in the longer term this is bad for the brand and bad for electrics as a whole.


Don't get me wrong - I too am a Tesla fan, and my next car will most likely be one too. And disclaimer: I also own stock.

But yeah, I wish both Elon and the rabid FSD evangelists would quiet down and stop hyping FSD and robotaxi. I agree that hyping it right here and now today is going to hurt the brand more than help.

The bad attention is already happening. When you hear about any other company making incremental advances in autonomous vehicle tech, it's generally met positively. News about greater, more significant advances from Tesla is met with second-guessing, doubts, and worries about how many people will get hurt.


> These people genuinely believe we'll have fully road legal autonomous cars (as in, requiring zero human input) by 2025 and that human driven cars will be outright banned by 2035 or even 2030.

I'm not a Tesla fan, but I think that's going to happen eventually. Probably much closer to 2100 though.


By 2100 people will realize how dumb people in the 2020 were and have efficient public transportation for that "autonomous" travelling.


Unless we've discovered how to teleport from place to place, I think people in 2100 will prefer private cars to public transport, as they have done ever since the car was invented.


Do they, for all types of trips? Personally, I much prefer trains for every trip over two hours.

Being able to stretch my legs, get coffee and food, and comfortably work are worth it.

Cars are very nice for when there is no (direct) train connection available to where I want to go, as well as for shorter trips, but they don‘t scale well at all in terms of density of traffic as well as efficiency.


Distance is definitely a factor, as is frequency. I'd prefer public transportation systems (trains, buses, planes, etc.) if it's a longer and occasional trip.

For short, regular, frequent travel, i.e. a commute? Give me a personal car anyday.

But my opinion is jaded from the awful urban public transportation systems we have in the US. Give me something like what Tokyo, Seoul, etc. have, and I might prefer public transit for commuting too.


It was only a matter of time, no one was driving the car.


I thought they had systems to ensure that an attentive human was in the driver seat. Someone found a way to circumvent this?


The system detects torque on the steering wheel: you can find various hacks online to fake the input.


Bodes well for the narrow tunnel with no egress meant for these cars to self-drive in.


In fairness there are very few trees pictured in the TBC tunnels so far.


Trees sure. But a disabled car or something drops out/off of a car? That seems plausible.


Why would someone use autopilot when Tesla is not liable for damages? It's like hiring a driver but being liable for any crash they cause.


Because autopilot has nothing to do with the crash. You're supposed to pay attention and take over at any time.

Why does anyone drive any car when the manufacturer isn't liable for mechanical failures that could cause a crash? I'm much more worried about a suspension part failing and my wheel flying off than autopilot doing something dumb.


They behaved irresponsibly, but Tesla could really use actual driver attention monitoring. It should not be possible to have no one sitting in the driver's seat. They seem to have a broken attention monitoring system that is easily tricked.


Yeah I generally agree with this. Forget the camera stuff the other commenter mentioned, the seats have a weight sensor already for other stuff like making sure you don't get out of the car when it's not in park. I can't believe AP doesn't use that, unless the owner here bypassed that by shorting the circuit or something.


They are working on a camera based solution though it's imperfect. You can see examples of it running at https://twitter.com/greentheonly/status/1379928419136339969


> Why does anyone drive any car when the manufacturer isn't liable for mechanical failures that could cause a crash?

Manufacturers absolutely are liable if there is a design problem. This is the entire reason recalls exist.

One could certainly argue that Tesla's laissez faire attitude toward autopilot safety constitutes a systems engineering design failure, making it grounds for a recall.


They're not liable when the components are used outside of their reasonable limits. You don't see recalls for shocks that fail after 200,000 miles, and leaving the driver's seat entirely should fall under that bucket too.


>when the manufacturer isn't liable for mechanical failures that could cause a crash

uhm. Ever heard of the GM ignition switch defect? Ford Pinto? I'm sure there's more where a manufacturer is liable for mechanical failures that cause a crash.


Completely different situation. That doesn't involve the consumer not following instructions.


Gotta treat it like cruise control.


There is something called personal responsibility that most people believe in. We aren’t children. You could ask that question about literally anything that you buy that could potentially kill you.


That doesn't apply to cars, which have the potential to kill anyone who shares the road with you (or people on crosswalks, the sidewalk, cafés, inside shops, etc). So it is my business that you drive safely.


>There is something called personal responsibility that most people believe in.

If you are talking about the personal responsibility of Elon Musk in misleading the public into thinking his cars have anything close to self-driving, I agree. If autopilot is not autopilot, then why name it as such?


Because it is similar to an actual autopilot. Aircraft autopilots are also not fully-autonomous, and basically just follow a predetermined course.


Turns out following a pre-determined course on the streets is harder.


Which makes what Tesla calls “autopilot” all the more appropriate IMO. Both will get you _most_ of the way from A to B, while you have to pay very close attention at any critical point, and generally supervise it at all times.


[deleted]


You are expected to pay full attention.


To make driving more relaxing.

Instead of actively driving you switch to be a driving supervisor instead.

Think about it as a really advanced cruise control.

Autopilot doesn't mean it won't crash, that's absurd.


Does anyone have any idea of how easily (or how difficult) it is for an EV battery fire to occur vs. a traditional combustion engine?


So, aren't Teslas coming with an attention system, and isn't Tesla's Autopilot only allowed on the public road on the condition that a driver is always attentive?

I'd say there have been enough deaths from inattentive drivers, and it's about time the legislators and licensing bodies start looking into the issue.


They measure torque on the wheel from the driver's hand. It is possible to fool it via defeat devices etc. (ex: www dot autopilotbuddy dot com).

There is a WIP system that uses the selfie camera to monitor the driver but it's still possible to fool (image taped in front or block it with tape etc) so unlikely it can catch all cases of drivers being willfully being dangerous. https://twitter.com/greentheonly/status/1379928419136339969


Wow, that fire must have been intense. I have never seen so little left of a car. Is that from the battery fire?


>>The owner, he said, backed out of the driveway, and then may have hopped in the back seat only to crash a few hundred yards down the road.

How did the car not stop once it recognized that no one was in the driver's seat who could put hands on the steering wheel in case of an emergency?


> The company dissolved its press office and doesn’t usually respond to media inquiries, however.

Well, that's one way to solve the problem.


You can see from the red brake calipers that this is a performance model. I wonder if it was a Model S or a Model 3. The newer Tesla battery packs have put serious work into heat dispersion and preventing thermal runaway like this.


It seems like Tesla should at least have a safety feature that checks if someone is in the drivers seat. Cars already do this for warnings about seatbelts and enabling the air bags. Maybe the sensor is already even there.


They do, I mentioned in another comment I once unhooked my belt to take my jacket off while in auto pilot. It started beeping horribly, disabled autopilot and started slowing down. It then wouldn't even let me re-enable it until after stopping and putting it into park.

I've heard it does the same with the seat sensor.


Why is this comment downvoted? It is 100% correct.


I'm not sure why people are believing the cops here. This is physically impossible. If you're not in the seat or have the seat belt unbuckled it slows to a stop and disengages autopilot.


Vehicles with self-driving capabilities should, in my opinion, be required by law to have black boxes like the ones in jetliners. And the tools to read the data must be given to the government.


Teslas do have black boxes. Whether they would survive a fire like this is questionable though.

Tesla's "black box" data is obtainable like any other kind of crime scene evidence, via court order.


Hey, it's all fine that someone keeps running experiments to find solutions that lead to fewer deaths on the roads. Just keep those experiments as far away from me as possible. Thanks.


Why is Tesla allowing Autopilot driving when no one is sitting in the driver seat? People will always try stupid things, but it seems like Tesla is at fault for allowing it.


Back in the 1970s, I heard a story (from my dad, a USAF officer) about a young officer in a foreign air force, from a wealthy family, who was in U.S. pilot training in San Antonio. Supposedly the young gentleman bought a tricked-out van, with a bar and a bed and everything. He got onto the freeway, put the van on cruise control, and went into the back to mix a drink. Fortunately, no one else was injured in the ensuing crash that killed him. (I have no idea whether the story is true and am disinclined to research it.)


This was a common urban legend when I was a kid, updated periodically to make the ignorant driver different nationalities.



Would the two have survived in an ICE car?

The car doesn't appear to be wrapped around the tree, and the body looks intact.

It looks like the battery fire killed them.


We should rethink this as figuring out how to make self-crashing cars less efficient to a point above a ROC curve.


Why does the car move when there's nobody in the driver seat? Shouldn't a sensor have stopped the car?


The article only has "beliefs" for evidence. Obviously a great piece of journalism...


I blame the NHTSA for even letting Tesla sell this autokill software. And maybe Elon should be slapped with a manslaughter charge for claiming robotaxis are coming this year, claiming Autopilot is safer than human drivers, and calling it "Autopilot" and "Full Self-Driving".


Government needs to step in and turn off these 'features'


I still wonder why Tesla doesn't use lithium ferrophosphate or another type of battery that won't catch fire. Shouldn't the safety risk of the batteries they use outweigh the slight increase in energy density?


Isn't using water to fight a lithium fire a bad idea?


When it comes to extinguishing fire, you play the cards you have to mitigate the situation. If what you have is a residential fire hydrant system that can supply 36,000 gallons, you use water in overwhelming force. As evidenced by the article, it will not quell the reaction but it will dampen the overall thermal situation enough to permit humans to be extracted safely.

Presumably it would have been better to dump a few thousand pounds of sand on it but there are few sand hydrants in USA residential neighborhoods.


Thinking about a sand hydrant is the most entertaining thing I've done all day!


Ugh, for me it just takes me back to the horrifyingly frightening miniseries Chernobyl.


What would be, though? My first thought was halon gas, but I think you'd still have outrageously hot temperatures in the cells that would reignite after the halon has dissipated.


Put a large bell jar over the car and then pump out all the air? Or maybe pump liquid silicone into the battery compartment? (The theory there is that you'd need a flame-retardant 'foam' that holds up at very high temperature, but you could submerge the battery compartment in something like silicone to isolate it from the atmosphere; there would be heat as the batteries discharge, but no combustion.)


You need to cool the battery enough, though, so you need water to carry away the heat.


A lithium-ion battery fire is not the same as a lithium fire by itself.


How fast was the car going? And how did they die? Did the seatbelts and airbags fail?


Evolution in action!


The more I read Elon's responses to these things, the less I feel like he actually cares. Elon's response for me translates to "It's not a misunderstanding of the name; it's a misunderstanding of the name by experienced users." In the end, Autopilot is a really cool name, but by actual definition it is misleading af. I don't think Tesla's cars are inherently more dangerous; it just seems like people think Autopilot is more than it is, experienced or not. They absolutely should implement features to make sure the driver is at least in the seat!!


This is the exact reason why Google/Waymo moved on from a similar system a decade ago to working directly on level 4 autonomy. It gives users a false sense of security. The same goes for the FSD feature set. Tesla's and Musk's constant misleading statements and marketing don't help either.

At the very least, these kinds of systems must have fairly strict driver monitoring. Musk says it's not effective, but that's because they don't try hard enough. The wheel nag and seat-weight check currently implemented are too easy to defeat; there are literally products you can buy online to overcome them. I don't know what's stopping them from implementing an eye-tracking system using cameras the way GM Super Cruise does. It's a much more effective solution IMO.
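
The escalation logic around an eye tracker isn't the hard part, for what it's worth. A rough sketch of the timeout behavior (gaze_on_road() is a placeholder for whatever camera/eye-tracking model would be used, and the timing thresholds are made up for illustration, not GM's or anyone's actual values):

    import time

    # Rough sketch of gaze-based driver-monitoring escalation logic.
    # gaze_on_road() stands in for a real eye-tracking model; thresholds
    # are illustrative guesses only.
    EYES_OFF_WARN_S = 2.0        # warn after this long with eyes off the road
    EYES_OFF_DISENGAGE_S = 6.0   # disengage assist after this long

    def monitor(gaze_on_road, warn, disengage, poll_hz=10):
        eyes_off_since = None
        while True:
            if gaze_on_road():
                eyes_off_since = None
            else:
                now = time.monotonic()
                if eyes_off_since is None:
                    eyes_off_since = now
                elapsed = now - eyes_off_since
                if elapsed >= EYES_OFF_DISENGAGE_S:
                    disengage()
                    return
                if elapsed >= EYES_OFF_WARN_S:
                    warn()
            time.sleep(1.0 / poll_hz)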


It's pretty wild that you start out trying to automate driving but end up on a route that requires you to automate warning a human who is giving too much trust to your automation.


I find the implication highly questionable that naming this feature something else would have prevented this boneheaded incident. Stupid people do equally reckless things in their normal cars every day. As Musk says, experienced Tesla drivers know the limitations of the system and just get complacent. Or for all we know these people were just suicidal.


There's been aggressive self-driving promotion for Tesla cars from the company and the naming of the 'autopilot' feature is just a small part of this. I don't know if this specific accident happened in a way that related to this, but I think it's likely that the general way that Musk and Tesla talk about their cars has led to accidents where people trust autopilot more than they should, or are trying to show off how great their car is because they think it can do more than it can.


Honest question: What is your opinion on the Coca Cola Vitamin Water case [0]? I feel this is pretty much the same thing.

[0] https://www.reuters.com/article/coca-cola-vitaminwater-settl...


I like to imagine the word pilot in autopilot is the adjective form:

serving as an experimental or trial undertaking prior to full-scale operation or use


Largely frivolous.


If you turn on the autopilot in an airplane no one sane believes you can hop in the back seat and take a nap, and they don’t have eye tracking or control sensors. It’s not reasonable to blame Elon for someone who jumped into the back seat of a moving car.


So what you are saying is that Tesla should require certification with practice sessions with an instructor and a written test for all users of Autopilot? Or do you mean that Tesla should assign a controller to each car to keep it 1km away from other cars?


Those features exist. The article mentions that it is not specified whether or not the driver assist features were even in use in this instance.

> One of the men killed was in the front passenger seat of the car, the other was in the back seat, according to KHOU. Harris County Precinct 4 Constable Mark Herman told KPRC that “no one was driving” the fully-electric 2019 Tesla at the time of the crash. It’s not yet clear whether the car had its Autopilot driver assist system activated.


How would the car still be moving forward if driver assist was not in use?


A friend of mine thought they had activated the driver assist and let go of the steering wheel, only to go straight off the road at the next curve. They had only activated cruise control or something like that.


If no one was ever in the driver's seat, how did the car begin moving at all?


Brakes deactivated and rolling downhill?

IDK if this is even possible in Teslas or if there is a safety mechanism that prevents it.


Is there more than anecdotal evidence confirming that more people than average are getting killed by this? HN is usually so critical of anecdotes and single sample points, unless it comes to Tesla.


There is not even anecdotal evidence. I am 100% sure that the rate of deaths per 1000 Autopilot-driven miles is at least an order of magnitude lower than the rate of deaths per 1000 human-driven miles. Tesla has that info and occasionally publishes it... people just like to get freaked out when there is an accident. Sure, someone died doing something dumb with Autopilot... but what, 50 people died doing something dumb in a normal car in the time it took to put out the fire? Why are we even having this conversation? It's ridiculous.
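
As a footnote on the arithmetic: the per-mile rate itself is trivial to compute; the contentious part is the exposure data that goes into it. A minimal sketch with placeholder numbers (not Tesla's published figures):

    def fatalities_per_million_miles(deaths: int, miles: float) -> float:
        return deaths / (miles / 1e6)

    # Placeholder inputs purely to show the calculation, not real data.
    autopilot_rate = fatalities_per_million_miles(deaths=1, miles=500e6)
    human_rate = fatalities_per_million_miles(deaths=150, miles=10_000e6)
    print(autopilot_rate, human_rate)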


I think his main argument is that if it's measurably safer than a person, that's good enough.

Dunno where the sweet spot is. My fear is that anything that isn't 100.00000% safe won't be allowed autonomy; it's easier to make a thing not happen than to make it happen.

I expect that at this point it's all about data acquisition. Keep pushing out new releases as you learn something, and hopefully the system needs less and less driver input. Crashes are probably pretty interesting to the engineers.


Can you point to data that shows these systems are currently safer than people?

As far as I am aware, neither Uber, nor Google/Waymo, nor Tesla has released data showing these systems are safer than people.


I suppose that if I had said that, I might feel obligated to find a study for you.


It feels to me like Jobs's response to the iPhone 4's antenna issues. "You're holding it wrong" just comes off as "stop besmirching my product..."

Blaming the user for a failure of a product is tacky.


Not really. Even the iPhone 4 wasn’t fatal if held wrong. At worst you might drop a call.


> The more I read Elon’s responses to these things, the less I feel like he actually cares.

He doesn't. Why would he, would you?


Please don't take HN threads further into flamewar. We're trying for something else here.

https://news.ycombinator.com/newsguidelines.html


While certainly not required, caring about whether one’s company’s products are killing people is generally seen as a favorable quality in a CEO.


[flagged]


Generally gun company CEOs care a lot if their products are killing people by accident.


At 40-60k deaths per year in the U.S. alone, clearly the CEOs of car companies have a bigger weight on their shoulders.


Is there an example of a gun mfg CEO that didn’t care that their product hurt someone because of failure to understand/explain a feature?

If there is a gun with an “autoshooter” feature and people think it’s something it’s not, that seems relevant.

HN disagrees that you should throw digs at people you don't know but don't like, for owning tools you are fearful of. I don't mind. I like when people are upfront about their positions, and in many places that will definitely get you internet virtue points.


Some people do have that ability, yes.


Of course he doesn't. Autopilot is already saving lives. It sucks when people die, but THEY DID IT TO THEMSELVES. How many others died in car crashes while OP was typing that question? 20? 50? 100? Where are the tears for them??


[flagged]


So, I'm confused about this argument. What do people think airplane autopilot does? Do they think it takes off, lands, navigates, and avoids unexpected obstacles in its path? Like honestly, people do know actual autopilot doesn't do any of that, right???


Of course the autopilot doesn't land the plane; that's what the autoland system is for. Autoland has been available on some planes since the 1970s.

Not a whole lot of unexpected obstacles in most flight paths to avoid either. Collision avoidance systems are available, but I don't think they're commonly connected to the flight controls directly (I could easily be wrong, though; I only fly an armchair).


The term is literally "automatic pilot." So in the context of a car? Yes, I would expect it to navigate turns and other obstacles.


Because that's how they sell cars, and nobody has gone to jail yet.


The name is technically correct, because that's what autopilot means. But because everyone has the wrong definition of autopilot in their mind, they really should rename it.

It's the right term, but that term has evolved in the public's mind and shouldn't be used anymore, even though it's accurate.


Wasn't there a recent article posted here about how safe Teslas are?


[flagged]


Occam’s razor


"Ah I see they died, we released a bug fix for those scenarios yesterday, just make sure to update before going on your holiday trip."


> Two men are dead after a Tesla traveling in Spring crashed into a tree [...]

What do they mean "traveling in Spring"? Is it the name of a place? That sentence is rather baffling to my ears.


It's the name of a city just north of Houston.


Texas seems to have very silly names for places.

https://en.wikipedia.org/wiki/West,_Texas (not to be confused with https://en.wikipedia.org/wiki/West_Texas)


Yep. The three others I know of are Center, Texas, which is not in fact in the center of Texas; Earth, Texas, which I suppose is in fact on the Earth; and Texas City, Texas, which is indeed a city in Texas.



I'm wondering how often Elon Musk uses Autopilot. If he does not use it all the time, users should not either; if he does, I'm worried about him and Tesla shareholders.


A coworker at KSC saw him driving in through the South Gate in a large black Bentley, rocking out behind the steering wheel, so apparently he’s not always being driven around / in a Tesla!


I would be shocked if he drove at all. His time is worth more than whatever it costs to hire a driver.


6 deaths from Autopilot now. At the very least, Tesla ought to drop the name.

https://www.tesladeaths.com/


If anything the name is the most accurate part of the marketing. My Garmin autopilot will gladly fly me straight into the ground if given the chance to do so. While they are getting more advanced, aircraft autopilots are even less autonomous. But yes, their marketing is aimed at making people think it’s a self-driving feature.


That's without counting the Houston deaths, as the use of FSD/Autopilot hasn't been verified.



