vladoh's comments

We've been using Bluedot for a couple of months already and it has been a huge boost in productivity. We often go back to meetings (especially with clients) and review things. We are already starting to use it as a knowledge base for our dev team.


I worked at BMW for many years. I didn't do anything related to their F1 program, but once I talked to an engineer who did.

He was a mechanical engineer who designed different parts for the road cars. Once, people from the F1 team went to him because they had an issue with a part breaking frequently and asked him to improve it.

He designed another part and told the F1 guys that it would easily last for thousands of hours. They weren't happy. They told him they only wanted the part to last for 2 hours and the one he designed was too heavy. He had to do it again…


> They told him they only wanted the part to last for 2 hours and the one he designed was too heavy. He had to do it again…

As someone in the racing industry, I can tell you the ideal engine/parts/etc. are whatever will get you across the finish line (and meet the other requirements, like how long engine components have to last) as fast as possible, and no more.

There is a lot of money put into the sport, and extracting much of that money depends on the win/place, not on whether the car will last 40k miles.

If something blows up/breaks after crossing the finish line, it doesn't matter much... just rebuild it for the next race.

Everyone is looking for every advantage they can find.


One aspect I didn't see much discussion of is tradeoffs. Yes, computing demand is increasing, but does it all have a negative environmental balance?

Example - modern cars. They are full of electronics (there are around 80 ECUs in a modern car), but these electronics make the cars safer and lower the fuel consumption. Just look at how much the CO2 output for the same engine power decreased over time. Some of it comes from better mechanics like turbos or filters, but a lot of it comes from better electronics in the engine ECU.

Another example - logistics. Delivery companies are able to plan demand, routes and inventory much better thanks to big data processing in data centers. Yes, a data center consumes a lot of energy, but it also helps reduce emissions because trucks spend less time driving around.

I don't have exact numbers, but I think everybody can imagine how increased computing in some areas (not all) improves the environmental impact. These are things that need to be considered in order to avoid unintended consequences.


In the case of cars, according to this [1] (seemingly based on US Federal data), the improvement in car efficiency since the 60s is:

> Noteworthy fuel-economy trends, taking into account the length of time represented:

> A minor decrease between 1966 and 1973 (from 13.5 mpg to 12.9 mpg).

> A modest increase between 1973 and 1991 (from 12.9 mpg to 19.6 mpg).

> No change between 1991 and 2004 (19.6 mpg for both years).

> A modest increase between 2004 and 2008 (from 19.6 mpg to 21.8 mpg).

> A minor increase between 2008 and 2017 (from 21.8 mpg to 22.3 mpg).

The term "modest" is probabling misleading, but looking at the graph in the linked site is more telling: basically, efficienty increased after the first gas shock, and "more or less" stalled in the 2000s.

So maybe the addition of electronics in the 70s/80s had an impact on consumption, but I would argue it's less obvious for the additions in the 90s/2000s/2010s - although it would take a bit longer to show as the older cars are still on the road.

Impact on safety, though, I don't know.

As for the CO2 emissions, I have a hard time taking any number at face value, knowing that, ironically, some of the electronics on board can be designed to cheat emission tests [2] - but that's probably unfair.

[1] https://www.greencarcongress.com/2019/09/20190930-sivak.html

[2] https://en.wikipedia.org/wiki/Volkswagen_emissions_scandal


Electronic fuel injection definitely made cars more frugal back in the 90s. Improved aerodynamics also helped a bit. On the other hand, due to crash test requirements cars are now significantly heavier (I guess around 20%), and people are also driving faster and enjoy more power and dynamics from the vehicle, which not only wastes fuel but also leads to the use of wider tires, which are more expensive to produce and impact fuel economy negatively.


This is why I don't trust platitudes about the environment that aren't backed by economics. Real solutions require thinking of the world as a complex, dynamic system, where influencing one variable can have unintended consequences on another.


While driving, do you understand how the engine ECU of your car computes how much fuel to inject into the cylinder, how the ECUs distribute the power and the braking force on individual wheels, or how your rear wheel steering calculates the turning angle of the rear wheels based on your speed and steering input?

You don't need to know most of the details of how your car works in order to drive it. You need much more knowledge to build one, yes, but not to drive.

There are different levels of abstractions and depending on your problem you need to understand them only up to a certain level. And different people have different problems to solve.

In most real-world problems today, the difficult part is indeed the data, not the underlying math of the activation function, loss function, or optimizer. Just Google "data-centric AI Andrew Ng" to read more on the topic from one of the most well-known people in ML.


Except that we can build cars that work. Whereas for DL AI, in most practical applications, there are like 10% edge cases where things just randomly explode. But don't take it from me, just read "Distributional Reinforcement Learning with Quantile Regression" by Google Brain and DeepMind and they'll tell you:

"Even at 200 million frames, there are 10% of games where all algorithms reach less than 10% of human. This final point in particular shows us that all of our recent advances continue to be severely limited on a small subset of the Atari 2600 games."

In short, current AI approaches cannot even reliably win video games from 40 years ago, no matter how much $$$ you burn on GPU power.

How do you expect a non-expert to know if their problem is in the 10% that works well, the 80% that works tolerably, but worse than traditional algorithms, or the 10% where all bets are off?


This is a very interesting idea, even if not really practical. The author has more details in the paper: https://arxiv.org/abs/1904.12320


Participants really receive value and I think this is why the model works. However, they don't receive the value they deserve...


I agree that it works, that's why there are so many popular communities. However, how much better would it be if people could actually pay their bills with this work?


I have very fond memories of Delphi - it got me into programming and I earned my first money from a job using Delphi. Nowadays, while doing web development, I still miss some of its features. I haven't used Delphi since version 7 (just before the .NET switch).

However, Delphi is still used by some companies! My mother works in a small company developing business software like accounting, payroll, inventory tracking etc. They still use Delphi (I think version 7) and have a small but thriving business. No incentive to switch...


Yes, it will be really interesting to read a post-mortem of this.

I also got a notification just saying "foo" 5 minutes before that and thought that was somebody testing in production.


Got the same message. Also thought it was a developer testing :-), but a few minutes later a message about security issues was pushed...

Love it ...


Some people are quoting the recent Tesla safety report [1] as evidence that Autopilot is on average much safer than a human driver. This is a classic case of Simpson's Paradox [2].

At first glance it seems that Autopilot is 4x safer than driving without any safety features (1 accident every 4.19 million miles vs every 0.978 million miles). However, the data used to compute the stats differ in two important ways:

1. Autopilot cannot always be activated. This means that in some particularly difficult situations, the driver needs to drive himself. These are more dangerous situations in general.

2. If a driver disengages Autopilot to avoid an accident and engages it again straight away on a 10-mile drive, then you will have 9.99 miles driven on Autopilot without an accident. The statistic misses the cases where the human driver intervened to avoid an accident.

This means that we are comparing the same measure (accidents) on different datasets and therefore in different conditions. This is dangerous, because it may lead us to wrong and often opposite conclusions (see Simpson's Paradox [2]).

I'm not saying that Autopilot isn't safer than a human driver, given that the driver is at the steering wheel and alert, but that this data doesn't lead to that conclusion. If the driver is not sitting in the driver's seat, then it is certainly much more dangerous.

[1] https://www.tesla.com/VehicleSafetyReport [2] https://en.wikipedia.org/wiki/Simpson%27s_paradox
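
To make the Simpson's Paradox point concrete, here is a minimal sketch with invented numbers (none of them come from the safety report). If Autopilot accumulates most of its miles in easy conditions, its aggregate crash rate can look roughly 4x better even if it is worse than a human driver in every individual condition:

    # Made-up illustration of the selection effect described above.
    # Miles driven (in millions), split by road difficulty. Autopilot is mostly
    # engaged in easy conditions; humans handle most of the hard miles.
    miles = {
        ("autopilot", "easy"): 9.5, ("autopilot", "hard"): 0.5,
        ("human", "easy"): 2.0,     ("human", "hard"): 8.0,
    }
    # Crashes per million miles in each condition. Note the human is assumed
    # to be safer than Autopilot in *both* conditions.
    rate = {
        ("autopilot", "easy"): 0.20, ("autopilot", "hard"): 2.0,
        ("human", "easy"): 0.15,     ("human", "hard"): 1.5,
    }
    for driver in ("autopilot", "human"):
        crashes = sum(miles[driver, c] * rate[driver, c] for c in ("easy", "hard"))
        total = sum(miles[driver, c] for c in ("easy", "hard"))
        print(f"{driver}: {crashes / total:.2f} crashes per million miles")
    # autopilot: 0.29 crashes per million miles
    # human: 1.23 crashes per million miles
    # The aggregate makes Autopilot look ~4x safer even though, by construction,
    # it is worse in every individual condition.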


Just for the record, people who study the problem space concerning traffic safety have disavowed the word "accident" because it all too often dismisses the preventable root causes that could be learned from.

context:

* https://laist.com/2020/01/03/car_crash_accident_traffic_viol...

* https://usa.streetsblog.org/2016/04/04/associated-press-caut...

* https://chi.streetsblog.org/2021/04/05/laspatas-ordinance-wo...

It'd be nice if folks here would be mindful of the role language plays. Here's also a preemptive "intention doesn't matter" because the first post I share covers that in the section "The Semantics of Intention", where it argues that the decisions have already been made in both the designs of our streets and in the choices people make behind the wheel, and those have known and changeable outcomes.

last edit I swear, but a good catchphrase I've seen recently that I'll be pinching is "Accidents happen, crashes don’t have to."


> Here's also a preemptive "intention doesn't matter" because the first post I share covers that in the section "The Semantics of Intention", where it argues that the decisions have already been made in both the designs of our streets and in the choices people make behind the wheel, and those have known and changeable outcomes.

The argument made here isn't very good. "Traffic violence" implies intention to do violence. Intention to speed is indeed intention to incur the risk of harm, but that risk is in general quite low, whereas 'traffic violence' strongly implies direct intention to cause harm. Intention to take risks that have harm as a potential consequence is not equivalent. It may be true that 'accident' cuts too strongly in the other direction, but the correct term is clearly closer to 'accident' than it is to 'violence'.


Every crash that is not deliberate is an accident. Even if it results from negligence (drunk driving), the actual crash is accidental. Ascribing intention to every mistake, turning every tragic death into a homicide, ends in dark places. We may hate the person whose mistake causes a crash. We don't make them a murderer.


> Every crash that is not deliberate is an accident.

I don't agree, probably because we have different meanings and implications around the word "accident".

For me at least, there's some subtle semantics surrounding the word "accident" which are at the very least unhelpful in the context of traffic incidents.

There are lots of incidents which are not "accidents", they're the result of a driver choosing to do something, or choosing to not to do something.

A crash shouldn't be described as "They accidentally ran a red light", or "They accidentally went too fast into a curve", or "They accidentally failed to notice the pedestrian".

In those scenarios, "intention" doesn't matter, and the driver fucked up and caused the crash and should be held accountable/responsible. There's no wriggle room there for reducing the responsibility by saying "accidents happen, it wasn't my fault..." There was no "accident", there was a fuckup on someone's part.

Motorcyclists here in Australia have a term "SMIDSY" - which stands for "Sorry mate, I didn't see you", which is pretty much guaranteed to be the first thing out of a driver's mouth after they've just driven straight into you. It's not that they didn't "see you", it's that they didn't bother looking, or that they looked and didn't take any notice. Those are not "accidents". They are fuckups.


>> In those scenarios, "intention" doesn't matter, and the driver fucked up and caused the crash and should be held accountable/responsible. There's no wriggle room there for reducing the responsibility by saying "accidents happen, it wasn't my fault..." There was no "accident", there was a fuckup on someone's part.

Intention absolutely always matters. People often do drive through lights unintentionally. If that is a proximate cause of an accident they are punished for negligence, not murder. Even intentionally driving through a light is a form of negligence. The only intentional crashes happen where people use the vehicle as a weapon[1]. If you deliberately run someone down you are a murderer, whether you broke any traffic laws while doing so is irrelevant. If you kill your wife by deliberately crashing into her, the cops aren't going to write you a speeding ticket while they read you your rights.

[1] There are also rare cases where crashes are deliberately caused by non-drivers. A crash resulting from someone icing a road, throwing a brick at a car from an overpass, or tampering with a vehicle are deliberate crashes even though the driver(s) involved are not at fault. But if the road is icy because of something like a broken water line, it is very possible for nobody to be responsible: a truly blameless accident.


> People often do drive through lights unintentionally. If that is a proximate cause of an accident they are punished for negligence, not murder.

Oh sure. I'm not proposing people who fuck up be treated as murderers. It's just that in my head, saying it's "an accident" is minimising the blame. If you drive through a light unintentionally, that's totally your fault. You should have and could have avoided doing that by paying attention. You're right that it's negligence. I'm less sure than you are that it deserves to be called "an accident". It seems to me way too close to the schoolyard bully claiming "I was just waving my arm around, he put his face in front of my fist."

> But if the road is icy because of something like a broken water line, it is very possible for nobody to be responsible: a truly blameless accident.

I don't live in a place where it freezes, so I'm kinda not the right person to have an opinion here - but I would have thought that if you live/drive somewhere that ice or black ice happens, then there's at least some "driving to the conditions" argument to be made that the driver was negligent in that case?

I guess bottom line there is that to me, negligence is not covered under the term "accident". If the incident should have been avoided by a driver not being negligent, then it wasn't an accident.

(But I'm not a linguist, and am questioning whether that's a widely held opinion. Thinking about it, it's likely because I'm a long-term motorcycle rider. I cannot afford to have "accidents", because they hurt _way_ more than people driving cars having "accidents". I consider myself at fault for any situation _I_ could have avoided by "being less negligent", and perhaps unfairly impose that burden on other drivers as well. But I will none the less get angry when they say "Sorry. It was just an accident" while I'm the one lying bleeding on the road because they were negligent...)


Fellow biker here, and I think you're just stating an arbitrary line.

The first negligent action could be climbing on the bike.

I think I'm safer slowing and rolling through a stop sign in my truck than I am getting on the bike in the first place.

I realize it's different-- causing a threat for others and all. But I think the point stands-- you're just debating where negligence starts.

I tend to agree with the GP. It's an accident anytime it wasn't intentional, even if it's a preventable accident.


> People often do drive through lights unintentionally

Generally this is in the context of driving too fast.

Speeding is a choice. You are either intentionally disregarding your speedometer, or the designers of the street have made it too easy to get to higher rates of speed rather than apply calming designs to slow traffic down (another intentional choice).

The intention is baked in.


> People often do drive through lights unintentionally. If that is a proximate cause of an accident they are punished for negligence, not murder.

No, traffic violations are generally strict liability; intent, or mental state more generally, does not matter.

Sure, if you intend to kill and do, then you will also be guilty of murder, but that is a separate and additional issue.


Liability is not criminality.



If we are going down this route of the semantics of road incidents, then let's commit fully and use the law as it is where I live. Any traffic incident here always involves two people who failed to comply with traffic laws.

A pedestrian crossing a street is required to look for cars and avoid the road if there is a risk of a collision. It is in the law, even if the crossing sign is green. No person has a right to cross a road or drive a car over a crossing. Both individuals are only allowed to do their thing if it can be done without causing an accident.

The same is true for traffic lights. Green does not mean you have a right to drive through, but rather means that drivers are allowed to drive through if such a thing is possible without causing a traffic incident. Basically every single action on the road under any circumstance is qualified under the condition that it won't cause an accident.

Naturally no pedestrian is going to be charged for being hit by a car unless there are some truly extraordinary circumstances, but the intent of the law is to make people understand that traffic is about mutual communication and responsibility.


> Any traffic incident here always involves two people who failed to comply with traffic laws.

Are you sure about that? It would mean that it's technically illegal to cross a road unless you're absolutely certain that there can't be a speeding driver around the next corner who's gonna get to you faster than you can react. Or that you aren't allowed to cross the street at a green light even when all the cars are waiting because you can't actually be certain that a car isn't gonna start driving suddenly and run you over. Or that someone tailgating you is somehow caused in part by you.

It's pretty ridiculous to claim that both sides failed to comply with the laws in every incident.


Yes, 100%, it is illegal to cross the road unless you can be reasonably certain that doing so won't cause an accident.

Look at the perspective of the law maker. The goal is zero traffic accidents, the Vision Zero as it is called (https://en.wikipedia.org/wiki/Vision_Zero). If pedestrians think that they have a given right to cross the road then there is a risk that people won't apply common sense but rather just step out on the road without looking or thinking.

Similarly, the driver of the car is always guilty of negligence, regardless of whether the pedestrian is at fault. This includes if the pedestrian walks against a red light. Just because the other party is also at fault does not diminish the obligation of the driver not to hit a pedestrian. Regardless of the traffic light, the driver is not allowed to drive over the crossing unless they are reasonably certain not to cause a traffic incident.

In the extreme, if both driver and pedestrian wanted to be 100% guaranteed not to cause an accident then both would simply stand still. In practice both hopefully act with common sense and work together to avoid an accident, which happens to align with the goal of those who wrote the law.


>> Any traffic incident here always involves two people who failed to comply with traffic laws.

> Are you sure about that?

There are at least single vehicle incidents which do not involve two people. Overcooking a corner and spearing off into the scenery is just one person fucking up.

But I agree with the thrust of the GPs argument, in any two (or more) vehicle incident, there was probably one party who “caused” it, and another party who should have avoided it. Not _every_ incident, but I’d argue that it applies to the vast majority of multiple vehicle incidents.


Just so you're aware, there is a crime known as negligent homicide (although it's usually treated as manslaughter instead of murder).

When you drive at speed (above ~25-30 mph for someone not protected by a giant steel cocoon, I think ~50 mph for someone who is), you are now in control of a loaded weapon that will kill or seriously injure anyone against whom it goes off. If you are unable, or unwilling, to handle a loaded weapon with the requisite amount of care to avoid negligence, then perhaps you are not deserving of the privilege of handling that weapon.


I am aware. It was on the test.

>> negligent homicide (although it's usually treated as manslaughter instead of murder)

Not usually. It can never be treated as murder. If it was negligent then the killing lacked intent per se.

>> If you are unable, or unwilling, to handle a loaded weapon with the requisite amount of care to avoid negligence

Good luck selling that to the AARP. People have medical incidents behind the wheel every day. A heart attack/stroke/seizure can easily result in a crash, including deaths. We don't lock people up for that unless they were negligent, unless they had some reason to know it would happen. Every one of us may suffer an aortic dissection or brain aneurysm at any moment. That's why large commercial aircraft have two pilots. Driving while human is not negligence.


Criminal negligence applies when you could and should have taken steps to prevent the incident but failed to do so. Having a heart attack while driving is not negligence, unless you are somebody who has a medical condition that makes a random heart attack while driving a likely foreseeable outcome (say, angina).

That said, many car crashes are probably caused by negligent actions anyways. Someone who is speeding through a stop sign without stopping is negligent in doing so, especially because the kind of people who do that once tend to do it all the time.


To further your position and to loop it back to the original point about crash vs accident: even in the case of an AARP member who had a heart attack that leads to a collision, accident is not the right word. There is nothing accidental about it. We know that building infrastructure that requires driving into old age will result in deadly collisions, yet we continue to choose to invest to grow this infrastructure. The resulting incident is a crash or collision, not an accident. A good book on the subject is Normal Accidents by Charles Perrow.


> Every crash that is not deliberate is an accident.

If you were to define the word "accident" as "everything not deliberate", then, sure ok. But that's not how the word is typically used.

There are vanishingly few real accidents in traffic. Nearly all crashes are also not intentional.

The vast middle is crashes due to driver incompetence or negligence. Not intentional, but totally avoidable if the driver had done their job (pay attention, be lucid, have suitable skills).

People like to call things "accidents" because it shifts blame to destiny. But reality is that nearly all crashes could have been avoided by the driver.


> risk is in general quite low

Let's phrase it differently. Suppose you have a button. If you press it you can save a total of just under 5 days over your lifetime of driving, but there's a ~1.5% chance somebody instantly dies. Do you press the button?

I think most people make that choice because they don't understand how dangerous their actions are, not because they think that's an acceptable return on investment (and in that sense, you're probably right that "violence" isn't quite the right word), but the fact remains that we have a system where pedestrians can't feel safe crossing the road at a crosswalk at a stop sign in front of a stopped car, and almost all pedestrian deaths would be prevented by just not breaking traffic laws or otherwise driving recklessly. I definitely think "accident" cuts too strongly in the other direction.

A brief reminder just in case anyone reading doesn't know yet and will change their driving behavior positively in response:

- Blowing through stop signs kills people (if you don't have enough visibility, stop first and _then_ pull forward)

- Speeding drastically increases the chance of killing pedestrians you hit; please at least slow down in residential areas?

- Tailgating kills people

- Texting while driving kills people


Putting others at risk is harm. The amount of harm was larger than expected, but that doesn't excuse the intent to cause harm any more than someone dying during a beating is excused because they didn't mean it.

Reckless Endangerment for example is inherently a criminal offense even if no one was injured.


No, intent matters. Intending to kill someone with a car is punished far more severely than killing someone because you were speeding and lost control.


That's irrelevant. While attempting to kill someone is considered a worse offense, they're both harm.


To be more clear, the amount of harm intended is the only difference. Manslaughter is still illegal even if the penalty for murder is higher. The difference is simply in the degree of harm intended, not in the lack of intent to cause harm. If someone dies because, say, you happened to have parked a normal car in a normal parking space, then that's perfectly fine; but kill someone because you were speeding and that's your fault.


You can't have an accident without a risk of accident first.

Risk and harm aren't always inherently crimes. For example a cosmetic surgeon harms people and exposes them to unnecessary risks. In contact sports people willingly subject themselves to risks of serious injury. Same when people drive on public roads. The degree of risk is critical, hence the "reckless" requirement for the crime.


The critical difference is that in sport and cosmetic surgery the other party willingly puts themselves at a specific level of risk. It's not accepted for a cosmetic surgeon to increase risk by randomly performing a second unrelated surgery while they're under without telling the patient. Similarly, nobody agrees to have people do 200 MPH on a public road; in fact all speeding is breaking the law, the only thing that changes is the penalties getting more severe.


It seems you missed both my points.

1. People do indeed willingly put themselves at risk by driving on a public road, which was in the same list with the cosmetic surgeon. And doing so also puts others at risk.

2. The degree of risk is what matters. Ergo while both obeying the speed limit and driving 200 mph (assuming it was possible) put others at risk, the latter risk is considered too high a degree.

It is not a fundamental choice between risk and no risk, but between unacceptably-high risk and acceptable risk.


Speeding is by definition both an unacceptable risk and illegal.

Even driving the speed limit is only acceptable in ideal conditions. Driving at those speeds is illegal in rain, fog, snow, etc as it puts others at what is considered an unacceptable risk.

You and many others may decide to ignore public safety and the law, but that doesn't somehow make it safe. While you personally may feel homicide is a reasonable trade off for convenience, don't assume everyone that happens to be near a public street and whose life you're willing to sacrifice agrees with you. The occasional person killed in their own home might have considered it a reasonable trade off, but we can't ask them. EX: https://www.wfft.com/content/news/Woman-in-Fort-Wayne-killed...


Another miss. Now you have it backwards. I am not defending speeding (good lord). I am pointing out that legal behavior is risky too. Therefore your point about risk equating to harm is useless.


Plenty of things are acceptable harm. Exhaling CO2 in the same room with somebody is actually physically harmful, but nobody generally worries that breathing becomes slightly more difficult when you're around. The bounds for acceptable harm seem invisible, but they definitely exist.

Therefore some levels of risk being acceptable is in no way a counter argument.


I agree with everything you said except the last statement, which is an assertion, not logic.

Since you agree that one cannot avoid creating risk, what is the logical value of introducing the idea that risk is "harm"? You can just as well talk of unacceptable risks, rather than "unacceptable harms". Further, you apparently agree that all "accidents" result from parties taking a risk, whether small acceptable risks (more rarely, no doubt) or by larger unacceptable risks. So what point are you left making about the existence of "accidents"?


The last statement was countering your train of logic, it wasn't support for the idea on its own.

I am not introducing the idea that risk is harm, I am acknowledging that society has agreed that risk is harm, most clearly in the case of endangerment laws. Therefore that must extend to all levels of risk, not simply the most extreme cases. Therefore someone deciding to put others at risk must, as generally agreed by society, be seen as willingly doing harm to others.

As to "accidents": the outcome may have been undesired by the person at fault, but speeding, failing to maintain proper distance, failing to maintain your car, driving in unsafe conditions, driving when you're incapable of maintaining control, etc. are hardly accidental. They're the result of deliberate risk taking with others' lives. When you ante up the health and safety of pedestrians you can't then say their death was anything but your fault.

That said, sure, there might be a few hundred actual accidental deaths in the US from manufacturing defects or undiagnosed medical conditions. But calling a drunk driver hitting someone by going down the highway in the wrong direction an accident is completely meaningless; at that point just call it a bad thing and move on.


You can't counter my train of logic by using the same points I used to support it, at least not without providing something more. Both "high risk" and "low risk" behaviors are qualitatively the same; they create a risk. They only differ quantitatively in the amount. And while you have been running for several posts with the idea that a high risk behavior is a sure cause of an accident, most are also generally very unlikely to cause an accident in absolute terms (that's usually why people take such risks, after all), making them very low risk compared to actual intentional violence (the point of the poster you responded to). Drawing a line for legal/illegal risks just allows us to punish people for taking risks we don't want them to take. It is still meaningful to call it an accident to identify these qualitative similarities.


That’s why I specifically said low levels of harm are acceptable.

Your argument boils down to saying it's ok to play Russian Roulette with unwilling people if there are X chambers in the gun, but not ok if there are X-1 chambers in the gun. I am saying it's never ok to do so, but there is a polite agreement where people ignore low levels of harm based on a host of factors.

This is consistent with events that have already happened and events that have yet to happen. You can reasonably argue the risk was low for events that didn't happen, as in "the building was strong enough, see, it's still standing". In that case speed cameras should be legally different from a cop pulling someone over. The cop is stopping you from speeding, but the camera doesn't.

On the other hand, if risk is inherently a harm then past or future harm is irrelevant. Which is how things are treated: you can't argue the outcome when you have put others at risk.


> The word "accident" suggests nothing could have been done to predict or prevent the collision.

I mean, that's clearly rubbish, isn't it? When I accidentally shut my thumb in the door, I use the word accident to indicate I didn't do it intentionally. It doesn't minimise the fact that I was a blithering idiot and could have avoided a broken thumb with the minimum of attention.


The language is definitely a bit tricky: think about how many accidents are caused by driving aggressively or choosing to use a phone while driving. Nobody chose the accident but it was a direct, easily predicted consequence of a deliberate choice and wouldn’t have happened if they had followed the law. That seems to be rather different from you hitting your finger with a hammer, unless perhaps you were trying to check Facebook at the same time like the average commuter.


To me it's just a matter of stakes. No one will die if you close your door without taking care. On a road that's a realistic threat so people should account for it in how they act. For me that moves it from accident to negligence (in an everyday language sense, not legalese).


The traffic people avoid the word: the people who study commute patterns and write policy papers about how reducing the number of vehicles reduces accidents (have a PhD for that one). But the people who design crumple zones, who decide traffic light brightness, who build the brakes that prevent crashes and the seatbelts that make them survivable... the people who represent 99% of your safety still call them accidents.


I happen to agree (I've had this exact debate before), but to a lot of people it's a loaded term.


Wow, how unhelpful. I don't know the acceptable way to call this out, but how about focusing on substance instead of just bringing pedantic language stuff into the mix. I see this happening a lot, when people can't find a real way to contribute, they start debating the position of commas or whether it should be called inquiry or enquiry or whatever. It distracts from real debate, and maybe gets you some attention (lots of bureaucratic leader types love this sort of thing) but it's really wasting everybody's time.

(Edit: the irony of providing a low-value comment that doesn't contribute to the discussion, in response to one I accuse of something similar, isn't lost on me. But I've seen so much time wasted, and so many people getting ahead and in some cases basically building a career on engaging with these kinds of language things instead of doing any actual work, that I wanted to bring it up)


His comment added a lot more to the conversation than your comment (or mine).

Sometimes pedantry is important and useful. In this case I have no problem imagining that we could reframe our understanding of how to design traffic systems by reframing the language we use to talk about how those systems fail.

Not that I necessarily agree, but I don't think you can dismiss the argument by waving your hands and saying "pedantry is bad".


Emergency vehicle driver training (which is acceptable in some states in place of a CDL, and also covers other rules) used to be called "EVAP" (Emergency Vehicle Accident Prevention).

It's now called EVIP (Emergency Vehicle Incident Prevention).


A friend of mine has a saying

"There are no accidents, there are only fuckups. Maybe getting hit by an asteroid would be an accident - but hitting or getting hit by another vehicle, or driving into the scenery, is _always_ a fuckup on somebody's part, not an accident."

(Though I'll bet "people who study the problem space concerning traffic safety" have a more socially/professionally acceptable word for "fuckup")


That's a stupid phrase. I can think of many cases where there was not a fuckup yet vehicles collide. Example: someone has some kind of acute medical issue (maybe a seizure) and crashes. Or maybe there is black ice that you can't see. Maybe a truck flings a tiny rock that you couldn't see in time into the windshield of a car behind it. It is really easy to come up with examples like this.


> Or maybe there is black ice that you can't see

This deserves expanding upon. You’d rightly assume people who live in cold places know to look for ice on the road.

The only traffic accident my ex was involved in was also the only time I ever saw ice on the road in Lisbon, in 30 years of living there. We saw a pileup, she barely touched the brakes, and we entered a skid. Ended up bumping against a car in the pileup (at pretty low speed so it was harmless). Still — freak accidents do happen.


> I can think of many cases where there was not a fuckup yet vehicles collide.

And yet the examples are extreme cases, which while do happen, are very rarely the cause of crashes.

The vast majority of crashes is not because someone had a seizure, it's because they weren't paying attention or were incompetent.


Right, but if you actually read the parent comment you will notice that "_always_" was used and even emphasized, along with "there are no accidents", and the only example provided of a non-accident is a non-car crash example of a meteorite. My examples are much more plausible, and the list goes on. I came up with those examples in like 5 seconds. The point is that there are lots of cases where it truly was an accident.

> "There are no accidents, there are only fuckups. Maybe getting hit by an asteroid would be an accident - but hitting or getting hit by another vehicle, or driving into the scenery, is _always_ a fuckup on somebody's part, not an accident."


The NTSB openly classifies the transportation events it investigates as accidents. The entire purpose of the agency's existence is to issue recommendations based on the root cause of transportation accidents to prevent the same thing from happening in the future. Any foreign transportation safety agency counterpart in the world uses the exact same definition. Whoever the "people who study the problem space" are, it sounds like they are trying to twist language for their own personal beliefs and not really studying the problem at all.

https://www.avweb.com/aviation-news/ntsb-accident-investigat...



That line in the movie has always stuck with me. It was almost too good, since it was so correct and insightful that there was no real comedy and it took me out of the movie.


Interesting - I never thought about this aspect! This crash was of course 100% preventable by... driving.


> crash was of course 100% preventable by... driving.

By following the guidance indicated to you in the manufacturer's owner's manual with which every new car is supplied, yes.


The word "accident," as it pertains to traffic collisions, is actually just translated to "collision" in my head. In no way does my brain understand it to mean that "this was done unintentionally," it instead basically acts as a homonym to the word which has anything to do with intention.


> Just for the record, people who study the problem space concerning traffic safety have disavowed the word "accident" because it all too often dismisses the preventable root causes that can be learned from here

Nah, it's just a line from law enforcement and prosecutors who want to feed more people to the justice system. More convictions means more revenue and career advancement.


People who study safety are not synonymous with law enforcement and prosecutors.

Edit: having previously worked with safety officers in aerospace who take this exact stance on definitions, I can say they are neither law enforcement nor very much concerned with putting people into the criminal justice system. Their concern is mainly to understand the systemic root causes of accidents in order to prevent them from recurring.


The marketing and messaging around auto-pilot simultaneously argues that auto-pilot is safer than a human driver but blames the driver when there is an accident.


Heads I win, tails you lose. What's so difficult to understand? /s


Autopilot in a plane can make things significantly safer by reducing cognitive load for the pilot. However the plane autopilot will in no way avoid a collision. Pilots are still primarily at fault if the plane crashes.


Teslas aren't planes, though, so how does the etymology of the word "autopilot" help here?


Um.

> Autopilot is such a misleading term.

>> The functionality is almost identical to the only other time we regularly use "autopilot", in airplanes.

>>> Yeah but like, who cares about etymology and stuff? Misleading af.


Ok, I'll put it in other words: statistically speaking, about zero people know how the autopilot in a plane works (me included), while they do know the word autopilot. Therefore, they can't infer the limitations of Tesla's Autopilot from a plane's autopilot.


I seriously don't understand this disconnect. You know the word autopilot because it is a technology in airplanes. That is the only reason you know of the word.

Statistically speaking, 100% of people know that 1. Airplanes can have autopilot 2. Passenger jets still have multiple pilots in the cockpit, even with autopilot.

You don't need to know the intricacies of how autopilot functions to recognize the significance of those two facts (which I'm sure you knew) and apply the same to Tesla.


The etymology doesn't help.

It was an intentionally misleading word for Tesla to choose.


A human and Autopilot working together is safer than just a human driving. Autopilot by itself is currently less safe than just a human driving (which is why it's still level 2). There's no mixed messaging.


> A human and Autopilot working together is safer than just a human driving

This is not my understanding from colleagues who studied the human factors of what is now called level 2/3 automation many years ago. Partial automation fell into an "uncanny valley" in which the autopilot was good enough most of the time that it lulled most human participants into a false sense of security and caused more (often simulated) accidents than a human driving alone.

Since then I've seen some evidence [1] that with enough experience using an L2 system, operators can increase situational awareness. But overall I wouldn't be surprised if humans with level 2+/3 systems end up causing more fatalities than human operators would alone. That's why I'm relieved to see automakers committing [2] to skipping level 3 entirely.

[1] https://www.iihs.org/api/datastoredocument/bibliography/2220

[2] https://driverless.wonderhowto.com/news/waymo-was-right-why-...


This is absolutely correct. And related to the issue of situational awareness, Tesla Autopilot has utterly failed at the basic design systems concept of "foreseeable misuse."

Having worked in the driver monitoring space, it pains me to see a half-baked, black box system like Autopilot deployed without a driver camera. Steering wheel and seat sensors are not up to the task of making sure the driver is attentive. Don't even get me started on "FSD," which proposes to work in harmony with the human driver in far more complex scenarios.


There's no mixed messaging?

The driver is there just for regulatory purposes, all cars self-driving in 2016, cross-country summon in 2017, coast-to-coast autonomous drive in 2018, Tesla self-driving taxis in 2019, FSD making Teslas worth $250k in 2020! Etc etc

There are a lot of statements by Elon Musk


Those are all "coming soon". Tesla and Elon Musk are 100% clear that today, you still need to be an attentive driver while using Autopilot.


All those years are in the past.


But those were never guaranteed dates, just very poor/optimistic predictions


"I'll sell you this box. I know it is empty today, but I assure you, tomorrow it will contain a lump of gold!"

On the next day: "I'll sell you this box. I know it's empty today, but tomorrow..."


They were, Musk always used that phrase "not a question mark".


> A human and Autopilot working together is safer than just a human driving.

I am not so sure. The data from Tesla is always comparing apples and oranges and I have not seen a good third-party analysis confirming this hypothesis.


The problem is these are not independent. Autopilot can lead to inattentiveness or other things that come from the sense you are now being assisted. So it boils down to a question similar to “is one driver at skill level X better or worse than two co-drivers at skill level Y+Z” where Y is less than or, unlikely, equal to X and Z is currently known to be less than X.


I have read the criticism of how the Autopilot miles aren't apples-to-apples comparisons with national averages many times. However, this cherry-picks a single number from the safety report and ignores the other reported statistics. If the explanation for why Autopilot miles were so much safer than non-Autopilot miles is that people turn it off in dangerous situations — and thus equal or greater numbers of crashes were occurring for Autopilot users overall compared to the national average, they were just occurring when Autopilot was off — the crash rate without Autopilot engaged would have to be higher than the national average. Otherwise, where would the crashes go?

However, it isn't. The crash rate with Autopilot off (but with other safety features on) is about 4x better than the national average. And with all safety features turned off, it's still 2x better.

I don't think you can explain away the high safety record of Autopilot by claiming the crashes are concentrated in the non-Autopilot miles, because they aren't. While Autopilot miles are safer than non-Autopilot miles, non-Autopilot miles are no more dangerous than the national average (and in fact are less dangerous).

Autopilot+human is considerably safer than human alone.
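
To see why the crashes could not simply be hiding in the Autopilot-off miles, here is a rough accounting sketch. The Autopilot-on rate is the figure quoted earlier in the thread; the national average and the share of Tesla miles driven on Autopilot are assumptions for illustration only:

    # Hedged accounting sketch of the "where would the crashes go?" argument.
    ap_on_rate = 1 / 4.19      # crashes per million miles with Autopilot on (quoted upthread)
    national_rate = 1 / 0.48   # assumed national average: ~1 crash per 480k miles
    ap_on_share = 0.3          # assumed fraction of Tesla miles driven on Autopilot

    # Suppose Teslas crashed at exactly the national rate overall, and the low
    # Autopilot-on rate came purely from crashes being pushed into the
    # Autopilot-off miles. The implied Autopilot-off rate would then be:
    ap_off_rate = (national_rate - ap_on_share * ap_on_rate) / (1 - ap_on_share)
    print(f"implied AP-off rate: {ap_off_rate:.2f} crashes per million miles")
    print(f"national average:    {national_rate:.2f} crashes per million miles")
    # The implied Autopilot-off rate comes out *above* the national average,
    # whereas the reported Autopilot-off rate is well below it.

Whatever share of Autopilot miles you plug in, hiding the crashes in the Autopilot-off bucket forces that bucket's rate above the national average, which is the opposite of what the report shows.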


Even if what you argue is true, it doesn't follow from this report. Why is the accident rate of Tesla with Autopilot and all safety features off 2x better than the national average? Because there is a difference in the demographics - Tesla drivers are probably younger and more enthusiastic about driving than the average driver.

Now, if you do the same statistics on the same demographics for all non-Tesla cars, you could actually get fewer accidents than Tesla - here is where the hidden accidents went. Again, I don't have the data about this and I don't claim it is true, but without knowing this, you cannot make the conclusion you are making either.

Otherwise I agree with you - I also believe that Autopilot+human is safer than just human. Unfortunately, the usual way that people interpret these numbers is that Autopilot is safer than human...


I agree that the demographic skew probably accounts for some of the difference. Probably also that Teslas need less maintenance (esp brakes, due to regenerative braking), so are less likely to fail for mechanical reasons — although I don't think most crashes are due to mechanical failure, it should show up to some degree in the stats.

I think the argument that the Autopilot numbers are essentially fake because the true crashes are concentrated in the Autopilot-off scenarios is hard to make a case for though, given the stats on Autopilot-off driving being so comparatively good. You would need incredibly good demographic skew to account for that if the crash rate is concentrated — you don't need to just equal the average after correcting for demographic skew, you need to be considerably worse than it. So while it's not a perfect metric, I would be much more surprised if Autopilot+human was more dangerous than human alone.

I 100% agree with you that this is only an argument for Autopilot+human though. Current Autopilot without humans, at least anecdotally (I have a Model 3 with Autopilot), does not seem safe. However, I think the concern among some that Autopilot is unsafe as it currently is typically operated — i.e. with a human in the loop — is largely contradicted by the evidence.

My personal anecdote is that I feel much less fatigued by driving with Autopilot, especially on longer drives. It's imperfect, but it actually generally helps improve my alertness because I don't have to constantly fiddle with cruise control settings based on which car is in front of me or make micro wheel adjustments to stay centered in a lane; I usually take over proactively whenever it looks like a sketchy situation is coming up, like multi-lane merges with trucks for example. And when those situations happen, I'm able to stay more alert and focused because I haven't been spending my energy on the simple stuff that Autopilot is good at, so I think I end up being safer overall even when it's disengaged. I notice a pretty large cognitive difference — which was unexpected for me when I first got it, because I thought I probably wouldn't use or like Autopilot, and initially was quite mistrustful of it.

Obviously this is just a personal anecdote, and not data! But what data we have, while imperfect, seems to support that conclusion much more than it supports the opposite.


Expanding upon your personal anecdote: Is there scientific research on this matter? (Measuring alertness/fatigue on non-assisted vs assisted driving) It could be valuable.

Personally, I think driving is nearly always a waste of my time, so I avoid it when possible. Plus, I don't think of myself as a very good driver. Reading your anecdote made me think about how I feel after a long drive vs a long train ride. I cannot put a finger on it, but fatigue from constant required adjustments when driving /might/ be a factor.

More likely: I like how I can spend my free time when riding a train vs driving a car -- which is somewhat limited to passive listening: radio/music/audiobook/podcast/etc.


> Tesla drivers are probably younger

Don't younger (hence less experienced) drivers generally have more accidents? If this is true, isn't it more evidence that Tesla's safety features are helpful?


I think "younger" here is meant more as "not old." 16 year olds are less safe drivers, yes, but on account of the price they're not going to be a big part of Tesla's demographic.

Since there's no affordable model, and they're a newfangled gadget with strange refueling infrastructure and a giant touchscreen for a console, Tesla owners probably skew toward middle aged. So they'll have fewer drivers in the less safe age ranges at both ends of the spectrum.


That depends on how you define younger. Not many teenagers can afford a Tesla though so in this case younger probably means mid 30s to early 40s. That largely removes very inexperienced drivers and the elderly.


Risk by age decreases from 16-25, bottoming out from 30 to 40, before increasing again. 30-40 is, likely, a huge part of the Tesla demographic.


Your last paragraph is the most important one. Autopilot is driver assistance, and it shouldn't be a surprise that it helps. But these results compare human + computer vs human, and do not in any way indicate that the computer alone is better than a human, let alone a human + computer, which should be the benchmark.


I agree these numbers only argue for human+computer vs human, and not computer vs human.

I'm curious why you think the benchmark should be computer vs human, though. Autopilot is very clearly a human+computer system; it states you need to be alert, and it forces you to keep your hands on the wheel and apply wheel torque occasionally to make sure you're actually paying attention. Why would Tesla benchmark something they don't ship (and how could they even do that)? The question for the general public, and for Tesla owners, is whether the current system is safe. It appears to be.


These stats are often quoted by Musk and Tesla to suggest that driverless cars are here and safer than human drivers, and that the only thing preventing them is regulators. They are never quoted to imply that driver assistance makes driving safer, which I believe they would.

So, one has to compare computer vs human. In fact, more than that. One cannot compare modern technology to one from the previous century. So one must compare the computer to the best passive driver assistance that one can develop for humans. So Tesla must compare a driverless solution to their own driver assistance solutions aiding drivers, and not to the "average car on the road".


> Theae stats are often quoted by Musk and Tesla to suggest that driverless cars are here and safer than human drivers, and the only thing preventing them are regulators.

That's surprising, I hadn't seen that. Could you link to an example?


This video from 2016 (https://www.tesla.com/videos/autopilot-self-driving-hardware...) saying "the driver is there just for legal reasons, the car is driving itself"

This page (https://www.tesla.com/support/full-self-driving-computer) says "Will help us enable a new level of autonomy with regulators approval"

And many many more from Elon Musk's Twitter and various appearances.


Yeah, and in the part of that second link that directly addresses the question:

> *Will the FSD Computer make my car fully autonomous?*

> Not yet. All Tesla cars require active driver supervision and are not autonomous. With the FSD Computer, we expect to achieve a new level of autonomy as we gain billions of miles of experience using our features. The activation and use of these features are dependent on achieving reliability far in excess of human drivers, as well as regulatory approval, which may take longer in some jurisdictions.

That clearly states that there is still a technical challenge to overcome which is prior to any regulatory issues.


When has Tesla said that driverless cars are here?


This video from 2016 (https://www.tesla.com/videos/autopilot-self-driving-hardware...) saying "the driver is there just for legal reasons, the car is driving itself"

This page (https://www.tesla.com/support/full-self-driving-computer) says "Will help us enable a new level of autonomy with regulators approval"

And many many more from Elon Musk's Twitter and various appearances.


> However, it isn't. The crash rate with Autopilot off (but with other safety features on) is about 4x better than the national average. And with all safety features turned off, it's still 2x better.

You still can't figure that out from Tesla's stats. It'd have to be "compared to the same roads in the same conditions". Tesla only knows where its vehicles have been driven, not every vehicle on the road. Let's be honest, this stat is just marketing.


The total crash rate in Tesla cars is not necessarily less than that of, say, Prius cars.

Comparing Tesla cars' crash rate with that of the overall population is dishonest:

1. the drivers are a biased population

2. the age of the cars is biased


It is not "dishonest." Toyota, AFAIK, does not publish these numbers; comparing to the national average is just the best you can do. Publishing the numbers without any comparison would be silly; what does it mean to know Tesla's accidents per mile if you not only don't know it for any other manufacturer, you also don't even know what the national average is?

And while I couldn't find numbers for Prius specifically, it seems that hybrid cars are actually on average more dangerous than other cars, so I would be surprised if Tesla were not handily besting the Toyota Prius given Tesla's safety record: https://www.thecarconnection.com/news/1022235_hybrid-drivers...

Yes, there may be biases in driver population that make Tesla owners slightly more or less likely to crash. However, I think it is a very large stretch to claim that this would result in the fairly astoundingly different safety numbers.

As for the age of the car: car age is mostly a statistical factor due to safety systems in newer cars. (It is also important in terms of deaths due to safety standards like crumple zones and airbags, but we are talking about a count of accidents, not deaths; if a crumple zone has been used, it is an accident.) Tesla publishes the statistics both with safety features on (4x better than national average), and the numbers for if they have been disabled which is still 2x better.

I think if the claim that the crashes are concentrated in the non-Autopilot miles were true, and that Autopilot+human is more dangerous than human alone, it would be very hard to understand how the crash rate was still 2x better than the national average with safety features disabled and Autopilot off.


> It is not "dishonest." Toyota, AFAIK, does not publish these numbers; comparing to the national average is just the best you can do.

When you know, unequivocally, that you are missing huge swathes of information, and drawing all manner of conclusions and inferences not supported by those statistics, it's not "just doing the best you can", it's "being disingenuous and misleading with numbers".


The real world is not perfect. People have to make the best decisions they can with the data available.

I think it's better to publish the numbers that are available. Tesla can't publish Toyota's numbers, because they don't have them. I don't think they're at fault for comparing against the only benchmark available, and I think it's better to have that comparison than not. Many, many writers have claimed that Autopilot is inherently unsafe and believed it would cause massive numbers of crashes compared to traditional cars. The data shows that not to be the case.


If you make your decisions based on partial information, knowing that large parts of the information are missing, and then pretend that they aren't, you are not making the best decision. The best decision must take into account that information is indeed missing.


Really? I think I remember reading that accidents in newer cars are more rare. How does anybody know that? Can we not at least compare to similar aged cars?


> Can we not at least compare to similar aged cars?

I would love that. Do you know where to find that data though? I don't think it is published anywhere, which is why it's hard to use as a benchmark.


The quoted statistics on either side are not helpful here. See:

>> Driving to Safety

>> How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability?

>> Key Findings

>> Autonomous vehicles would have to be driven hundreds of millions of miles and sometimes hundreds of billions of miles to demonstrate their reliability in terms of fatalities and injuries.

>> Under even aggressive testing assumptions, existing fleets would take tens and sometimes hundreds of years to drive these miles — an impossible proposition if the aim is to demonstrate their performance prior to releasing them on the roads for consumer use.

>> Therefore, at least for fatalities and injuries, test-driving alone cannot provide sufficient evidence for demonstrating autonomous vehicle safety.

https://www.rand.org/pubs/research_reports/RR1478.html

Note also that Tesla's numbers are reported after Tesla cars with Autopilot have already been driven on public roads for several years. Whatever the numbers say now, when Autopilot was first released there was no evidence of it being safer than human-driven cars, only wishful thinking and marketing concerns.


Please correct for demographics. The average Tesla owner does not include poor people driving beaters with bad brakes, so there's a heck of a lot of self selection going on that is probably skewing the statistics.


As I like to point out to people when they quote this self driving statistic, student drivers have the best driving record out there. No fines, no accidents. Yet nobody would ever confuse a student driver for a good driver even if they are probably better than current self driving tech.


Why do you say that? Student drivers certainly do get into accidents, despite the fact that some driver's ed cars allow the instructor to take partial control. When my partner was in a program, the student driving the car they were in rear-ended another car.

Maybe you mean that it doesn't go on their driving record, but is that really true? The one reference I could quickly find of this happening says that the student was issued an infraction: https://www.upi.com/Odd_News/2018/04/04/Student-driver-crash...


You're missing the point: if you don't have a good sample size, then you don't have good data.

Data which also excludes situations (i.e. autopilot throwing control back to the human and counting that time as "autopilot not in use") is bad data.


I think you replied to the wrong comment, I'm talking about student drivers, not Teslas.


You're getting really into the weeds on a broader point. Suppose we give a new driver their license - but they've never driven before. Technically they have a perfect driving record. Even 1 or 2 hours in on road, still a perfect driving record.

Most student drivers will, in fact, have completely perfect driving records. That accidents happen is irrelevant - just think - those stats for the first couple of months probably look spectacular compared to the normal population.

The original comparison is all about this use of a biased dataset to draw an invalid conclusion. Except with student drivers we know that actually we shouldn't trust that conclusion, because in practice they have an insufficient amount of experience and have not likely dealt with many challenging road situations.

With Tesla, it's the same sort of problem.


> That accidents happen is irrelevant - just think - those stats for the first couple of months probably look spectacular compared to the normal population.

This is exactly the point I'm disagreeing on. I have no reason to think that student drivers do better (in, say, number of crashes per thousand hours driven) than other drivers. In fact they probably do worse. The person I replied to suggested that student drivers all have perfect records, and I'm genuinely perplexed about how that could possibly be the case... my question is genuine - I simply don't understand what they meant.

> student drivers have the best driving record out there. No fines, no accidents.


> I'm genuinely perplexed about how that could possibly be the case

I live in Europe where student drivers must be accompanied at all times while driving by a professional instructor with secondary controls at their disposal. There's no "just have an adult with a driver license dozing off in the passenger seat and you're good to go".

The instructor holds a lot of responsibility, they are responsible for everything that happens in/with the car, so they make sure to be very conservative with that brake pedal and instructions. "No accidents" may have been a small exaggeration, surely a couple of them will eventually have one. But statistically students are by far safer than regular drivers because their mistakes rarely if ever turn into an incident. The special conditions (constant supervision, lower speeds, controlled route, etc.) make sure of this.

But this just makes my point in a way my comment above couldn't: when you're lacking data even reality can be genuinely perplexing.

Tesla's statistics are misleading because this serves them. Comparing the number of accidents between a fleet of modern cars to one where the average age is 12 years, excluding city driving because nobody does that anyway, and not counting every driver made adjustment as a failure of the AP is specifically meant to give the wrong impression.

Any car can drive itself on the highway if you just tweak the steering wheel once in a while to keep it on the road. That's what, 0.1s every 10s? But saying it drives itself 99% of the time is misleading at best.

I'm sure driver assists help reduce accidents and they're the way to the future. But Tesla's "conclusion" that AP is safer than a human driver based on their misleading statistics is a flat out marketing lie.


> I live in Europe where student drivers must be accompanied at all times while driving by a professional instructor with secondary controls at their disposal.

I don't know if you're familiar with Asterix comics, but this is one of those "All of Gaul?" type situations. Belgium allows parents to teach children on a probationary license. Nowadays, the requirement is the parent has held a license for 8 years and hasn't been in an accident in I believe the last 3. The student driver also isn't allowed to drive between 10pm and 6am during weekends and the nights preceding and following official holidays.

It also allows people to drive on their own for up to 18 months with a probationary license before they take their official test - which they fail once on average. During Covid, those 18 months might have become 24 or more, because test centres were closed.


Mea culpa, I wasn't aware of countries in Europe where this is allowed and I generalized because I'd rather not share specifics. Is this a years or decades old thing?

Maybe it was just my bias making me assume it's the kind of field where you'd want professional training in a car with dual controls, not a regular car with an instructor who might have never driven after getting their license 8 years ago, or one that has an abysmal driving record but took a break for 3 years and it's "clean". And even a great driver would have difficulties avoiding an accident when the controls are on the other side of the car.


In Belgium? This system's been around since the late 90s at least.


That's a pretty weak argument regardless of your stance on self driving cars. Student driver records aren't meaningful because we don't have enough data to make a judgement. We have lots of data on self driving cars. There are other ways to cast doubt on self driving car records but this isn't one of them.


Tesla's self driving cars are students that are under constant supervision of their teachers.


This shouldn't be downvoted to light grey. The analogy is excellent.

I presume this comment is talking about student drivers where the teacher has an override steering wheel. So the student gets into accidents at a lower than average rate because every time they get close to doing so the teacher takes over.


1) Most student hours aren't in override steering wheel cars

2) For this analogy to make sense, it needs to be an average driver, not a driving teacher

3) Unless you're claiming AI and human drivers are uniquely suited to solving different types of driving, if the effect you're claiming is true, you would expect the accident rate during human/teacher-driven hours to be much worse than average because they miss out on all of the "easy" miles. So far, no evidence of this.


That's just a bad analogy though. If a student driver accumulates a million miles of driving with no accidents, they're probably a fine driver even if the teacher was in there the whole time. Conversely, you're not a safer driver if a driving instructor decides to sit in the back seat of your car tomorrow.


Not if the student gives control back to the teacher every time there is a problem the student can't handle. The real metric is "dangerous/complicated driving situations mastered" and "miles driven" is only a stand-in. A bad one if the student can deselect the few miles which had the dangerous bits.


If we're really pushing this analogy, however, then you would expect the teacher to have a much higher accident rate than the average driver, because you're claiming the teacher only does the hard stuff, or, at the very least, misses out on the majority of the easy stuff (assuming, for comparability, that the teacher is an average-competence driver).

Specifically, if you're claiming only 99/100 miles are easy and have no chance of crashes, then if a human only drives for the 1/100 miles that are hard, they should have a 100x higher crash rate than the human that drives all 100. They should probably have an even worse crash rate because of the switching cost of suddenly taking control, unless you want to make the weird argument that suddenly taking control of an autopilot car is safer.

The Tesla report says Autopilot experiences a crash every 4 million miles. With Autopilot disengaged, it's every 2 million miles. The baseline national average is every 0.5 million miles.

I can't find the perfect statistics, but one study suggests uneducated people are 4x more likely to die in a car crash, so let's give some generous rounding and say that, normalized to wealth, Tesla drivers not actively using Autopilot are at comparable levels to the average driver (1 per 2 million miles).

Unless Tesla drivers are phenomenal emergency handlers, it's difficult to explain how the non-Autopilot crash rate could be so low while also claiming Tesla is hiding the true crash rate of its Autopilot features by pushing difficult miles to human drivers, because the human drivers are achieving normal crash rates on what you claim is a much more difficult set of miles.

It's possible (probable) that Autopilot would experience a higher crash rate if it were not allowed to call in a human. But if the question is whether Autopilot reduces the total number of accidents that the drivers would otherwise experience, I'd say 'probably'.
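A rough back-of-the-envelope sketch of that argument, as a minimal Python snippet: the miles-per-crash figures are the ones quoted above, and the 1-in-100 "hard mile" split is purely hypothetical.

    # Sanity check of the "Autopilot offloads the hard miles" claim.
    # Miles-per-crash figures are the ones quoted in this thread;
    # the 1-in-100 "hard mile" split is a made-up assumption.
    MILES_PER_CRASH_AP_OFF  = 2_000_000   # Tesla report, Autopilot disengaged
    MILES_PER_CRASH_AVERAGE =   500_000   # quoted national average
    HARD_MILE_FRACTION = 0.01             # assume 1 in 100 miles is "hard", the rest crash-free

    # If every crash happens on a hard mile, the average driver's rate on hard miles alone is:
    miles_per_crash_hard_only = MILES_PER_CRASH_AVERAGE * HARD_MILE_FRACTION
    print(miles_per_crash_hard_only)      # 5,000 miles per crash, i.e. ~100x the blended rate

    # A human handed *only* the hard miles should therefore crash roughly every 5,000 miles,
    # yet the reported non-Autopilot figure is one crash every 2,000,000 miles.
    print(MILES_PER_CRASH_AP_OFF)

If the easy/hard split were anywhere near that lopsided, the disengaged-Autopilot numbers should look far worse than the national average, not better.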


It means they are probably fine to drive the courses they have been driving. However, we know with self-driving that it isn't necessarily representative of what they will be asked to drive. It is also not a particularly relevant comparison to human drivers, unless we normalize a bit for road/driving conditions.


You are artificially adding in a detail that the student is only driving limited courses, to imply there might be conditions unfamiliar to a self-driving car where we shouldn't trust its performance.

That is a better point, but also just a different point from the one originally made.


Do you mean that any individual student driver is likely to have a perfect record because they haven't driven much? That seems like it wouldn't apply here, because Teslas have, in fact, driven a large number of miles.

Or are you claiming that supervised student drivers are much safer because they, like self driving cars today, are supervised?

Maybe I'm totally missing the point of your analogy, but it seems like it doesn't clarify much.


A student driver ran into my neighbor's yard on our non-busy street.


There is also another problem with only trying to be better than the average driver: if your system is only slightly better than average, that means basically 1 out of 2 people are better than your software.

Autonomous driving should be multiple sigmas better than average, especially given the reaction times that are possible for a computer vs. a human.

If it's only as good as average, a large number of drivers would be safer driving themselves.

Basically, it should be as good as, if not better than, the most capable drivers. Average is way too low a bar to aim for.


The reaction time is actually an interesting question. Reactions in humans which do not require thought/planning can be quite quick, and the human vision system is /very/ quick, especially for movement. How fast is the vision system in a Tesla? Not only in frequency: what's the latency from vision change to decision? My guess is the Tesla is faster, but by less than an order of magnitude. I would not be surprised if it's slower in some cases. But I really don't know.


The Tesla Safety Report is so misleading:

1. The accident rate does not take into account driver age, credit score and prior safety record, or the age/value of the car.

2. Most people only turn on autopilot when driving is easy (e.g. on a highway).


Sorry, a non-American here. By "credit score" are you referring to the financial credit score or some sort of "points system" for drivers? If the former, then why would it be important to include it?


In some states, auto insurance companies use credit score because there is a correlation between insurance claims and credit score [1]. I guess you could establish a correlation between being "crash-free" and insurance claims even more easily.

[1] https://www.forbes.com/advisor/car-insurance/auto-insurance-...


Not some states but 94% of the states, according to your link.


Looks like that article may be out of date; according to Experian [1], 4 states completely disallow using credit scores to set auto insurance rates, and 3 more restrict it. However, this is still significantly more than "some states".

[1] https://www.experian.com/blogs/ask-experian/which-states-pro...


Credit score is correlated with personality traits such as conscientiousness, risk-taking, etc., which in turn influence driving safety.


While this may be true I always assumed the strongest reason for including this in insurance quotations was the lower risk of fraudulent claims from those with stronger finances.


Does any car company give a detailed normalized report like you're asking for in 1?

edit: by which I mean, if Tesla Autopilot gets into more accidents than rich white yuppies but fewer than the national average, it's not entirely obvious to me whether the conclusion is that rich white yuppies shouldn't use Autopilot or that Autopilot isn't safe enough. It also suggests it's very useful for poor minorities.

Location and local driving conditions are the only real differentiators where this might make a difference to decision making. Those are going to be correlated with the demographics of the person driving them, but are weak proxies at best.


Car company? Probably not, but they'd be the wrong organisations to ask.

Insurance companies certainly would know a lot more detail.


Do any other car companies run a comprehensive surveillance system on all of their vehicles?


I'm not a car expert, but I think yes, it's pretty common in newer cars.


Yes. OnStar is the most well known, but most of the other major automakers have similar capabilities in at least part of their range.


Does the safety report account for vehicle age and price? Because I imagine there's a difference in accident-free miles if you were to compare a new Mercedes-Benz S-Class to a 15-year-old Camry.


No, it doesn't; that's one of the main criticisms, along with comparing highway miles to city miles.


And Tesla owner demographics (presumably mostly affluent + older) with "everyone".


The Volvo XC90 had no fatal accidents from 2002 to 2018. Beat that for starters.

https://www.expressandstar.com/news/motors/2018/04/17/no-fat...


What about the people that the XC90 rams into?


That famous XC90 was modified by Uber, and Volvo's safety features were disabled. Literally the only XC90s that have killed are the ones driven by artificial neural networks.

https://www.bloomberg.com/news/articles/2018-03-26/uber-disa...


> I'm not saying that Autopilot isn't safer than a human driver

I'm saying that Autopilot isn't safer than a human driver. The fatal accidents involving Autopilot so far, mostly crashing into stationary objects, were easily avoidable by a marginally competent driver. Tesla is playing with fire.


There are definitely accidents with Autopilot that could have been avoided by humans, but we need to compare those against the cases where autopilot prevents accidents that are unlikely to be avoided by humans, like this one: https://youtu.be/bUhFfunT2ds?t=116

It's pretty clear that human paying attention + autopilot is safer than either: 1) human only 2) autopilot only.


I would agree, if the Autopilot didn't need a significant amount of human attention to avoid a simple, potentially deadly crash. Your equation should be rewritten to:

human paying attention to driving + human preventing Autopilot from crashing + Autopilot

I seriously doubt this divided attention produces a safer system with today's technology.


> but we need to compare those against the cases where autopilot prevents accidents that are unlikely to be avoided by humans

Right, but do any examples of this exist, ever?

Your link is clearly not an example. Any aware driver would've done the same.

I'm sceptical there can ever be a scenario where Tesla Autopilot can outdo an aware, conscious driver.


Some of the examples in the video are actually pretty impressive. I'm not sure I would have seen some of these cars coming. But they only show that a human driver with autopilot assist is potentially better than a human alone, i.e. a human driver with autopilot backup. I would be surprised if that were not better. But the way autopilot is supposed to be used is that it's only autopilot with human backup. (And I would argue that the "with human backup" often has a "because of law, not necessity" undertone.)


I totally agree with your argument.

But playing the role of the devil's advocate here, one might argue that the major benefit of autopilots is that data from the accidents that do happen will be accumulated, so that they don't happen again in the future.

When comparing accidents of manual vs. automated driving, manual cases don't have any learning effect (let alone communication of it and availability of that knowledge to other human drivers). Automated driving, on the other hand, has the theoretical benefit, if the data is openly shared, that the edge cases leading to accidents go asymptotically to zero over time.

But in order to achieve that there must be a law that enforces the use of the same dataset, and that defines this dataset as public domain so that all autopilots can learn from that knowledge.

This was actually my initial hope for OpenAI. And oh boi have I been disappointed there. Nothing changed when it comes to reproducibility of training or availability of datasets, and most of OpenAI's research seems to be spent on proprietary concepts.


> manual cases don't have any learning effect

Not true. The traffic and safety regulations are in part based by analysis and lessons learned from crashes. The infrastructure, e.g. road profile, speed limit, signage, etc. benefits from the same.


So we should simply make more bumpers on the side and limit speeds to 10km/h to prevent crashes?

I was not talking about changing environmental constraints. I was talking about changing the perception and measurement of inputs in human drivers.


Environment constraints and performance of human drivers are not independent. Your argument suggested that they are.


The problem of people overestimating the capability of the car or just losing their attention when Autopilot is engaged could easily wipe out whatever wins you do get.


A valid stat comparison would be the average accident rate of all cars with autopilot vs. the same for all cars without, both with a current market value of $30k - $50k. This would equalize many things.

I, personally, won't trust some autopilot scripts, at least in this decade.


I don't want to derail the conversation too much, or distract from the excellent points you have made... But how is this Simpson's paradox?

Simpson's paradox is easier to understand geometrically.

https://en.m.wikipedia.org/wiki/File:Simpson_paradox_vectors...

L1 and L2 in the diagram have smaller slopes than B1 and B2, and yet the slope of their sum is higher. It's not hard to characterize when this happens. So the canonical example is that a drug might be more effective against all sub-cases (e.g. mild, severe illness) and yet appear less effective overall.

You, on the other hand, seem to be describing selection bias.


Simpson's paradox can be thought of as selection bias as well. To make the paradox clear, consider the following simplified scenario: autopilot has more accidents per city mile, and more accidents per highway mile, but because highway miles have fewer accidents, and autopilot is tested on a much higher proportion of highway miles, on average it has fewer accidents per mile overall.
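A toy example of that scenario, with every number invented purely for illustration:

    # Simpson's paradox with made-up numbers: autopilot is worse per mile
    # in the city AND on the highway, but because its miles are mostly
    # (safer) highway miles, it looks better overall.
    crash_rate = {                       # crashes per mile, purely illustrative
        ("autopilot", "city"):    0.0020,
        ("human",     "city"):    0.0010,
        ("autopilot", "highway"): 0.00020,
        ("human",     "highway"): 0.00010,
    }
    miles = {                            # how each drives its 1,000 miles, purely illustrative
        ("autopilot", "city"):    10,
        ("autopilot", "highway"): 990,
        ("human",     "city"):    900,
        ("human",     "highway"): 100,
    }

    for driver in ("autopilot", "human"):
        crashes = sum(crash_rate[driver, road] * miles[driver, road] for road in ("city", "highway"))
        total = sum(miles[driver, road] for road in ("city", "highway"))
        print(driver, crashes / total)
    # autopilot: 0.000218 crashes per mile  <- lower overall, despite being worse in both categories
    # human:     0.00091  crashes per mile

The per-category comparison and the overall comparison can genuinely point in opposite directions, which is why the mileage mix matters so much.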


To avoid #2, Tesla specifically counts any accidents within 5 minutes after autopilot disconnect as an autopilot accident.


Five seconds.

"To ensure our statistics are conservative, we count any crash in which Autopilot was deactivated within 5 seconds before a crash, and we count all crashes in which the crash alert indicated an airbag or other active restraint deployed."

At the bottom of:

https://www.tesla.com/VehicleSafetyReport


But they'll still release press releases saying "The vehicle had warned the driver of inattentiveness and to keep his hands on the wheel"... and oh-so-conveniently ignore "... once, fourteen minutes before the accident" (which, knowing their system now, means that was the last warning, and the attention system hadn't been tripped between then and the accident).


That's an interesting problem. The right answer mostly depends on the distribution of crashes at time t since deactivating autopilot. I would personally guess the relevance of autopilot fades to near 0 once you're 30 seconds since deactivation for 99.9% of crashes.

5 feels a little too aggressive, but would probably capture the majority of the true positives. I would have picked 10-15 seconds based on my gut.
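One rough way to make that gut feeling concrete is to assume the chance that a post-disengagement crash is still "Autopilot's fault" decays exponentially with time; the 5-second half-life below is a made-up assumption, not anything Tesla publishes.

    import math

    def fraction_captured(cutoff_s, half_life_s=5.0):
        """Share of Autopilot-attributable post-disengagement crashes that a
        reporting window of cutoff_s seconds would count, under an assumed
        exponential decay of attributability."""
        lam = math.log(2) / half_life_s
        return 1 - math.exp(-lam * cutoff_s)

    for cutoff in (5, 10, 15, 30):
        print(cutoff, round(fraction_captured(cutoff), 2))
    # 5 s -> 0.5, 10 s -> 0.75, 15 s -> 0.88, 30 s -> 0.98

Under that (invented) decay, a 5-second window catches about half of the attributable crashes and a 15-second window catches most of them, which is roughly the intuition behind preferring 10-15 seconds.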


That depends. If you're taking over from autopilot after several hours of passively resting behind the wheel, perhaps it will take you more than 30 seconds to accustom yourself to the task.


What situation could you possibly be in where it's autopilot's fault, but it takes more than 30 seconds to cause a problem AND it was necessary for you to take control?


Car steers onto opposite lane on interstate at night/no traffic?


You're not wrong, but to my knowledge, nothing like that has ever happened, and it would have been very newsworthy if it had, even absent of fatalities.


Does that really avoid #2? My understanding of that situation was this:

1. The driver senses an impending accident or dangerous situation, so they disengage autopilot.

2. The driver personally maneuvers the car so as to avoid any accident or crash.

3. The driver re-engages autopilot afterwards.

In this scenario, there is no accident, so there's nothing for Tesla to count either way. The idea is that there could have been an accident if not for human intervention. Unless Tesla counts every disengagement as a potential accident, I don't really see how they could account for this.


You need to look at the whole system. The end result (of autopilot + human) is no accident.

If the human prevents 99% of autopilot could-have-been accidents, and as a result, 10 people die per X miles driven whereas through purely human driving 20 people die, then driving with autopilot is safer.

Unless you're trying to answer "is autopilot ready for L5", this is the right metric to look at.
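A minimal sketch of that whole-system bookkeeping, with every figure hypothetical:

    # Whole-system comparison: Autopilot plus a supervising human vs. human-only driving.
    # All numbers are hypothetical, chosen only to illustrate the argument above.
    miles = 100_000_000

    ap_alone_would_cause = 2_000      # accidents Autopilot would cause with no human backup (hypothetical)
    human_catch_rate     = 0.99       # share of those the supervising human prevents (hypothetical)
    ap_plus_human = ap_alone_would_cause * (1 - human_catch_rate)   # 20 accidents

    human_only = 40                   # accidents purely human driving would produce (hypothetical)

    print("autopilot + human:", ap_plus_human / miles)
    print("human only       :", human_only / miles)
    # If the first rate is lower, the combined system is safer in practice,
    # even though Autopilot on its own would not be.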


> If the human prevents 99% of autopilot could-have-been accidents, and as a result, 10 people die per X miles driven whereas through purely human driving 20 people die, then driving with autopilot is safer.

No, because correlation isn't causation.

In particular, it's plausible that autopilot only gets used in situations where it's easy to drive and accidents are less likely. This would erase any hope of assessing autopilot safety by looking at simple statistics like the ones you mention.


What they of course should do is count any manual intervention as a possible autopilot accident.

When I say possible, what I mean is they should go back, run the sensor data through the system, and see what autopilot would have wanted to do in the time that the human took over.


There are a couple reasons why your criteria would get almost entirely false positives.

First: Most Tesla owners disengage autopilot by tapping the brakes or turning the wheel. This is typically more well-known and more convenient than the official way to disengage autopilot (push right stalk up, which if you do twice can shift to neutral).

Second: Tesla autopilot drives like someone taking a driving test. It accelerates slowly, signals well before changing lanes, makes sure there is plenty of room in front, etc. In my experience, the vast majority of interventions are to drive more aggressively, not to avoid a collision. I think maybe twice I've disengaged to prevent a collision. In those cases it was to avoid debris in the road, not another vehicle. (The debris was unlikely to damage the car, but better safe than sorry.)


> the vast majority of interventions are to drive more aggressively, not to avoid a collision

If the intervention is to avoid a collision, then the autopilot would have crashed without it, and it should be deemed an autopilot accident.


Those interventions are to get somewhere faster, not to avoid a collision. If anything, such interventions tend to increase the risk of collision, not decrease it. Training autopilot to behave more like humans in those situations would make it less safe, not more.


If the intervention was not to avoid a collision, they review the footage and find that autopilot would have done something safe, and therefore it is not deemed an autopilot accident.


That's a bit speculative, since your actions will affect the actions of others, but I agree that, done correctly, it would give the best picture of autopilot safety.


Can you explain how that avoids it? Not sure I understand.


It avoids a variant of point 2: the case where the driver disengages the autopilot to avoid the crash and fails. It avoids chalking that crash up to human error. It does not address your initial point, that the human's accident avoidance keeps the crash (and thus the statistic) off the N miles of autopilot usage before it is disengaged.


It's really, really annoying that even smart people parrot 'self driving cars are safer than human ones'.

Self driving cars are orders of magnitude more dangerous than human drivers. It's absurd to say otherwise, and to do so requires a level of stupidity that can only be explained by dogma.


Can't we easily avoid these pitfalls by just comparing accidents/km on autopilot-equipped cars vs others? And disregard whether or not the autopilot is engaged.


This is a tangent about the Simpson's Paradox that may or may not be relevant to Autopilot. Specifically about the archetypal example given in the Wikipedia article, "UC Berkeley gender bias":

Even if the deeper data analysis showed a "small but statistically significant bias in favor of women" in terms of admission rates in individual departments, it doesn't prove that there isn't another kind of bias behind the overall admission rates (44% for men, 35% for women). Specifically, why doesn't the university rebalance department sizes, so that all departments are similarly competitive? It would result in the overall male and female rates converging. It would also make a lot of sense from a supply and demand perspective. It is entirely possible that there was no urgency or desire to do so because of bias on the part of administrators, who were mostly male.

Might the quickness to dismiss the issue as a Simpson Paradox reflect another bias?


I believe autopilot's safety features were disabled, so these statistics are meaningless. I'm not talking about simply forcing it into autopilot, but disabling autopilot's ability to control the vehicle's acceleration. The reason I think this is the case is largely the speed the vehicle was traveling, which is... unlikely under autopilot, which limits speeds to 5 MPH over the speed limit.

If you are pushing on the gas pedal, the car can only steer and has no control over speed.

This weird sort of hybrid driving, where the car is controlling the steering and the driver is controlling the speed, puts the car in an untenable situation. It is a driver with no brakes and no control over the gas pedal.

Maybe Tesla should disable this mode entirely. Tesla (very reasonably) limits speeds to 5 MPH over the speed limit when you are in Autosteer mode, so lots of people like the ability to bypass the limit. Personally, I very much like being able to push the speed when it's reasonably safe to do so. If you are operating the system as designed, it's no less safe than cruise control.


Tesla does not limit the speed to 5 mph over the speed limit.


It's not clear exactly what rules it uses, but there are definitely some roads where the set speed is limited to 0 above the speed limit when autopilot is engaged.


This is consistent with my experience:

> - On highways: Autosteer and Traffic Aware Cruise Control have speed limits of 90 mph.

> - Off highways: Autosteer is no longer restricted to 35 mph. Autosteer has a speed limit of 5 mph faster than the detected speed limit, unless you’ve specified a lower speed limit offset.

> - If Model S does not detect a limit, then the maximum speed the vehicle can drive is 45 mph.

https://insideevs.com/news/332446/tesla-autopilot-update-bum...

We know it wasn't a divided highway. Even if they were on a 2-lane highway, the speed would have been limited to 60 MPH. They wrapped the car around a tree, destroyed the integrity of the battery, and both passengers were disabled enough that they couldn't escape the car.

The car doesn't peg the accelerator under autopilot, so even getting to 60 MPH in "a couple hundred yards" seems unlikely unless someone was applying the gas pedal.

I suppose the alternative explanation is there was a malfunction which caused uncontrolled acceleration and ejected the driver into the back seat?


Ahh, that explains it, my "speed limit offset" is set to zero.

One could speculate that maybe the guy 'driving' from the passenger seat tried to plant his boot on the brake and got the accelerator instead?


In my Model Y, there is no way the passenger could reach across to put his foot on the brake or gas unless they were straddling the console.



The linked article didn't say what model it was other than the fact that it was a 2019. Even so, that console only looks marginally easier to get your foot over. Definitely not a maneuver you could pull off in the heat of the moment.


When Autosteer is engaged, it limits the speed to 5 MPH over the speed limit unless you are holding the gas pedal down. If you hold the gas pedal down, the automatic braking and speed controls are not active. At that point, the car isn't in control.

If you are on the highway, it is different. But this car was in a residential area.

