Fatalities vs. False Positives: The Lessons from the Tesla and Uber Crashes (hackaday.com)
116 points by szczys on June 18, 2018 | 150 comments



Great points regarding aligning precision/recall of AI systems with actual human supervision capabilities. One quibble: I hate seeing articles uncritically repeat Uber's line that a human driver could not have reacted in the same scenario. The dashboard cam footage they released does not accurately represent either human or car perceptual capabilities [1], and folks recreating the scene have shown that it doesn't even look like a good (or unaltered) example of what a dashcam should have perceived [2].

[1] https://ideas.4brad.com/it-certainly-looks-bad-uber [2] https://dgit.com/residents-recreate-crash-uber-car-case-5616...


>> Great points regarding aligning precision/recall of AI systems with actual human supervision capabilities.

May I nitpick? The article discusses False Positive Rate and False Negative Rate. These are, respectively, the complements of True Negative Rate and True Positive Rate a.k.a. Sensitivity, a.k.a. Recall.

Precision is not the complement of the False Positive Rate; Specificity is, as in Sensitivity/Specificity.

These metrics tend to be reported in pairs: TPR/TNR, FPR/FNR, Precision/Recall, Sensitivity/Specificity, which confuses the issue, but not all pairs are the same.
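
For anyone keeping the pairs straight, here is a minimal sketch (with made-up confusion-matrix counts for a hypothetical binary "obstacle present?" classifier) of how these metrics relate:

    # Hypothetical counts for a binary "obstacle present?" classifier.
    TP, FP, TN, FN = 90, 40, 860, 10

    tpr = TP / (TP + FN)        # True Positive Rate = Sensitivity = Recall
    tnr = TN / (TN + FP)        # True Negative Rate = Specificity
    fpr = FP / (TN + FP)        # False Positive Rate = 1 - TNR
    fnr = FN / (TP + FN)        # False Negative Rate = 1 - TPR
    precision = TP / (TP + FP)  # Precision (PPV): not the complement of FPR or FNR

    print(f"TPR/Recall={tpr:.2f}  TNR/Specificity={tnr:.2f}")
    print(f"FPR={fpr:.2f}  FNR={fnr:.2f}  Precision={precision:.2f}")

Note that precision shifts with how rare obstacles are in the data, while TPR and TNR do not, which is another reason the pairs aren't interchangeable.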


To add to that, since I see it nowhere mentioned explicitly, neither in this thread nor in the article: the theoretical framework here is Signal Detection Theory (SDT) [1]. The problem at hand can be nicely visualized with two overlapping bell curves for the signal present/absent situations [2].

[1] https://en.wikipedia.org/wiki/Detection_theory [2] https://jdlee888.shinyapps.io/SDT_Demo/
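
As a rough illustration of the SDT picture (a toy sketch, not tied to the article): model the detector's evidence as two unit-variance Gaussians separated by d', pick a criterion c, and the hit rate and false alarm rate fall out directly. Sliding c is exactly the false-positive/false-negative knob being discussed here.

    # Toy Signal Detection Theory sketch: "object absent" evidence ~ N(0, 1),
    # "object present" evidence ~ N(d_prime, 1). A single criterion c decides.
    from statistics import NormalDist

    d_prime = 2.0  # hypothetical sensitivity of the detector
    for c in (0.5, 1.0, 1.5):  # hypothetical criterion placements
        hit_rate = 1 - NormalDist(mu=d_prime, sigma=1).cdf(c)
        false_alarm_rate = 1 - NormalDist(mu=0, sigma=1).cdf(c)
        print(f"c={c:.1f}  hits={hit_rate:.2f}  false alarms={false_alarm_rate:.2f}")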


Two points: first, these frameworks don't really account for driving into concrete bollards vs. a shaky ride. Second, none of these frameworks estimates production performance well; time after time after time I see classifiers that did "95%" in test and "88%" in prod and "that's pretty good!".

My point is that we are really bad at estimating performance of classifiers. Really bad, and mostly we are pretending it's fine.


A better source for the example videos showing the light levels at that point is the arstechnica page being cited: https://arstechnica.com/cars/2018/03/police-chief-said-uber-...


There's no way of knowing if a human driver could have reacted properly in this specific scenario. However if you live in an area where deer frequently cross the road, you'll know that collisions with deer occur regularly; that would indicate that it's easy to hit things that cross your path, even when you should be able to see them.


> There's no way of knowing if a human driver could have reacted properly in this specific scenario.

What? Why? The specific scenario is recorded, the surrounding conditions are known and relatively easy to reproduce, and there are both norms and requirements about human capabilities when driving a car. If we can't attempt to determine whether a human could have responded in this case, I'm not sure how we could determine anything about any kind of accountability.

People obviously make mistakes too -- I've hit a deer before, unfortunately. But when the consequences are high, we look into culpability about whether someone should have done more to prevent the mistake. Driving under the influence, without a license, etc. are all things that can make someone responsible for the big outcome of a relatively easy-to-make mistake. Accepting Uber's "could you react to this video I'm showing you?" defense is skipping all the steps that would let us assign responsibility or even learn / improve from the incident.


The miracle on the Hudson comes to mind in this case. After the event there were some wondering if the crew could have safely glided to an airport. Investigators painstakingly recreated the event in a simulator and ran several trials. The issue with the experiment? The crew in the simulator knew what was coming. They were able to make the field because they were already primed to turn for the field immediately without any time to think.

I’m sure you could do a blind study of this scenario, but people participating in a driving study in a simulator are not going to drive as they do in real life. We see it every single day in the airline industry; it's why we're checked and observed at random intervals while out flying on the line. The surprise element gives you no time to prepare or study up to be in perfect shape, the way you can when you're headed into a training event.


A rather large percentage also failed on the first attempt.

Which is a more honest representation. Humans might have hit someone crossing the street there 1 in 100 or 1 in 10,000 times, but either way it is still possible. So it's really a question of the relative rates at which a person or an Uber car would avoid hitting the pedestrian in a similar situation.


The comment I was responding to specifically mentioned the dash-cam video, which I hope we can all agree would not be representative of what the operator could see. Nor would any of the on-board sensors be relevant. I suppose they might provide enough information to do a re-enactment.

At no point did I suggest that this would be an excuse to absolve Uber of any responsibility.


I disagree to a minor degree, because humans can have widely differing night vision ability.


A few years ago I hit a deer. Had the deer been waiting to jump into the road from a sidewalk instead of from behind a bush, I would not have hit it. The reason I hit the deer is that I didn't know the deer was there.

When people hit deer, it's not because the deer was casually walking across the road like a human jaywalker. It's because the deer jumps into your path in an instant. As somebody who's hit a deer and done plenty of urban driving with no shortage of jaywalkers, I can assure you there is no fair comparison. The only sort of situation that might be comparable is if somebody is trying to commit suicide by car and deliberately leaps in front of you, or if a small child concealed by a car parked by the side of the road runs out into the road suddenly. But those scenarios are unrelated to this incident.


Not in locations with substantial street lighting. It's usually in an unlit road that you get the danger (especially if you aren't using high beams due to an oncoming car).


True, but a human would at least have applied the brakes, which might not have avoided the collision but would have reduced the injuries. We can never know: maybe a human would have been able to stop. Or maybe a human could have braked enough to prevent death. Or maybe the exact outcome would have happened even with the brakes applied.


In fact, as anyone who has read the NTSB report would know, a human was responsible for braking in this situation and FAILED to do so in time. The above discussion is irrelevant and results from ignorance of the facts.

From the report: "At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision (see figure 2).[2] According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator."

For anyone interested in actually being informed about this incident. https://www.ntsb.gov/investigations/AccidentReports/Pages/HW...


There is going to be a reaction time difference between someone actively driving the vehicle and someone who is monitoring an automated system. It is human nature to get bored and not pay close enough attention. This is exactly why some of the Tesla autopilot crashes have happened in addition to the system screwing up.


Yes. I've pointed that out previously.

NASA has a simple tool for studying this, the Multi-Attribute Test Battery.[1] It uses a Windows PC and a joystick. The person being tested uses the joystick to center a marker in a square. The marker tends to drift away, and has to be corrected. This simulates the most basic task of flying - flying straight and level.

There's an "autopilot", which can be turned on to keep the marker centered. The user can then let go of the joystick.

But sometimes the "autopilot" fails, and the marker drifts out of the square. How long it takes for the subject to notice is measured. That's a good approximation for Tesla's semi-automatic driving system.

There are lots of papers on this. It's well understood in aviation that takeover time after a fault is seconds to tens of seconds.

[1] https://matb.larc.nasa.gov/


A human could also have attempted swerving to avoid the accident. The question that I feel can't be answered is how visible the pedestrian would have been under the conditions and if there would have been time to take action.

Even braking to half the recorded velocity would likely have been fatal.


> Even braking to half the recorded velocity would likely have been fatal.

This is not so. Half would have been under 25 mph, at which impact speed pedestrian collisions are much less than 50% fatal. See http://www.humantransport.org/sidewalks/SpeedKills.htm


The human operator was responsible for braking in this situation and failed to do so.

https://www.ntsb.gov/investigations/AccidentReports/Pages/HW...


What's the point of a self/assisted driving vehicle then? If it doesn't compensate for human error, then it's useless, since humans can already operate vehicles.


The human operator was monitoring an automated system, and it can take tens of seconds after taking control back before the operator is at full capacity.

Additionally, the automated system clearly recognized the situation and, 1.3 seconds before impact, determined it should be engaging the emergency brakes. However, Uber disabled automatic emergency braking, relying instead on a human operator who needs several seconds to reorient before performing an emergency brake.


Semi-autonomous systems like this have the effect of reducing 'driver' attentiveness.


That was my point: human operators are responsible for not hitting deer, yet it happens every day. The only reason it doesn't happen with people more often is that people are better than deer at avoiding running in front of cars.


>That was my point, human operators are responsible for not hitting deer

How is this even comparable?

A deer jumping in front of a car is not the same as a person walking a bicycle slowly across several lanes (and almost making it to the other side) on a rather well-lit road.


Because the human is only monitoring the car rather than actively driving it; as other comments mention, it can take dozens of seconds to regain full awareness.


Dozens of seconds? Have you ever even driven a car? lol


Yes, dozens of seconds. This is the scenario of "human was actually doing something entirely different because the car is self driving"

This is NOT classical reaction time.


This set of trade-offs in self driving cars is exactly the same one that CAD (computer aided detection/diagnosis) systems have been making in medicine for decades. In many cases both type I and type II errors will (with statistical certainty) kill people over enough iterations.

It's not an easy problem, but the best you can do is demonstrate that the system improves on the standard of care. In other words, overall the trade-off has better outcomes than not using the system. The same will be true for self driving cars if/when they reach mainstream use. The important thing is that the average performance is improved significantly.

It's worth noting that, like the CAD systems, focusing too much on fixing an individual error can cause a degradation of the overall system. As these things get better and better, you'll have to be much more careful about applying non-obvious fixes, or risk a much worse outcome than doing nothing at all.


Imagine a CAD system that was so fundamentally unreliable that it ignored positive hits because the high rate of false positives otherwise would make the system useless. Instead, the CAD system came with a warning printed on the instruction manual that a doctor had to review every scan anyway.


In the US, anyway, doctor review is the standard of care for most of these things. Not necessarily for performance or technical reasons (although it may be) but for liability. I suppose if you know this is the case, you may tune differently on your ROC curve based on joint performance, although I don't know personally of a system that does that explicitly.

It isn't really interesting to talk about performance (what I think you mean by "unreliable") in some absolute sense - it's interesting compared to current performance only.

So - sure, if the automatic driving systems are significantly enough below human performance in general it's premature to use them on public roads. However, if they are approaching human performance (again, in general), they aren't. And you may see specific new failure modes coming up because of that, but focusing on those modes in isolation is a mistake.

I have no knowledge of where the overall performance is for self-driving cars, and hence no opinion as to whether or not we are being premature. I'm just pointing out that the trade off is fundamental, and will effectively never go away regardless of improvements in classifiers.


If you had a CAD system that could not only diagnose but take immediate action (through surgery or otherwise), and on occasion this system took actions that appeared to a human being as "completely illogical" and those actions killed people, it would be hard to explain to a person's family or to a jury that "this reduces fatalities on average", even if a company had nice charts that claimed to show this.

That's sort of the problem one has in explaining self-driving cars, along with the problem that it hasn't yet been shown they are capable of reducing fatalities.


    it would be hard to explain to a person's family or to a jury that "this reduces fatalities on average"
Yes, that's true. However it is a problem that has to be solved if you want the best outcomes. Anything else will be effectively making things worse for people in general.

This isn't a new problem at all; the saying "hard cases make bad law" encodes a similar sentiment from an area with far less empirical output.


> However it is a problem that has to be solved if you want the best outcomes.

Or you could invest some more and get a system that does not act crazy sometimes.

Yes, assuming the car company is honest, this is a sub-optimal strategy. But blind trust is not an optimal strategy either. There are plenty of costs borne out of a lack of trust, and this is just another one.


I think you missed the point I was trying to make. After all, people act “crazy” sometimes, and we have constructs to understand it. It is unreasonable to expect that we will be able to create a computer that doesn’t make surprising calls sometimes. The trade-off between type I and II errors is fundamental; expecting to engineer our way out of it is naive.

Now that isn’t to say I don’t believe we can have higher performance systems. But they will also have a version of this behaviour. Making them behave worse to hide some of the cases won’t make it go away either.

Nobody was suggesting blind trust.


> I think you missed the point I was trying to make. After all, people act “crazy” sometimes, and we have constructs to understand it. It is unreasonable to expect that we will be able to create a computer that doesn’t make surprising calls sometimes.

If a person acts crazy, we certainly do blame them. However, the point is people accept the position "if a computer is irrational no more often than a person, it's no worse a situation" in the abstract but have a harder time accepting a situation where a computer irrationally kills someone than when a person irrationally kills someone.

Moreover, having a person make the mistake "gives you someone to blame." If a company builds a system and that system can be shown to gratuitously kill a person, the amount of liability a jury is inclined to hand out may be quite large, since it is spread across a large number of people.

Also, in the self-driving car situation, you really are going to have situations where blind trust is required.


In the abstract it isn’t “no more irrational”; the target is “on average, clearly better”. So while I agree the psychology is different, it’s entirely plausible that we will have to figure out how to communicate this in order to have measurable improvement.


Welcome to breast cancer screening.

The current systems are so good at detecting tiny anomalies that probably don't matter that their usage has had to be tuned down.


You have to be very careful with false positive rates in something like breast screening, as the work-up rate has definite negative consequences. Radiologists have too high a false negative rate on such screens, so you're trying to achieve a balance. To be really blunt about it, you are trading off complications/deaths from missed cancers against complications/deaths from biopsies, but these categories occur at very different rates.


Just beep for every positive, false or otherwise, before braking if no action is taken by the human operator. Also flash the brake lights so the driver of the car behind sees it and gets ready to take action.


So an object is detected, it beeps and flashes the brake lights, then if the driver does nothing, it brakes? What does the driver need to do to have it not brake? What about when there isn't sufficient time between object detection and anticipated impact to warn the driver? What frequency of beeping would be preferable to just driving the car manually?


Yes, it brakes if the driver does nothing, preferably gently if the distance to the obstacle allows it. If the driver brakes instead, the human operator takes over; even a soft brake application for n seconds could turn off the hazard warning. If there isn't sufficient time to warn the driver, it should just brake automatically to avoid the collision; this is what AEBS/AEB collision avoidance systems do. The beeping frequency could increase as the distance to the object shrinks, just like parking sensors. Continuous beep = accident imminent.
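
As a toy sketch of the escalation logic being proposed (all thresholds, timings, and the time-to-collision trigger are invented for illustration; real AEB systems are far more involved):

    # Toy escalation logic for the warn-then-brake idea above. Thresholds and
    # timings are made up; this is not any production AEB specification.
    def respond(time_to_collision_s: float, driver_braking: bool) -> str:
        if driver_braking:
            return "driver has taken over: cancel warning, no auto-brake"
        if time_to_collision_s < 1.0:
            return "no time to warn: full automatic emergency braking"
        if time_to_collision_s < 3.0:
            return "beep faster, flash brake lights, brake gently if still ignored"
        if time_to_collision_s < 6.0:
            return "beep and flash brake lights to warn driver and traffic behind"
        return "keep monitoring"

    for ttc in (8.0, 4.0, 2.0, 0.5):
        print(ttc, "->", respond(ttc, driver_braking=False))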


The standards for self-driving cars are MUCH higher than for computer aided diagnosis. Choice of medical tools is made by trained, rational doctors largely hidden from the public. Self driving cars’ failures will play out in front of normal people who often have emotion-driven subrational decisionmaking processes.


Your point about public visibility stands, but there's plenty of evidence that doctors are irrational just like the rest of us.


Hard to believe these emotion-driven, subrational people with beliefs like "autonomous cars should not be on the road with the braking features off because the ride is too choppy."


I'm not claiming they are identical problems. Just pointing out that the fundamental type I/type II trade-offs for the classification part of this are just that - fundamental and unavoidable. The analysis of performance itself is by nature very similar. At least, if you do it properly.

How you communicate that data and to who is different, I agree.


I don't agree that that's the discussion. The discussion is the knob that ranges from "inconvenient" to "occasionally deadly"

Imagine being dead because drivers were tired of their overly cautious vehicles.


We're asking a lot from this kind of software (for good reasons), but humans commit hundreds of leaps of faith of various severity on the roads daily -- failure to yield, failure to maintain a safe following distance, assuming other drivers immediately adjacent to you, like in lanes that are significantly slower than yours, will keep driving safely and carefully. Urban, peak-hour traffic on most US freeways is an exercise in collective insanity, riding people's bumpers at 55+ mph (and often significantly higher), leaving little room to stop for incidents [1][2][3][4] or debris.

But only a small subset of these situations result in significant accidents, because unimpaired humans, largely, have some intuition for self-preservation. On the other hand, we're expecting an algorithm coded by humans to perform better than a complicated bioelectric system we barely understand.

Waymo's self-driving program has opted for thoroughly understanding its environment, which is why their cars drive in a manner that bears no resemblance to how humans actually drive. We as a society will eventually have to reconcile the implications of that disconnect.

[1] Unsafe lane change in traffic with different lane speeds: https://gfycat.com/CleanGleefulArawana

[2] Tailgating causes crash, swerve, multi-vehicle accident: https://www.youtube.com/watch?v=j0rj2sZ1KA4

[3] Inattention to incident causes further accidents: https://www.youtube.com/watch?v=hZL6OKwQGew

[4] Inattention in slowing traffic causes accident: https://www.youtube.com/watch?v=Ff7wbSwTuEk


Human drivers routinely do things that are not justifiable by vehicle physics (e.g. stopping distances) because they have a comparatively sophisticated and unconscious understanding of our physical world, how situations may develop, and how other human drivers or pedestrians might react. Self-driving is a much harder, and less likely to be solved, problem than the perpetual hype cycle is willing to admit.


Agree with minor nitpick

>human drivers routinely do things that are not justifiable by vehicle physics (e.g. stopping distances) because they have a comparatively sophisticated and unconscious understanding of our physical world, how changes may develop and how other human drivers or pedestrians might react

Whatever "we" do is justifiable by implication because what is and isn't justifiable is set by consensus. Traveling 85mph with 1.5sec following distances might not leave much room for error but it's still "justifiable" because what's "justifiable" is a function of what everyone considers ok to do.

Self-driving systems don't just need to kill fewer people; they need to kill fewer people in edge cases that are so trivially avoidable to humans that the only drivers who have similar kinds of accidents are incapacitated.


Mostly because they use the heuristic of "see, nothing happened last time" and do it again but with a little less safety margin.


It's partly that, but also a matter of human drivers trusting each other to some extent. In congested areas, if you leave the recommended safe stopping distance then another driver will always cut in front of you. So people adapt by closing up the gaps and hoping the driver in front doesn't do anything stupid.


I find this one interesting. I rarely drive in the city, so when I do the distance in queues seems pretty crazy. But at some point I stopped caring that people cut in in front. The active cruise control keeps my distance now (apart from when we're stopped) and often people will use that gap. But... so what? Let them and keep your distance again. It really improves the experience.


If you try to account for it, it's not much time at all. And you're improving traffic flow by providing a means to change lanes (a bit more) safely. Most people "cutting in front" are actually just in search for a spot.


Are you saying that even if AI can't react as well as humans, the roads would potentially be safer with more AI drivers because of the cumulative effect of all the little safety rules they follow that human drivers do not?


My prediction is that if self-driving cars commingle with human-driven cars on the same roadway, and are common enough to show up during your commute, road safety will improve slightly in aggregate, but the immediate environs of traffic surrounding a self-driving car will be less safe.

There will be drivers annoyed at the car whose behavior approximates a tentative, wary driver: strictly adhering to the speed limit, leaving ample space in front, avoiding lane changes, braking early, and occasionally pumping the brakes for no discernible reason. These self-driving cars will create a traffic bubble around them, making it less safe for everyone else, including the aggressive drivers who will cut it off or overtake it at a high rate of speed, and the average motorists who are just trying to align their driving to the emergent conditions of traffic, and are caught between conflicting pressures and unsafe adjacency to different speeds.


Humans are pretty good drivers if you don't count the ones who are drunk, looking at a phone or otherwise impaired.

If you toss a bunch of AI cars into the mix, safety will decrease unless you do it in a way that results in them displacing the riskier drivers.


It's unclear to me why human drivers should be considered unable to adapt to more "tentative, wary" drivers on the roads. After all, humans vary quite a bit in how they drive. There are plenty of cautious drivers already out there.


They do adapt; it's just slightly less safe than being among a set of vehicles behaving more similarly. Same thing with a particularly tentative human driver. (Or a particularly aggressive one, of course.)


Yes.

Computers won't get distracted. Computers won't accidentally mix up medications and get foggy. Computers won't get tired. Computers won't drive drunk.

I can create a self-driving car better than 50% of Southern Californians right now: "<beep> High humidity detected. I'm sorry, I can't let you drive today, Dave. <beep>"

People suck at driving in new places--take a plane flight to a new city, rent a car, and have a passenger record how many illegal things you do--you probably won't make it out of the airport before the first infraction.

People are good at driving in the familiar because they memorize it. A new stop sign at a previously uncontrolled intersection causes chaos for a while.

Computers will also memorize the local environment at some point. Remember how bad Google Maps was when they started? This will be the same.


You're comparing what humans do to what software could possibly do, which is a category error.

Software doesn't recognize color coded warning signs.

Software doesn't trust its stationary object recognition at high speeds.

Software doesn't distinguish between high speeds on limited access highways and high speeds on pedestrian accessible roads.

Software has a higher crash and fatality rate per mile driven than humans, right now, despite driving in statistically safer cars.

Yes there are good reasons to be hopeful software can do better at those things eventually, but it's obvious now that it's not ready (or Tesla wouldn't have to trash-talk a dead man for not maintaining high enough torque on the steering wheel to inform the car he was steering, because he wouldn't be dead and they wouldn't have to require passive hands on the wheel).


>Software has a higher crash and fatality rate per mile driven than humans, right now, despite driving in statistically safer cars.

Do you have a source for that claim? Everything I have seen so far has been inconclusive.


From https://en.m.wikipedia.org/wiki/List_of_autonomous_car_fatal...

3rd Tesla autopilot fatality happened at 210,000,000 km, giving a fatality rate of 9.2—14.3 per billion km of autopilot depending on if you take the average just before or just after the third death.

From https://en.m.wikipedia.org/wiki/List_of_countries_by_traffic...

USA has an average of 7.1 fatalities per billion vehicle-km.

I’m assuming most of the Tesla kilometres were in the USA. I don’t know if that is a reasonable assumption.
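
Spelling out that arithmetic (using the figures quoted above, both of which are contested in the replies; with these exact inputs the lower bound comes out closer to 9.5 than 9.2):

    # The parent comment's comparison, made explicit. The 210M km figure is
    # challenged below as badly outdated, so treat this as illustrative only.
    autopilot_km = 210e6                    # quoted Autopilot mileage
    deaths_before, deaths_after = 2, 3      # just before / just after the 3rd fatality

    rate_before = deaths_before / (autopilot_km / 1e9)  # ~9.5 per billion km
    rate_after = deaths_after / (autopilot_km / 1e9)    # ~14.3 per billion km
    us_average = 7.1                                    # US deaths per billion vehicle-km

    print(f"Autopilot: {rate_before:.1f} to {rate_after:.1f} vs. US average {us_average}")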


Autopilot is also (usually) only turned on in situations where it is worthwhile, works well, has a benefit.

General human driving rarely has that luxury short of "do not drive at all".

Comparing the "best/better world" scenarios of where AP is active, versus all miles driven is an apples and oranges comparison.


>3rd Tesla autopilot fatality happened at 210,000,000 km

That 210 million km (130 million miles) number is just flat out wrong for that time period. That number originally comes from Tesla's public statements after the first fatality in the US. That was close to 2 years before that 3rd fatality happened. Tesla should be well over 1.5 billion km by now, which would bring the fatality rate down below 2 per billion km.

[1] - https://www.tesla.com/blog/tragic-loss


Tesla doesn't publish current mileage or crash incident reports under autopilot; given their willingness to lie about Wei Huang's conduct before his crash I'd be surprised if their statistics actually held up as beneficial. The numbers I recall seeing were specific to Uber, who were attempting to operate a fully autonomous vehicle (Tesla doesn't claim to, except when Elon does press).

In any event, if you're going to compare injury and crash rates, you should compare Tesla to other luxury cars with FCW+AEB. Those systems alone create 40% fewer crashes in at least one analysis[1] but don't come with the risk issues of automated lane changes and steering control that create driver inattention. That improvement is over and above passive safety improvements in luxury cars.

1. https://orfe.princeton.edu/~alaink/SmartDrivingCars/Papers/I...


So just so we are clear, you stated as a fact that "software has a higher crash and fatality rate" while not having any evidence that what you said is true?


His evidence is the 4 driver fatalities resulting from Tesla's Autopilot, which is infinitely more than the 0 driver fatalities of any other self-driving system.

Including pedestrian, cyclist, and other-vehicle fatalities, Tesla's fatality statistics are so bad that they exceed the combined total of every other self-driving system back to their respective launch dates.


Where did that 4th fatality come from? I have only ever seen reports of 3.

Either way, you can't credibly use aggregate totals without at least mentioning the underlying rate statistics. That would be like me saying that over 100 people die every day from manual driving but only 3 people have ever been killed by self-driving Teslas, and therefore Teslas are safer. Statistics doesn't work like that. Teslas have more fatalities than any other semi-autonomous car because they have orders of magnitude more miles driven. It is far too early to tell whether they are either safer or more dangerous.


I’m having trouble finding out if the earlier Chinese death was or was not caused by autopilot. All I see are opinion pieces. Do you have a citation for that?


I don't think anyone has publicly released a definitive answer on that. Tesla doesn't have access to the car or data and I am not aware of any investigation by a trustworthy third party. However the details of the accident are similar enough to both the other accidents and known Autopilot flaws that it seems reasonable to conclude that Autopilot was likely active at the time of the crash.

And I appreciate you correcting that inaccurate mileage reference on Wikipedia.


Thanks for that link. I’ll update the Wikipedia page if nobody else gets there first.


It is a shame that sales pressures are stopping us from requiring "AI" cars to be branded in an obvious way (bright orange and pale blue stripes?). This would allow human drivers to treat the vehicle as a non-human, removing a lot of the danger it would pose to lawbreakers who follow too closely.

Of course `AI hybrid` vehicles would still behave unpredictably unless they had some way to signal whether the human or the `AI` was in control of the vehicle.


People also have trouble with changes to otherwise familiar places. I watched an elderly driver last week get very discombobulated by a local detour, at low speed. And I can see that happening to me in a few decades too.


> humans commit hundreds of leaps of faith of various severity on the roads daily

And this is necessary (to some degree, probably not as far as we take it though) in order to drive a car in a practical manner. Instead of thinking about it as "safe" vs. "unsafe", it's more helpful to think about driving as trying to limit your risk to whatever you consider an acceptable level, based on your model of the world.

From least to most risky, some potential models are:

* Assume nothing beyond the basic laws of physics

* Assume nothing outside your visual range is breaking any road rules

* Assume the road outside your visual range is as you last saw it (no blockages / hazards etc.)

* Assume other vehicles around you will mostly follow the road rules (basically what Waymo does, afaik)

* Assume other vehicles around you are alert and have a sense of self-preservation (the normal human driving behaviour you describe)

* Bomb through traffic YOLO style

The other issue, especially for emergency braking, is that whether something is a collision hazard depends on velocity (direction included), not just speed. Until emergency braking systems can take control of steering as well as braking, there will always be false positives and negatives no matter how good we make the system's perception.
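
As a toy illustration of that velocity-versus-speed point (hypothetical geometry, nothing to do with any real system): an object closing fast but drifting laterally out of the vehicle's path is not a hazard, which a speed-only check cannot express.

    # Toy check: will a constant-velocity object actually cross our path?
    # Ego frame: x forward, y left; positions in metres, velocities in m/s.
    def is_collision_hazard(rel_pos, rel_vel, half_width=1.0, horizon_s=4.0):
        x, y = rel_pos
        vx, vy = rel_vel
        if vx >= 0:                    # not closing longitudinally
            return False
        t_cross = -x / vx              # time until it reaches our bumper line
        if t_cross > horizon_s:
            return False
        y_at_cross = y + vy * t_cross  # lateral offset at that moment
        return abs(y_at_cross) < half_width

    # Same closing speed, opposite answers:
    print(is_collision_hazard((30.0, 0.5), (-15.0, 0.0)))  # True: stays in our path
    print(is_collision_hazard((30.0, 0.5), (-15.0, 2.5)))  # False: clears our path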


While we're on the topic of Waymo, does anyone know whether they use driving data from Google Maps Navigation? It'd be an interesting upper hand, if so.


Yes, they definitely do. I think there was a TED talk on google self driving a couple years ago which outlined how they used maps.


If a company is playing fast and loose with the false negatives rate, drivers and pedestrians will die needlessly, but if they are too strict the car will be undriveable and erratic. Both Tesla and Uber, when faced with this difficult tradeoff, punted: they require a person to watch out for the false negatives, taking the burden off of the machine.

Companies should not be allowed to punt this way, when lives are on the line. This really is a problem in need of a regulatory or legislative solution, as multiple companies prove they don’t have anyone’s interests at heart, but their own. Worse, every pedestrian, cyclist, and other driver on the road who didn’t sign up for this in-the-wild alpha is being drafted to enrich the likes of Tesla and Uber.


Lives on the line? On average, existing cars kill more than 100 people each day on American roads. The current system of human-driving cars has killed one million Americans over the past couple of generations, mostly from human error. To save lives we need to push forward on self-driving tech.


No, on average existing drivers kill more than 100 people each day [source?].

There's no proof yet that self driving cars are safer than human drivers. And more importantly there's a big difference between being safer on average than a human driver, and being safer than the average human driver.


The average was 102 per day in 2016, and that is just in the U.S. https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in...


And when will we actually start saving lives? How many humans are going to be killed by bad self-driving technology before we hit the break-even point? No one can even say! Self-driving advocates never seem concerned with the real-world present day costs of the technology, only the futuristic possible scenarios. You have to weight both sides of the equation.


What if we required everyone who received a DUI to only operate self driving cars for a period of 2 years?

Some of the alcoholics in my family drive about as safely as your average toddler. It takes years for them to collect enough accidents and DUI's to be forcibly taken off the road.

I feel like the break even point has already been reached for my family. But I'm fine if safe drivers don't want to turn over their domain to AI for another few decades.

Maybe AI vehicles could be required to be painted Bright Yellow like a Taxi Cab, or have other really obvious markings so that other drivers on the road knew to take extra caution with them? Just like an "AI Student Driver" program, that we all work together to get through safely. The AI could maybe just be allowed in rural areas, and kept out of the urban hellscapes of LA, BAY AREA, and NYC traffic.


> What if we required everyone who received a DUI to only operate self driving cars for a period of 2 years?

What if we required every car to have a breathalyzer ignition control?

We'd cut fatal crashes by a quarter, for far less than the cost of equipping every car with LIDAR.

For some reason, this idea never gets traction with the self-driving crowd. It may be because safety is not their first concern - watching Netflix while your car drives you around is.


I wouldn't assume rushing an unsafe version into mass production will speed up the production of a safe version.

Once v.1.0 goes gold, there is a lot less incentive to build a slightly more safe version.


This is a spurious comparison. The number of miles logged by self-driving cars are a blip compared to human-operated ones.


Ok, you go volunteer on a test track then. While you’re at it, you should volunteer for medical testing, do you know how many people die each year from various diseases?! Maybe the needed progress will come soon, or maybe not, but since your only justification is the magnitude of the present problem I’m sure that you won’t object.

Gooood luck!


Because sick people are desperate, we actually make it quite hard for them to volunteer for things like risky medical tests, requiring more animal studies or smaller pilot studies first. I don't know the numbers well enough to know whether this is a net win for humanity, but just like any technology development project, the longer your dev cycles the slower things go.


Especially the companies that need to solve autopilot soon or they will fail, so they fake it till they make it.


I can still see people commenting that the Tesla driver did not have his hands on the wheel and was warned, when that warning happened 15 minutes before the accident. The Tesla PR blog post blaming the driver had its intended effect, proving again how hard it is to undo a lie (or near-lie).


Also, the people still commenting that the Uber video is meaningful. That's not from the vehicle sensors. That's from a completely separate dashcam and recorder.


A video was released today showing what the classifier in Tesla's Autopilot sees, provided by an individual who hacked it. False positives galore:

https://youtu.be/Yyak-U2vPxM


You should read the description: "These are my old snapshots with radar data overlaid."

It's just the radar data, not the camera based system. Further:

"Sometimes the circle is in place, where there's nothing present. It should be well known, that radar have problems with determining the elevation at which the object is located, so such readout can be caused by an object that is higher or lower on the image."

and

"Radar reports coordinates in 3D space and so the conversion to image space might not be super accurate at times"


this is great. It would be fun to turn off the source video and imagine what it would be like to drive with only what the algorithm detects!


Dunno about ‘galore’. The color of the circles seems to correlate with the confidence.


It's stationary (orange) vs. moving (green). Tesla ignores stationary objects detected by radar, and the video makes it painfully obvious why.
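
A toy sketch of why stationary radar returns tend to get discarded (an illustration of the general heuristic, not Tesla's actual implementation): a return whose closing speed matches the ego speed is consistent with an overpass, a sign, or a manhole cover, so braking for all of them would mean constant false positives. The catch, of course, is that a stopped car ahead looks exactly the same under this heuristic.

    # Toy "ignore stationary returns" filter. range_rate is the Doppler-measured
    # rate of change of range (negative when closing). For an object roughly
    # dead ahead, a stationary target closes at about the ego speed, so its
    # estimated ground speed is near zero.
    def is_stationary_return(range_rate_mps: float, ego_speed_mps: float,
                             tolerance_mps: float = 1.5) -> bool:
        closing_speed = -range_rate_mps               # positive when getting closer
        ground_speed = ego_speed_mps - closing_speed  # ~0 for bridges, signs, stopped cars
        return abs(ground_speed) < tolerance_mps

    ego = 30.0  # m/s
    print(is_stationary_return(-30.0, ego))  # True: overpass, sign... or a stalled car
    print(is_stationary_return(-10.0, ego))  # False: slower-moving vehicle ahead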


> In contrast to the Tesla accident, where the human driver could have saved himself from the car’s blindness, the Uber car probably could have braked in time to prevent the accident entirely where a human driver couldn’t have.

That's a strong statement. Watching the actual video [1] of the Uber accident, it seems that either an attentive driver (she appears not to be looking at the road before the crash) or a better decision algorithm could have prevented the crash. Keep in mind also that video is very bad at capturing low-light scenes; the scene must have been significantly clearer to both the driver and the computer, which were actually there.

[1] https://www.youtube.com/watch?v=pO9iRUx5wmM


Read the actual report. The Uber system registered that emergency braking was necessary to avoid a collision but the brakes were not under the control of that system.

https://www.ntsb.gov/investigations/AccidentReports/Pages/HW...

Relevant section: "At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision (see figure 2).[2] According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. "


Why would it be clearer to a computer? Unless the car has some extra sensors (radar, laser), surely it cannot see better than the camera?


Uber's self driving cars have both LIDAR and Radar in addition to cameras.

https://www.gearbrain.com/autonomous-uber-car-technology-255...


The camera is just a dashcam and is not part of the self-driving system.


Turning one system off in favor of another is basically admitting that they have no faith in their software. Also, isn't driving a car with sensors (without self-driving tech) basically a giant data collector? The mere act of driving a car (with a human brain) should help an SDV system understand false positives and false negatives, simply by learning what a human does in contrast to what the machine thinks it is supposed to do.

I imagine this is what separates Waymo and the others. I feel like Waymo is using math/neural nets, etc.. whereas Tesla and Uber were probably giant hand-written if/else machines. I have nothing to back that up, except that that's kinda what this article is about.


Speaking for Tesla and Uber, they use neural nets. Tesla has the most data which is what you're referring to above and they do exactly what you were stating. The software is/has always been shadowing human drivers, that is learning and comparing to the decisions it would make on its own. Uber is also most definitely using a similar set up. I think neural nets are a basic necessity in this field since there are an almost infinite number of scenarios.


It seems like self-driving cars may not (and maybe should not) become commonplace until the roads are built for self-driving cars – that might be a better thing to focus on, instead of systems that mimic human judgment.


Seems unlikely that road infrastructure will be upgraded/changed prior to construction of self driving cars.

When developing the self driving tech, a company retains all IP associated with research/development costs. Would they ever instead pay for development of public infra? Would a municipality build roads that conform to Company A's standard of self driving roads, omitting the requests of Company B?


>Seems unlikely that road infrastructure will be upgraded/changed prior to construction of self driving cars.

By that logic, railways would have never been built.

"Why would I invest in constructing a special road for that engine of yours? Make it go on dirt roads as well as a horse does, then we'll talk"


Horse and cart never worked all that well, to the point that long distance travel by land was extremely rare. Deep ruts would form in the dirt roadways and mud would cause constant obstruction. It was easier to sail around South America than to travel by land between San Francisco and New York City.

It wasn't until the railroad that travel by land was possible at scale. For the USA and Canada the railroad was seen as the only way to keep the vast land together as a single nation, which is largely why it was subsidized. Further, much of the subsidy came as grants of land that would otherwise have been economically useless without the railroad to transport goods.

While self driving vehicles may improve efficiency of our roadways it is unlikely to be anywhere near the same scale of improvement that railroads brought, and the need to connect the country for political reasons is not there.


I absolutely agree that it's a tall order, but it seems at least possible that development could happen in tandem, and some kind of standards could be agreed upon for what kind of technologies would be used across the board. I'm picturing that if some new infrastructure and cars implement that, those could be roads on which self-driving cars would be permitted to drive, and a shift would happen gradually.


Volvo proposed embedding small permanent magnets in the road surface that would allow vehicles to track more reliably.

Their press-release for it is here, and has a couple of minor technical details:

https://www.media.volvocars.com/global/en-gb/media/pressrele...

I kind of liked the idea - it's basically a higher-tech line-following robot concept, and doesn't look too expensive to roll out compared to other proposed infrastructure changes.

However, I haven't heard anything about it since the 2014 press frenzy over the release linked above. Maybe getting road maintainers to make changes like this really is too much to ask.


When I first saw magnetic markers proposed, it was sometime between 1999 and 2005.


I remember seeing stories about self-driving cars on the Discovery Channel as a kid. All they needed were magnetic markers embedded in the roads. The takeaway was that dedicated infrastructure for self-driving cars was too expensive to make happen.

The thing that made the recent wave of self-driving vehicles exciting was that they claimed to work on existing roads. Giving that up is giving up their achievement over previous self-driving tech.


>The takeaway was that dedicated infrastructure for self-driving cars was too expensive to make happen.

Too expensive compared to what?

Any kinds of transit system requires dedicated infrastructure. Humans in particular need signs, road markings, traffic lights, etc.

Decades of research and optimization went into building that infrastructure so that it would be easy for humans to use safely and effectively.

Why is it surprising that this infrastructure simply doesn't work well for computers to operate in?

We aren't trying to run trains on unmodified streets. We build rails first. We should do the same for self-driving cars.


That's going to be impossible with the last-mile problem. We already have mass transit if you don't care about the last mile.


I have a short commute but I often drive all over California on the weekends. Having well-defined points where I could trust the computer to take over would be very useful, even if I had to manually drive certain stretches.

This weekend I drove from my home in the bay area to my Dad's home in central California. I could see a very useful system that goes like this

- Manually drive to the 101.

- Let the car take over until I reach the exit for the 152.

- Manually drive over Pacheco pass and get on the 5.

- Let the car take over once I'm on the 5.

- Car alerts me and I take over manual driving through a construction zone.

- Car takes over again until I reach the exit for the 58.

- Manually drive the rest of the way.

This scenario only involves automation on two of the major N/S arteries in California, yet it would have made my drive much easier.


And probably safer!


Can you elaborate? In the near future, it seems unlikely that passenger vehicles are going to be fully driver-less legally in many places, regardless. But I might be misunderstanding what you have in mind.


Sure, but the further you go in this direction the more I wonder whether you shouldn't just be improving public transport.


Public transportation is a related issue, but it hardly solves all the same problems as automated vehicles do -- and eventually, I imagine, buses and such will be automated vehicles as well.


I think this is one of the first times in my life I've seen marketing (calling something "autopilot" when it is not) and overhype ("we have to be the first to have self-driving cars, now!") actually kill people, and not just rip off customers or investors.

I think history will judge those companies very severely, and I'm wondering if the justice system isn't going to be just as severe right now.


Cigarettes were advertised as good for your health, for decades.


This certainly explains reports of almost every Google car accident being a case of a human rear-ending it. Google did it right from the start and didn't gamble with false positives.


I will hold to my opinion stated before: these systems are not yet safe enough to be on public roads, let alone in the hands of consumers.

With that out of the way, I still see no reason why we don't adapt limited-access roadways to support self-driving in that realm. A large number of cities have dedicated HOV and (express) toll lanes that could be adapted and marked much more easily to support self-driving capability in a controlled environment. Plus there's probably a good chance that both manufacturers of cars and of AD systems, along with drivers themselves, would be willing to pay more for such access. I recall the imagery from the old William Shatner series TekWar, which showed a similar approach: get on the highway and the car takes you into the city. It could even be extended to serve event centers, so that the car continues driving for you until parked; think off the expressway to airports and stadiums.


I wrote about this recently as well. This is actually going to be more of a concern as we incorporate models into everyday products. Stakeholders and QA need a solid understanding of Type I and Type II error so they can assess how much risk they're willing to take on and make that part of the quality process.

Shameless plug: https://blog.d8a.me/the-qa-of-stochastic-processes-a15a94065...


Nice article; thanks.

* * *

Tesla claims: a 40% crash rate reduction with the autopilot as compared to no autopilot, over an 18-month period [1].

If this is true (and we could imagine it is, at least partially), then Elon's remark to journalists would make sense:

"It's [..] irresponsible [..] to write an article that [..] lead people to believe that autonomy is less safe,” [..] “Because people might actually turn it off, and then die" [1]

* * *

But to have an opinion about the autopilot's risk statistics I would also need to know: a) What populations (data) they compare; b) How each population is defined (inclusion and exclusion criteria); c) What's the sample size (18 months, and?); d) Who makes these calculations (to clearly identify possible conflicts of interests).

- Not sure if this type of data is publicly available?

* * *

Actually the National Highway Traffic Safety Administration (NHTSA) seems to indicate[1]: 1) that the data comes from Tesla—cf. point d) => conflict of interest; 2) Autopilot on/off was NOT used for the risk statistics—although it's central (point a); 3) instead the "40%" would measure the "number of airbag deployments per million miles" which is a proxy-metric that's not directly related to car accidents.

- Hey, this is odd (it's definitively not a Science or Nature method protocol).

* * *

. "The Insurance Institute for Highway Safety suggests: A "13%" reduction in collision claim frequency, indicating sedans with Autopilot enabled got into fewer crashes that resulted in collision claims to insurers."

However it's a small difference, and there are possible confounders like social status (being a "Tesla driver"), gender, and geographical area. Confounders usually have a large influence on experiments, so it's unlikely that this (small) 13% difference would survive adjusting for them...

* * *

Here's the article, by Aarian Marshall on Wired. [1] https://www.wired.com/story/tesla-autopilot-safety-statistic...


Could any of the fatalities be related to the malicious code commits? [1]

[1] - https://www.fastcompany.com/40586864/read-elon-musks-email-a...


>It may seem cold to couch such life-and-death decisions in terms of pure statistics, but the fact is that there is an unavoidable design tradeoff between...

The fact that we as a society can't have an adult discussion about that topic is a large part of why it's so hard to strike the right balance.


Good thing L5 self-driving can be done without low-latency LIDAR and that Tesla didn't defraud many Model S owners by selling them cars fundamentally incapable of providing L5 self-driving.

/s


The upside of self driving vehicles is so immense that I can't help but find in favor of giving leeway to companies developing this tech. I would vote for legislation limiting the accident liability of 'qualified' companies developing self driving tech. No, I don't know what separates 'qualified' from 'not qualified'.

Try looking at this a different way: there were 37,461 vehicle related fatalities in 2016 in the US alone. In the current risk-averse climate, would we, back in the early 1900s, ever have allowed the development of public infrastructure to support an invention leading to so many fatalities? Likely not, but the benefit we enjoy from motorized transportation far outweighs the cost of those 37k yearly lives. The point is that continuing with a risk-averse, liability-obsessed development culture will stifle invention that can otherwise lead to great quality of life improvements.


Then you and other people like you can volunteer to work at a test track as obstacles. Leave the rest of us out of it. You’re entitled to your utopian fantasy of what might be, and the best way to express that is to put your life on the line instead of volunteering others to do it.

> Statements like this seem to be trying to make the point that any amount of increased public risk is unacceptable unless everyone opts in to accept it.

No, statements like mine say that sacrificing lives today for the possibility of an unproven improvement at some unspecified later date is morally reprehensible. My answer is to invite the person endorsing this to put their own life on the line instead. If we were talking about medical testing on humans instead of cars, maybe it would be easier for some to understand. “It’s so bad today, the only course of action is to act recklessly to improve things as fast as possible!” is a thoroughly rejected argument.


Statements like this seem to be trying to make the point that any amount of increased public risk is unacceptable unless everyone opts in to accept it.

I find this argument entirely uncompelling. There are an inordinate number of risks imposed upon me while operating in society which I neither agreed to nor would I agree to given the choice, aside from in a hand-wavy “social contract” sense.

However as a member of society I am in fact compelled to accept myriad risks on a daily basis imposed on me for the benefit of others, or the benefit of potential scientific advancement.

This concept that a society cannot morally make risk-reward decisions in the course of technological advancement is entirely bunk IMO. Government can absolutely morally make these trade-offs and does so constantly.

I feel pretty good about the current legal and regulatory frameworks and the technical chops of the NTSB to monitor developments in self-driving R&D adequately.

One thing I am certain of is that we absolutely must succeed in developing, productizing, deploying, and ultimately mandating self-driving technology. Millions of lives and trillions of dollars are at stake, and yes, society as a whole will incur individual loss of life in the pursuit of this goal. An attempt to develop the technology with zero risk of collateral damage would in fact cost many more lives overall.


So far, per mile driven, the fatality rates of Uber and Tesla SDVs are far higher than human drivers in statistically safer than average vehicles.

Waymo has a better track record but essentially refuses to drive at significant speed -- I have yet to see a Waymo SDV place itself in a situation with a higher speed limit than 35mph, and I see them a lot.


I regularly see the Waymo vehicles in 45 mph zones on my commute. I can't say I've seen them go any faster, but I spend about 95% of my time on roads marked 45 mph or less.


>One thing I am certain of is that we absolutely must succeed in developing, productizing, deploying, and ultimately mandating self-driving technology.

On a planet where the primary killer of millions of people is unclean drinking water the argument that we "absolutely must" develop self-driving cars is not very compelling. Especially since what is really at stake here is your "trillions of dollars". People are primarily developing these technologies to make money, not prevent deaths, and the idea that society as a whole should shoulder the burden of additional risk to greatly enrich a few individuals seems morally indefensible.


When capitalism encourages extraordinary levels of investment in technological advancement which will save millions of lives, I call that a win-win.

Unclean water killing millions of people a year is a compelling reason to encourage investment in more affordable solutions for cleaning water.

You know what else kills 1.25 million people per year? Human driven cars. That’s a pretty compelling reason to encourage investment in eliminating the steering wheel. The best way we accelerate the advancement of this technology is, generally, not to regulate it out of existence.

> People are primarily developing these technologies to make money, not prevent deaths, and the idea that society as a whole should shoulder the burden of additional risk to greatly enrich a few individuals seems morally indefensible.

They can’t make the money unless they can prevent the deaths. The benefits to society as a whole, as the technology is perfected, are worth trillions. The idea that we would want to perpetuate a system which kills 1.25 million people a year because an extraordinarily rapidly progressing technology isn’t yet perfected is what seems morally indefensible to me.

I don’t know what you’re on about society shouldering a burden to enrich a few individuals. Economically speaking we all become richer with self driving cars.

The way I see it, the question is simple: increase regulation now and shift the adoption curve to the right, resulting in on the order of millions of net additional driving deaths... or eliminate roadblocks, stimulate investment in self-driving R&D, and shift the adoption curve to the left, saving millions of lives.
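
The "shift the adoption curve" arithmetic is easy to sketch; every input below is an illustrative assumption meant only to show how quickly the totals scale, not a forecast:

    # Crude model of deaths attributable to delaying self-driving adoption.
    # All inputs are illustrative assumptions.
    annual_road_deaths = 1_250_000   # global figure cited in this thread
    delay_years        = 3           # assumed delay from added regulation
    adoption_fraction  = 0.25        # assumed share of driving replaced in that window
    effectiveness      = 0.9         # assumed fatality reduction once replaced

    extra_deaths = annual_road_deaths * delay_years * adoption_fraction * effectiveness
    print(f"{extra_deaths:,.0f}")    # ~843,750 under these assumptions

Change any one of those inputs and the total moves a lot, which is really what the rest of this thread is arguing over.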

The fact that there are those calling for a regulatory clampdown that would stymie self-driving development after a single fatality in which self-driving technology may have played a contributing factor (but where the police on the scene ruled that the car was not at fault) appears to me to be a fearful mob type of response.


See, what I hear you saying is "Other people will needlessly die, but their sacrifice is a chance I am willing to take".

The public takes risks every day, but it rarely gives companies a free pass to move fast and kill people.


I am saying we all go out into a chaotic world every day and face technology that companies have put into the world for their own profit-seeking motives, technology that is just as likely (if not significantly more so) to kill us as a self-driving car.

This has nothing to do with sacrifice. The government necessarily makes life-and-death trade-offs in setting regulatory hurdles. It’s entirely unconvincing to argue that beta driver-assistance and beta self-driving technology should be banned from roads because it can’t guarantee someone won’t die.

Because in fact automobiles are killing over a million people per year. Self-driving automobiles are responsible for exactly 1 of those 1.25 million deaths, and driver-assistance technology has contributed to about a dozen more.

What is indisputably moving fast and killing people are... people driving cars. The singular solution to this ongoing bloodbath may be only a few years away. But you seem to want millions of people to continue dying because of a misappropriated Facebook slogan?

I’m not a big fan of populist outrage in the best of times, but when it seeks to perpetuate the status quo of 1.25 million annual auto fatalities I think we owe it to ourselves to elevate the discussion to the point where you are actually considering the implication of your proposed solution.

Every year you shift the self-driving adoption curve, you are killing hundreds of thousands of people. But in the real world today this technology could arguably share the blame for about a dozen deaths. Should the government ban the technology from public roads and force companies like Tesla to recall their existing functionality? There’s a hysterical appeal to emotion to be made here, but nothing close to a cogent logical argument.

If anything, government should be pushing self-driving requirements harder than carbon limits. Instead of municipalities claiming they are going to be banning gas engines by 2030 they should be claiming they will ban steering wheels. The steering wheel is far, far deadlier.


Nobody here is calling for a flat ban of all self-driving technology. You are tilting at strawmen.

The blowback here is in reaction to what appears to be objectively bad software. Irresponsibly bad. Maybe even negligently bad.

Look to drug trials & the FDA. We take risk to test possibly life-saving drugs. But, we don't let just any whackjob take his unproven mad science experiment to clinical trial and inject people with mantis DNA.


We’re talking about whether it’s reasonable to beta test self-driving tech on public roads, and the relative risk of slowing down the advancement of self-driving technology (this is what is meant by “shifting the adoption curve to the right” — as in, seeing widespread adoption of self-driving further into the future).

I don’t think it’s a strawman at all to observe that greater regulation and restrictions on public testing will delay widespread self-driving adoption. Nor is it unreasonable to surmise that a multi-year shift in the adoption curve could cost millions of lives.

I mean, argue that public mistrust of the technology, fueled by sensationalization of the highly public (yet rare) failures, is more likely to delay rollout than stricter regulations are. That’s a reasonable rebuttal I could engage with!

But what I got instead was... mantis DNA?


A reference to my favorite mad inventor:

"Those of you who volunteered to be injected with praying mantis DNA, I've got some good news and some bad news. Bad news is we're postponing those tests indefinitely. Good news is we've got a much better test for you: fighting an army of mantis men. Pick up a rifle and follow the yellow line. You'll know when the test starts." -Cave Johnson

The strawman is that you're the only one talking about banning self-driving test mules from the road.


[flagged]


That’s not whataboutism - I’m saying it’s the job of elected government to make exactly these determinations even when actual lives hang in the balance, and in fact many thousands of regulations make these sorts of risk/reward trade-offs in all sorts of ways in our daily lives. Most of them are in fact significantly more risky than allowing self-driving cars on the road in return for significantly less potential reward.

My point is primarily that the claim like “I didn’t consent to share the road with this nascent technology” — as if that means the technology should not be allowed — is not actually how our society functions. We don’t actually get to veto technology that minutely increases our risk of injury when we venture out in the world.

I’m surprised to see your ad hominem attacks and accusation of shilling. You’re relatively new here, so please allow me to point out that that kind of response is against the HN guidelines.


Shilling is a very different thing from just being biased because of a personal stake in something. Shilling is dishonesty, bias is just human. I know you’re not shilling, I’m asking if you might be biased due to a personal stake, or ideology. Are you?


> Shilling is dishonesty, bias is just human. I know you’re not shilling...

But in fact your previous post asserted:

>> The whole thing is a dishonest deflection...

It seems like you’re trying to walk back your previous statement without apology or actually turning away from continued ad hominem attacks.


Call me paranoid, but this is the second time you’ve decided to cherrypick my comments and play the victim rather than just saying “nope, no personal financial or ideological stake.” I’ll ask again, do either of those apply to you?


The theorized upside is immense, but that doesn't mean the practical difference now is even positive. If Tesla (Uber's already out of the game, probably) wants to keep operating Autopilot in public, they owe us actual statistics on safety so that the public can make an informed decision. The private data laundered through NHTSA two years ago is incredibly suspect, and the fatality rate for Tesla since then appears to exceed what the average human-piloted car would do.


> The upside of self driving vehicles is so immense that I can't help but find in favor of giving leeway to companies developing this tech.

This is, frankly, how a lot of scams work. Sure, there are some rough patches and the prototype doesn't work, but imagine when it does ... !

If Theranos had been able to diagnose all diseases from a finger stick, the upside would have been crazy-huge.

If the EM drive could generate substantial reactionless thrust, the upside would have been Trekian.

If high-deductible health insurance plans discouraged primarily wasteful healthcare spending...

If vegetable smoothies...


Rate of 2016 US Farming deaths: 21.4/100,000 workers (https://www.cdc.gov/niosh/topics/aginjury/default.html)

Rate of 2016 US Automobile deaths: 11.59/100,000 population, not drivers (https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in...)

So, according to your logic, we should stop farming immediately, but continue driving?
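
As an aside on the figures above: the two rates use different denominators (per farm worker versus per US resident), so the ratio below is only meaningful if you treat those as comparable exposure. Taking the numbers as quoted:

    # Ratio of the two quoted 2016 rates, taken at face value.
    farming_per_100k_workers    = 21.4
    driving_per_100k_population = 11.59
    print(farming_per_100k_workers / driving_per_100k_population)   # ~1.85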


No. You misunderstood his position. According to his logic, risk aversion is a bad strategy. So according to his logic, we should both continue driving and continue farming.


Arguing that it would be nice if something existed is not very useful in determining the actual performance of attempts to create it. Yes it would be nice. No that does not give anybody a free pass.


> I would vote legislation limiting the accident liability of 'qualified' companies developing self driving tech.

I'm not sure I agree, but if there were something like this, I think one of the requirements for being 'qualified' would be fully publishing all the data on any accident.


Or we could not let companies murder people with shitty, half-baked hardware and software combinations.



