Uber’s Self-Driving Cars Were Struggling Before Arizona Crash (nytimes.com)
233 points by paulashbourne on March 23, 2018 | 232 comments



This is how I would deal with it if I were a regulator:

* Revoke Uber's license to operate - While they might not be found at fault, they have shown gross negligence: an operator who doesn't observe the road, computer vision that can't sense a pedestrian crossing several lanes even when their LIDAR manufacturer says it should have, and an algorithm that encouraged the car to drive 3 mph over the limit in dark conditions (exactly when you are supposed to slow down due to decreased visibility). I would then:

* Construct a test for the ability to operate under day and night conditions. Periodically audit each operator to confirm they can pass it. This is essentially the equivalent of a driving test, but in a controlled environment that simulates real cities. Maybe have robot humans to simulate pedestrians, and obviously have other cars on the road.

* Require that operators test X miles of specific scenarios (city driving, highway, etc.) each month and audit their # of disengagements in each of these scenarios. When it hits a threshold, their license is revoked and they need to go back to the lab. They should also submit video clips for all disengagements. (See the sketch at the end of this comment.)

* Require all operators to have video on their safety operators and submit records of the % of the time the operator has eyes on the road, along with the videos. Regulators can revoke the license if they see the operator not paying attention.

We can't rely on a corporation to safely control its own experiments; the regulators need to step up here. Was Uber doing anything, for example, to make sure the operator was watching the road? The operator is looking down most of the time in the crash video.
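For concreteness, here is a minimal sketch of what that per-scenario disengagement audit could look like. Every name, threshold, and the report format here is invented for illustration; note that the ~13 miles per intervention reported in the article would fail even a lax city threshold:

  # Hypothetical per-scenario disengagement audit; all names, thresholds,
  # and the report format are invented for illustration.
  from collections import defaultdict

  MAX_DISENGAGEMENTS_PER_MILE = {"city": 1 / 100, "highway": 1 / 500}

  def audit(reports):
      # reports: iterable of (scenario, miles_driven, disengagements)
      totals = defaultdict(lambda: [0.0, 0])
      for scenario, miles, dis in reports:
          totals[scenario][0] += miles
          totals[scenario][1] += dis
      flagged = []
      for scenario, (miles, dis) in totals.items():
          rate = dis / miles if miles else float("inf")
          if rate > MAX_DISENGAGEMENTS_PER_MILE.get(scenario, 0.0):
              flagged.append((scenario, rate))
      return flagged  # any entry here would trigger a license review

  print(audit([("city", 260, 20), ("highway", 1000, 1)]))
  # -> [('city', 0.0769...)], i.e. one disengagement every ~13 city miles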


When does the next batch of YC applications open up? I'd like to introduce TestTrack - Autonomous vehicle certification as a service. State (and federal) governments collaborate with us to develop a certification plan that AV makers must pass before being allowed to operate in their state. With a plan in hand, AV makers and state reps meet at one of our regional test tracks to implement the custom tests. Obstacle avoidance, braking distance, driving in snow, heat, rain, all the scenarios. If you can dream up a test, we can make it happen. Once satisfied the tests have passed, we certify the vehicle and its software. We create a signing key and sign the software, which is then allowed to be deployed to the maker's vehicles. The state is given a device that they can use to audit a vehicle's software during routine checks and of course during the investigation of any incidents. We're still working on our pricing tiers and we might reconsider the name.
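The signing part, at least, is straightforward. A toy sketch of the certify-then-sign idea using Ed25519 via Python's "cryptography" package; the key handling and the audit flow here are hypothetical, not a real scheme:

  # Minimal certify-then-sign sketch. Key custody, file contents, and the
  # roadside audit protocol are all hypothetical.
  from cryptography.exceptions import InvalidSignature
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

  signing_key = Ed25519PrivateKey.generate()   # held by the certifier
  audit_key = signing_key.public_key()         # given to the state's audit device

  firmware = b"...certified AV software image..."  # placeholder bytes
  signature = signing_key.sign(firmware)

  # During a roadside audit, the device re-verifies the deployed image:
  try:
      audit_key.verify(signature, firmware)
      print("firmware matches what was certified")
  except InvalidSignature:
      print("firmware was modified after certification")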


Ironically, Uber currently has a business in this exact area:

"Returning to the files, I soon found the puzzle piece. It was in a document Otto had submitted in support of its application to operate the state’s first Autonomous Technology Certification Facility (ATCF). This is an independent organization that will assess all autonomous vehicles that want to operate (rather than just test) in Nevada, checking that they comply with the state’s safety requirements and traffic laws. The application was a success, and construction of the facility is now underway in a suburb of Las Vegas. (Yes, you read that right — Uber will soon be assessing other companies’ compliance with safety rules.)"

https://www.wired.com/2017/02/how-my-public-records-request-...


Wow, talk about the fox guarding the henhouse!


You have to work blockchain into there somehow, looks good otherwise


Just imagine if autonomous ride-hailing services were required to self-authenticate their firmware before each ride. On the blockchain. Which means you sit in a stationary car, by yourself, for 4+ minutes (probably being forced to watch ads) before it starts moving. But hey, BLOCKCHAIN!!!


During which time the price of "gascoin" dives 50% and your ride is abruptly ended as a result of inadequate payment. When you try to pay the difference, you're informed that the network is congested (due to people rushing to send their coins to an exchange and panic sell) and it will take 3 hours to confirm.


S18 batch applications are already open:

https://www.ycombinator.com/apply/


>... we certify the vehicle and its software

Certify what, that the car passed the test track? That’s not the hard part.

The hard part is taking responsibility for the performance of the vehicle including the liability for accidents and malfunctions.

When autonomous vehicles become a commercial reality they will need to be treated like aircraft: regular maintenance by certified engineers and NO unapproved modifications. Gone will be the ability to pimp your ride, gone will be backyard mechanics, and there will certainly be no “jailbroken” cars running modified software on the road.


I'm about to get my hack license (taxicab operator) in NYC. I learned in taxi school yesterday that in NY, any fatality or serious injury involving a taxi (including Ubers, etc.) triggers an instant suspension of the driver's hack license until an investigation is complete (and if the investigation finds the driver was at fault, the suspension becomes a permanent revocation).

I don't see why an experimental self driving taxi should be treated more leniently than that.


Why are you getting a hack license?


Why go from hacker to hack? I'm taking a break from the corporate software job world and thinking about what to do next, but I need something interesting to keep me busy.

Many people would travel at a time in their lives like this, and I see this as much the same thing. It's just that this is depth over breadth; that is, exploring my beloved NYC in very fine detail excites me more than exploring the world in almost pointlessly coarse detail...

Making a few extra bucks and making myself useful rather than purely self indulgent (unlike traditional tourism) is a plus also. It's also something that comes with zero schedule commitment and no boss. It also dovetails with a longtime obsession with transportation.

That wasn't the point of my comment of course, just stating where I got this info, but I guess I piqued your curiosity.


The article mentions that the car was going "40 miles an hour in a 45-mile-an-hour zone." Is there a more reliable source stating that the car was speeding?


Seems an odd question. Do you have some reason to think it was speeding?


The grandparent comment implied it was speeding:

> and an algorithm which encouraged the car to speed 3 mph over the limit when it was dark conditions

But the linked NYT article said 40 in a 45 zone.


Early reports said the police stated the AV was going 38 in a 35mph zone


It is right in a transition zone between 35 mph and 45 mph. The Mill Ave bridge is 35 mph, and Mill Ave is slow (25 mph) because it is an urban area by ASU and Tempe Town Lake with lots of pedestrians.

After the bridge it changes to 45 mph, so the Uber could also have been in a 10 mph speed-up phase in the hundreds of feet after the bridge, right where the pedestrian was.


Is that how speed limits work in the US? It’s my understanding (elsewhere) that the limit is in force after you pass the sign, not at some unspecified distance before it, or when you see it. The sign itself is the only transition point between limits.


Yes that is the way it works in the US, but as a practical matter most people start speeding up as soon as the higher speed limit sign comes into view. And of course conversely they don't start slowing down until they pass the sign posting the slower limit.


Right, but both those things would elsewhere (I’m thinking of the UK) be considered bad driving and you could (and people do) get speeding tickets for this.

I’d expect a self driving car to be more aware of the rules and drive according to the law.


I was going to post that!

What are “reasonable” rules for autonomous vehicles? Should they stick to the limit literally, so only start accelerating after the increased limit sign? (There will need to be warning signs when a speed limit reduction is ahead to give the vehicle time to comply.)

Would a vehicle speeding up before the sign indicate a failure or error in the system?


It's important to follow the average speed of the traffic you are in. If you keep to the speed limit while driving slower than the average, you create a danger on the road by causing people to drive around you, and changing lanes and speeds is where accidents happen.


The slow driver isn’t creating the danger though. It’s all the drivers going over the limit that actually create the danger you describe. If everyone just followed the laws, the danger you describe would no longer exist. Normalization of traffic violations is compounding the problem of the danger we face on the roads. So many problems could be solved and so many lives saved if people just followed the laws. The fact that everyone isn’t following the laws doesn’t need to be accepted as the status quo.


> The fact that everyone isn’t following the laws doesn’t need to be accepted as status quo.

It literally is the status quo. If I were still working on autonomous cars, I'd be much more inclined to work within the true status quo rather than imagine the status quo ought to be different and build towards that.


The status quo is also that people don't pay attention and are rude to pedestrians/cyclists/each other and ignore rules all the time (ever try waiting at an unsignalled crosswalk on a busy 4 lane road?). The one thing that I can't wait for is traffic enforcement by autonomous vehicles.


In Arizona, it is not against the law to drive over the speed limit. But it is against the law to speed, or drive too fast for the given conditions. Driving over the limit is evidence that you might be speeding. If everyone else is driving above the limit, and you aren't, you are the one causing the danger.


Your description does not seem to be consistent with my reading of the laws that are referenced from this page: http://azbikelaw.org/speed-limit/


I suspect that the 35 mph reports came from the other side of the street.

On the northbound side it changes from 35 mph to 45 mph right after the bridge just before the freeway. Southbound, it changes from 45 mph to 35 mph about 140 feet after Washington St, well before the freeway and bridge, and about 150 ft north of where the accident occurred.

If someone near the scene wanted to find the speed limit, and failed to consider that the two directions might have different speeds, I'd expect them to be more likely to find the southbound speed. That's because it is common to place speed limit signs shortly after intersections so that people turning onto the road won't have to go long to find out the correct speed limit. So I'd expect them to head toward the Washington/Mill intersection to start searching for a speed limit sign.

For the curious, here is the accident location on Google street view [1]. Back up 17 steps from that, continuing to face north, if you want to see the applicable speed limit sign. Or cross to the other side from the accident location and go north 8 steps to find the speed limit for the southbound road (it's on the last street light before you reach the intersection).

[1] https://www.google.com/maps/@33.4363673,-111.9425082,3a,75y,...


The accident was ~590ft after the speed increase.


So about 6-8 seconds, as the car was most likely still accelerating. There is also probably a slight delay before an AI-based driver reacts after a speed limit changes, so probably closer to 4-5 seconds later.


That actually sounds like a potential corner case - braking while still accelerating. I'd hope to all that's holy that sort of corner case comes up in your first million miles of testing and not basically in production, but it sets my debugging nose a-twitching.


Yes, tests could use Boston Dynamics-type robots to simulate humans or deer etc, but the test for not killing a person crossing a highway could be simpler to begin with:

A test road intersected by a tram track with a motorized trolley on which various test subjects are mounted; the car drives the road (at night, under controlled lighting conditions) and the trolley drives the test subject across the car's path at various speeds and times. The car must perform an emergency brake, swerve, or other safety move depending on the test subject (a crash-test dummy, cardboard box, plastic bag, fake bird, etc.).

The test result data could be fed back into the machine-learning to inform simulations for further testing in virtual environments. A company with Uber's resources can afford to build such a test-track and should have. There's a gap in the market for a company to provide such testing facilities for car makers.


> This is essentially equivalent of a driving test but would be a controlled environment that simulates real cities.

So, which is it? A controlled environment, or a real city? I think it is a bit naive to expect /every/ edge-case to be worked out prior to even a testing permit being given out. If you could pass a test that faithfully simulates ALL the intricacies of driving in a real city, you're well past testing phase anyway.

Further, this ignores that we may be able to incentivize this behavior (learn all the lessons you can in controlled environments before putting people in danger) in other ways, e.g. financial penalties.


Those tests would be required but not enough to prove the car is safe. If you fail the test, you are not allowed to test in the real city with a safety driver; if you pass the test, you can move to the next step of testing, with the safety driver in the real city.


Out of curiosity, is anyone aware whether it might serve some use to do tests like this virtually? Drive a car around to gather a mass of raw sensor readings on a busy street, and then feed those readings to test cars as inputs?

I guess that description doesn't make much sense, but it would be interesting to develop a virtual testing environment for testing the algorithm package even though that doesn't address hardware and numerous other concerns.


I think that's what NVIDIA tried to do with their BB8 test car [0]. I don't know what came of that.

[0] https://youtu.be/LVBBKppAaV4


That's super cool!

Looks like Nvidia has turned it into a self-driving car on chip, due for release later in 2018:

https://www.engadget.com/2017/10/10/nvidia-introduces-a-comp...


Google^W Waymo's cars have "driven" millions of virtual miles.


They've driven 5 million real miles and billions of virtual miles.


Yes, multiple virtual testing environments with sensor simulation/playback exist. See Microsoft's AirSim, or the CARLA simulator.
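For anyone curious what that looks like in practice, here's a toy sketch against CARLA's Python API (a CARLA server must already be running; blueprint IDs and attribute names vary between CARLA versions, so treat the details as assumptions):

  # Toy CARLA scenario: spawn a car, attach a simulated lidar, stream its
  # point clouds to a callback standing in for the driving stack under test.
  import carla

  client = carla.Client("localhost", 2000)
  client.set_timeout(10.0)
  world = client.get_world()
  bp_lib = world.get_blueprint_library()

  # Spawn the car under test and bolt a simulated lidar onto its roof.
  car_bp = bp_lib.filter("vehicle.*")[0]
  spawn = world.get_map().get_spawn_points()[0]
  car = world.spawn_actor(car_bp, spawn)
  car.set_autopilot(True)  # let the traffic manager drive for this toy demo

  lidar_bp = bp_lib.find("sensor.lidar.ray_cast")
  lidar = world.spawn_actor(
      lidar_bp, carla.Transform(carla.Location(z=2.5)), attach_to=car)

  # Each simulated sweep arrives here; a real test would feed it to the
  # perception stack and assert on the resulting driving decisions.
  lidar.listen(lambda cloud: print(cloud.frame))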


Above all I'm interested in why the car apparently did not apply the brakes when a person appeared in the visible-light camera.

The NTSB investigation should be interesting.


Because it likely only uses the LIDAR for obstacle avoidance. The camera is there for stoplights, other cars' turn signals, etc.


Good, a CI for road safety. I would love to see that.


Call your representative and demand legislation.


Reminds me of Google's desire for a test neighborhood...


> audit their # of disengagements in each of these scenarios.

This would incentivise the companies (and force the drivers) to not disengage in borderline dangerous situations.


> Require that operators test X miles of specific scenarios (city driving, highway, etc.) each month and audit their # of disengagements in each of these scenarios.

They need to go beyond periodic checks. Simply have constant video telepresence any time they are operating. Another operator monitors them to ensure they are attentive.


This sounds expensive, unnecessary and would result in creating systems that were built to defeat a test, not reality.

Most current human drivers wouldn't be able to pass a test with the conditions that resulted in the fatal crash.


This is simply not true. The dashcam footage released by Tempe police does not depict the driving conditions at the site of the accident accurately at all. Check out [1] for the actual driving conditions at night.

I'm not usually one to talk about conspiracy, but Tempe police had no legal obligation to release the footage since the investigation is still ongoing. Couple this with the statement released by the Tempe chief of police defending Uber, and I'm inclined to believe Uber and some state government officials are working together to sway public opinion and minimize damage. At the end of the day, the victim is painted as an idiot who practically committed suicide. This is gross injustice.

[1] https://www.youtube.com/watch?v=1XOVxSCG8u0


The thing is that it doesn't really matter. There are only 2 options here: either the car should have seen the woman and taken corrective action, or it was driving faster than it could see. Just because the speed limit says X doesn't mean that's the speed you should be going. Often, actual conditions require driving slower.


Well, I do feel like I've been had after that and a few other videos, so thank you for that.

My point though, is that I don't want self driving cars to defeat simulations or tests, I want self driving cars that can react to real driving conditions.


But how else do you imagine scenarios like these are tested? Normally there aren't a lot of people willing to jump randomly out in front of cars to test their reaction.


I really don't know. Even with all the hours of recorded road time by Google/Uber used as a testbed for proper driving conditions, most machine learning systems still suffer from figuring out ways of cheating[1], and in those cases it could be fatal.

[1] "The program even learns how to take advantage of bugs and glitches, like timing jumps so that Mario begins falling again at the exact time that he makes contact with a Goomba. Mario's invincible when he's falling, so the touch kills the Goomba, not Mario, and it gives him a further jump boost." http://www.wired.co.uk/article/super-mario-solved


That's a problem for the test design. Maybe we can develop an approach that uses more randomness to test these systems.

But I don't see how you can avoid having a simulation. These events are just too rare to wait until they occur in real life.
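Property-based/fuzz-style testing is one way to get that randomness while keeping failures reproducible. A minimal sketch; every parameter name, range, and the simulate() stub below are invented for illustration:

  # Seeded random scenario generation: any failing seed can be replayed
  # exactly and kept as a regression test.
  import random

  def random_crossing_scenario(seed):
      rng = random.Random(seed)
      return {
          "car_speed_mph": rng.uniform(25, 50),
          "pedestrian_speed_mps": rng.uniform(0.5, 2.5),
          "crossing_angle_deg": rng.uniform(45, 135),
          "lighting": rng.choice(["day", "dusk", "night"]),
          "clothing_reflectivity": rng.uniform(0.02, 0.9),
      }

  def simulate(scenario):
      # Stub: would run the scenario in a simulator and return True
      # if the car avoided the pedestrian.
      return scenario["lighting"] != "night" or scenario["clothing_reflectivity"] > 0.05

  # Fuzz thousands of variants; log the seeds of any failures.
  failing_seeds = [s for s in range(10_000)
                   if not simulate(random_crossing_scenario(s))]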


It is not ML deciding where to turn or when to slow down.


[flagged]


I think comparing this to human drug trials is an appeal to emotion.

I also think you're saying that bad tests are better than no tests, which I don't agree with. These would be bad tests because they wouldn't capture the complicated reality of driving; they inherently only summarize the output of such a complex system.


You seem to have confused the concept of bad tests with incomplete tests. Asking the car to stop when pedestrian-shaped obstacles appear seems like a bare minimum that we should expect from an AI before letting it drive on real roads. We can't test every driving situation in a lab, but that doesn't mean we can't test some of them.


[flagged]


It would be morally bankrupt to rely entirely on a quantitative test for something that is qualitative. I don't know how much clearer than the previous comment I can get.


[flagged]


We've banned you for repeatedly violating the site guidelines.

https://news.ycombinator.com/newsguidelines.html


You're missing the fact that any time someone takes a drug, they're essentially testing it against their unique body chemistry. The same as self-driving cars in unique situations.

The only difference is the car feedback loop is way better instrumented, and doesn't have to rely on mass statistics to inform recalls.


> Half of all humans who have ever lived, died from mosquito-borne illnes[s], about 5,000,000,000 human lives.

Your math doesn't work.

The current population of the world is between 7 and 8 billion.

If half of all humans who have ever lived were 5 billion, the total number of people who have ever lived would be 10 billion, of which at least 7 billion are still alive.

That leaves less than 5 billion who could possibly have ever died for any reason, much less mosquito-borne illnesses.


“Modern” Homo sapiens (that is, people who were roughly like we are now) first walked the Earth about 50,000 years ago. Since then, more than 108 billion members of our species have ever been born, according to estimates by Population Reference Bureau (PRB).

Source: https://www.prb.org/howmanypeoplehaveeverlivedonearth/


I imagine just rolling some objects across the road or some self-riding bicycles should be enough.


But what if the self-driving car fails the simulation? The simulation would be the first thing the car needs to pass: if the car passes the simulation, then special tests, then you get approved to test on public roads with a safety driver. I have no idea when the right time is to test without a driver.


I know this is going to sound uncharitable and I acknowledge that any death is tragic but...

This person carelessly crossed a busy road at night! The victim practically did commit suicide. She could have just as easily been killed by a human-driven car, just like thousands of pedestrians are every year. Why are we so quick to try to blame the car? At least the car was in the right place.


The car was not in the right place, or it would not have hit her. In the state of Arizona a pedestrian has right of way in front of any vehicle, and the driver must yield to any pedestrian on the same side of the road.

Edited to add: We don't know anything about her physical or mental condition that might have made it difficult for her to perceive the car or the distance when she was crossing, so to assume that she had perfect perception and chose to cross in front anyway assumes facts not in evidence.


Coming back to this, I was wrong about the details of Arizona law. They require pedestrians to yield to drivers outside of crosswalks, marked or unmarked.

https://www.azleg.gov/ars/28/00793.htm

However, I still think Uber violated 28-794.1 and .3.

  Notwithstanding the provisions of this chapter every driver of a vehicle shall:
 
  1. Exercise due care to avoid colliding with any pedestrian on any roadway.
  2. Give warning by sounding the horn when necessary.
  3. Exercise proper precaution on observing a child or a confused or incapacitated person on a roadway.
https://www.azleg.gov/ars/28/00794.htm


I also think most current human drivers would expect to spend significant time in jail for causing death by driving so carelessly that they didn't even try to brake when confronted by a pedestrian in the roadway (even if she shouldn't have been there and an accident may not have been entirely avoidable).

I'm fine with Uber's executives and programmers accepting the same level of culpability as an alternative to meeting higher standards than humans to get on the road. Ditto for their culpability for potential bugs which would be considered [attempted] vehicular homicide in the absence of a more logical explanation for the vehicle's actions if driven by humans. Abdicate that responsibility and you'd better be prepared for more stringent tests.


Why do they call them accidents when it is usually negligence? This is nothing new.


I hate that word. I use crash instead.


Tests allow irresponsible people to blame the test when their system fails. I worry that a system like that will cause the opposite of your intentions.


Human drivers don't get to cite their test passes when their driving fails either. The point isn't that [overoptimisation to pass] tests should be the only thing on which they are judged; the point is that if autonomous driving system developers want the special privilege of not being held as strictly liable as humans for erratic driving resulting from decisions they made, they really ought to expect more stringent testing regimes.


That video is hardly representative, you don't need street lights to see people at night. Simple car headlights are plenty to see someone in time to stop at 45 MPH.

The 'driver' was not paying attention and the software failed. An attentive human driver acting alone would have easily stopped.


> Simple car headlights are plenty to see someone in time to stop at 45 MPH.

That's a common misconception. You will see the pedestrian in time if they are near a street light or wearing something reflective. In complete darkness, if the pedestrian is wearing bright, colorful clothing (say, a yellow jacket), you will see them from 120 feet or so; often enough to react and come to a full stop, but not if the road is slippery or the driver less than fully attentive. And a pedestrian wearing black in the darkness is a dead pedestrian, basically invisible until the vehicle is on them.


I have stopped from 45 MPH for a black dude crossing the street in dark blue clothing, visible only in my headlights. Night vision degrades with age, but it's simply unsafe to outrun your headlights, and if you can't see, you need to slow down.

Now, some mostly older cars have terrible headlights, but that's another story. http://www.iihs.org/iihs/sr/statusreport/article/51/3/1 Note Uber hit someone in blue jeans with a pink bike, which is not that dark: https://www.theguardian.com/technology/2018/mar/22/video-rel...

PS: Just starlight, aka no moon and no clouds, is plenty of light to see and walk around by. I can't read small text in it, but you really should not be blind at night.


Here's the Finnish road safety council infographic about pedestrian reflectors (required by law in Finland): https://www.liikenneturva.fi/en/road-safety/reflector

They claim a pedestrian without reflectors can be seen from 50 m with low beams. Other sources I've seen break it down by the color of clothing: white might be seen from 50 m (160 feet), yellow from 37 m (120 feet), and darker colors even worse, down to black, which is visible from a distance too short to react.

High beams are a much different story, of course. And perhaps the numbers for HID lamps on a luxury car are better - at the expense of blinding and distracting the other drivers on the road.


Good low beams don't blind other drivers. They create a curtain of light that's fairly uniform, as bright spots hurt night vision.

At 45 MPH you should stop in under 50 meters in good road conditions. 50 MPH is around 53 meters +/- depending on the vehicle and road conditions.

Reflectors are very useful because they add cushion to that. And believe me, that was a heavy braking situation. But the Uber car did not even start to brake.


I think that my eyes are better at night than the camera that shot that footage. It looked like she was wearing dark clothing though.


I just drove that stretch of road twice. I had no trouble seeing any portion of the road where the pedestrian was crossing, from easily 100 yards away. This human could have stopped. My assessment is that the majority of human drivers in AZ could have also.


Most humans can't do large square roots in their head either. I still expect my calculator to do it properly. What's the point of adding lidar to cars if they aren't going to use it?


You don't think most humans would see someone crossing a flat straight road in an area with mediocre lighting? If that's true we should have thousands more deaths a year like this.



I thought the point of SDVs was that they would be radically better than human drivers? You can’t have it both ways.


Regulate that testing on public roads must be done by a certified third party. It's not perfect but would reduce perverse financial incentives...


But regulations cost jobs! Uber could just move to another state!

>The Phoenix area was added a year ago, and quickly became the company’s main testing ground, with 400 employees and more than 150 autonomous cars driving local roads because of "favorable regulatory environment, favorable weather conditions,” according to a company document.

The sorts of regulations you're suggesting cost California the privilege of hosting Uber's self driving program.


And no one in California is dead because of that.


Yes, and I love that California requires that the operator report on disengagements. It gives us some measure of how well the cars are performing (obviously not a great measure). The fact that the industry leader (Waymo) is fine with California's regulations shows that they aren't onerous.


Exactly. I may need to be a bit more explicit with a /s tag next time.


How about blame the driver behind the wheel who was hired to drive the car and was 100% responsible for everything that happened?

Funny how the pile on is targeted at Uber and the tech, when this is a test vehicle with a human driver who was hired to be responsible to monitor, respond, and override the computer at all times.

The human failed. Blame the computer. Makes sense.


Why not blame both? The human should have been paying attention.

But it's obvious that riding long stretches in an automatic car is going to lead to people getting bored and distracting themselves and not paying attention. It's a HUGE topic in every discussion on self-driving cars. It's why Teslas require you to grab the wheel every 15 seconds.

Uber should have short shifts, bans on personal cell phones, attention and tiredness tracking (which IIRC Volvos already have!) etc, etc to combat this. But they don't actually care, it's just a regulatory checkmark, so they stick some minimum wage warm body in there on a full-time shift and this is what happens when it gets late.

Other safety-conscious industries know that humans will cut corners and hence have tricks like safety interlocks, double-checks, multiple people involved, etc. to combat that. In aerospace you can't just say "well the maintenance guy shouldn't have done that, 300 people died and it was all his fault" - you have redundant systems, verifications, checks, etc.

edit: from the article, Uber used to have two safety drivers in each car, but cut it down to one for whatever reason

> When Uber moved to a single operator, some employees expressed safety concerns to managers, according to the two people familiar with Uber’s operations. They were worried that going solo would make it harder to remain alert during hours of monotonous driving.

They knew this was going to be unsafe but did it anyway


While that person wasn't looking at the road, the blame should fall on Uber and the state of Arizona. Someone sitting behind a wheel and not driving is going to have a much slower reaction time than someone actively driving. These cars were not ready to be driving at such speeds at night.


> Waymo, formerly the self-driving car project of Google, said that in tests on roads in California last year, its cars went an average of nearly 5,600 miles before the driver had to take control from the computer to steer out of trouble. As of March, Uber was struggling to meet its target of 13 miles per “intervention” in Arizona

There can certainly be arguments about whether the kinds of miles driven are equivalent here, but with a more than two orders of magnitude difference (5,600 / 13 is roughly 430x) it hardly matters. I expected Uber to trail the leader a bit, but I didn't expect that kind of huge difference.

This looked bad from the start for Uber, and it's only getting worse. I can understand that tests on public roads are necessary to actually achieve the goal of autonomous driving. But that step should come after sufficient tests in controlled environments that indicate that you got the basics down.


Uber's competitive advantage has always been its willingness to engage in unethical behavior. No surprise here.

They were forced out of California because their system didn't work, so they just put it on roads in Arizona instead and killed someone.

Edit: grammar


They weren't forced out; they just didn't want to comply because they would have to report incidents, which would show how far behind their competitors they are, which would kill their valuation.


It's a semantic difference and exactly my point. They were effectively forced out of California because their technology doesn't work.

https://www.theguardian.com/technology/video/2016/dec/15/ube...


It's interesting how so many techies initially had the knee-jerk reaction of "well, despite this accident, I bet these Uber autonomous cars are still safer than humans".

It seems like all information available to us suggests the polar opposite.


After nearly getting hit by a car while on a crosswalk in broad daylight today, I don't think "polar opposite" is correct.

I actually made eye contact with the driver, and expected her to slow down (I was right in the middle of the street). Instead she sped up to get through the crosswalk ahead of me.


Instead of cars driven by assholes, we have to worry about cars programmed by assholes.


And given that a single asshole can drive only one car at a time, but can program a million cars, and given this is Uber we're talking about, I'm worried too.


I've had this happen to me so many times. It's insane that driving laws aren't strict in proportion to how much damage an irresponsible driver can inflict.


I don't know if that's better than what happened to me: she just screamed and threw her hands up while I jumped out of the way.


Neither is exactly confidence inducing, but I'm pretty sure the one who deliberately endangered another person's life to save a few seconds is worse.


I believe it about google. I don’t trust uber at all. Or Tesla for that matter.


Why do you believe that about Waymo?

We have no information at all about the progress Waymo is making. All we've seen so far are a few videos produced for PR under highly controlled conditions.

There's basically zero transparency here.


If they are struggling to meet 13 miles per "intervention", how was the "safety driver" not paying full attention on the road at night? That's baffling.


They were averaging, I think, 200 miles per safety-critical intervention. So every dozen-or-so miles the driver would have to take over to do something like go around a truck blocking the lane, or construction, or whatever, but the car would come to a stop safely.


Whoa. The car couldn't even pass a stopped vehicle on its own? That's terrifying. It's glorified cruise control.

I think Uber is basically pretending to develop a self-driving car.


> As of March, Uber was struggling to meet its target of 13 miles per “intervention” in Arizona

To be fair, Uber is not necessarily acting badly -- especially since AZ doesn't appear to care either way -- having the safety driver is ostensibly supposed to make the AV at least as safe as if it were a regular human-driven car. I hope during the investigation we find out whether Uber made it clear to its human drivers that they had to be at least as alert as if they were driving a normal car.


As a human driver, when do you know to take control of the vehicle and start stopping? At 100 feet? 200 feet? 300 feet (a football field?)

A quick Google [1] suggests the average stopping distance of a human traveling 35 mph is 95 feet (including reaction time). The promise of autonomous is 60 feet. If the human decides to stop at 60 feet, it's too late. If they stop before that, how do they know if the vehicle is working properly?

1. http://www.brakingdistances.com/35Mph
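That 95 ft figure decomposes into reaction distance plus braking distance. A quick back-of-the-envelope version (the reaction time and deceleration below are my assumptions, which is why it doesn't exactly match the linked calculator):

  # Stopping distance = reaction distance + braking distance.
  # Reaction time and deceleration are illustrative assumptions.
  MPH_TO_MPS = 0.44704
  FT_PER_M = 3.28084

  def stopping_distance_ft(speed_mph, reaction_s=1.0, decel_g=0.8):
      v = speed_mph * MPH_TO_MPS           # speed in m/s
      reaction = v * reaction_s            # distance covered before braking starts
      braking = v * v / (2 * decel_g * 9.81)
      return (reaction + braking) * FT_PER_M

  print(round(stopping_distance_ft(35)))  # ~103 ft with these assumptions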


You don't need a complete stop to turn a fatality into a minor injury.


If you see something and the software does not. It probably doesn't provide that sort of information to the driver though. Even then the driver probably isn't paying attention.


I suspect this is simply asking too much of a human driver. The human brain is not wired to be constantly alert without any ongoing stimulus. If something else is carrying the cognitive load, it is extremely difficult not to get bored and 'drift off'.


The article says that Uber instructed their drivers to hover their hands over the wheel in order to be able to take over at a moment's notice. But they also supplied an iPad app to report issues, which some safety drivers used while in motion (others were cited as using it at red lights).


The hovering hand thing weirded me the hell out. You'd get terrible RSI if you actually did that all day. Try it.


It is pretty much impossible to be "at least as alert as if they were driving" when you aren't actually driving. And yes, Uber is acting badly. Just because Arizona let them do it doesn't mean they had to do it. Of course, I believe Arizona also acted poorly.


I mean, in terms of "conditions", it doesn't really get much better than Arizona. That's why most self-driving cars test there. By most metrics, it's actually easier there than in California.


Well, except for the other drivers on the road.

Horribly horrible drivers here.


It's a cliche that everyone thinks this about their own particular area, but it's true that people in different areas (sometimes within the same state) drive badly in very different ways. Just another factor that SDVs will have to be tested against.


Yep, the rub is that this time of year we get people from all over bringing their "drive badly in very different ways" with them, which makes for some interesting interactions on the roadways.

Summertime, not so bad -- nobody willingly comes to Phoenix when it's 117° except the conventioneers who don't have a lot of money to burn and get a discounted rate on hotels and the convention centers.


They seem about middle of the pack to above average. They're definitely not New Jersey:

https://fivethirtyeight.com/features/which-state-has-the-wor...


Also note: The number of miles per intervention isn't a metric of AI driving skill, but of how safe the company is being to other road users.


In science, when there's a two orders of magnitude difference, it's more indicative of a distribution mismatch between the test sets. The logarithmic property of the learning curve suggests that, unless one approach is a completely random walk, two learning algorithms must, after a period of training, converge to their maximum capacity. As a scientist, I refuse to be clouded by my prior judgement of the companies behind the approaches and must question the nature of the metrics and the various definitions that were used.


This supports my thesis that Waymo is building a self-driving car but Uber is only building a pretend self-driving car (to fleece investors, who realize Uber will go bankrupt if and when anyone other than Uber achieves total success with self-driving).

Google is playing chess; Uber is playing checkers. Don't be fooled just because the board looks the same.


What are you smoking?


> As of March, Uber was struggling to meet its target of 13 miles per “intervention” in Arizona

For reference’s sake, here are the disengagement ratios for the companies who operate in California, which requires such disclosures:

https://www.google.com/amp/s/thelastdriverlicenseholder.com/...

Waymo averaged more than 5,000 miles per disengagement, fwiw. Though as the link mentions, the definition of “disengagement” can vary


Something to note - a mile is a mile here. I can buy an off-the-shelf Tesla and have it do laps on my neighborhood street until the cows come home; this is not the same kind of mile as downtown SF in peak traffic.


I don't think the majority of those miles are in SF. I expect most would be in CA suburban towns like Mountain View, where for the past few years I'd see a Waymo car basically every time I drove for more than 10 minutes.


The conditions under which Uber failed are similar to those in Mountain View if there was no traffic (which there rarely is). I suspect Google's car would not have had the same issues.


I agree


Also, from the videos I've watched on YouTube, Tesla Autopilot will refuse to take over under a lot of conditions.


Arizona, and all states really, should make this data and much more mandatory for testing. Not only that, but there needs to be a "minimum capability" you need to be able to prove before you're allowed to test on public roads.

No wonder Uber stopped driving in California, they didn't want people to see in what disastrous state their technology is. It's honestly ridiculous how reckless they are, all in the name of "winning" and being first.


But then they won't be able to win the regulatory arbitrage game. Why do you think Uber left California to test in Arizona? If the regulations were equivalent, do you think they would have done that?


They were driving in San Francisco up until the accident, and stopped their tests.


Hmm, then how are they bypassing the disengagement report?

https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/disen...


I believe they got a permit after the last cutoff date, they'll show up in the next one most likely.


This is correct (AFAIK). Uber did operate AVs in San Francisco around 2016, but they did so while openly refusing to apply for permits [0], which would have then required them to disclose. They pulled their AVs at the end of 2016 and moved them to Arizona. Only recently, did Uber apply properly to have cars in SF, and apparently it's only been so far to give rides to Uber employees [1].

Also of note from [1]: apparently California is just weeks away from allowing AVs with no human operators. I'm guessing Uber wouldn't have been ready to try that out in any reasonable scale (if the state of the program is as bad as the internal docs say), though the cynical person in me thinks that, if the AZ fatality hadn't happened, they might have had a non-human-assisted AV in SF just to get some slice of the publicity.

[0] https://www.theverge.com/2016/12/22/14062926/uber-self-drivi...

[1]


Wow, they have a complete disregard for safety and just care about their stupid valuation. This isn’t even a case of “hacking” for innovation. The company seems to be rotten to the core, with everyone just salivating to profit from the IPO.


Please de-AMP your links.


I don't see Uber on the list, what brand are they under?


Last year they had not been in operation long enough to require disclosure.


This is totally not surprising. Everything we know about Uber leads me to believe that they would try to shortcut this project's development. Shame on Arizona officials for being so lax with such a sensitive policy.


I just don't fully understand why Uber is attempting to compete at the technology level.

The most important components, like the Lidar systems, are all third party and non-exclusive.

And the likely current market leader in the tech - Waymo - has made many indications that they see their largest opportunity as providing the technology through partnerships.

Heck there's even been noise recently that Uber IS trying to license Waymo's tech (1)

And then you have major auto manufacturers also pouring BILLIONS into the space because guess what - they still want to sell cars without having to pay a Waymo tax!

Today Uber owns 70 - 80% of the current US ride sharing market and are in ~700 other international cities (2).

And they have thousands of employees that act as local stewards of ride sharing etc, real infrastructure on the ground & politically that should give them some cushion to sort things out.

So yes, maybe Uber's valuation would take a hit if they relied on Waymo, or an auto manufacturer.

But maybe Travis' infamous quote about "I have to be tied for first at the least." needs another look or revision at this point? (3)

Maybe it's ok for Waymo to be first in a field of their own if it means not killing fucking pedestrians?

(1) https://www.engadget.com/2018/03/05/uber-waymo-partnership-f...

(2) http://www.businessofapps.com/data/uber-statistics/

(3) http://www.businessinsider.com/travis-kalanick-interview-on-...


I agree - Uber's definitely not going to fare well. Maybe Travis' thinking/realization was that Uber is just an app and maybe the major auto manufacturers will realize that and not want to pay the Uber Tax and just put their cars out on the road using their own in-house hailing app? Also prior to the accidents - it was great free media coverage for the future IPO when talking with investors and recruiting smart engineers.


> their own in-house hailing app

This is very likely to be the plan.


There is no way you can justify the expense of development unless it is the plan. You can't make that much money selling self-driving as a very expensive (and finicky) option. Ride-sharing puts cars on the road, rolls them out regionally, and maximizes the utilization.


Uber as a service. Some company will build it and run the backend for most car companies, branded for each one.


> And then you have major auto manufacturers are also pouring BILLIONS into the space

That is now. But at the time Uber made its decision to develop self-driving cars, the auto manufacturers had just gone through a bruising financial crisis and were skeptical about this new self-driving technology. Uber recognized it as an existential threat before anyone else and did not want to rely on an industry with a poor reputation for management and technology investment.


Such is the nature of free-market economies: companies will go to great lengths, even wasting billions of dollars and thousands of man-days, for the sake of "competition".


I thought the conventional wisdom was that the only way Uber can become profitable was through autonomous vehicles. It sounds like they're very unlikely to succeed at that...

Arizona should consider improving their regulations to try to avoid additional fatalities.


They could always buy from someone else... but Uber seems to have extreme competitor paranoia when it comes to buying any services.

I think they also spent $500M to make their own maps when you could use any of the popular mapping APIs out there.


Yes, they COULD licence the tech from someone else, but the price would be too high to make it worth doing.

My feeling is that if someone else has self driving technology, Uber has no way of adding value. Tesla, Waymo, whoever - will simply run a taxi service themselves and reap more profits.

It's possible that a smaller self driving company might not have the capital to start an actual fleet, but I feel like investors would go for it as it'd be a pretty sure bet, provided the technology is solid.

I think Uber outright acquiring another company that has the technology is more likely - again, that company would probably also just start their own taxi service, but it might work if they have different goals - the owners may simply be looking for a short term payout.

Uber knows that. All they have is a userbase with an app installed, which isn't of as much value as it is for say, a social media app, because there's no network effect. You don't have to use the same taxi service as your friends do. People will be more than happy to switch to a different service if the price is competitive and it doesn't have a bad reputation.

It's not impossible they'd licence self driving tech, but I doubt the economics of it.


I wonder if Uber will have the capital or credit to roll into acquiring a fleet of self-driving cars by the time the tech is even ready. Remember, they don't have any fixed transit assets now -- their driver contractors are all on the hook for that.


Whoever is first to market will compete directly against Uber. There is no point in selling autonomous cars to Uber.


Not necessarily. There are a lot of reasons for a car manufacturer not to enter the transportation market:

They have no history of operating in that market. It's a major undertaking to compete across the globe. The other transportation companies will now be their competitors, and will refuse to buy their cars. They may not be successful because of marketing, positioning, etc.

If they refrain from that, they can sell their cars to Uber, Lyft, et al., which we already know the manufacturer is good at. They can sell their cars to multiple transportation companies that can offer varying levels of premium features, differentiate themselves with marketing, etc. And they can sell a large number of cars to Uber more immediately, since Uber is already in so many cities. They also won't need to deal with cab regulators in each city, etc.

Manufacturers sell their cars to dealerships, who stock the cars on their lots until they sell; this lets the manufacturer turn over inventory faster. If they built a transportation network, they would need to maintain those assets on their books.

If it were simple, I think we would see more manufacturers operating rent a car services. But we don't.


Both BMW and Mercedes run self service car rentals (ReachNow and Car2Go).


And yet those companies don't dominate the rental car market, and plenty of other manufacturers sell cars to other rental companies.

Proves my point. Having expertise in building cars doesn't translate over into running a transportation company. They're two separate enterprises.

Uber isn't going to go out of business just because someone else produces the cars.


"and yet those companies dont dominate the rental car market.." this is a fair point, however I feel there's more to rentals that stands in the way of that, such as real estate. There's only room for so many car rental places in a convenient location at an airport, for example.

I'm not saying setting up and running a company like Uber is an easy task, and I acknowledge that not every company vertically integrates everything they possibly could, but I don't feel like the hurdles to running a taxi app are as significant for a car company as they would be for entering other related markets.


> convenient location

Uber and Lyft don't even have taxi stands, drivers have to come from a much larger distance than even cabs, yet their usage is some multiple of taxis even with the inconvenience.

With SDC, having access to monopoly power locations will be less of an issue. And with Mercedes and BMW already sprinkled throughout an area, cars are usually close.

I think there is merit in someone having learned all of the issues with running a transportation company, but it isn't a billion dollar lesson, it is a low tens of millions of dollar lesson.

But I don't think being first has much merit, I think the margins will quickly shrink to zero within 5 years of SDCs and we will see a couple major auto manufactures go bankrupt. Ironic if the self driving car of the future is a bipedal android that makes any car a self driving car. Turning a 15k Hyundai electric into an autonomous vehicle.


And Silvercar was funded by and now acquired by Audi.


> Whoever is first to market will compete directly against Uber.

That's true, but then Uber can just run at a loss until someone else develops self-driving tech and sells it to Uber.


I'm not fully up to speed, but I thought Lyft and Waymo had something of a partnership.


Uber is cheaper than cabs ... Is there a reason why Uber needs to be cheaper than cabs?


It's only cheaper because it's operating at a loss. Taxis do not operate at a loss.


But why does it need to be? I would still take a Lyft/Uber because the service is better, even at the same price.


You’re probably atypical. I don’t use any of these services much and am not very price sensitive especially given that it’s almost exclusively for business travel. But I’m guessing a lot of people using these services care a lot about 2x differences in price.


It needs to be because they're the new(er) player. There's a vast difference in attracting customers between "costs the same, but it's neater" and "it's neater and cheaper!".


Once all the full-time taxi drivers are out of their traditional cab job, who do you think they will drive for?


Here is the supposed technological marvel (the NYT even has an infographic showing the various whiz-bang sensors on that rampaging SUV), but then someone hands them a 140p dashcam video with half the pixels in any given frame truncated to pitch black, and no one at the Times thinks to question that particular narrative?

Instead we get a weak story about "pressure building before the CEO visit" and a bunch of blame on the unknowing scrubs Uber has "test-driving" them.


My takeaway opinion: Sad to see that Khosrowshahi did not end the self driving program at Uber.

Yes, I know hindsight is 20/20, but that's exactly why an incoming CEO is brought in. Everyone was saying autonomous vehicles were Uber's only possible future... he saw the internal reports; he knew (or should have known) his talent (was it "techbros" or computer scientists?).

I feel this is the real inflection point for Uber.


I think Uber's essential mistake is to go from "autonomous vehicles are the only way our business model makes sense" to "we should be building autonomous vehicles".

Their expertise is in UI, routing algorithms, and business development. There is nothing in their background that indicates they should have any competitive advantage in building autonomous vehicles over Google/Waymo, the proverbial 800-pound gorilla of everything AI, or GM/Cruise, with experience in safety-critical engineering and oodles of capital to throw at the problem.

Sure, base your business model on the eventual development of driverless cars, but buy the damned things from the professionals, eh?


They essentially bought CMU's robotics department, which ranks #1 (among universities) at almost all AI conferences. They also bought part of UToronto's ML department, which is where Geoff Hinton was teaching.


I think Uber has been on a hiring push the last few years for quality engineers, so it's not inconceivable they could make this work. I also think them working on this mostly makes business sense. To me it sounds like they rushed the development and tried going too far, way too fast.


“It also appeared that the driver’s hands were not hovering above the steering wheel, which is what drivers are instructed to do so they can quickly retake control of the car. ”

Really? Who can sit for even 30 minutes with their arms out in front of them without getting sore?


13 miles per intervention. These cars should not have been on the road. Period. That's pathetic. Uber should be held criminally liable for this. And so should whoever approved their application in the AZ gov't. Maybe the driver too, if she knew this statistic. Her manager who sent her out solo, for sure. This is truly outrageous. I'm in full support of autonomous vehicle development, but not when there is clearly zero oversight by authorities.


Uber's valuation seems to hinge not just on them getting AVs, but on the combination of getting AVs, beating competitors to it, and having the capital on hand to build out a large fleet of AVs, all without losing too many riders to other services like Lyft.

However fast they were progressing towards real AVs they'll be progressing slower now. And even then it looks like they are behind their competitors.

If I could short Uber I would.


Is it harder to stay attentive in an autonomous car than it is as a normal driver?


Yes. This is a well-known psychological phenomenon; see e.g. https://cacm.acm.org/magazines/2016/5/201592-the-challenges-...

> When a navigation system performs well over extended periods, drivers may no longer feel they need to pay close attention. Indeed, many psychological studies show people have trouble focusing their attention when there is little or nothing to attend to. In such situations, they tend to reduce their active involvement and simply obey the automation. There is already ample evidence drivers disengage from the navigation task when the automation is programmed to lead the way. But how pervasive is this? Casner and Schooler found even well-trained airline pilots report engaging in copious amounts of task-unrelated thought, or "mind wandering," when an advanced navigation system is being used and all is nominally going to plan.


I bet night driving compounds with this.

It may be reasonable to ban night-city autonomous driving near the beginning of the technology's public introduction, regardless of how well the car can see at night.


The other reason for banning night-city semiautonomous would be how many people thought it didn't really matter if they were drunk because they didn't need to intervene that much anyway...


Yes, and you have to compensate for this, considering lives are at stake. One needs to carefully select personnel and supervise said personnel closely. The fundamentals are not rocket science, more like railroad-age science, e.g. the dead man's switch. There is no way for Uber to plead ignorance, as there are plenty of studies on the topic in the airline industry.

1) the person needs to be qualified as a true test driver (experienced auto engineer, professional driver with sufficient experience)

2) physical fitness, especially ability to stay awake and documented measured reaction times

3) strict rules on distractions, random checks

4) engineering measures for staying alert (sounds, tasks, etc.; a sketch follows below)

5) limits on driving times, documented work logs and much more

Someone at Uber should have had the task of making sure the tests were run as safely as possible. That person would have needed the qualifications and authority to affect the test execution.
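
To illustrate point 4: here is a minimal sketch of a railroad-style dead-man alertness check, purely hypothetical. The prompt interval, response window, and escalation behavior are all assumptions, not anything Uber is known to have run:

  import random
  import time

  PROMPT_INTERVAL = (30.0, 90.0)  # assumed: random prompt every 30-90 s
  RESPONSE_WINDOW = 5.0           # assumed: seconds allowed to acknowledge

  def alertness_loop(prompt, acknowledged, escalate):
      # Periodically challenge the operator; escalate on a missed prompt.
      while True:
          time.sleep(random.uniform(*PROMPT_INTERVAL))
          prompt()                          # e.g. chime plus dashboard light
          deadline = time.monotonic() + RESPONSE_WINDOW
          while time.monotonic() < deadline:
              if acknowledged():            # e.g. button press on the wheel
                  break
              time.sleep(0.1)
          else:
              escalate()                    # log it, alert a supervisor, slow the car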

The biggest problem, IMHO, is that Uber does not have to produce self-driving technology; it only needs to be seen as working on the leading edge of it to sustain its financial storyline.


Significantly, because you aren't engaged with the environment. You are just watching the car drive in circles, which is slightly more engaging than watching paint dry.


Wow. Compared to Waymo's 5,600 miles per intervention, Uber struggling to reach 13 miles per intervention (about 430 times worse) is a really bad sign for them. If it turns out their cars really weren't ready to be on the road and were endangering humans, this might bring a lot of problems for the company and for other self-driving programs.


Isn't this where a humanoid robot (e.g. some sort of Boston Dynamics-type robot) doing random, erratic stuff might be handy?

Maybe put a bunch of them in a test track area like this one in Concord, California. GoMentum Station is Silicon Valley's secretive test track for self-driving cars:

https://www.cnet.com/videos/inside-silicon-valleys-secretive...


Waymo has their own too: https://www.youtube.com/watch?v=Z3-2XuKdbMI

I do agree that there needs to be a minimum requirement you should reach before being allowed to drive on public roads. Companies should not be alpha testing with human lives.


Cool - was not aware of that one. At some point during the qualification process, multiple competing self-driving car vendors should be put in the same test bed at the same time for a month of continuous 24/7 driving.

The test bed should have the 'average moving car density' equivalent to that of an SF/NYC neighborhood.

Let the robots test themselves and risk their 'lives' first before putting real human lives in the mix!


I think there are much cheaper and more reliable ways to mimic pedestrians right now than full on walking robots...

But yeah some sort of random pedestrian/animal/debris in the road testing seems like it will be crucial to prevent this sort of accident from happening again.


The self driving car executive’s children. These guys are willing to risk my family to make a lot of money. We should make it mandatory that their kids play an extravagant game of frogger, instructed (when/where to cross) by college kids who get paid $$$ if one of the kids gets hit. Televise it. Let’s see how confident they really are in their technology.


A modest proposal.


Wow, some tech investors are taking it on the chin this week. Uber has to pay its customers now; this self-driving Hail Mary was the only thing supporting its value. [EDIT:] am I wrong about that? Tell me more!


The story focuses on the safety drivers not being particularly safe. Uber apparently has internal cameras, and they have to have reasonable CV capability to attempt the main task. Why not also try to get some measure of whether the safety driver is actually going to save the situation if it starts going south?
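
For illustration only, a crude offline version of such a measure could be computed from the inward-facing camera with the stock Haar cascades that ship with opencv-python. Real driver-monitoring systems use head-pose and gaze estimation; the frontal-face heuristic below is just an assumption for the sketch:

  import cv2

  # Stock frontal-face detector that ships with opencv-python.
  face_cascade = cv2.CascadeClassifier(
      cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

  def eyes_on_road_ratio(video_path):
      # Fraction of frames where the inward camera sees a frontal face,
      # a rough proxy for "operator facing the road".
      cap = cv2.VideoCapture(video_path)
      facing, total = 0, 0
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          total += 1
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          if len(face_cascade.detectMultiScale(gray, 1.3, 5)) > 0:
              facing += 1
      cap.release()
      return facing / total if total else 0.0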


> When Mr. Khosrowshahi took over as Uber’s chief executive, he had considered shutting down the self-driving car operations

As I thought, it seems extremely likely that Uber is going to cancel the project now. Ethically there would have to be some kind of meticulous safety audit which would put them even further behind the competition. Even if they wanted to continue, there's hardly any point.


but that latest Arizona crash wasn't their fault


My guess is their code is buggy or not responsive enough to deal with events at the rate required. The hardware can do nothing but report the data to the SoC; it's the software that apparently didn't process it in time. That should be apparent in their logs. Is that not being asked or probed?
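
If so, the simplest evidence would be per-frame latency logs. Here is a hypothetical sketch of the kind of watchdog that would surface the problem; the 100 ms budget and all names are made up for illustration:

  import time

  FRAME_BUDGET_S = 0.100  # assumed per-frame processing deadline

  def process_with_watchdog(frame, process):
      # Wrap the perception step and log any frame that blows its budget.
      start = time.monotonic()
      result = process(frame)
      elapsed = time.monotonic() - start
      if elapsed > FRAME_BUDGET_S:
          print(f"WARN: frame took {elapsed * 1000:.1f} ms "
                f"(budget {FRAME_BUDGET_S * 1000:.0f} ms)")
      return result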


I don't understand the fuss around this topic. In my country the law is very clear: in such a case the pedestrian bears 100% of the guilt and the driver has 0% responsibility. In the video, the pedestrian is hard to see. Yes, people point at lidar and other systems, but human-driven cars don't have anything like this: any human driver would have had the same accident in the same circumstances. And what about the pedestrian: didn't she see a car with two big headlights coming? Did she expect the car to stop or avoid her? Why does everyone blame the software (and Uber) and not the pedestrian, who crossed the street as if she were crossing her living room, where there are no cars around?


The “fuss” is largely because someone was killed, and independent of the tech issues here, many people don’t believe that death is a reasonable penalty for jaywalking. The fact that states differ on applying fault in these cases should be a hint about how the morality here is not clear-cut.

Then there’s the tech-focused issue of why Uber’s AV could not stop or even slow down when encountering a human-shaped object that seems to have been visible for several seconds before impact. Part of Uber and other companies’ pitch for AV is that it has better than human reaction and detection time.


The human was a human with a bicycle, in another lane. Seen sideways, she was probably large enough to look like a car, and that car was in a different lane.

I'm also not sure the human with the bicycle was sprinting across the road.


For one, because self-driving cars are supposed to be better than humans at seeing in the dark with all sorts of sensors - otherwise what would be the point of even allowing them on the road?

From the NYT article it seems that Uber's self-driving car systems may have terrible performance. Worse-than-human performance is very likely at this point, especially given their apparent attitude toward self-driving cars:

https://www.theverge.com/2018/3/20/17144090/uber-car-acciden...

Also, either Uber has potato cameras worse than a smartphone camera, or someone made that Uber video look artificially dark:

https://arstechnica.com/cars/2018/03/police-chief-said-uber-...


We expect people to operate within the limits of their capabilities. We don't demand superhuman response time because we can't. We rely on things like "best practices" to capture reasonable standards of behavior.

Self-driving car hardware is trivially capable of detecting a person walking in the road 80 feet in front of you. We should set our standards based not just on meeting human performance, but also on doing extra things that are well within the system's capability. A back-of-envelope check follows.
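
Every number here is an assumption (40 mph initial speed, roughly 0.7 g of braking on dry pavement, zero machine reaction latency):

  v = 40 * 0.44704          # 40 mph in m/s -> ~17.9 m/s
  a = 0.7 * 9.81            # assumed deceleration -> ~6.9 m/s^2
  d = v ** 2 / (2 * a)      # braking distance, v^2 / (2a)
  print(f"{d:.1f} m = {d / 0.3048:.0f} ft")  # ~23.3 m = ~76 ft

Under those assumptions, full braking started at 80 feet would just about stop the car, and any braking at all would have shed a great deal of impact speed.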


Whether the pedestrian is legally at fault is only part of the issue. Failure to yield is a ticket, not a death sentence. The driver also has no legal right to strike the pedestrian.

Every driver (or equivalent) has a responsibility, be it legal or moral, to use all reasonable means to avoid crashes, injuries, and loss of life.

No one will accept autonomous vehicles that do anything less. If anything, an autonomous vehicle may be expected to avoid risk to humans by suffering self-damage.


I don't know which country you are talking about.

Dash cams are of low quality, which means such low-light situations look hard on a dash cam but are easy for the human eye. So if the driver had been paying attention, the collision could have been prevented.

Second, every modern system can cope with this situation except Uber's. A regular Volvo XC90 would have engaged its emergency braking, a feature Volvo calls City Safety.


The video has poor dynamic range and is likely highly misleading. If the safety driver had been paying attention, it's quite likely they could have avoided it (although I imagine disengaging would negatively affect reaction time).


I wish this were brought up in all of the many news articles that have covered this story so far, if not by the police themselves. Even the common-sense argument seems clear: the visual range in the video is barely better than what you see when driving down an unlit rural highway. But the AZ road cuts through a public park where concerts are held, with the Marquee Theatre at the nearby intersection. The median from which the victim crossed even has a well-paved pedestrian path to get from a parking lot to, ostensibly, the park [0]. If the scene were as dark as the Uber dashcam portrayed it, there would be many, many more victims than just the one from Uber's AV.

But speaking as a sometime camera enthusiast and photographer, I don't believe there is any digital camera yet that comes close to capturing the dynamic range of the human eye.

Look at a still from the video. The exposure is such that the street lights are basically pinpoints that seem to illuminate little more than the patch of ground immediately beneath them. Anyone who has seen municipal street lights (or even the shitty ones on a country road) should be able to tell that this can't possibly be the actual amount of light at the scene [1]. A rough stops calculation follows the links.

[0] https://goo.gl/maps/xr5U1yHML6B2

[1] https://i.imgur.com/mP2jH6E.png
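
For a rough sense of the numbers, here is an illustrative stops calculation; the luminance values are assumptions, not measurements of that scene:

  import math

  # Dynamic range in photographic stops = log2(brightest / darkest
  # luminance still distinguishable). Values below are illustrative.
  under_lamp_cd_m2 = 5000.0   # assumed: bright pavement under a street light
  shadow_cd_m2 = 0.05         # assumed: unlit roadway between lamps
  stops = math.log2(under_lamp_cd_m2 / shadow_cd_m2)
  print(f"scene spans ~{stops:.1f} stops")  # ~16.6 stops

A dark-adapted eye handles a span like that comfortably; a consumer dash cam capturing maybe 10 stops in a single exposure will clip the shadows to black.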


Somebody posted a video on YouTube where they drove past the scene of the accident with a less crap camera, at the same time of day, a day later.

https://www.youtube.com/watch?v=1XOVxSCG8u0

A screenshot directly comparing the two images; apologies for the dumb meme crap:

https://i.imgur.com/eSre3hL.png

There is absolutely no way the Uber video is an accurate portrayal.


Well, I must admit I laughed at the meme.


The police's quick defense of Uber, and their completely inaccurate comment about the pedestrian jumping in front of the car out of the shadows, doesn't make sense to me except as collusion. A cursory look at even Uber's dashcam video makes it clear that the pedestrian didn't jump out and that the vehicle didn't even slow. So either the police were completely negligent or they were willfully lying to protect Uber. I haven't heard much about that.

But that was before I saw this thread and realized it wasn't even dark. If the car couldn't even see a human in that lighting well enough to slow down, I don't know why this shouldn't be considered murder by negligence (whatever the legal term is) at every level: Uber, the engineers, the driver. And why isn't the police chief being asked to explain his comments and his defense of Uber?


You honestly think any alert human driver would have slammed into her as fast as that car did? I saw her in the video soon enough to have reacted, and I figure my driving skills are average. And that's assuming the lighting really was as bad as it looks in the video, which is dubious. My guess is that if I'd been driving, I would have had time to slow down some, possibly saving her life.

I also find it in bad taste to blame this woman for her own death. There may have been a good reason she didn't see the car coming, but we won't ever know, will we?


>I also find it in bad taste to blame this woman for her own death.

Not only that, it also shifts focus from the actual issue.

Setting aside whether the poor woman, may she rest in peace, contributed to the accident or not, the focus should be on the actual performance of the driving system.

Where I live it is common (not every-day common, but every-couple-of-weeks or every-month common) that while I drive home at night (on a plain, normal country road, with speed limits between 60 and 90 km/h and far less public lighting than what is seen in the Uber video), any of:

1) deer/roebuck

2) boar

3) porcupine or other smaller animals such as cats, foxes, weasels

suddenly decides it wants to get to the other side of the road just as I am passing.

I have never happened to hit any of them, but from time to time I find some of the smaller animals dead on the road in the morning.

Collisions with deer and boars do happen as well, but they are maybe 1/100th or 1/1,000th or 1/10,000th of the "crossings"; in several years I remember having seen only a handful of these accidents.

For the record, the boars are usually more predictable: you can often see them at the side of the road before they cross, whilst deer and roebucks simply appear out of nowhere in front of your car.

Even if this is a single anecdote with "vague" numbers, my experience is that on average human drivers don't hit "suddenly crossing" living creatures very often, and even when that happens I have always seen evident signs of braking.

Even if we assign zero value to the lives of these wild animals, the damage to the car when one of the large animals is hit is not trivial, and then there is the shock, the risk of the car being driven off the road by the impact, and some (hopefully minor) injuries to the driver or passengers.

All in all, while it would be unrealistic to expect "perfection" from an autonomous driving system, the possibility of someone or something suddenly crossing the road in front of the vehicle should be part of the testing, with solid evidence that the software's reaction is on average no worse than that of human drivers, and ideally much better.


Is it bad taste to point out the obvious? Really? You may be Superman, but is your 70-year-old grandma also that fast at avoiding ninjas on a highway?


It is bad form to use a strawman argument (very few people criticizing Uber have argued that the victim was blameless, at least from what I've seen on HN), though perhaps some would take extra offense when the argument involves a dead person, who among other issues hasn't been able to speak in her own defense.

I can't speak for Uber and its engineering team, but I doubt they find it flattering when people defend their AV performance by comparing it to 70-year old grandmas.


You bring up a great (but separate) point. It is far too easy to renew your driving privileges even when your abilities are no longer sufficient to be a safe driver.


Arizona driver's licenses don't expire until you turn 65.


Moral or legal culpability is not a zero sum game. Recklessness by the pedestrian does not absolve Uber. Generally, if it's in your power to avoid killing someone, you have an obligation to do it. That's also why comparisons to a hypothetical human driver are beside the point. Uber should be judged according to what they could have done to minimize danger, not according to what a human driver would have done.


That seems kind of odd. Drivers are never at fault? What country is this?

In any case, it's less about the law or even who is at fault, and more about level of safety expected from self-driving vehicles and how to improve it. This is going to be investigated like an airline crash would, and the investigation should show what needs to be fixed.


It would help if you actually stated the country in question, so we can properly judge your statement's applicability.

In the USA, laws vary by state -- in California it is almost always incumbent on the driver to avoid pedestrians.

I've also traveled to countries like Jordan where the driver is always at fault, and fatally striking a pedestrian would have lifelong consequences for a driver.


The country is Romania, and the law is very precise: if the pedestrian is not on a crosswalk, any accident is the fault of the pedestrian. Funny, I lived in Jordan when my parents were expats there. I never heard of a case where a pedestrian hit by a car outside of a city had any consequences for the driver.


Funnily enough, Romania is at the top of the pedestrian fatality statistics within the EU: http://ec.europa.eu/eurostat/statistics-explained/index.php/...

Maybe you should reconsider the law.


In Arizona it is legal to cross the road outside of a crosswalk. Given that this was near an intersection, it probably wasn't legal in this case. But that isn't the issue. The issue is that a self-driving car, which is supposed to be better than a human driver, hit the person. A person died because a company was testing its product, a product that wasn't ready to be used in real-world conditions.


Because that's not how progress is made.

I want self-driving car accidents to be investigated like airline crashes, not like ordinary traffic fatalities.


Wouldn't those with greater power (the power to kill) be tasked with greater responsibility?


No. If you jump in front of a train, is it the train driver's fault? Where is the pedestrian's responsibility?


A train has no steering capability, and its braking distance isn't even close to a car's. All one can hope is that the train manages to stop within about twice the line of sight, if lucky.

A self-driving car should be able to stop a lot faster than a train, and it has instruments that are supposed to be able to see an obstacle even in bad light. A rough comparison follows.
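
The gap is easy to quantify with the standard v^2/(2a) stopping-distance estimate, using assumed decelerations of about 1 m/s^2 for a train's service braking versus about 7 m/s^2 for a hard-braking car:

  v = 60 / 3.6                         # 60 km/h in m/s, same for both
  print("train:", v ** 2 / (2 * 1.0))  # ~139 m to stop
  print("car:  ", v ** 2 / (2 * 7.0))  # ~20 m to stop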


In Arizona it is the driver's responsibility to yield to all pedestrians on the same side of the road. She somehow crossed 3 and a half lanes of traffic without the Uber SDV ever perceiving her. Their fault.


Coming back to this, I was wrong about the details of Arizona law. They require pedestrians to yield to drivers outside of crosswalks, marked or unmarked.

https://www.azleg.gov/ars/28/00793.htm

However, I still think Uber violated 28-794.1 and .3.

  Notwithstanding the provisions of this chapter every driver of a vehicle shall:
 
  1. Exercise due care to avoid colliding with any pedestrian on any roadway.
  2. Give warning by sounding the horn when necessary.
  3. Exercise proper precaution on observing a child or a confused or incapacitated person on a roadway.
https://www.azleg.gov/ars/28/00794.htm


Why should we accept your puny and limited human capability of "vision" as an excuse for a superior machine capable of much quicker reaction times and outfitted with a plethora of sensors that operate far beyond your narrow natural limitations?


Why should a pedestrian cross the street at night on a high-speed street (over a 30 mph limit) in front of a car? Did she know the car was controlled by a superior machine capable of much quicker reaction times and outfitted with a plethora of sensors? Really, guys? Blame anyone except the pedestrian walking in front of a car? What if there had been a train instead of a car, also with plenty of sensors and equipment? You've completely lost your moral compass, I think.


Even if she had crossed illegally! Drivers have to be prepared to handle situations where others violate the rules. If you presume that everybody around you follows the rules, you make very dangerous assumptions:

1. that they perceive the situation in the same way you do

2. that their interpretation and application of the rules to the current situation is the same as yours

3. that they are currently able to control their body and vehicle sufficiently to follow the rules

4. that their vehicle does not malfunction

Can we accept that people walking across streets is a common occurrence? A driver, be it software or wetware, must be able to handle this.


In no country is it ok to deliberately drive into a human.


I don't know what country you live in, but in Arizona it is different:

http://www.ncsl.org/research/transportation/pedestrian-cross...

The pedestrian had the right-of-way in this particular situation.

EDIT :: Looks like I read that first sentence wrong (my apologies)... "Vehicles must yield the right-of-way to pedestrians within a crosswalk that are in the same half of the roadway as the vehicle or when a pedestrian is approaching closely enough from the opposite side of the roadway to constitute a danger"

Growing up in AZ I was always told that pedestrians ALWAYS have the right-of-way - something I always thought was strange.

I do stand corrected, however.


My reading of it was the opposite:

> Pedestrians must yield the right-of-way to vehicles when crossing outside of a marked crosswalk or an unmarked crosswalk at an intersection.

If a pedestrian is inside a crosswalk they almost always have the right of way. Outside a crosswalk it is not the case.


That page is about crosswalks. The person who was hit was not in a crosswalk.


The link you shared disagrees with you - it says that in Arizona vehicles always have the right-of-way outside of crosswalks:

"Pedestrians must yield the right-of-way to vehicles when crossing outside of a marked crosswalk or an unmarked crosswalk at an intersection. Where traffic control devices are in operation, pedestrians may only cross between two adjacent intersections in a marked crosswalk."





