
How did LIDAR and IR not catch that? That seems like a pretty serious problem.

It's clear from the video that a human driver actually would've had more trouble since the pedestrian showed up in the field of view right before the collision, yet that's in the visible spectrum.

When I argue for automated driving (as a casual observer), I tell people about exactly this sort of stuff (a computer can look in 20 places at the same time, a human can't. a computer can see in the dark, a human can't).

Yet this crash proves that all the equipment in the world didn't catch a very obvious obstruction.




It's clear from the video that a human driver actually would've had more trouble since the pedestrian showed up in the field of view right before the collision, yet that's in the visible spectrum.

That's not at all clear to me. I don't know too much about cameras, but it looks to me like the camera is making the scene appear much darker than it actually is.

In the video, you can see many street lights projecting down onto the ground, and the person was walking in the gap between two streetlights. The gap between street lights (and hence the person) was in the field of view of the camera the entire time; they just weren't "visible" in the camera because of the low lighting. I'm confident my eyes are good enough that I would have been able to see this person at night in these lighting conditions. (Whether I could have reacted in time is another question.) It seems to me like the camera just doesn't have the dynamic range needed for driving in these low light conditions, which is a major problem.


I have to agree. Just like a normal camera has issues in low light, it is clear that this camera is diminishing exactly how well-lit the road ahead was. While I can't say confidently that I would have been able to stop to prevent hitting them, watching the video in full screen does lead me to believe that I would have seen them and been able to apply the brakes at least enough to reduce the impact. Also, watching the video of the interior it is clear the driver was looking at his phone or doing something else just prior to the impact. This alone leaves me skeptical about just how much could have been done to prevent this accident.


This is pretty much the experience I have with my dash cam, a Yi. In its recorded video, its automatic exposure control makes it look like everything outside of the headlight cone is pitch black, but it is actually not. I have seen deer and possums by the side of the road, and debris etc., that did not show up when I later checked the video for the same period. There is enough spillover light from modern headlights that a human whose eyes are dilated and adjusted to dark conditions will see a pedestrian standing on the median, stepping off it, crossing the inner lane towards the car's current lane. More than enough time to begin to brake and possibly swerve. I have dodged animals in a situation similar to this.


Yep that exposure control / sensor quality of the dash cam in the video was rubbish. My own Blackvues produce far, far better results than that. Just look at how nothing is illuminated by street lights; this clearly has the effect of making the poor rider appear "out of nowhere". Also agree it appeared the driver was on a smartphone most of the time, thus not in control of the vehicle, and thus had no business being on the road, as these are systems UNDER TEST.

If that's the best Uber can produce then they ought to hang their heads in shame. Unless it was doctored... as I find it hard to believe they'd put such rubbish-quality cameras in their trials.


Do you trust Uber to provide all the data, or would they selectively produce data favorable to them?

Do you trust Uber to provide unedited raw video, or would they process it to increase contrast, make it appear that nothing was visible in the dark areas of the frame, reduce the resolution, drop frames, etc.?


It's funny how the internal camera, which shows how distracted the driver was, has way better night vision than the external road camera...


The key here is contrast; plus, an IR light at 2 feet works great, but at 60 feet... not so much.


The internal camera (let's be honest and call it the scapegoat camera, because that's the only practical use for human "safety drivers" when they are not permanently engaged) must take almost all its light from IR, because we don't see anything of the smartphone screen glare that the eye movement so clearly hints at.


I don't think the driver is looking at her smartphone. I think she's checking the car's monitor (as in a computer screen). Although to be fair, that should be showing the car's view of its surroundings so I don't know what's going on there.

Edit: Never mind. Someone posted a picture of the car's interior below, and there's no computer screen.


Link?


Sorry - I can't find it. This thread has grown rather.


Ok so this is getting old now, but I just came across the following - which show what I'd expect the roads to look like, and geesh were Uber ever full of crap to release their video which pretty much had the effect of exonerating them.

https://arstechnica.com/cars/2018/03/police-chief-said-uber-...

Please check the videos out.


Yep, exactly..


> Yep that exposure control / sensor quality of the dash cam in the video was rubbish.

Is that the same cam used by the AI to detect obstacles?

I would expect a safe self driving car to include IR cameras that can be more cautious about moving warm blooded creatures.

Surely some more detailed telemetry data would reveal whether the main issue is with the sensors or with the algorithm.


I highly doubt that camera is part of the perception pipeline.


> I have seen deer and possums by the side of the road

Both of those have eyes that act as reflectors and you can see their eyes well before you can actually see the whole animal.

This[0] suggests that the total time required for a human to avoid an incident like this is 3.6s (at 35 mph, casual googling suggests the car was doing 40). Even if we add 1 second of extra time to deal with it I'm not sure that makes the cut.

0) http://www.visualexpert.com/Resources/pedestrian.html
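
For anyone who wants to see how those pieces combine, here's a rough back-of-the-envelope sketch in Python. The 1.5 s perception + 1.0 s reaction split and the 0.7 g braking figure are assumptions for illustration, not numbers taken from the linked page:

    # Rough full-stop sketch; every constant here is an assumption.
    MPH_TO_MPS = 0.44704
    FT_PER_M = 3.28084

    def time_and_distance_to_stop(speed_mph, perception_s=1.5, reaction_s=1.0, decel_g=0.7):
        v = speed_mph * MPH_TO_MPS          # initial speed, m/s
        a = decel_g * 9.81                  # assumed braking deceleration, m/s^2
        pre_brake_t = perception_s + reaction_s
        pre_brake_d = v * pre_brake_t       # distance covered before the brakes bite
        brake_t = v / a                     # time to bleed off all speed
        brake_d = v * v / (2 * a)           # braking distance
        return pre_brake_t + brake_t, (pre_brake_d + brake_d) * FT_PER_M

    for mph in (35, 40):
        t, d = time_and_distance_to_stop(mph)
        print(f"{mph} mph: ~{t:.1f} s and ~{d:.0f} ft to a full stop")

The output shifts quite a bit with the assumed reaction split and deceleration, which is presumably why the linked page arrives at 3.6 s rather than the ~4.8 s this particular set of assumptions gives.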


Other people in the thread have pointed out the woman stepped out in a darker area between where the street lights are placed. Reflecting eyes are not the only way to detect an object. A person watching the road would have seen her dark silhouette contrasting to the next patch of light.

Also remember she was not a stationary object. She was in the act of crossing the road. Human eyes/brains are good at detecting motion in low light even if we can't 100% make out what the object is.

I have lived in Tempe and know that part of town well. There are apartments, gas stations, hotels, strip malls, fast food restaurants and a strip club. It's not a pitch black country road.


I know what you're talking about with the eyes, I spend a lot of time driving rural WA highways at night, but no. I have seen deer that had their heads facing the other way and were standing in the shoulder/ditch area, in conditions where I can definitely make out the shape of the deer and its location but the dash cam sensor misses it entirely.

Your last paragraph is a valid calculation if this were a case of a person stepping directly off a curb into the lane of traffic. However, it appears that they were probably standing on the median looking to cross, then stepped off into the left-most lane of traffic, an empty lane, proceeded across that lane towards the lane in which the car was traveling. In this sort of situation human intuition will recognize that a person standing on the median of a high-speed highway is likely to do something unusual. Particularly when you observe the visual profile of, as media has reported, a homeless person who is using the bicycle with numerous plastic bags hanging off it to collect recycling.


The driver didn't see this person because they were occupied with a smartphone, only occasionally glancing up.

Also, has anyone here talked about the effect on the eyes of watching a (typically) bright white screen vs letting them adjust to the light of the night yet? This point deserves to be brought up.

Perhaps the video was intentionally darkened to simulate this effect. :P


>Also, has anyone here talked about the effect on the eyes of watching a (typically) bright white screen vs letting them adjust to the light of the night yet? This point deserves to be brought up.

Using bright interior lighting at night is something that we've known not to do for more than a century. If the driver couldn't be expected to see the pedestrian because the interior lighting or UX was too bright, that does not reflect favorably upon Uber.


I wonder if the driver is liable.


That's their only purpose. Nobody in their right mind could expect human observers to stay as alert as an actual driver when cruising for days with an AI that is good enough to not require interventions all the time. Passengers add nothing to safety, and an almost reliable AI will make anyone a passenger after a short while.


Completely agreed, but the law needs to take this into account. Human psychology can't just be ignored on this.


I'd like to have an interior view of what the driver was actually looking at. It couldn't have been a FLIR monitor, for sure. It seems more likely to be a phone held in the right hand? Bit hard to tell with the quality of the footage, but the driver looked rather tired to boot.

If so (a hand held phone), in Australia that driver would be going to jail for culpable driving causing loss of life.


It could have been anything readable. I got the feeling it was either a Kindle or something like that or maybe even a hardcopy of something printed or written on paper. This was just a hunch but I think it's being validated in my mind by the fact that there was no light seeming to shine on the driver's face but that's probably due to the night vision camera not picking up that type of light? I don't really know. My mind is filling in a lot of gaps here, I realize.

EDIT: Upon re-watching the video a third time and really paying attention to this, I don't think there is any real way for us to know without confirmation from the driver themselves or an official report on the incident. My mind was definitely deciding things that just aren't discoverable from the video itself.


Here we have it, I believe:

"Uber also developed an app, mounted on an iPad in the car’s middle console, for drivers to alert engineers to problems. Drivers could use the app anytime without shifting the car out of autonomous mode. Often, drivers would annotate data at a traffic light or a stop, but many did so while the car was moving"

https://mobile.nytimes.com/2018/03/23/technology/uber-self-d...

The whole project seemed designed for an outcome like this, e.g. allowing the app to be used whilst on the move, after reducing from two operators to one. Culpability ought to lie with Uber.


Here's a picture of the Uber car from inside. No FLIR, just GPS:

https://cdn.geekwire.com/wp-content/uploads/2018/02/Front-_-...

A different picture from that article shows that under the GPS is the gear stick, an emergency button, and a cellphone charger.


According to the filename, those are iPads, which implies they could have been displaying anything (not just GPS).


I think he is; at least, I've never heard of any law that removes responsibility from a driver when driving a self-driving car. I think this will also apply to empty cars: if they get into an accident, the owner is liable.


If I recall correctly the military has done a lot of studies on this.


I compare it to the backup camera in my car. While close up at night it is good, if something or someone is a short distance away I can barely make them out. However, looking in my mirrors I can see them or at least make out that someone or something is there.


A camera can have pretty good dynamic range at night, but it needs a big sensor and a huge lens to operate with a fast shutter speed. In the video, you can already see the motion blur, indicating the shutter speed is slower than what it needs to be to identify nearby objects in low light.

Autonomous cars are never going to be viable. Just looking at the cost of high-end SLR sensors and lenses that you'd need to match human eye dynamic range, and you're already looking at an expensive setup, before we even get to things like 360-degree vision and IR/LIDAR/Hyperspectral imaging. And that's in addition to all the compute problems.

Sorry Silicon Valley tech-bros, but it's a fantasy you're chasing that's never going to happen. The quicker we can end this scam industry, the better.

People really need to be told "no".

Probably better to chase after flying cars..


I think you’re comparing object detection to high quality photography though. There are plenty of options that can detect objects at night. Even cheap infrared technology, I would think, would be sufficient for picking up moving objects at night.


Wetware is astonishing stuff. All the propaganda to anthropomorphize machines is showing here... cheap IR sensors are not the issue. AI is not intelligent and inanimate objects have no self.

They should pivot to augmenting drivers, not attempting to drive for them. I would happily utilize a properly designed HUD (meaning I have source access) connected to a fast MerCad or bolometer array.


Sorry for lack of input or varied discussion, but I just had to stop and say how goddamn friggin cool it would be to have bolometers hooked up to a smart HUD that didn't interfere with your vision of the road. Something really translucent that smartly blended its color scheme so as not to interfere with the coloration of signs and details beyond your view on the road / around the road.

But you are right, though. I think augmenting drivers sounds like a great idea in the sense you talk about. The kind of augmenting drivers I don't want are those stupid headbands you'd wear that beep like crazy if your head starts tilting in a way that resembles falling asleep. If you are in danger of falling asleep at the wheel and need a device like that I think it's pretty obvious one should take a nap on the side of the road or in a free parking lot, haha. Hopefully if we do wind up headed in that direction the people inventing will have a similar way of thinking and inventing.


> Wetware is astonishing stuff.

It really is. The eye can detect a single photon. Fingertips can detect 13 nm bumps, smaller than a transistor on Coffee Lake CPUs.

We're better off acknowledging machine limits to work on other problems instead.


High-quality photography exists because the human eye is that sensitive and discerning. And there aren't plenty of options that can detect objects at night. IR isn't any cheaper, and then you have to figure out what IR bands you want to detect.


> A camera can have pretty good dynamic range at night

No. A camera's dynamic range is pretty much fixed. If you can capture low-light objects it means high-light objects are completely blown out.


I've read (see [1]) that humans have a low-light ability that approximates ISO 60,000, a pretty large value and larger than simple video cameras provide. However, very high-end pro/enthusiast SLRs go considerably higher; see this real-time astrophotography with the Sony a7S at ISO 409,600 (YouTube video [2]). The same Sony will work great in full sunlight too.

The Canon ME20F-SH is a video camera that reaches ISO 4,000,000. This camera has a dynamic range of 12 stops and is available at B&H for $20,000. [4]

Of course, raw sensitivity isn't exactly the challenge that cameras face when assessing a scene: the dynamic range has to be handled within a single scene, all at the same time. Wide dynamic range (WDR) is the term I've seen used to describe video cameras that can handle both bright and dim areas within the same scene.

[1] http://lightartacademy.com/blog/tutorials/camera-vs-the-huma...

[2] https://www.youtube.com/watch?v=ZRzXgSMbBu0

[3] https://www.cambridgeincolour.com/tutorials/dynamic-range.ht...

[4] https://www.bhphotovideo.com/c/product/1187825-REG/canon_100...


The extreme numbers are for static cameras mounted on a tripod with a slow shutter speed. They won't do a tenth of that in a moving car at 45 mph.


No, that's not how ISO works. The Canon ME20F-SH shoots high-definition video at professional video shutter speeds and has an available ISO range of 800 to 4,560,000. At $20,000 I'm not suggesting that this exact camera would be appropriate for use in autonomous vehicles, but I am pointing out that video systems can now exceed the capabilities of human eyes.

There are a number of video samples shot on the Canon ME20F-SH on YouTube. In these one can see that under low-light situations the camera is shooting at ordinary video speed (the camera supports frame rates from 24 to 60 fps). I'm not trying to push the Canon ME20F-SH; I don't have any association with Canon. The manual for this camera is available online if you'd like to read up on it: [1].

The actual exposure of a video frame or image depends upon the f-stop of the camera's lens (aperture), the shutter speed, and the ISO of the image sensor. See [2].

Basically, each doubling or halving of the shutter speed corresponds to one "full stop" in photography. Each full stop of exposure doubles or halves the amount of light reaching the sensor. Changing the aperture of the camera's lens by full stops also doubles or halves the amount of light reaching the sensor; full stops for camera lenses are designated f1, f1.4, f2, f2.8, f4, f5.6, etc.

The light sensitivity of the film or sensor is also customarily measured in full stops. Very slow, fine-grained color film is ISO 50 and is usually used in full sunlight. ISO 100 is a bit more flexible, and ISO 400 used to be considered a "fast" film for situations where more graininess would be acceptable in exchange for low-light ability. Each doubling of the ISO number corresponds to a full stop. So a photo taken at ISO 400, f2, with a 1/1000 second shutter would have the same "brightness" as a picture taken at ISO 100, f2.8, with a 1/125 second shutter (less 2 stops of ISO, less 1 stop of aperture, and plus 3 stops of shutter speed).

Naturally, other factors come into play: the behavior of film or digital sensors at extremely slow or extremely fast shutter speeds isn't linear, and there are color differences and noise issues too. See [3] if you are interested in more about how photography works.

[1] https://www.usa.canon.com/internet/portal/us/home/products/d...

[2] https://photographylife.com/what-is-iso-in-photography

[3] https://www.amazon.com/Negative-Ansel-Adams-Photography/dp/0...
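
A quick way to sanity-check the stop arithmetic above is to fold ISO, aperture, and shutter speed into a single exposure-value number; if two settings give the same number, they produce the same brightness. A small sketch using the standard EV-at-ISO-100 formula, nothing specific to any particular camera:

    import math

    def ev100(f_number, shutter_s, iso):
        # Exposure value normalized to ISO 100; equal values mean equivalent exposures.
        return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

    a = ev100(2.0, 1 / 1000, 400)    # ISO 400, f/2, 1/1000 s
    b = ev100(2.8, 1 / 125, 100)     # ISO 100, f/2.8, 1/125 s
    print(round(a, 2), round(b, 2))  # ~9.9-10.0 for both: the same exposure within rounding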


They could have two cameras fairly cheaply -- one for low light, one for bright light.


1. A good start would have been to put a good camera in the first place because that one is absolute crap.

2. I'm not sure you can fix the exposure on "fairly cheap" dashcams.

3. "high-light" in night settings are likely much lower than standard daylight, let alone bright daylight.

Though I guess you could have an auto-exposure dashcam standard and add a low-light one which is only active in low light conditions.


Which is the same as with our eyes. There are ways around that limitation.


IR can help with that. Headlights are limited by the disincentive of blinding oncoming drivers...

Now that I think about it, self driving cars may be paralyzed by other self driving cars running IR boosted headlights.



The footage from a normal camera should not matter; a self-driving car is equipped with stuff that works regardless of light conditions, like LIDAR or IR cameras. This looks to me like a software failure.


The footage from the normal camera does matter in that it's the main way that we (humans) can process the scene. The parent comments are just pointing out that the camera footage is likely darker than the actual scene in person.


The video footage being presented is beyond useless because it is misleading. The important data to determine whether the system misbehaved would look like this: https://www.theguardian.com/technology/video/2017/mar/16/goo...

Waymo cars are capable of sensing vehicles and pedestrians at least half a block away in every direction. I was reserving any judgement on whether this collision could have been prevented, but seeing the video tells me that 1) a human driver might have hit the victim regardless, and 2) I'm very surprised that the LIDAR sensor didn't cause the car to come to a halt much, much earlier. This is exactly the kind of situation in which I would expect self-driving cars to be better than human drivers.


I agree that dashcam/external cam footage is going to be limited and possibly misleading, and I would think/hope such footage isn't the primary factor in evaluating accident cases. But I do think there's value to it. I shouldn't have said that it is the "main" way for us to process a scene, but the most accessible/relatable way.

What you posted looks pretty cool, I don't know enough about it to understand what I should be prioritizing focus on, but we can chalk that up to ignorance. The benefit that driver-view footage has is that it is a viewpoint all of us are familiar with. If you ask me to watch dashcam footage to assess some kind of traffic thing, there's a general expectation of where I keep my eyes and what I notice.

This normal-human-view mode is probably going to be necessary in AV cases in which we determine whether the car's AI did the right thing. Presumably, as AV becomes mainstream and extremely safe, these accidents will involve edge cases and outliers which are poorly interpreted by sensors/non-human-vision. Seeing the scene as a human driver does might be a necessary starting place?

But the Uber case in AZ, IMO, proves your point. The Tempe police quickly made a judgement call based on what seems to be inadequate video. Everyone who can now view the video will also be inclined to think how impossible it would be to avoid hitting the victim, even if the actual scene in-person has much more light. And of course, we don't want to judge AV solely on whether it performs as well as normal humans.


> we don't want to judge AV solely on whether it performs as well as normal humans

You don't think performing as well as normal humans should be sufficient to allow them on the road?

Or are you saying they should be allowed even if their performance is worse than human? (...as long as some other criterion is met?)


Jaywalking, at night, no reflectors, in the dark.

Even if the camera was brighter, Uber isn't at fault anyway...


Uber may not be at fault, legally speaking. That's up to the legal authorities to decide.

However, as a society and civilization, and even more so, as engineers and scientists, we are going to expect that the autonomous car matches or exceeds human-level performance in critical situations like this.

Therefore the time spent investigating, understanding, and discussing the root causes of the accident is time well spent. Accidents like these generally do not happen due to a single factor. It is necessary to understand all the contributing factors if we want to make autonomous driving systems more reliable.

At the very least, we need to understand whether the pedestrian appeared in the other sensors' data such that a human could have identified her by looking at it; if yes, whether the autonomous system matched or exceeded human-level performance by detecting the pedestrian; and if the pedestrian was indeed detected, why the autonomous driving system failed to respond to the situation.


In North America, isn’t the vehicle owner usually liable, regardless of who is driving?


Surely not? Cars are routinely driven by people who are not owners, and liability for traffic offences (including that the vehicle must be insured) is with the driver.


In my experience, typically only minor infractions like parking violations are assigned to the registered owner of the vehicle, but in other cases – accidents, running red lights, etc. – the driver is liable regardless of who owns the car.


Parking violations are assigned to the registered owner because the driver is not present at the moment they are imposed.


I mean in terms of who gets sued for personal injuries. Or to repair damaged vehicles.


No. The general rule is that negligence is required to be held responsible. If I let my next door neighbor borrow my car to go to the grocery store, and he hits someone, I'm not responsible. Unless, the person can prove "negligent entrustment", i.e. it was irresponsible just to let this person borrow my car, e.g. they're a habitual drunk, or blind, or 11.

However, most auto liability insurance covers whoever you permit to drive the vehicle, so the owners policy does typically cover the fender bender on the way to the grocery store.


Correct, the owner's insurance policy is the primary coverage when the owner lends their car to a 3rd party. Obviously in the case of a moving violation the driver is at fault and receives the penalty, but damage is still covered by the owner's policy. In the case where the other driver is at fault, that car's owner's insurance is liable.


This car failed the moose test. Legal details aren't relevant, it's plain rubbish. This is test track pre-alpha stuff, for crying out loud.


The bike probably had a reflector on its pedals.

I would be very interested to learn whether or not the car's autonomous system identified a bicycle at any point prior to the collision.


The car likely didn't identify an obstacle at all, let alone a bicycle, as it didn't apply the brakes.


Exactly this. What's the response time of software? It ought to be close to zero and significantly faster than a human's. Let's say it's a generous 0.5s - no brakes were applied at all, and even with the crappy darkened video we got (the place isn't that dark: https://www.youtube.com/watch?v=1XOVxSCG8u0 ) the pedestrian was in view for 2 to 3 seconds.

The car didn't see it at all, even in those last moments.


Which is weird, because regardless of reflectors, it should show up on IR and lidar imaging.


Usually other reflectors are required when riding at night.

But this was a pedestrian, not a cyclist.

Personally, while riding at night, I look like a Christmas tree. $10 on eBay goes far these days in the reflective tape and bike light department.


Well it was a pedestrian but they were walking their bike across the road. It's not like the software should make a distinction between a cyclist in the way and a bicycle with no rider in the way.


In some places (UK for example) you need to have lights on your bike as well--not just reflectors.


If everyone followed the law, this wouldn't have happened for a multitude of reasons. Alas, here we are.


Indeed, it's hard to find pedals without them. Even ones that cost $10 a pair have reflectors. Unfortunately, pedal reflectors are ineffective when the bicycle's path of travel is perpendicular to the light source. The video doesn't reveal evidence of other reflectors, such as the common spoke-mounted ones whose purpose it is to highlight a bicycle traveling crosswise. For a moment, the bicycle is clearly illuminated by the headlights; I don't see any spots of light on the wheels or elsewhere.


When travelling perpendicular to the car, bike pedal reflectors are not visible.

What is surprising is that the bike didn't seem to have tire reflectors like these:

https://www.wired.com/2011/11/fiks-reflective-rim-strips-for...

They are mandatory in lots of countries, to the point that it's impossible to buy tires without them. All brands come with them.


I have literally never seen that. Wow, that's a good idea!


For a side view, the reflectors on the tires (visible at the end of the video) are way better indicators of “watch out! Bicycle” than those reflectors.


It was a side impact, so pedal reflectors aren’t going to be visible.


Wheel spoke reflectors ought to have been - from what I've seen in the video, there were none (they're surprisingly bright at night).


See this video for a comparison of visibility (not in English, but that's immaterial - set speed to 2x ;)): starting with a "bike ninja" and going all the way to "Christmas tree" https://youtu.be/oAFQ2pAnMFA?t=1m0s


This video looks too dark, as if the camera didn't have enough sensitivity.


It's from 2011, there's been a lot of improvement in consumer-grade cameras since. Even so, it fits my perception IRL: even a small reflector is orders of magnitude better than no reflector, and adding multiple (esp. covering 360 viewing angles) makes you stand out at night; same goes for pedestrians.


The human driver didn't have their eyes on the road the majority of the time. And yes, the video footage is _nearly_ useless.

Maybe the outcome will be that thermal infrared will be mandated on all sensor packs?


This 100%. When I drive, I watch the road. I don't watch my mobile phone, I don't watch the kids behind, I don't watch my wife. I don't watch the sky. I don't watch the GPS.

I just watch the road in front of me.

My idea is that the car had been behaving well for a long time and consequently the driver lowered their vigilance. Big mistake.


> I just watch the road in front of me.

Unlike many others, sadly - even when they don't have any self-driving tech at all


Any chance the backup driver was looking at a driving-related computer screen? (Speedometer/Video of road/LIDAR/IR camera)


That screen is mounted high in Uber's cars. The driver was looking low.


A fully attentive human driver might have hit this person regardless. Would they have hit them while taking no evasive action whatsoever? No swerving, no brakes?


I don't think so: the dash cam video is misleading. I had multiple ninjas jump at me before, and although I did notice and avoid them, they were not visible on the dashcam until the very last moment. Surely Uber would not release data to intentionally mislead the public?


Well, right, I think even if the camera isn't at all misleading you could have hit the brakes and hit the person at a lower speed.


Even so, I count a full second from when a human paying attention would have seen something just using this video as eyes, until impact. The stopping distance at 35mph is 136ft, which is 2.65 seconds at 35mph, so the accident would still happen but the impact speed could be lower.
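
For a feel of how much partial braking helps, here's an illustrative sketch. It assumes roughly 2 s of visibility at 40 mph (per other comments in this thread) and 0.7 g of braking; all of the figures are assumptions for illustration, not measurements from the crash:

    import math

    MPH_TO_MPS = 0.44704

    def impact_speed_mph(speed_mph, visible_s, reaction_s, decel_g=0.7):
        v = speed_mph * MPH_TO_MPS
        a = decel_g * 9.81
        d_total = v * visible_s              # distance to pedestrian when first visible
        d_brake = d_total - v * reaction_s   # distance left once the brakes are applied
        if d_brake <= 0:
            return speed_mph                 # never got on the brakes at all
        return math.sqrt(max(0.0, v * v - 2 * a * d_brake)) / MPH_TO_MPS

    for reaction in (0.5, 1.0, 1.5):
        # ~0 mph means the car stops just short under these assumptions
        print(f"react in {reaction} s -> impact at ~{impact_speed_mph(40, 2.0, reaction):.0f} mph")

Under those assumptions, even a late, hard brake roughly halves the impact speed.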


Yeah, but at that speed, it's more than possible to swerve around an obstacle rather than screeching to a halt before touching it. Even turning slightly to the left/right would have made a dramatic difference in the outcome to this person's life. Not to mention the person in the car that might have also been severely injured if this was a heavier obstacle.

This was purely bad software, with no failure scenario programmed in. I really don't think it's that difficult to program split-second reaction to obstacles that appear in the driving path. We need to get to a point where these vehicles can do stuff like this, even in a 2-dimensional way:

https://youtu.be/uLasBsoZBi0?t=1m40s


Well, there's also a question of whether the hardware is up to the task.


That’s a pretty significant difference — check out how quickly the fatality rates increase over 30mph or so:

https://nacto.org/docs/usdg/relationship_between_speed_risk_...

Getting hit at 10mph still is going to suck but it’s a lot more likely to be broken bones and road-rash.


Even just managing to slow from the 38 mph that it was clocked at down to 30 mph would lower the probability of death from about 45% to 10%.


https://en.wikipedia.org/wiki/Stopping_sight_distance

They seem to use 2.5 seconds as the standard for drivers to perceive and react to an obstacle, which based upon studies covers 90% of all drivers. 1.5 seconds to perceive, 1 second to react. Then you have maneuver time on top of that 2.5 seconds.

Given this, 1 second seems very low. A large percentage of drivers would probably plow into them at full speed.


Your link says that 2.5 seconds is to allow for worst-case situations and below-average drivers.

If all but the slowest 10% can react in 2.5 seconds, then I would think many would do a fair bit better.

Edit: Apparently the average person is closer to 1.1 seconds. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.372...


Why haven't they released the infrared footage? That would be less deceiving.


This dashcam footage was released by the local police. It's likely they don't have the ability to access the autonomous car's working telemetry. Given Uber's legal history I doubt they'll release anything until they're compelled to by law. Personally I find it borderline irresponsible of Tempe PD to release this video and statements based on this video so early in the investigation.


That's probably the exact reason it wasn't released.


> it is clear that this camera is diminishing exactly how well-lit the road ahead was

Someone else thought the same thing and went to get their own footage of the road.

https://www.youtube.com/watch?v=1XOVxSCG8u0


This shows how worrying autonomous cars are in a low-light environment. I suppose LIDAR should be able to pick that up, but sadly it failed miserably.


Even if there were no street lights, the car's headlights should give enough light to see the pedestrian on an empty road.


Low beams at high speed do not give enough advance warning to reliably prevent a collision; as your lights are turned downward, you see a pedestrian only when they're quite close.

In general, traffic safety requires that road planners ensure that one of three conditions always applies:

a) the roads are lighted from above; b) cars are able to use high beams; c) there are no pedestrians crossing the highway.

This can be done in general, mostly through investments in infrastructure to ensure lighting, or isolated highways, wherever the density doesn't allow driving with high beams.


> clear the driver was looking at his phone or doing something ...

Seriously, what else can you expect. These companies who do put these things on the road with the justification that "There is a human behind the wheel" should be taken out back and shot in the head...Just pull the plug. No more self driving cars for them. Those are just the kind of tech companies we don't want around...

See, it is not a mistake that they are making. They know well enough that the human behind the wheel is as useless as a dummy. But they do it anyway. What does it say about them?


There are other cases where having a backup-driver might help: mechanical malfunction, sabotage, or a more obvious un-sensed danger.


I feel sorry for the 'safety driver' here as it seems likely much of the liability will fall on her. As a transgender ex-felon she can't have had a lot of fantastic job opportunities. I wonder how much Uber was paying her to sit in the hot seat.


Ok, she didn't have job options. That doesn't excuse her not doing her job and getting someone killed.


I didn't say I excused her. I said I felt sorry for her.


> Seriously, what else can you expect.

I see waymo drivers all the time actively paying attention to the road.


The difference between Waymo and Uber here should be the difference between being allowed to continue, or getting barred from further self-driving research.


So you think having a driver behind the wheel who could potentially intervene is as bad as having absolutely no humans in the car at all?


1. A driver who is not looking at the road cannot "potentially intervene", and is as good as no driver at all.

2. These companies seem to be doing nothing to make sure that the drivers always pay attention and are always in a position to intervene. They even seemed to allow smartphone usage while they are in the car.

So, according to them, the human behind the wheel is just a decoy to prevent backlash from officials and the public, so that they can always say, "look, there is a human behind the wheel if something goes wrong"...

Also, even if they implement some measures, they can only make sure that the driver has eyes on the road. Not that they are actually paying attention. A driver who is actively driving the car will notice a lot more stuff than a passenger who is just looking at the road. There is no way to make a human pay that kind of attention without actually driving the car. So at best, your "driver behind the wheel" is as good as a passive passenger.

And as noted before, the companies are not even trying to make sure of that.


I could be wrong, but I believe part of the reason for having a human behind the wheel is that it allows the testing to take place under existing driving laws. At some point prior to an unmanned vehicle being allowed on the road, lawmakers need to have some kind of framework in place to deal with any incidents that arise. With a human behind the wheel, a fully autonomous car is legally no different to cruise control - it's just a driver assist, and the human behind the wheel is still ultimately responsible for whatever the vehicle does.

In that context, the landscape changes significantly - instead of a self driving car that mowed down a pedestrian, we have a driver who was too busy looking at her phone to pay attention to what her vehicle was doing. From the various articles, it seems that she's not an engineer, and is there in effectively the same capacity as any other Uber driver. If that's the case, she's putting far too much trust into an experimental system. I agree that Uber could do more in the way of technological means to ensure the driver is paying attention, but at some point, an adult with a job needs to be responsible for doing that job.


>lawmakers need to have some kind of framework in place to deal with any incidents that arise. With a human behind the wheel..

The framework should have been in place before these vehicles were ever put on the roads. For example, there should have been some formally specified tests for a self-driving vehicle before it could be put on the road, even with a backup driver.

> a fully autonomous car is legally no different to cruise control - it's just a driver assist, and the human behind the wheel is still ultimately responsible for whatever the vehicle does.

Anything that does not require drivers to keep their hands on the wheel is not a driver assist. It IS the driver. So there should be tests that make sure of the competence of the tech that is in the driver's seat.

I don't know how people let this happen!


>they can only make sure that the driver has eyes on the road. Not that they are actually paying attention. //

I'm certain that if you can design and build a self-driving car, then you can design a simple human attention monitoring system that will cause the car to pull over if the attention level is too low.

Gaze monitoring that checks for looking downwards or away from the carriageway for extended or too-often-repeated periods would probably be enough.

I imagine the attention of the "vehicle operator" is vital to the proper training of the vehicles -- if they don't see near misses, or failures to slow for potential hazards, or failures to react to other road users, then how can the software's faults be corrected? Do they get a human to review all footage after the drive?


I agree completely. As far as I can tell, the driver did not even have hands on the steering wheel. How hard would it have been to put sensors on the steering wheel to require both hands? They didn't even do that. Although even if they did, I agree with your statement that "[t]here is no way to make a human pay that kind of attention with out actually driving the car."


Not difficult at all, and you can make them keep reasonable attention. Look at the new Cadillac driver assist: sensors in the wheel for hand placement -and- eye tracking. If the driver isn’t watching the road/holding the wheel, they get escalating alarms until the autopilot disengages.

And that’s consumer drive assist tech, not “we are experimenting with full autopilot” tech, where I’d think such safety measures would be even more appropriate.

This is a solvable and solved technical challenge. Uber just didn’t devote any resources to it because they don’t appear to give a shit beyond acquiring a legal fig leaf to shift liability from themselves to an individual.


Frequent, randomly scheduled disengagements should keep the driver quite on edge, preventing them from becoming a passenger. But each and every one of them would create additional risk, so the net improvement might be negative. There is just no way to get this right, except for being reluctant of pushing to scale. With all the hype, wishful thinking and investor pressure, this clearly isn't happening.


I've been thinking about this for the last couple of days, and it's definitely a hard problem -- even with steering wheel sensors and eye tracking, it doesn't stop people zoning out and not being ready to react.

I did wonder if you could require the driver to make control inputs that aren't actually used to control the car but are monitored for being reasonably close to how the computer is controlling the car, and then the automation disengages (with a warning) if the driver is not paying sufficient attention. I then realised that may be _worse_ - in the event of a problem, the driver would have to switch to real inputs that override, which may delay action and not be something they do automatically. It would mean they are paying attention more to see if the automation is making errors where they have more time to react though (e.g. sensor failure that is causing erratic behaviour but not led to an emergency situation).

I wonder if a hybrid approach might be viable -- fake steering is used to ensure that the driver is alert and an active participant, but the driver hitting the brakes immediately takes effect and disengages the automation.
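
As a very rough sketch, the hybrid idea could look like the following loop: the driver's "shadow" steering only has to track what the automation is doing, but a real brake press always wins immediately. The helper callables (read_driver_inputs, read_planner_commands, warn, disengage, apply_brakes) and the thresholds are hypothetical placeholders, not anyone's real API:

    # Hypothetical sketch only; thresholds and helpers are made up for illustration.
    STEER_TOLERANCE = 0.15   # allowed gap between driver's shadow steering and the planner
    MAX_STRIKES = 3          # consecutive misses before warning and disengaging

    def supervise(read_driver_inputs, read_planner_commands, warn, disengage, apply_brakes):
        strikes = 0
        while True:
            driver = read_driver_inputs()      # what the driver is "performing" on fake controls
            planner = read_planner_commands()  # what the automation is actually commanding

            if driver.brake > 0:               # real brake input takes effect immediately
                apply_brakes(driver.brake)
                disengage("driver braked")
                return

            if abs(driver.steering - planner.steering) > STEER_TOLERANCE:
                strikes += 1
                if strikes >= MAX_STRIKES:
                    warn("attention check failed")
                    disengage("driver not tracking the road")
                    return
            else:
                strikes = 0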


He was clearly not paying attention. So why not cut the crap and say it like it is: the car is driving itself with no supervision.


You are so right. To me this clearly looks like they are reading something below the dash. A book or phone perhaps.

Looking forward to seeing this play out in court.


BTW isn't it a she?


> Also, watching the video of the interior it is clear the driver was looking at his phone or doing something else just prior to the impact. This alone leaves me skeptical about just how much could have been done to prevent this accident.

Wait, aren't you meant to have your hands on the wheel at all times? I don't see what there is to be skeptical about when, if he had just followed the law, this could have been avoided.

It seems to me the driver might be in for some legal trouble.


But this has got to be just the black-box camera, right? Surely the actual camera they use as a driving sensor is much better than this? Not to mention the LIDAR and all the other sensors that should have caught this.


> Also, watching the video of the interior it is clear the driver was looking at his phone or doing something else.

Probably checking the computer installed for diagnostics of the autopilot system. If it's in self driving mode and you are the engineer in charge, you'd want to constantly check what the system is seeing vs the actual conditions on the road.


If you're the driver of a car you're supposed to ensure safety by looking out, not verifying sensory information. If Uber designed their cars to show a rendering of the computer's perception to the driver, or other sensory output, they would violate that principle.

To me it looks like the guy is just falling asleep at a boring job. In all likelihood that was not an engineer, any more than any other taxi driver is an engineer.


If you're the driver of a car...

The software is the "driver" of this car, not the human behind the wheel. Take a look at job descriptions [0] for this. They always include a bit about "operating in-vehicle computers". The fact is, we don't know what the person was doing.

0 - https://www.indeed.com/viewjob?jk=597616bf7d02d899&tk=1c96sl...


From that description: "Ensure the safe operation of our test vehicles on public roads."

I don't know about current regulations. Are companies now allowed to operate autonomous cars without a driver that pays attention?


The driver, Rafaela Vasquez, is a "vehicle operator," not an engineer. https://heavy.com/news/2018/03/rafaela-vasquez-uber-driver-s...


I am pretty sure Uber uses an iPad app for its autonomous vehicles. The driver is looking at that iPad application periodically along with the physical windshield view.


What kind of things does the iPad display? Should the safety driver even be tasked with looking at it while in 'control' of the vehicle?


If you search "Uber autonomous vehicle" you can see some videos of the display. From what I gather, it basically gathers the signals into a human-readable model. In general I wouldn't have recommended this driving style, but it might have been too dark to see much anyway.


I don’t understand this, I’ve seen a few people comment in the same vein.

People can safely drive in total darkness with the aid of their amazing human eyes and high-beams.

If for some other reason visibility is low you slow down - not rely on glancing at a backlit display ruining your own night vision and taking your eyes off the road for seconds at a time.


> apply the brakes

Or swerve out of the way.


Or flick your high beams, quick beeps, adjust speed... I do all these things if I see anything on a collision course with my vehicle.

It is surprising to learn that these vehicles are operating at night. Since nighttime driving is inevitable for collecting training data, perhaps there are ways to simulate night for the computer vision systems during the daytime, so the human supervisor can still see clearly.


Lol, the "human supervisor", looking at his knees, probably on reddit or tweeting.

Would you trust this system that didn't even manage to slow down at all with a pedestrian slowly pushing a bike directly in front of it, artificially adjusted to be even worse, driving during the day??

I wouldn't.


Human eyes have the same issue: if you are next to a bright light source, the areas with little or no light will look much darker. I assume cameras work the same way?


Cameras work the same way, but much, much more poorly. A human eye can see a range from light to dark areas multiple orders of magnitude higher at the same time. The accepted estimate is that the human eye can detect a 1,000,000:1 range from light to dark in terms of photon intensity.
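
For scale, converting that contrast ratio into photographic stops (each stop is a doubling of light) puts the eye at roughly 20 stops, versus the 12 stops quoted upthread for a high-end video camera; a quick conversion, not a claim about any particular sensor:

    import math

    def ratio_to_stops(contrast_ratio):
        # Each stop doubles the light, so stops = log2(ratio).
        return math.log2(contrast_ratio)

    print(round(ratio_to_stops(1_000_000), 1))  # ~19.9 stops for the rough human-eye estimate
    print(ratio_to_stops(4096))                 # 12.0 stops is a 4096:1 ratio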


Uh I think it’s a girl


The driver has been described as male in news reports:

> "The driver said it was like a flash, the person walked out in front of them," Moir said, referring to the backup driver who was behind the wheel but not operating the vehicle. "His first alert to the collision was the sound of the collision."

> "The driver, Rafael Vasquez, 44, …"

https://www.bloomberg.com/news/articles/2018-03-20/video-sho...



What do doctors identify him or her as?


Cheers!


Also note that pushing the brakes was not the only option: steering to avoid the collision was another, maybe more effective. Still, I feel the same as you do: I cannot guarantee I would have avoided this.


I can confidently assert that Asian, or at least Indian, drivers would almost assuredly not hit the pedestrian in this scenario; we have trained our eyes and senses to watch out for this, as it happens all the time.

EDIT: What I meant, in light of the downvotes, is that humans can train themselves to see, and that folks driving in Asia have a heightened sense of alertness due to their environment. Hope it came out alright.


The whole reason many people on here have been advocating for self driving cars is that they can see obstructions more or less perfectly in the dark with LIDAR. I am much more interested in what that sensor said.

I'm reluctant to infer exactly what a human eye would have seen in that situation. I have absolutely driven down streets in suburbia where the gap between street lights was large enough to make them quite dark, and that video was an example of exactly what I was afraid of happening whenever I drove down those streets (though admittedly my fear was hitting a white tailed deer).

I think it might also be fair to argue that the car's high beams were not on (but again, that shouldn't matter because of LIDAR, right?).

I'm not confident even an above average human driver would be able to avoid that accident, even if good eyesight gave you an extra half second to respond. Dark clothing and no reflectors means that person was definitely invisible to both the camera and the driver for some time after they would have been visible in daylight.

I've had a couple of situations where someone appeared close to my line of travel with low visibility clothing (at night) that scared the living shit out of me, and they weren't trying to cross the street.

To be clear, I am not blaming the victim here, but do wear high visibility clothing when you're a pedestrian near high speed roads at night.


A person with common sense and a developed understanding of the situation would drive more slowly in situations like this. The law says that you don't drive faster than you can see.

A similar thing (no fatalities, just a shopping cart pushed by homeless people) happened to me. Ever since then, I have learned to be much more aware of situations like this (tunnel of light surrounded by darkness).

This just shows that Uber's tech is bad, and the fact that they let it on the road shows that their culture is still at least partly rotten.

I don't think Dara is the do gooder that some people are making him out to be. His primary motivation seems to be to usher Uber to an IPO. IMHO, if he actually had ethics, he would be front and center on this. Your company just killed someone. Where are you?


> The law says that you don't drive faster than you can see.

Amusingly, the law also says that manufacturers have to produce headlights that cast light out far enough to leave you adequate stopping distance at 60mph. Almost no headlights on the market currently do that.

Not a counterpoint, just a tangent that I find sadly amusing.


Maybe for brights, though they can be blinding to oncoming traffic. Even the LED ones appear too bright to me.


The ninja is the reference standard for real-world pedestrians. It's up there with the surprise moose. Systems that can only detect bright peds are going to be horrific meat grinders and lead to autocar hell instead of autocar heaven.


Sad side note: most people appear unaware of the benefits even the simplest and cheapest of reflectors provide.

The seemingly random design decision of many running-shoe manufacturers to embed tiny reflector strips in their shoes has no doubt saved countless lives. And their owners would probably be none the wiser.


And those who are aware often lack the understanding that with glare-minimizing headlights, reflective surfaces at or below knee-level are many times more useful than anything higher. A reflective hat would be pointless.


Yup. There's a place for education here. I know I wasn't really aware of the benefits until adult age, when I started to find myself more often in a car, at night, in rural areas. I still remember the first experience, in which I've noticed a cyclist on another lane ~0.5 seconds before we passed him. Dark clothes, dark bike, zero reflective elements.


In Norway, when growing up, I was frequently exposed to campaigns saying "Bruk refleks!" (Use reflector(s)!), and given free ones at every opportunity.

Of course it makes sense there where daylight may be hard to find half the year, however even in Australia, once it is dark the darkness is the same.

And I haven't seen a single government initiative to increase visibility awareness - most people are completely in the dark. (Sorry)

Riding shared bike trails in Melbourne at night on the commute home, this is something I think about often in the "winter" months. Peds may hate the strong glare from my LEDs, but it is the only thing that has half a chance of making out ninjas against the frequent sports-ground stadium floodlights the path goes by.


> The whole reason many people on here have been advocating for self driving cars is that they can see obstructions more or less perfectly in the dark with LIDAR. I am much more interested in what that sensor said.

That is not the whole reason, it is one of many reasons.

> To be clear, I am not blaming the victim here, but do wear high visibility clothing when you're a pedestrian near high speed roads at night.

Yes, stupid homeless person.


Preventing the accident might not have been possible, but even being able to decrease speed by a tiny amount would have greatly improved the pedestrian's chance of survival. Slowing from the 38mph that the car was traveling down to 30mph would decrease the chance of fatality from about 45% to below 10%.


Yep. I live half a mile away and just drove the same path tonight around 10pm, it's nothing like the video. There are spots that are darker than others, but they don't look nearly as dark. Nowhere on the street looks pitch black, there's ambient light everywhere.

For anyone who's interested, try taking your phone with the camera app open into a dark room and comparing what you see to what's on the screen. Which shows more detail?


Somebody recorded this stretch and posted it [1]. The police-released video looks much darker in comparison:

[1] https://www.youtube.com/watch?v=1XOVxSCG8u0


Out of interest-- can you take a pic while the lighting outside is similar (assuming weather hasn't changed dramatically?) and maybe adjust exposure to what your eyes see? Or take a camera phone pic for comparison?


pic at 9pm of the spot taken out of a video circulating here earlier, sorry for the meme format

https://i.imgur.com/eSre3hL.png


> The gap between street lights (and hence the person) was in the field of view of the camera the entire time

But the gap between street lights is going to be very hard to see into.

> I'm confident my eyes are good enough that I would have been able to see this person at night in these lighting conditions.

I think you're overconfident. Human low light vision is very good if there is low light everywhere. But it is not good at seeing into low light regions when brightly lit regions are nearby.

That said, I agree that a visible-light video camera is likely to be even worse than human vision under the given circumstances. But as others have commented elsewhere in this thread, the car is not just supposed to be using a visible-light video camera. It has LIDAR and IR sensors, which should have clearly shown the pedestrian well before visible light did.


FWIW the spot where the crash happened is in fact badly lit. I know this anecdotally from having been at the location for events -- it's right next to a concert venue -- but it can also be seen on other dashcam videos.

In this video [1] driving northbound, same as the vehicle in the crash, the car first goes under AZ-202, emerges under a streetlight, goes through a darker spot, then another streetlight (as you see the rocky outcrop), and then a very dark spot: and suddenly, you see a right-turn lane that wasn't there before. The latter dark spot is where the crash happened.

Another video by the same author, driving southbound [2], provides another useful reference. And these videos are three years old, yet the illumination of the roadway has not improved. Cameras exaggerate the contrast a bit, but not unreasonably so. The streetlights in question essentially aim directly downwards, illuminating the roadway immediately underneath but much less of the surrounding air than other designs. This is responsible for the dark gaps, although it does significantly reduce light pollution.

[1] https://youtu.be/zEaTdYJExq8?t=8m50s [2] https://youtu.be/yfR7krN7z00?t=23m26s

EDIT:

Found more. The car in this video is going southbound, camera facing backwards [3]. This view faces the same way as the Uber did, but of course this video is moving away from the scene, and offset by a few dozen meters to the west. The drastic change in roadway illumination can still be seen.

In a fourth video [4], the car is going northbound, like the Uber, in the proper lanes, but the camera is pointing obliquely front-right. The illumination seems better, but you can still see the intensity of the shadows, including environmental shadows and the car's own shadow, as it moves between the lights.

[3] https://youtu.be/0Dum8Fj71JU?t=13s [4] https://youtu.be/6qHcuW_LCIU?t=16m45s


Thanks for the links.

Everyone is moaning and slicing and dicing what the self-driving vehicle did wrong but, since you're familiar with the area: are pedestrians typically expected to be crossing this road?

Seems like the accident has a lot of factors that might not only be the self-driving car's fault, nor even a human driver that was fully in control. Regardless of how well people may want self-driving cars to do, one thing that can actually exist in the present is to make sure that we are creating safe ways for pedestrians to cross a road.


I've also driven around here a lot. No, pedestrians are not common. Maybe once a week in my experience? They do love to cross outside of crosswalks at night, though, and I've found that I have to adjust my own eyes' object recognition to look for moving shadows and not just moving lights, because they're very hard to see even in well-lit areas.

I've driven many thousands of hours at night and have dealt with a fair number of crazy pedestrians including a rather ... uncoordinated ... guy in Casa Grande who decided to go in circles on his bike in the middle of the road at around 3 AM for no discernible reason. Fortunately that place was much better lit and I was able to see him and stop until he got out of my side of the road.

So it's not that common, but yes, every so often you will see some person in black jaywalking across a wide road at night and they're quite hard to see. I don't think a lot of people appreciate that the streets here are wide & fast and that there just isn't that much pedestrian traffic even in daytime.


That was my suspicion. I've lived in very suburban areas before as well as rural ones where you might even be going 55 on a two-lane road with no street lighting whatsoever.

Here in LA, it's dense and traffic can't get up to very high speeds and we have relatively frequent places to cross safely if people choose to do so. I've definitely seen those who choose not to walk an extra 100 feet to wait at a crosswalk nearly hit in dusk or night traffic.

No amount of automation is going to bring the accident rate down to 0 so through a combination of factors, such as traffic and community design, we can work in tandem with automated driving to get closer. There's still the X factor of our human ability to do really dumb stuff.


The average speed limit here is 40, or 35 in the slow places and most roads are 4 lanes + a center median or turn lane, so yeah.

I've crossed these myself, but I always look both ways.


Wait, you don't cross lanes of traffic staring at your phone? That seems to be the new way people want to commit unintentional suicide.


Is Tempe like Tucson in explicitly aiming for low light pollution? It does make for some nice star gazing.


Sadly, no.

Tucson does it due to the nearby observatory. The greater Phoenix area has a huge glow that washes out all the stars. You can see the glow as far away as Casa Grande when you come out of the little rocky pass on I-10 north of there.

Edit: removed a duplicated word.


> The greater Phoenix area has a huge glow that washes out all the stars.

I live in the Phoenix area, on the west side closer to Glendale (specifically, the border between Phoenix and Glendale is literally in my back yard).

There are times in the summer when the glow from the city is so bright that, rather than a dark sky (never black), you have a grey, dimly lit sky instead.

Literally, "the sky was the color of television tuned to a dead channel" - maybe not as bright as the static Gibson was referring to, but still bright enough to see by - even without a full moon.


This area specifically is bad due to its proximity to Sky Harbor airport.


This site lies on the approach route for Sky Harbor airport. I'd imagine the street lights are intentionally designed to reduce light pollution at the expense of "on the ground" effects.


> But the gap between street lights is going to be very hard to see into.

This wholly contradicts my experience driving at night on a street with street lights. I can't recall a time in my entire life I have had significant difficulty seeing into the gap between street lights. Keep in mind that the gap is not arbitrarily chosen.

Edited to note that I have experienced difficulties in low-vis conditions such as snow storms, sand storms, VERY strong rain storms, etc. None of which apply to this situation.


Keep in mind a lot of folks in this thread might be suddenly realizing they have reduced ocular ability at night, a likely common condition that pretty much nobody is aware of when it’s minor (because it’s not obvious something is amiss; maybe it’s just that dark). I agree with you that streetlights and headlights are almost universally sufficient in my experience. If they’re not, it’s worth getting your eyes checked out for light sensitivity at night. You never know.

I’m not sure a typical eye exam checks for it, either, because none of the tests I can think of seem like they’d be useful.

(As usual, an even keeled comment based on family experience is -2 and rapidly being silenced with zero feedback inside 5 minutes, which makes me wonder why I contribute to this community at all, probably time to stop)


Wouldn't a visual field test show this?

I did one at my last eye exam and it was pressing a button when you see dim flashes in all different locations. If you had low sensitivity, you wouldn't see those flashes and presumably you'd get a low score.


That test mostly isn't testing sensitivity, it's testing field of view which is an indicator of some potential eye health issues. It might end up testing sensitivity incidentally but that's not the purpose.


I suspect it's too late to change now, but having a "throwaway" account is an indicator you might not be committed to the community. One has to dig a little deeper to find the multi-year history with 4000 karma, so first impressions of your comments might be getting biased. (It might just be the "red car effect", but I am seeing a lot more throwaway accounts these days.)

I would also not judge the community based on reactions to this very contentious thread - I am wary of jumping in on this one, but thought it worth noting that your comment was not wildly out of place.

stay, we have cookies :-)


Judging a comment, or commenter, based on karma is asinine. Respond to comment, not commenter; ideas not people. You are not well representing HN, and this is my 'unpopular opinion' account talking. There are plenty of better ways to engage, and I do appreciate your enthusiasm for HN. Perhaps this is an apt introduction to the heated discussion that is HN.


Well, in my view I was responding both to the comment ("I am leaving") and the commenter (making the years of participation and 4000 points relevant). If someone has been a contributor for many years, then we should consider why they chose to leave. It might be them, it might be us.

I actually believe I am representing HN as a place where different opinions can be voiced, hopefully in a manner that generates light, not heat. Heated discussions are rarely the useful or interesting ones to read.

Thank you for appreciating my enthusiasm.

PS Are you using two accounts - one ("my 'unpopular opinion' account) for saying things you fear people might not like? That seems odd. May I ask why?


This may be how you believe you see the world but most people take reputation into consideration and on sites which expose that information account age and karma are very popular cues for that.


Karma does not mean shit. It just means you are complaisant. I think the proper way to use things like HN/Reddit is to always use a throwaway account and always speak your mind without fear of negative karma... So I also agree 100% with the parent: reply to the comment, not the commenter, their karma, or their entire history.


> Keep in mind that the gap is not arbitrarily chosen.

It's not supposed to be, no. But the gaps are not always optimal. The spacing of the street lights in the video (to the extent I can tell) seems to be quite wide, wider than I would think is optimal.


The edges of the "lightpool" that the lamps normally cast is probly being clipped by the cameras crappy dynamic contrast, it is almost certainly a much larger lightpool in real life.


Not usually, and I say this as someone who has spent a lot of time driving nearby roads at night.


> it is almost certainly a much larger lightpool in real life.

Not according to the post by niftich upthread.



If Tempe is like Tucson, they are using different kinds of street lighting from the rest of the country to minimize light pollution for stargazing reasons.


Also, in those stormy conditions you slow down enough that your stopping distance roughly matches your visibility.


> But the gap between street lights is going to be very hard to see into.

Looking, right now, at a parking lot between two lights from a well-lit room. I can make out most of the outline of the black car in the middle of the "darkness" without any trouble. This isn't even the low light vision kicking in (which I agree isn't going to kick in if you're driving). Human vision should be able to make out the pedestrian earlier than the video footage.


How long does it take you to make out the black car and determine that it's a car? What if the car were coming straight at you out of darkness and you were standing in the light of a street lamp?

Also, are you looking straight at the car? Or are you looking elsewhere so that the car is in your peripheral vision, the way it would be if it were on the side of a road you were driving on?


A moving object would be more visible than usual. Human vision is better at picking up scene change.


Not when you transition from high to low light conditions. The problem is that night vision has more noise, which makes movement detection far more difficult. This is made worse because the pupil can't fully dilate, making the gaps seem much darker.


This street in particular is weird at night because the street downstream rises up, and the light from those lamps is cast at a higher point. The place she was hit is extremely dangerous because there are no lights on her, and no lights behind her.


I believe that a human would be able to see in those conditions. It's a lit street with a car with functioning headlamps. It wasn't foggy or rainy.

I've personally driven down country roads without any lighting except my headlights and seen deer poking their heads out of the woods a ways off, for which I slowed down in case they darted across the road. Someone slowly walking their bike would be trivial.

Reminds me of this video of a Rally driver racing with malfunctioning lamps https://youtu.be/HwyRS_6Uqn0?t=2m36s

The video makes it seem impossible, but afterwards in the interviews the driver said it wasn't too bad after his eyes adjusted. He did have some issues with his own lamp blinding him, which led to errors. (He actually won this stage.)

As far as I'm concerned Uber's software/hardware is completely at fault and not ready for public testing. I'm uncertain how much better everyone else's tech is but Uber's typical carefree approach has ruined it for everyone.

There are consumer-level dashcams that can shift up to ISO 12800, which can produce a fairly distinguishable picture with ambient moonlight.[1]

Canon builds sensors with ISOs in the millions, which should be able to pick out distinguishable shapes without ANY light. [2]

[1] https://youtu.be/hHU-hWG5DDk?t=5m48s [2] https://petapixel.com/2015/09/13/this-is-iso-4560000-with-ca...


> It's a lit street with a car with functioning headlamps.

The headlamps may have been functioning, but they appeared to be aimed way too low. You can see that the car is able to traverse the distance lit up by the headlamps in about a second at 38 mph. If the headlamps were aimed properly, they should light up the road about 5 seconds ahead of the car.
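
A rough back-of-the-envelope check of what those times mean in distance (a minimal Python sketch; the 1-second and 5-second figures are the ones quoted above, not measurements):

  # Rough headlamp-reach estimate, using the speeds/times quoted above.
  MPH_TO_MS = 0.44704

  speed_ms = 38 * MPH_TO_MS                 # ~17 m/s
  observed_reach = speed_ms * 1.0           # ~17 m of lit road seen in the video
  expected_reach = speed_ms * 5.0           # ~85 m if the lamps were aimed properly

  print(f"observed reach: ~{observed_reach:.0f} m, expected reach: ~{expected_reach:.0f} m")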


> If the headlamps were aimed properly, it should light up the road about 5 seconds ahead of the car.

A system with this rule baked in would be driving slower.

People adjust the way they drive based on what their environment is doing, how well their equipment is working and their own alertness. Except in the extremes we should not accept misconfigured equipment as an excuse. And if a system detects that there is no acceptably safe speed for it to go then it should not move at all.


> A system with this rule baked in would be driving slower.

Arguably, the system should detect a misconfiguration like this when the car is turned on and not allow the car to be driven until the problem is fixed.


Screw the system, people on the project should've detected a "misconfiguration" like that.

They didn't. Either because they are negligent or because the footage is misleading and the system saw the whole thing but did nothing.


I also suspect that human eyeballs would have a different view of the light/dark portions of what's depicted there, and especially eyeballs would have probably had a much higher chance of detecting movement in peripheral vision than that video gives any hint of.

We typically can't see much detail in the scene outside our small region of focus, but you can bet that if a tiger appears from behind a tree, our visual system will scream to the brain _look over there right now!_

Our eyes and our entire visual processing system is very much not "just like a webcam, but made out of meat".


I've driven that location on that road many, many times at night, and no, it is not that dark; it is lit up well like most city streets. The video makes the contrast appear greater.


Thanks for your comment. That's what I thought.

I have a dashcam, and I've seen night videos from it.

In fact, the picture from my dashcam seems much better than this low quality mess, but still night videos from it come out much less visible than reality.

I've tried to rewatch some parts of videos later, and I find I was able to see much more detail on the sidewalk and on the periphery than was captured by the dashcam. Everything gets blown out in the night videos by the headlights.


If you could it would be very valuable if you did a test at that location and shared your experience


Which makes me wonder if the Uber autocar is just relying on camera vision to drive itself... If it is, and it's been lying to authorities (I don't know what they said to the authorities about their cars' capabilities), that could be big.


Far more likely is that this is a standard dash cam that the police know how to pop the SD card out of and access the video...


The point is not germane. What is germane is that a car that supposedly uses LIDAR and infrared, and presumably was approved by the regulators on the basis of such, should have had no problem seeing the pedestrian as LIDAR and infrared are unaffected by night and at least shown some indication of braking but did not. This suggests that the car does not in fact utilize any of those fancy (non-vis) detection methods. Alternatively, these fancy detection methods were fooled by the bicycle and thus misclassified as an error or something.


My point is that it's likely the camera view we're seeing has nothing to do with the self driving portion of the vehicle (a good hint to this is the interior view--useless to autonomous driving, but a common feature of dash cams).


The car has a LIDAR sensor mounted on the roof. It is supposed to continuously scan 360° of the environment. Since LIDAR is an active sensor (it emits light), the car should have seen the person and bicycle even in the dark. That it did not do so suggests the car does not evaluate LIDAR input, or it dismissed the object as erroneous data.


This is the LIDAR sensor being used (virtually all SDC companies use this LIDAR because it's the most advanced out there for 360 degree 3D coverage):

http://velodynelidar.com/hdl-64e.html (note that this LIDAR is expensive - costs way more than the car it is mounted to)

Here's the manual for it - note the specs in the back:

http://www.velodynelidar.com/lidar/products/manual/HDL-64E%2...

It has 64 lasers, spread out over about 27 degrees - about 0.4 degrees per laser, from almost horizontal to an angle of 24 degrees or so down. Now take a look at where it is mounted on the car, and envision these laser beams spreading out and being spun in a circular conical area around the car.

Now - if you think about it - as the distance from the sensor increases, the beams are spread further apart. I'd be willing to bet that at about 200 feet or so away from the car, very few of the beams would hit a person and reflect back. Also - take a look at the reflectance data in the spec. Not bad...but imagine you are wearing a fuzzy black jacket on your top half. How much reflectance now?
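
To put rough numbers on that, here is a minimal sketch (my own assumptions: 64 beams spread evenly over ~27 degrees of vertical field, a flat road, and a 5.5 ft tall pedestrian):

  import math

  BEAMS = 64
  VERTICAL_FOV_DEG = 27.0
  spacing_deg = VERTICAL_FOV_DEG / BEAMS          # ~0.42 degrees between beams

  def beams_on_target(range_ft, target_height_ft=5.5):
      """Rough count of vertical beams that could intersect an upright target."""
      spacing_ft = range_ft * math.tan(math.radians(spacing_deg))
      return target_height_ft / spacing_ft

  for r in (50, 100, 200, 300):
      print(f"{r:3d} ft: ~{beams_on_target(r):.1f} beams on a 5.5 ft person")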

What do you think the point cloud returned to the car is going to look like? Will it look like a human? Hard to say - but you feed that into a classifier algorithm, there's a possibility that it's not going to identify the blob as a "human" to slow down. Especially when you add some bags, a strange gait, plus the bicycle behind the person. All of this uncertainty adds up.

I am also willing to bet that only the LIDAR was used for collision detection (beyond the radar on the unit). Any cameras - even IR based - would likely only be used for lane keeping and following purposes, plus traffic sign identification. Maybe even "rear view of vehicle" detection. Ideally they would be used for "person/animal" identification and classification too - but again, given the camera sensor, and who knows what the IR sensor saw or didn't see, along with the weird lighting conditions - well, who knows how it would have classified that mix?

Lots of variables here - lots of "ifs" too. All we can do is speculate, because we don't have the raw data. Uber would do well to release the entire raw dataset from all the sensors to the community and others to look over and learn from.

Finally - I am not an expert on any of this; my only "qualifications" on this subject is having taken and passed a couple of Udacity MOOCs - specifically the "Self-Driving Car Engineer Nanodegree" program (2016-2017), and their "CS373" course (2012). Both courses were very enlightening and educational, but could only really be considered an introduction to this kind of tech.


Exactly. The dynamic range of the human eye is vastly better than a visible spectrum camera.


> The dynamic range of the human eye is vastly better than a visible spectrum camera.

Certainly better than any camera mounted on a dashboard.

It's honestly a bit surreal how the pedestrian appears out of the splotch of pure darkness in the frame. That's low dynamic range and resolution (or high compression) at work, not how light behaves in reality.


This was my initial reaction as well, but am not 100% certain after thinking about it some more. Take this for example:

https://i.imgur.com/AlO4h7p.gifv

I figured that light in front of the car was mostly just messing with the camera but that driver sure didn’t see that pedestrian either. I’m willing to give a human driver the benefit of the doubt here and say that even with eyes on the road and hands on the wheel the outcome would likely have been the same. The pedestrian was not highly visible - no reflectors, dark clothes, it’s really hard to see people like this.


>that driver sure didn’t see that pedestrian either

unless their phone was showing video of the road in front of them, I don't know why they would have seen her.


Not exactly. The dynamic range of the eye in a single scene is around 12-14 stops, about the same as a high-end digital sensor, like the one in the Canon EOS 5D or Sony Alpha. https://www.cambridgeincolour.com/tutorials/cameras-vs-human...

The eye can gain a lot more stops through adaptation (irising, low-light rod-only vision), but those mechanisms don't come into play when viewing a single scene -- and cameras can also make adjustments, e.g. shutter speed and aperture, to gain as much, if not more, range.


A camera captures the entire scene in a frame with a fixed dynamic range. Human vision builds the scene with spatially variant mapping; the scene is made from many frames with different exposures, stacked together in real time.

I'm concerned about poor scotopic adaptation due to the rather bright light source inside the car - maybe it's the display he's looking at. I see a prominent amount of light on the ceiling all the way to the back of the car and right on his face. It's really straightforward to collect the actual scene luminances from this particular car interior and exterior in this location, but my estimate is that the interior luminance is a bigger problem for adaptation than the street lights, because the display he's presumably looking at has a much wider field of view, and he's looking directly at it for a prolonged period of time. It's possible he's not even scotopically adapted because of this.

And also why is he even looking at the screen? He's obviously distracted by something. Is this required for testing? Ostensibly he's supposed to drive the car first. Is this display standard equipment? Or is it unique to it being an Uber? Or is it an entertainment device?

Retest with an OEM-lit interior whose driver is paying attention. We already know the autonomous setup failed. But barriers are in place that also increase the potential for the human backup driver to fail.


I agree, but I don’t think the eye can adapt beyond its inherent dynamic range over a matter of milliseconds - the iris is not opening or closing over that timescale, so you’re relying on the inherent dynamic range of the retina (which is pretty good).

What the eye IS doing is some kind of HDR processing, which is much better than the gamma and levels applied to that video. I bet a professional colorist could grade that footage to make it a much better reflection of what the driver could see in the shadows - even with a crappy camera, you can usually pull out quite a bit of shadow detail.


It's not so simple. Technically, you are not wrong but a video feed should have been sufficient here. It should also be considered that digital video has improved drastically over the last decade.

Even LIDAR aside, computer vision and a raw video feed should have been enough to have prevented this collision.

When a digital camera records an image, a gamma curve is applied to it before display, which makes up for our bias against the darker portions which the digital equipment does not have. We are very capable of guessing the results of bright conditions but not dark conditions via compressed video.

Moreover, these cars should not be using consumer CCDs with compression. They should be utilizing the full possible scope of the video.

See: https://en.wikipedia.org/wiki/Gamma_correction
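
For anyone unfamiliar with the term, here's a minimal sketch of the textbook transform (a plain 2.2 gamma encode; not a claim about what Uber's camera pipeline actually does):

  import numpy as np

  def encode_gamma(linear, gamma=2.2):
      """Map linear sensor values in [0, 1] to display-referred values."""
      return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

  # A dim region at 5% of full scale becomes ~26% after encoding,
  # which is why shadow detail survives on screen when it's in the data at all.
  print(encode_gamma(np.array([0.05, 0.5, 1.0])))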


> When a digital camera records an image, a gamma curve is applied to it before display, which makes up for our bias against the darker portions which the digital equipment does not have.

Gamma correction makes up for a bias against darker portions in the display, not in our eyes. It's a holdover from the CRT days where the change in brightness between pixel values of, say, 10 and 11, was far less than the change between 250 and 251. Human eyes have excellent low-light discernment which is why 'black' doesn't really look black and you can make out blocky shapes during dark scenes on some DVDs.


Compressed video lacks information in the blacks, and that is why we see blocks. The blocks are not there before compression, so it's not simply a matter of detecting them. While we are good at seeing objects in blacks, your explanation alone doesn't account for why compression algorithms choose to remove so much of that data. Maybe we are saying the same thing. It's hard to tell.

Your assertion about the origins, however, is at odds with what I have been taught, my understanding, and all the supporting info I am finding in a quick search. My understanding is that luminance values from a sensor have something of an empirical scale, but I'm sure this is no complete explanation. I am speaking from my working knowledge. I can't find anything supporting that it is simply a fix for discrepancies between display types. Can you link to something or explain what I am missing?


I do know a bit about cameras, and you're spot on.

You will frequently see dash cam footage and night photography blow out the relative highlights and blacken the relative shadows.

This is because (cheap) hardware does not have the same dynamic range as human eyes, especially at night. So when "properly exposed", it has to make a call to capture light values in the middle somewhere. Light values too far out the top get interpreted as white, those out the bottom as black, creating an artificially high-contrast version of what a human eye would see.

This is pretty intuitive, generally when we're driving down the road with our lights on, we aren't literally moving between pools of black, often in many urban areas I'll even forget to turn my lights on because I can see well enough.

You MAY be able to get a VERY BAD approximation of what a human would see by post-processing, increasing the brightness of those pixels near the black threshold.
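
As a sketch of that kind of post-processing (hypothetical threshold and gain values; anything the camera already clipped to pure black is unrecoverable):

  import numpy as np

  def lift_shadows(frame, threshold=0.15, gain=3.0):
      """Crudely brighten near-black pixels of a frame scaled to [0, 1]."""
      frame = frame.astype(np.float32)
      dark = frame < threshold
      # Only values that survived exposure and compression can be stretched.
      frame[dark] = frame[dark] * gain
      return np.clip(frame, 0.0, 1.0)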


If any of you have a dash cam it's very obvious how the light levels of images captured at night look like this video and is VERY different from what you see as a driver - objects are much brighter than this with your own eyes.

Also - the car is driving way too fast.

I did some driving tonight and paid close attention to when I naturally slowed down. I'm probably on the better end of the driver curve, in that I don't tailgate, I drive the speed limit, and I go well under the speed limit when conditions are poor (fog/rain/snow, night, slick/wet roads, near curves/hills where I can't see the road). I noticed that in many of the places where I naturally slowed down, I was 10 to 20 MPH under the speed limit. It seems this Uber SDV is generally going as fast as it is legally allowed to, regardless of what it can see.


Also, even if the camera accurately represented the human field of view, no human driver in his right mind would drive so fast with such poor visibility. Any judge would qualify this as reckless driving.

So either way the software failed:

- If the AI misjudged the LIDAR information and didn't register the slow-moving pedestrian, it's a fail

-If it didn’t have enough computer vision space it should have slowed down

Possibly, in the second scenario, the human test driver is at fault too, because he should have noticed the bad conditions and hit the autopilot kill switch.


Is a driver ever at fault when a pedestrian is in the road at night without a crosswalk nor lights nor reflective gear?


In France it's 100% (in civil cases), unless the driver can prove it's a suicide. It just goes by kinetic energy: you store it, you are responsible for it. Other people don't have to dodge your car. And since the death penalty is not part of the arsenal, killing a pedestrian is not an appropriate sentence for an infraction that's punished with a 50€ fine.


I don't really know how it works in the US or in this state, but in my country, you simply can't drive when it's as dark as the video appears. Either you're not in a city and you can turn your mainbeam headlights on (the blinding ones), or you're in a city and the road is much more lit and the speed limit is 50kph.

With those, the driver would've seen her from a mile away.


Since in the video, we aren't seeing the original scene, but rather, the camera's interpretation of that scene, I think it would be hard to judge except to base it on what your average streetlight brightness is.

Like a camera, your eye also has only so much dynamic range. So if those street lights are bright enough, or your interior lights are too bright, you might have nearly zero visibility in those shadows.

But it is certain that a self-driving car "should" be able to see. Even two cheap digital cameras, one tuned to the darker range and the other to the brighter, should easily see in these types of situations.


"Even two cheap digital cameras one tuned to see the darker range and the other brighter"

Sounds a lot like rods and cones in our eyes, huh?

There's another difference with eyeballs that would almost certainly have helped here - the low light sensitive peripheral vision that the rods provide is also attuned to movement, we're more sensitive to movement in peripheral vision as well as being better able to see in low light.


You wouldn't need two different cameras, just one camera shooting HDR video (alternating between over/underexposing the frame so that no information is clipped) to get a clear image at all exposure levels.

Eyeballs are pretty good at night vision once adjusted, but good high sensitivity cameras can be much better. And let's not get started on LIDAR/RADAR... it seems clear to me that this was not a sensory deficiency, it was poorly designed/tested software.
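
As a sketch of the exposure-fusion idea using OpenCV's Mertens merger (one common way to do it; no claim that any AV stack works this way, and the file names are placeholders):

  import cv2
  import numpy as np

  # under.png / normal.png / over.png stand in for alternating exposures
  # captured by a single camera.
  frames = [cv2.imread(p) for p in ("under.png", "normal.png", "over.png")]

  # Mertens fusion keeps the well-exposed parts of each frame, so clipped
  # shadows and blown highlights can both be recovered in one output image.
  fused = cv2.createMergeMertens().process(frames)
  cv2.imwrite("fused.png", np.clip(fused * 255, 0, 255).astype(np.uint8))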


No, much more like the iris in our eye. Turn on a bright light inside a car and see how hard it is to see outside on a dark night. Modern HDR cameras have a much higher dynamic range than the human eye. Hence the surreal HDR photos you see.


AFAIK, LIDAR doesn't use just some digicam. So the video we see there should be very different from what the sensors actually see in such a situation.


The driver's eyes weren't fixed on the road, but he glanced up twice prior to the collision and showed no hesitation. It seems that it was at least dark enough for him not to notice a person plus a bike on a large road.

I'm also very (sadly) surprised that she crossed that kind of road at night without hurrying or reacting to the sound of cars approaching.


However, the interior seemed filled with light, and they didn't glance up long enough for their eyes to adjust properly.


Good point.


I too think visibility is better than it appears in the video, but I'm not so sure it's good enough to help all that much. However, even with visibility as bad as in the video, I'm confident in my ability to handle the situation. I would probably not be able to brake in that short amount of time and from that speed, but neither would I drive at that speed. When conditions are less than ideal (in this case visibility), it is our responsibility as drivers to adapt and lower our speed, possibly dramatically. This goes for autonomous cars too. If the road in front of me and the areas next to it are not clearly visible, I'd drive at such a speed that a collision would in all likelihood only result in scrapes.


FWIW it was a new moon in Tempe, Arizona on Sunday night, so it was probably darker than normal.


The actual spot where it happened has a bike lane, and a crosswalk and sidewalk near it [1].

[1] https://goo.gl/maps/gpugzAZKxcS2


Not only is there no crossing here, but a sign forbidding pedestrian crossing. (Look at the left side of the road.)

The sign directs people to use a crossing, which is some 100 meters away at the lights around the bend.


True, but there are trails that cross over the road; it is an odd area. If you zoom out on Google Maps you will see some of the trails. Note the sidewalk/pathway. It is marked no-pedestrians but has paths for them, so it sends mixed signals.


I saw that. The median is landscaped with a bizarre X-shaped paved area. It can't be intended for recreational use or walking; it's a divider between two fast roads. At all four entrances to the X, there is the no pedestrians sign.


I agree; the video alone has exactly zero value in judging what the human driver would have seen under these conditions.


I think you are wrong, since people die in this exact scenario almost every day. The camera might be making it darker, but that doesn't mean that every driver (everyone's eyes and reaction times are different) would have been able to see her and get out of the way.


Is it possible that driving under intermittent street lights messes with the aperture or image recognition? It would be like flashing a strobe light at the camera.


Then WTF are they doing driving it down that road with intermittent street lights at night then?


It is true. Even a very high resolution camera doesn't capture things that I can see with my eyes.


Right, so the camera's night vision mode that detects objects in the dark would have been completely blinded by the street lights while passing under the streetlamp. Take night vision goggles and look at a light: it blinds the whole field of vision.

The only thing that I think was the car's fault is that it is programmed to keep driving while the safety driver is distracted. There is no point to a human driver sitting behind the wheel of an autonomous vehicle if they aren't paying attention.

People need to understand that self-driving mode isn't freedom from the responsibility of driving safely. Rather, it's a tool to help ensure that driving statistically becomes safer as more self-driving vehicles find their way onto the road.

Hopefully someday all cars will be self-driving, and dangerous hazards/traffic will be reduced to the point that they are virtually nonexistent, rather than being near the top of the lists of "preventable deaths" and "things humans don't want to waste most of their day doing".


How did LIDAR and IR (?) not catch that? That seems like a pretty serious problem.

Something is badly wrong there. That should have been detected by LIDAR, radar, and vision. Yes, they need a wide dynamic range camera for night driving, but such things exist.[1][2] They're available as low-end dashcams; it's not expensive military night vision technology.

Radar should pick up a bicycle at that range. The old Eaton VORAD from about 2000 couldn't, but there's been progress since then.

LIDAR has its limitations; some materials, including the charcoal black fabric used on some desk chairs, are almost nonreflective to LIDAR. But blue jeans, red bike, bare head? Expect solid returns from all of those.

The video shows no indication of braking in advance of the collision. That's very bad. There simply is no excuse for this situation not being handled. The NTSB is looking into this, and they should. I hope the NTSB is able to pry detailed technical data out of Uber and explain exactly what happened. In the first Tesla fatal crash, they didn't get deeply into the software and hardware, because it was clear that the system was behaving as designed, unable to detect a solid tractor trailer crossing in front of the Tesla. The result of that investigation was that Tesla had to get serious about detecting driver inattention, like all the other carmakers with lane keeping and autobrake do.

This time it's a level 4 vehicle, which is supposed to be able to detect any road hazard. The NTSB has the job of figuring out what went wrong, in detail, the way they do for air crashes.

Again, there is no excuse for this.

[1] https://youtu.be/gWqzJF9tOhw?t=211 [2] https://www.youtube.com/watch?v=as12rjzCQnY


LIDAR also has limitations on angular resolution just as a function of how the sensor works. It's entirely possible that the size of the person/bike on LIDAR was just too small until it was too late to stop.

Why didn't it even appear to try to stop? You got me. Refresh rate on the LIDAR? The LIDAR flat out being mounted too high, with the system relying on optical sensors instead for collision avoidance of small targets (like a human head)?

I'm guessing, I'd love to see an NTSB report on this.


Why even bother having a LIDAR system on your self driving car if it doesn't have sufficient resolution to detect a person standing right in front of it?

This doesn't seem like an edge case at all. Pedestrian crossing the road at a normal walking pace, and no obstructions in the way which would block the car's vision. The fact that it's dark out should be irrelevant to every sensor on that car other than the cameras.

Something obviously went terribly wrong here; either with the sensors themselves or the software. Probably both.


For detecting larger obstacles like buildings or other vehicles would be my guess.

Realistically faster sensors should be used to detect obstacles. LIDARs I could find with some cursory googling can run up to 15hz. Computer vision systems can run much faster (I have a little JeVois camera that'll do eyeball tracking at 120hz onboard, I assume something that costs more can do better).

But more importantly, you're vastly trivializing the problem. Standing right in front of it, sure, the LIDAR will see the person no problem. Standing 110 feet away (which would be the minimum stopping distance at that speed)? Realize that, for a LIDAR with a 400' range at 15 Hz, moving at 40 mph you get ~7 samples of a point before you're at it... For at least the first 3 frames that person is going to look like sensor noise. At 110 feet that person (which I'm calling a 2' wide target) spans about 1 degree of your sensor measurement.

It's not that it's useless or broken, more just this a seriously bad case where optical tracking couldn't work and where LIDAR is particularly ineffective at seeing the person because of how it works. More effective might be dedicated time of flight sensors in the front bumpers, unsure how long a range those can get, but they are also relatively "slow" sensors.


Is 360 degree lidar really needed? A smaller FOV and higher resolution for 120-180 degrees pointed forward seems a better bet.


It’s not mutually exclusive either. You can have lower frequency, lower angular res 360 spinning LIDAR for low granularity general perception, and also have much higher frequency, brighter, and lower FOV (~90-120deg) solid state lidar mounted at the very least on the front corners of the car. We should be absolutely littering these vehicles with sensors, there’s no reason to be conservative at this stage.


> LIDAR also has limitations on angular resolution just as a function of how the sensor works. It's entirely possible that the size of the person/bike on LIDAR was just too small until it was too late to stop.

I highly doubt this is the issue. I am not sure what Ubers setup is, but even a standard velodyne should have been able to pick that up based on angular resolution.


> Realizing that, for a LIDAR with a 400' range at 15hz moving at 40mph you get ~7 samples of a point before you're at it... For at least the first 3 frames that person is going to look like sensor noise. At 110 feet that person (which I'm calling a 2' wide target) is 1 degree of your sensor measurement.

Updating this with math not done at midnight:

  Frame   Distance (ft)   Angular size (deg)   Pixel size
      0          400.00                0.286         7.16
      1          396.09                0.289         7.23
      2          388.27                0.295         7.38
      3          376.53                0.304         7.61
      4          360.89                0.318         7.94
      5          341.33                0.336         8.39
      6          317.87                0.361         9.01
      7          290.49                0.394         9.86
      8          259.20                0.442        11.05
      9          224.00                0.512        12.79
     10          184.89                0.620        15.50
     11          141.87                0.808        20.19
     12           94.93                1.207        30.18
     13           44.09                2.601        65.02
This is based on the Velodyne LIDAR specs I could find last night with some quick googling: 400' range, 0.04 degree angular resolution, 15 Hz max update rate.

If you have more accurate real world experience with these sensors and can share more accurate performance characteristics I can update.

These calculations were done assuming a vehicle moving at 40 mph. The stopping distance at that speed is about 110ft. I computed the pixel size by assuming 1 measurement = 1 pixel giving me 9000 pixels per 360 degrees.


I think you are using the wrong angular resolution, though.

http://velodynelidar.com/docs/datasheet/63-9194_Rev-G_HDL-64...

That's the one LIDAR Uber seems to have, based on matching pictures.

5 Hz - 20 Hz full-revolution sampling rate; let's assume 15 Hz.

The resolution in the horizontal plane depends on rotational speed, so at 15 Hz it should be 0.26 degrees.

(0.35 / 20 * 15 = 0.26)

For the woman height the angular resolution is 0.4 degrees no matter the rotation speed.

I.e., she would have been at least one pixel wide from 400 feet, and about 2 pixels high, growing in size as she got closer, if we assume she was 2' wide.

(Not counting bike).

I really see no excuse for Uber messing this up that badly. The LIDAR can't have missed a potential "obstacle" when it got closer, even if the car wouldn't classify it as a human.


I was using Rev E because it's the data sheet I had handy. Mostly I was trying to point out that LIDAR is not some magic thing that always sees everything, and that it has limitations.

  Frame   Distance (ft)   Angular size (deg)   Pixel size
      0          400.00                 0.29         1.10
      1          396.09                 0.29         1.11
      2          388.27                 0.30         1.14
      3          376.53                 0.30         1.17
      4          360.89                 0.32         1.22
      5          341.33                 0.34         1.29
      6          317.87                 0.36         1.39
      7          290.49                 0.39         1.52
      8          259.20                 0.44         1.70
      9          224.00                 0.51         1.97
     10          184.89                 0.62         2.38
     11          141.87                 0.81         3.11
     12           94.93                 1.21         4.64
     13           44.09                 2.60        10.00
That's with your 0.26 degree angular resolution @ 15 Hz. (I just have a spreadsheet that spits all these out for me.)

These are NOT big targets; they could easily have been mistaken for noise and filtered out. All of the LIDAR data I've ever seen has been fairly noisy and required filtering to get usable information from it. And given the number of frames they get, maybe their filtering was just too aggressive.


Yes, I agree with you that we can't assume that the car could have noticed the woman from 120 meters from LIDAR data alone. Maybe with some kind of sensor fusion with IR-cameras.

But as it got closer, and what the computer thought was noise kept showing up in about the same place, a sane obstacle finder should have given a positive match. Maybe at 30-40 m worst case?

At 142 feet the woman probably had (assuming she was 5.5'):

asind(5.5/142) = 2.21° => 2.21 / 0.4 = 5.5

So between 5 and 6 "scanlines" going from left to right over her.

Assuming she was 2' wide, that's 0.8 degrees, which would be 2 to 3 pixels in breadth according to your spreadsheet.

That's between 10 and 18 pixels (voxels?) that stand out clearly from the flat road around them, excluding the bike.
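
The same envelope math in code form, if anyone wants to vary the assumptions (a sketch using the 0.4° vertical and 0.26° horizontal figures from above, and a flat, upright target):

  import math

  V_RES_DEG, H_RES_DEG = 0.4, 0.26   # vertical / horizontal resolution from above

  def returns_on_target(range_ft, height_ft=5.5, width_ft=2.0):
      """Rough count of LIDAR returns on an upright target at a given range."""
      v = math.degrees(math.asin(height_ft / range_ft)) / V_RES_DEG
      h = math.degrees(2 * math.atan(width_ft / (2 * range_ft))) / H_RES_DEG
      return v, h, v * h

  v, h, total = returns_on_target(142)
  print(f"~{v:.1f} scanlines x ~{h:.1f} samples = ~{total:.0f} returns at 142 ft")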

If you want to get an idea of how LIDAR data looks, Velodyne has free samples and a viewer for lower-resolution models.

http://velodynelidar.com/downloads.html

It's pretty hard to identify obstacles far off, but you will still see that there is something there. It's especially easy to identify obstacles that are vertical.

As she got closer, she would eventually show up clearly in the LIDAR data. But since the car never slowed down or moved left, it didn't notice her at all even at point-blank range (or it did see her but failed to do anything about it).


A buddy of mine has a lower end LIDAR on a robot, working with them on SLAM on it, trying to get a similar hardware set up locally over the summer. (I have weird hobbies)

Yeah, I'm willing to accept SOMETHING bad happened here. As I said, I really just want to dissuade people from the notion that LIDARs will see all obstacles all of the time. I'm not going to say the car acted perfectly and it was sensor failure, but I'm definitely willing to say that the LIDAR probably COULD see her, just not as well as people would assume.

Really, I think this was a case of the car overdriving its effective sensor range, same as what happens when you're on a dark road and a deer runs into the middle of the road: you simply can't react fast enough by the time you realize the danger is there. Computers are fast but they aren't perfect.

What I'd be particularly interested in is whether the computer saw her and did the calculation - "I can't stop safely in this distance" - and decided to just hit the obstacle because it was "safer". At that point we start getting into ethics, and this problem gets a lot murkier.


It's not safe to hit an obstacle at full speed. It should slow down to lower the damages to the obstacle, the car and the passengers.


Velodyne's current product has more than enough horizontal resolution for that.[1] Look at the pictures.

[1] http://velodynelidar.com/blog/128-lasers-car-go-round-round-...


The person in that last picture is something like 5 feet from the car, which is far too close to be useful at 40 MPH. At those speeds what's important is what it sees at 150 and 200 feet, and how fast it can refresh.


When your resolution is low enough to not see this, they stop calling it LIDAR and start calling it a rangefinder. If this was actually a fundamental limitation of the sensor, then that's the crappiest LIDAR unit I've ever heard of. When I first heard about the accident, my initial reaction was "this is why vision-only systems are inadequate". The fact that it didn't detect the object at all before colliding with it, with lidar, radar, and vision available, is inexcusable. This could set fully autonomous cars back enough that, never mind the cost of one life, the delay could kill tens of thousands because of a preventable accident.


I wonder if the driver grabbing the wheel disabled the response from the car.


I think the car should have reacted way earlier with a hard brake or starting a lane change. If the lidar detection had worked.


The first reports indicated that the driver claimed they didn't notice the pedestrian until they heard the collision.


Fun thought experiment:

1. Release video to police (with obvious shots of driver/passenger/whatever not paying attention to the road)

2. Investigation reveals accident was caused by driver/passenger/whatever reaction

3. ????

4. Profit (PR win/rescue)


Even Uber couldn't be oblivious enough to consider this a PR win.


I am glad to hear the NTSB is investigating this and not just the local police (who would lack the technical resources to make a useful judgement). Have there been any statements at all from them so far?


The NTSB tends not to make even initial statements until upwards of several weeks in, and final statements as much as 6 months to a year later. They're nothing if not thorough.


Probably the only explanation was that LIDAR was off, either on purpose (for testing?) or because it broke and there was no safety mechanism to prevent the car from operating if/when LIDAR is off.


If the LIDAR was turned off, then it's pretty unforgivable. You don't get to play experiments with cars on live roads.


Especially since the cabin footage shows the person in the vehicle was only glancing at the road every several seconds.


I don't understand why everyone in this thread is so focused on the sensors alone. The sensors might detect anything they like, but they're not going to stop the car on their own. The car has logic that tells it how to react to what its sensors perceive- that's the AI part. If the car's AI can't identify a woman pushing a bike as a woman pushing a bike, or it doesn't know that it has to stop before hitting her- well, then it won't.

There's so much confusion here about the capabilities of these systems. People think that a combination of better sensory perception + faster reaction times suffices to drive in the chaos of the real world. That's not so. Sensors and fast thinking won't get you anything if you can't think right. You have to be able to know what the things are that your sensors detect, and how to react to them.

It's perfectly possible that the Uber car's LIDAR detected the lady crossing the road - but the AI just didn't know what to do about her and simply did nothing.
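
A toy illustration of that point (entirely hypothetical, not Uber's architecture): even a perfect detection gets ignored if the planning layer's classification or confidence rules say to ignore it.

  from dataclasses import dataclass

  @dataclass
  class Detection:
      distance_m: float
      label: str          # e.g. "pedestrian", "vehicle", "unknown"
      confidence: float

  def plan_brake(detections, threshold=0.8):
      """Toy planner: brake only for confidently classified pedestrians.

      If the classifier calls a person pushing a bike "unknown" or scores it
      below the threshold, this rule happily keeps driving -- the failure can
      live entirely in this layer while the sensors work fine.
      """
      for d in detections:
          if d.label == "pedestrian" and d.confidence >= threshold:
              return "BRAKE"
      return "CONTINUE"

  print(plan_brake([Detection(30.0, "unknown", 0.55)]))  # -> CONTINUE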


This is exactly it. I see people mentioning seeing the victim at the last second, but these vehicles are supposed to be better. They scan in non-visible spectrums with LIDAR. Lack of a safety vest, lack of headlights, none of it is supposed to matter... or at least it shouldn't completely compromise the vehicle's systems. Cameras may not work as well, but an obstacle directly in the path should still be detected, especially an obstacle that would reflect LIDAR and give off a very obvious infrared signature.

This video also shows another point I made recently in a conversation. People need stimulus to keep them alert and focused. I don't think it's at all reasonable to expect someone to sit idly with almost no interaction or responsibility and expect them to stay alert. The human brain doesn't function that way.


I watched the video a bunch of times, and I'm not 100% sure how the vehicle could have reacted, within the maximum window where she would have been visible to LIDAR (and maybe radar), to make a significant difference. At best it could have slowed by maybe 10 km/h (from the 40 mph, i.e. 64 km/h, speed limit), but that's still an if, and I'm not sure that would have ensured the survivability of the jaywalker OR the safety of the people in the vehicle at that speed.

The other option is swerving, which might have been a possible solution here as well, but that would also have been highly dangerous for the people in the car at those speeds, within that timeframe, possibly causing more than one fatality or serious injury.

Regardless, I'm very much speculating here about reaction times based on watching a low-quality video. I'm really looking forward to expert analysis rather than speculation on the capabilities of LIDAR/radar plus computation speed at 60 km/h... even granting that a human driver would have hit this person 100% of the time.


If the vehicle can't detect objects in the non visible spectrum (even just IR) at least as far away as a human can in the visible spectrum then that is a showstopper right there for the technology. Additionally if it can't then it shouldn't be traveling at a speed where it can't react in time.


Radar can detect objects in a non-visible spectrum: https://twitter.com/elonmusk/status/753843823546028040?lang=...

> Good thing about radar is that, unlike lidar (which is visible wavelength), it can see through rain, snow, fog and dust - Elon Musk

The major car companies have even developed the technology to also allow LiDAR to see through snow/rain without the previous refraction problems: https://qz.com/637509/driverless-cars-have-a-new-way-to-navi...

My question is: given that it could only detect the jaywalking pedestrian (regardless of visible light) within a very, very short timeframe at those speeds, on what looks like a highway, is it rational to expect even future ideal machines (say 5 years from now) to have been able to react in that situation?

It's not as obvious as people here are pretending it is.

Yet even then, we now have a previously unknown failure mode to test our machines against, to prevent it from happening again. Given that a human would not have seen this woman in time 99%+ of the time, I believe we'll at a very minimum be better off as a society as a result of this - as wrong as that sounds - because it's now a high-priority dataset, not just a sad story in the local news (if even that) that we'll forget about tomorrow, as it would be with a human driver.


> Given a human would 99%+ of the time not have seen this woman in time

I'm far from convinced that a human would not have seen this woman in time.

See all the comments in this thread about how the dashcam footage is much worse than reality, and even one person who drives that road regularly saying it's not that bad visibility-wise.

I think if I had seen that lady slowly walking her bike onto the road in my adjacent lane, I would have slowed down for sure. And from seeing my own nighttime dashcam videos, I think I would have seen her. She's the only object nearby, on a fairly straight road with no adverse weather conditions. I would have seen someone pushing a bike onto the next lane.

Maybe I would have hit her still, but I would have slowed down for sure.


So the speed limit on Mill Avenue, where the crash took place is 35 mph. The uber was traveling at 40 mph. The reason Mill’s speed limit is 35 instead of 45 (like most Arizona’s major roads) is because it’s got much heavier pedestrian traffic than typical.

If an autonomous vehicle cannot detect pedestrians crossing a slower-than-typical road with enough time to at least not kill them, it shouldn't be on the road. If that means Uber can't drive autonomously at night, too bad for them.

To be fair, the law currently is very permissive to drivers, and a human may not have been deemed at fault. Despite going 40 in a 35 zone, when (due to reduced visibility) they actually should have been going 25. You are supposed to go only as fast as you can stop, given current visibility. Regardless of the speed limit.


Usually it is the law that you are only allowed to drive as fast as the conditions allow (e.g. CA's "Basic Speed Law".) That includes being able to see obstacles in your path early enough to be able to do something about them.

If you can't see far enough to be able to avoid something in the road, you're simply going too fast. That should apply to machines, but it already applies to humans.


What are you saying exactly, that a human driver would be at fault here given the video evidence?

I believe it's entirely possible for a robot to solve this problem with proper Radar and maybe LiDAR going forward. But I would be extremely skeptical about anyone claiming a human would have been at fault...


If the statement is that "a human would not have been able to stop in time either", then yes, by the letter of the law, a human would have been at fault.

If you can't see where you're going, you need to slow down. Does that seem so unreasonable?


You're simply wrong. Drivers aren't required to drive slow enough to avoid every possible illegal or unexpected natural obstruction.

http://www.zacharlawblog.com/2015/11/whos-at-fault-in-an-acc...

> So, should an accident occur between a jaywalker and a car---if shown that the driver could have/should have seen the person and could have/should have been able to avoid, then without question the driver can be held responsible.

https://www.nicoletlaw.com/blog/2015/07/what-happens-if-a-ja...

> To a large degree, it comes down to the driver's ability to avoid the accident. If a jaywalker steps right out into the car's path and is instantly hit, the driver will usually not be held responsible. It will be determined that the pedestrian caused the accident.

> However, if the jaywalker strolls into the street a few hundred yards ahead of the car and the driver does not slow down or swerve, the driver could be held responsible. Even though jaywalking is illegal, drivers are expected to take reasonable action to avoid crashes when they can, even if they feel they have the right of way.

> Negligence also comes into play if the driver should have seen the pedestrian but did not. For instance, a driver who is texting and driving may look away from the road and not see someone step into the street, hitting them with the car. The driver could argue that the road was clear and that the person shouldn't have been there. While that may be true, he or she could still face charges.


I never said "every possible illegal or unexpected obstruction". What I said was "if you drive so fast you can't avoid something blocking your lane by the time you see it, you're going too fast." Your quote, in fact, confirms what I said:

However, if the jaywalker strolls into the street a few hundred yards ahead of the car and the driver does not slow down or swerve, the driver could be held responsible.

This was clearly not a case of "someone stepping out right in front of the car", since they were more than halfway across the rightmost lane, walking a bicycle.

Edit: This rule is merely a variation on the universally accepted one that says that if you rear-end someone in another vehicle, you're almost universally at fault (unless it can be proven that they acted in such a way that the collision was unavoidable.) The logic being that if you could not avoid a collision, you were going too fast for the distance you had to the vehicles in front of you.

Are you suggesting that drivers should be less liable for running into stationary objects than they are for running into other vehicles? That seems absurd to me.


I think your response time calculations are wildly wrong.

The LIDAR sensor[1] being used here can pick up targets up to 120m away. I'm not sure about the RADAR or vision systems also in place, but even LIDAR alone should have been able to easily pick out the pedestrian with plenty of time to come to a full stop.

This is clearly poorly designed autonomous driving software, not a sensory deficiency.


I watched it again and you might be right: there was likely time for something to happen, given the available reaction time going 40 mph on a highway. I'm curious what that something translates to and what effect it could have had on this situation (i.e., swerving, slowing by x mph, etc.).


Because you didn't do the math yourself, 40 miles per hour is just shy of 18 meters per second. So 120 meters is almost 6.7 seconds at that constant speed (more if you're slowing down). Start of video to collision is less than 5 seconds.
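If you want to sanity-check that arithmetic yourself, here's a minimal sketch in Python (the only inputs are the 40 mph speed and the 120 m sensor range quoted above):

    # Rough sanity check: speed in m/s, and time to cover the quoted
    # 120 m LIDAR range at a constant 40 mph.

    MPH_TO_MS = 0.44704               # metres per second in one mile per hour

    speed_mph = 40
    speed_ms = speed_mph * MPH_TO_MS          # ~17.9 m/s, "just shy of 18"

    lidar_range_m = 120
    time_to_cover = lidar_range_m / speed_ms  # ~6.7 s at constant speed

    print(f"{speed_ms:.1f} m/s, {time_to_cover:.1f} s to cover {lidar_range_m} m")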


That should give quite a bit of time to slow down, or at least move slightly out of the way, even if the person wasn't detected for the full 5s+ of the video, and factoring in computation and mechanical response times. Although again this is speculation, as I'm not intimately familiar with how the object detection works, what a bike with plastic bags may have looked like crossing the road, or what options were available at that speed in that environment. Thanks.


The braking distance at 80 mph for a modern car is about 320 feet (just under 100m), so the car should have been able to come to a complete stop if the software had been using the LIDAR correctly.


This isn’t a highway. Mill Avenue is one of Tempe’s most pedestrian-trafficked major roads. And it has a slow 35mph speed limit, and is well lit (much better lit than the dash cam shows), because so many pedestrians cross it.


40mph is actually right at the inflection point where survivability changes dramatically. At 40mph it's about a 50/50 chance of a car crash with a pedestrian causing a fatality. At 35mph the chances of a fatality go down to 1 in 3, at 30mph it goes down to 1 in 5. At a little under 20mph it's 1 in 10.


Okay, I've heard that 40mph figure plenty of times here when AI cars come up, so is it possible the car could have slowed down by 10mph? I only suggested 10km/hr as an ideal maximum (aka 35mph) given better technology, not as a baseline for today...

I've also read that the last speed sign was 45mph before this accident. I used 40mph as it was between the 35mph sign that was coming up before the hit and the 45mph one before it.


Braking time is about 4 seconds from 40 mph (average braking acceleration is a bit under half a gee for ordinary cars), which means every second of braking is a roughly 10 mph reduction in speed.


Per your own data, 30mph is the "inflection point". 30mph to 40mph cuts survivability by 40%.


> I watched the video a bunch of times and I don't see how the vehicle could have slowed down within the 700ms max where she would have been visible to a LIDAR, to make much of a difference.

There's also a steering wheel. (Why is everybody here missing this?!) I could totally see the car having steered out of the way.


In order for steering to be a collision avoidance strategy, the system would have to risk the life of the driver. It may be fairly easy to recognise an obstruction on the road, but determining that it's human is much harder. If a plastic bag blowing across the road could kill the driver, or other drivers nearby, that would be a very unpopular solution.


They hit the person with the right side of their car, so even a small move left could have avoided this accident. Given that it was a divided highway with an empty lane to their left, moving half a lane over would have been a reasonable response.

Of course, the Uber vehicle did not take any action at all. It doesn't seem to have ever realized that there was a solid object in front of it. Without that, collision avoidance is impossible.


Veering off the road to avoid a collision is not necessarily the safer option.


Well, the technology appears to be there; the second one in the sequence is a swerve:

https://www.youtube.com/watch?v=--xITOqlBCM

Also check out the one at 1:06 - the driver reacts slightly before the autopilot does, but the dashcam is still pitch black.


Indeed! To LIDAR, she was basically standing in the lane, with a bulky bicycle. To visible light, including the driver, who was apparently half asleep or watching the dashboard, she was in shadow until just before the collision.

So yes, LIDAR should have caught this. Easily. So something was clearly misconfigured. And even if the driver had been carefully watching the road, he probably wouldn't have seen her in time.

But I wonder, is there a LIDAR view on the dashboard?


Right, so assuming LIDAR caught her: I'd imagine the algorithm presumed that she wouldn't cross the center line until she actually did. I don't know what the algorithm would have done after that.


Presumably the algorithm had a pretty good idea of where the lanes were, and if the LIDAR detected a non-moving object in an adjacent lane and decided it was fine to ignore it because it presumed it was not going to start moving, that's a pretty broken algorithm.

I don't have the link handy, but I was reading a webpage yesterday (related, but not about this crash) which showed Google's self-driving car's "view" of a road scene - it clearly painted different colored boxes and identified pedestrians, bicycles, and other cars - along with "fences" where it had determined it would need to slow or stop based on all those objects.

Either Uber's gear is _way_ less sophisticated (to the point of being too dangerous to use in public), it was faulty (but being used anyway, either because its self test is also faulty, or because the driver/company ignored fault warnings) - or _perhaps_ Google's marketing material is faked and _everybody's_ self driving tech is inadequate?


> Either Uber's gear is _way_ less sophisticated (to the point of being too dangerous to use in public)

I think this is a very real possibility, considering that autonomous vehicles are the goal of the company and they're racing to get there before they run out of investment money. They have a lot of incentive to take shortcuts or outright lie about their progress.


The last video of uber's system view i'm aware of is https://youtu.be/4CK0RhM-fos

(2016)


Looks like a Velodyne 64-based laser. It is virtually impossible for those to not be able to see the bicycle well in advance. Uber had a serious issue here. Something like:

1. System was off

2. Point clouds were not being registered correctly (at all!)

3. It was actually in manual mode -- safety driver didn't realize or didn't react fast enough.

4. Planning module failed

5. Worst outcome in my opinion: Point cloud registered correctly, obstacle map generated correctly, system was on, planner spit out a path, but the path took them through the bicycle.


The LIDAR data looks pretty noisy, especially for distant objects. Couldn't it have filtered out the pedestrian, thinking she was a bush or something like that?


I get your concern, but I would probably reserve the word inadequate. If this is the only situation you have to worry about a self-driving car hitting and killing you in, and it's the only known data point at this time, some may consider that much more than adequate.


Yeah - _maybe_.

A website that "does something weird" when you use a single quote in your password... That _could_ be "the only situation you have to worry about". It is _way_ more often a sign of at least the whole category of SQLi bugs, and likely indicative that the devs are not aware of _any_ of the other categories of errors from the OWASP top 10 lists, and you should soon expect to find XSS, CSRF, insecure deserialisation, and pretty much every other common web security error.

If you had to bet on it - would you bet this incident is more likely to be indicative of a "person pushing a bicycle in the dark" bug, or that there's a whole category of "person with an object is not reliably recognised as a person" or "two recognised objects (bicycle and person) not in an expected place or moving in an expected fashion for either of them - gets ignored" bug?

And how much do you want to bet it's all being categorised by machine learning, so the people who built it can't even tell which kind of bug it is, or how it got it wrong, so they'll just add a few hundred bits of video of "people pushing bikes" data to the training set and a dozen or so of them to the testing set and say "we've fixed it!"


If this is the only data point then uber self driving cars are about 50 times more dangerous than average human drivers (see numbers quoted repeatedly elsewhere; uber has driven about 2 megamiles; human average is 100 megamiles between fatalities)

If that's your idea of adequate, you'd be safer just vowing to get drunk every time you drive from now on, since a modest BAC increases accident rates, but not by a factor of FIFTY!



I really don't bundle Tesla in with Waymo, Lyft, Toyota, and Uber, which are trying to build ground-up self-driving cars. Is Tesla actively testing self-driving cars on public roads yet? Are their included sensors even up to the task? I didn't think they even had LiDAR?


True, but this seems to be a simple case of reacting to a person who steps in front of the car. Automatic braking technology exists on even cars that aren't considered "self-driving".


Tesla does automatic lane-change and automatic braking.


If the last possibility is the case, there would likely have been a lot more accidents by now.


Unless the human drivers have been a lot more active than we've been led to believe, or perhaps they don't drive as much at night.


It’s that last possibility that’s horrifying above all others. The backlash either way is going to be terrible, but if these cars are just not up to the task at all, and have driven millions of miles on public roads... people will lose their minds. Self-driving tech will be banned for a very long time, public trust will go with it, and I can’t imagine when it would return.

This is going to sound bad, but I hope this is just Uber’s usual criminal incompetence and dishonesty, and not a broader problem with the technology. Of the possible outcomes, that would be the least awful. If it’s just Uber moving fast and killing someone, they’re done (no loss there), but the underlying technology has a future in our lifetimes. If not...


Waymo actively test edge cases like this both in their test environments in the desert and via simulation, they have teams dedicated to coming up with weird edge situations like this (pushed bicycle) where the system does not respond appropriately so that it can be improved. All of these situations are kept and built up into a suite of regression tests. https://www.theatlantic.com/technology/archive/2017/08/insid...


"I hope this is just Uber’s usual criminal incompetence and dishonesty"

I, for one, certainly won't be betting against you there...


Like the AI Winter.


Not "center line", because this is a divided highway. So she had to cross two lanes from the median, in order to step in front of the Uber. "Human-sized object in the roadway" should have been enough to trigger braking, even if the trajectory wasn't clear.


Anything that is tracking an object moving on the road should be looking at the velocity of the scanned object as well as keeping track of some sort of difference from normal. I would think the car should know it's on a two lane one way road, realized an object was moving in one lane with some sort of velocity towards the path of the vehicle, and that perhaps something was not normal.
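As a rough illustration of the kind of check being described - every name and threshold here is invented for the example, not taken from Uber's stack - a tracker that keeps a per-object lateral velocity can flag anything whose path would enter the vehicle's lane within the planning horizon:

    from dataclasses import dataclass

    @dataclass
    class Track:
        """A tracked object: lateral offset from our lane centre (m) and
        lateral velocity towards it (m/s). Illustrative only."""
        lateral_offset_m: float
        lateral_velocity_ms: float   # positive = moving towards our lane

    def crosses_our_path(track: Track, horizon_s: float = 5.0,
                         lane_half_width_m: float = 1.8) -> bool:
        """Flag the object if, at its current lateral velocity, it would
        reach our lane within the planning horizon."""
        if track.lateral_velocity_ms <= 0:
            return False  # moving away, or not moving laterally at all
        time_to_lane = (track.lateral_offset_m - lane_half_width_m) / track.lateral_velocity_ms
        return time_to_lane <= horizon_s

    # A pedestrian walking a bike across an adjacent lane at ~1.4 m/s,
    # starting 5 m to our left, gets flagged with seconds to spare:
    print(crosses_our_path(Track(lateral_offset_m=5.0, lateral_velocity_ms=1.4)))  # True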

From the reports of cars running red lights, and then this, I would imagine the level of "risk" they accept (what it takes for the car to take action in order to avoid something or stop) is extremely high.

What would be far worse than a hardware or sensor failure would be to learn that Uber is instead teaching its cabs to fly through the streets with abandon. Instead of having cars that drive like a nice, thoughtful citizen, we'll have a bunch of vehicles zooming through the streets like a pissed-off cabby in Russia.


> who was apparently half asleep or watching the dashboard

It is possible that a screen provided a clearer (somehow enhanced) view of the road, so I'm reserving judgment for now.

Of course using that screen could be a grave error if the screen relied on sensors that missed the victim. But if it appeared to be better than looking out of the windshield then that points to a process problem and not necessarily a safety driver inattention one.


He startles just before the collision, so anything he was watching on the dashboard arguably showed no more than the video that was released. But maybe the video camera had poor sensitivity at low light, and the driver could have seen her sooner, looking out of the windshield.


I'm not so sure that's just before the collision. The driver claimed that he didn't notice the pedestrian until he heard/felt the collision and it's not like the car hit a large object. I'm not convinced that he startled before the car hit the pedestrian.


Probably the LIDAR did catch it. Probably the algorithm (neural network) that takes in LIDAR data and outputs whether there is an object in front failed, or gave a really low probability that fell below the specified threshold. This happens all the time with deep neural networks. In any case the technology is to blame, and self-driving should be banned until these issues are resolved.
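To make the threshold point concrete (a generic sketch of how perception pipelines gate detections on confidence, not Uber's actual code), anything scoring just under the cut-off is dropped exactly as if the sensor had seen nothing:

    # Generic confidence gating, as used by many perception stacks.
    # The threshold and scores below are invented for illustration.

    DETECTION_THRESHOLD = 0.6

    detections = [
        {"label": "vehicle",    "score": 0.93},
        {"label": "pedestrian", "score": 0.55},   # real object, low score
        {"label": "unknown",    "score": 0.12},
    ]

    # Everything below the threshold is discarded before planning ever sees it.
    kept = [d for d in detections if d["score"] >= DETECTION_THRESHOLD]
    print(kept)   # only the vehicle survives; the pedestrian is silently dropped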


> self driving should be banned until these issues are resolved

_Uber's_ self driving trials should be banned, since they don't seem to exercise enough caution. That shouldn't hold the progress of competitors back.


LIDAR view on dashboard! Uber, if you are reading, take heed.


How would this have helped at 40MPH? The user would have milliseconds to react and hit the brakes. The point is that the car is self-driving. If a user has to watch a video display and intervene for every edge case it's more dangerous than just driving yourself.


From the released video, if LIDAR was including the entire roadway (all three lanes) there would apparently have been at least four seconds warning.

In production, having a LIDAR display would be pointless. But for testing, it might be useful. But maybe better would be to tell drivers to keep their eyes on the road.


Even an extra visual camera or hdr setup should easily have caught this. Or a $20 webcam with IR for that matter.


> Yet this crash proves that all the equipment in the world didn't catch a very obvious obstruction.

Seems more likely that it's a software problem. Especially given the rest of Uber's behavior, I wouldn't be surprised if they're aggressively shipping incomplete/buggy software in the name of catching up to more careful competitors like Waymo.


The video here is misleading. A human driver has a much wider field of view and better low-light vision than this video renders the situation. That's not to say that this would have been prevented by an attentive driver. But it's also clear that the safety driver was not paying attention, so it's even harder to know.


Humans have amazing low-light adaptation, but not if the human is blinded by/staring at a bright cell phone screen, as seen in this video.


It isn't at all clear from the video. Video cameras of this quality vastly underperform human vision in low-light conditions.


Like parent said, it's low-light in the visible spectrum, but I'd totally expect these vehicles to scatter tons of light in non-visible spectrums, making these conditions well-lit in those spectrums. Like a bat using echolocation.


I think the point GP was making is that an attentive human driver may have been able to see the pedestrian much earlier than when she becomes visible on the video.


This; the human visual system adapts to darkness. Consider that the victim, who was obviously crossing the street as part of her routine, had likely done this many times before, and that of all the vehicles that could have hit her, the one that did happened to be one of Uber's self-driving vehicles with a clearly inattentive driver behind the wheel. A driver paying attention to the road would have at least hit the brakes well before impact.


Intermittent street lights reduce that adaptation, though, and add glare. The human visual range for scotopic ('dark adapted') and mesopic (twilight) vision spans about 4 orders of magnitude of luminance (cd/m²) that the retina can perceive simultaneously, from zero to saturation, without adaptation (pupil dilation). Dark-to-light adaptation is very rapid and happens in fractions of a second; light-to-dark adaptation happens over minutes.

The eye adapts to the mean light level across the larger FOV (not the fovea only) - that is why instrument clusters in cars need to be lit at a low level, so as not to disturb this adaptation. Exterior light sources like headlights and street lights further influence adaptation, and veiling glare can let bright sources overshadow smaller luminance signals, pushing them out of the range the eye is adapted for.


Don't forget compression artifacts. A dark object against a dark background in a highly compressed video is going to get compressed into just looking like the background.


I downloaded the video directly and it looks like it's 133 kbps for the video data at about 360p, which is just abysmally low. So it's not surprising that it's difficult to pick out detail in such a contrasty scene given the degree of compression and the resolution.


I don't think modern digital sensors actually underperform human vision, or at least it's not obvious. They have made huge improvements in the last decade.

Also, when a digital camera records an image, a gamma curve is applied to it before display, which compensates for a bias against the darker portions that the digital equipment does not inherently have.

Considering the streetlights, I cannot imagine any excuse. This video will sadly give them the benefit of public doubt but anyone familiar with lighting digital video will be unconvinced that the video feed was the culprit.


If I take a photo with my iPhone at night, the image I see on the screen comes out way darker than what I saw with my eyes. That has to be corrected for, no?


Yes, but if you take a photo with a Sony A7s it will vastly outperform your eye.


No. The term we need to introduce here is "dynamic range". It is pointless to say "the Sony A7s can vastly outperform the eye" when it could simply have been set for a low-light exposure. The human eye has an amazing dynamic range - I don't know the exact number today, but the last time I checked was about 3 years ago, and the cameras at the time (D800) were not even close to human dynamic range.


That's because you're comparing a single exposure from a digital camera. You can have dynamic range far in excess of the human eye with HDR techniques, by combining different exposures and/or by exposing different pixels on the same image differently.
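For what it's worth, the basic bracketing idea is simple enough to sketch (a toy merge in NumPy under the assumption of a linear sensor response; real HDR pipelines also handle response curves and alignment):

    import numpy as np

    def merge_exposures(frames, exposure_times):
        """Toy HDR merge: convert each bracketed frame to an estimate of scene
        radiance (pixel value / exposure time) and average the estimates,
        weighting mid-tone pixels most, since they are neither clipped nor noisy."""
        frames = [np.asarray(f, dtype=np.float64) for f in frames]
        radiance_sum = np.zeros_like(frames[0])
        weight_sum = np.zeros_like(frames[0])
        for frame, t in zip(frames, exposure_times):
            w = 1.0 - np.abs(frame / 255.0 - 0.5) * 2.0   # "hat" weighting, 0..1
            radiance_sum += w * (frame / t)
            weight_sum += w
        return radiance_sum / np.maximum(weight_sum, 1e-6)

    # Short exposure keeps the street lights from clipping; long exposure
    # pulls detail out of the shadows between them.
    short = np.array([[250.0, 40.0]])    # 1/500 s frame
    long_ = np.array([[255.0, 180.0]])   # 1/30 s frame
    print(merge_exposures([short, long_], [1/500, 1/30]))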


>> Yet this crash proves that all the equipment in the world didn't catch a very obvious obstruction.

A human is not an "obstruction", dammit. I mean, literally - it's not like hitting a wall. The driver's life will never be in danger and the car may not even be significantly damaged. There's a very special reason why we want self-driving cars to avoid humans, which has nothing to do with the reason we want them to avoid obstacles. And because this special reason is very, very special indeed, we need much better guarantees that self-driving vehicle AI is extremely good at avoiding collisions with humans than we do for anything else.


In pitch darkness, IR cameras can only see more than a visual light camera if you somehow had IR headlights that were more powerful than the visual headlights (they probably don't; that would blind other AVs just like high beams blind other drivers). It doesn't grant you the ability to see in the dark with infinite range. Lidar can sense shapes with more range, but at the cost of dramatically worse resolution and latency. It's conceivable the radar/lidar sensor caught the person in the left lane with a bike and decided that was a reasonable place for a person with a bike to be, then lost track of the person while she walked into the right lane (where the visual/IR system could not see her yet).

It's also entirely possible there was an egregious bug. This video doesn't really tell us much.


Humans radiate in IR. A midrange FLIR can pick out a human against a cold background from miles away.


Long IR. No one is using that for AVs. (that might change after this incident!)


This is Arizona, however. The nights do cool down, but the concrete will still store a lot of heat.


A garden-variety uncooled LWIR camera from FLIR can see a difference of 0.05 degree. As long as the pedestrian isn’t wearing a thermal blanket from head to toe he/she can be seen.


Regardless, a human and metal bike should be very easily discernible with FLIR of adequate resolution.


Human beings radiate IR do they not? Silly question, of course they do. Perhaps not above an Arizona level of thermal background?


Thermal imaging and near-infrared imaging are not the same. Waymo/Uber/etc are not using thermography for their autonomous vehicles.


We see a nIR video of the driver. Those weird eyes everyone seems to have are the tell-tale sign of nIR as it interacts with the retina. So, we know that nIR sensors are being used in the interior, at least.

That said, Arizona in the summer is going to play havoc with lIR and thermography in terms of false positives and negatives. The sensor suite probably should be using lIR at night for this reason and then switching it off during the day. But given Uber's history, the lack of lIR reeks of cost-cutting.


I'm not an expert in IR or computer vision by any means, so take this with a grain of salt.

Air has such a low thermal mass that it doesn't measurably affect most IR sensors. Hot pavement could be a potential issue, but that shouldn't have a major effect on forward-facing sensors.

Besides, it's only March. Even Arizona isn't that hot yet.


> In pitch darkness, IR cameras can only see more than a visual light camera if you somehow had IR headlights

Did you ever use a thermal IR camera? What you're saying only applies to cheap "IR" CCDs (the ones you find in home security gear), not the FLIR/military-grade stuff.


Either way, the LIDAR must have seen her, and it would have been emitting its own laser light.


The lady was walking the bike at the edge of a light cone. The interior cam was actually filming in IR. Uber cars have IR cameras, LIDAR, and also multiple radar sensors.

My guess is that the algorithms have never met a person crossing the street with a bicycle during night time so they just ignored it or considered it to be a glitch.

You can take two approaches to labeling driving situations: either you label with positive tags the situations where the car needs to react, or you label with positive tags the normal situations where the car does nothing.

Depending on the approach, you can have a car that kills pedestrians who appear in weird circumstances. I also bet a pedestrian who ducks down in the middle of a lane would 100% be killed by a car. Or two people having sex while standing in the middle of a lane.

In the other situation, you have cars avoiding invisible obstacles that appear due to aberrations from the sensors (which are far from perfect).


Which is exactly the reason this is nowhere near production: "kinda works, sort of, in trivial cases" is not quite the SDV promise I keep hearing: "my car will also go reasonably straight when I let go of the wheel on a wide, straight stretch of the road, and won't even run over anyone if there is nobody on the road. Therefore, autonomous driving!"


You mean like the Mercedes system from 2007 - https://www.youtube.com/watch?v=l6DMidknVCo ? I agree that IR should have caught the bicyclist earlier, and that it didn't suggests that these cars might also kill quite a few deer with a similar low light profile. (or this one from Cadillac : https://www.youtube.com/watch?v=SPIpNG6mLCY)


You either kill a deer or get killed by a moose... at highway speed.


Indeed, if they can't spot moose at these speeds, it's a non-starter in parts of the country.


Arizona, and Tempe especially, has lots of darker roads. The LIDAR/computer vision tuning at night doesn't seem right. Maybe it was adjusting for the changing street light brightness, but yes, this is one such situation, and the vehicle had just come from the Mill Avenue bridge, which has festive lights strung along it [1].

This is a situation where the LIDAR should clearly have done better than it did. Maybe it was in a strange state after having seen all the lights and then complete darkness. It looks like they were headed north on Mill Ave over the bridge [2], just past the 202, where it is indeed very dark at night - probably the spot right here [3], which matches up with the building in the background (the other way is south, which is busy/urban by ASU). They had just crossed a lit-up bridge, then a dark underpass, then entered this area [3]. The area where it happened [3] does have bike lanes, sidewalks, and a crossing sidewalk close by [4], but it is by a turn-out, so it's not a legal crossing; however, there are lots of trails through there.

This video is worse than expected by far and may be forever harmful to the Uber brand in terms of software.

In AZ I usually see the self-driving cars out in the day, maybe there is lots of night tuning/work to do yet.

[1] https://i.imgur.com/kwxjW36.jpg

[2] https://goo.gl/maps/ey1RA47tKBJ2

[3] https://goo.gl/maps/gpugzAZKxcS2

[4] https://goo.gl/maps/Ni18GfjMP962


What the heck are those trails doing in the median if it's not a legal crosswalk?


I've been staring at those myself for a couple of hours. My best guess is the crosswalk was moved, but the paths were left. There's a lake nearby and tons of parking under the overpasses, with trails and picnic areas.

Also, the woman was right under a working street lamp. And as was stated in an earlier article the car continued on at 38 mph after the accident. The bike ended up 50 yards down the street.

EDIT: "That spot is east of the second, western-side Mill Avenue bridge that is restricted to southbound traffic, and east of the Marquee Theatre and a parking lot for the Tempe Town Lake. It can be a popular area for pedestrians, especially concertgoers, joggers, and lake visitors. Mid-street crossing is common there, and a walkway in the median between the two one-way roads across the two bridges probably encourages the practice."

"Pedestrians can cross a street without using a crosswalk in many instances without risking a jaywalking ticket, but Arizona law requires pedestrians not using a crosswalk to yield to traffic in the road."

http://www.phoenixnewtimes.com/news/cops-uber-self-driving-c...


There's a saying "a sign is not a wall." You can make dangerous things illegal all you want (such as "not overdriving your headlights," hint hint), but it won't stop people doing that. "But that person was not supposed to be there" is a rather weak excuse for manslaughter.


Lots of parks around and trails that do go across the road, it is an odd area.

If you zoom out on Google Maps you will see some of the trails. Note the sidewalk/pathway: the area is not meant for pedestrian crossing but has paths for them, so it sends mixed signals.


The pedestrian is a lot more obvious to the eye than I suspected, and it's actually quite shocking. They are correct to stop all road tests until they have investigated why they are missing this.


It strikes me as extremely disingenuous if this is all Uber gave to the police. They should be making as much raw data as possible available. At the very least it'd let other companies test their AIs against the scenario and see if they would catch sight of, and be able to avoid, the pedestrian; if not, then this is one more data point to train them on so it doesn't happen again.


The NTSB is also involved and I'm sure they'll have more than just the video to go by.


I hope so, but I'm not certain what sorts of legal precedents could be leveraged here. Uber, for instance, might try and avoid sharing non-visual recording data on the basis that it's proprietary information. IANAL but I'm very curious if companies can be compelled to share proprietary formats and tools for examining those formats or translating them into non-proprietary formats (which... is that even a thing legally speaking?) in a case like this.

For instance, if law enforcement had testimony and other warrant-enabling evidence indicating that a user had stored some vital secret plan in a password field, what could the government compel a company to do? Assume the disk it relies on is also encrypted, for extra fun:

1. Hand over the physical disk

2. Hand over the disk image

3. Hand over the decrypted disk image

4. Hand over the unobfuscated (enc or hashed) string of interest from the decrypted disk image

5. Compel the company to decrypt the string if it was encrypted with a common algorithm (i.e. AES)

6. Compel the company to decrypt the string if it was encrypted in a proprietary manner (i.e. in-house custom encryption)

7. Compel the company to devote resources (how much?) to brute force a one-way common hashed string (i.e. bcrypt)

8. Compel the company to discover a hash salt assuming the company doesn't store it locally but may be able to procure it from the user to do the above.

9. 7 & 8 if the one-way hashing algorithm is proprietary (and weak) and the company raises objections that the process of breaking this string will reveal key components of how the algorithm works (i.e. the hash is just md5(string) XOR "IMMA SECRET_STRING")

10. 7 & 8 if the proprietary algorithm is not weak but the company raises objections over trade secrets for other reasons.


The legalities are beyond me, but the core principle seems pretty simple: if Uber isn't willing to cooperate fully with the NTSB to make autonomous cars safe drivers, then Uber doesn't get to make autonomous cars. Full stop.


I'm convinced 99-100% that the Automatic Emergency Braking (AEB) on my Tesla would have braked for that. The promise of these systems, as you also point out, is that they can see things humans can't. The "real" cameras on this car (not this dashcam footage), and the LIDAR, should be fine with it being near pitch black.


I can almost guarantee every variation of the above accident is being tested right now by Waymo, Tesla, et al.


Yeah, releasing the sensor data beyond the human-visible spectrum would be much more informative about whether a better-designed AV would have dealt with this better.

I'm glad it was not me driving down that road that night, I don't think I could have prevented it.


I think you sell your driving skills very short here. Assuming you have normal eyesight, you would have likely a 10-fold higher dynamic range than the visible camera footage shown. The gap between streetlights would have been easily discernible with your eyes, unlike in the camera footage.


See, I wonder about that. In compromised conditions, (many) drivers will drive slower and be more cautious. Perhaps computers need to be given a sense of fallibility? A computer can sense low-light conditions and drive even more cautiously as a result.


It's pretty clear to me (from the second half of the video) the driver was looking down at her phone and glancing up at the road periodically. IMO if she had been focusing on the road, she would have at least started braking before hitting the pedestrian. Or perhaps actually stopped before that happened.


My more generous interpretation is that they were looking at the computer screen where the car shows its interpretation of the situation, people tend to lift their phone towards their face.


This is only a fig-leaf of a driver (for catching legal flak by sitting in the front left seat), not actually operating the vehicle at all. This part was inevitable, given the unbounded technooptimism.


I agree, that's a textbook case for the non-visual-spectrum sensors. It's possible that LIDAR DID catch it, but the avoidance logic decided to continue forward. For example, if it decided a collision was impossible to avoid, swerving might make things worse. Also, it's possible the logic thought the timing was such that the bike would pass after the car crossed where the bike was going, so slowing down would actually cause a collision.


The lady can be seen fairly clearly (even in this poor-quality video) at 0:03, and impact occurs at 0:04. That's 1s, which means a distance of approx 17m. If the guy was watching (sort of the point of him being there, really), he could have slowed the car significantly and probably even stopped it. These are test vehicles being treated like prod vehicles. They should probably not be on streets with pedestrians quite so soon.


You're massively underestimating human response time to visual cues and also the distance needed to stop a vehicle traveling at 35-40mph. It takes a full quarter of a second to respond to a visual stimulus on average, and more than that to also move your foot and depress a brake pedal. By that time the car was less than 50 feet from the pedestrian. At 40mph braking distance is about 80 feet in good conditions. There is absolutely no way a human driver could have avoided this accident assuming the same visual distance and dynamic range as the camera. Best case, the car may have slowed down a bit before impact.
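Rough arithmetic behind that, in Python (the 0.75 s total reaction time and 0.7 g deceleration are my own assumed round figures, not from the comment above):

    # Rough stopping-distance check at 40 mph: reaction distance plus
    # braking distance.

    MPH_TO_MS = 0.44704
    G = 9.81

    speed_ms = 40 * MPH_TO_MS        # ~17.9 m/s
    reaction_s = 0.75                # perceive + move foot + press pedal (assumed)
    decel = 0.7 * G                  # good dry-road braking, ~0.7 g (assumed)

    reaction_dist = speed_ms * reaction_s          # ~13.4 m (~44 ft)
    braking_dist = speed_ms ** 2 / (2 * decel)     # ~23.3 m (~76 ft)
    total_ft = (reaction_dist + braking_dist) / 0.3048

    print(f"total stopping distance ~ {total_ft:.0f} ft")   # ~120 ft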


You absolutely don't need to brake in this situation. There are 5 lanes: you release the gas to give the pedestrian more time to finish crossing, and if it's a bit short you move to the next lane over, aiming behind the pedestrian. That's what everybody did in France; now that I am in Arizona, the drivers are just murderous towards pedestrians - they don't release the gas or move to aim behind them, like it's perfectly OK to kill someone. (Don't get me started on right-on-red and exiting from drive-ins; I think I will get killed on a sidewalk here.)


Swerving is almost always the wrong thing to do from a safety perspective. This situation is identical to a deer in the road at night, a situation in which most traffic safety experts advise hitting the horn and brakes, but not swerving (note that human drivers hit over a million deer in the US every year, despite supposedly being alert and able to see better than cameras at night). You don't have time for a mirror check to see if there's a car next to you, the shoulder might not be safe, and swerving at speed is an excellent way to lose control of your vehicle entirely. There's also a bike perpendicular to the lane, so you would have to swerve way more than just enough to get around a person.


For a deer, swerving is the wrong thing to do, because a deer's life does not matter, so it's worth it to hit the deer.

With a person, though, you are seeking to protect everyone, so the tradeoff swaps in favor of swerving, because the person in the car next to you is far more likely to survive a collision.


A human would have had significantly longer than this video suggests. It's not that dark there at night (not as dark as the camera would lead you to believe). Swerving distance to avoid death is well within possibility. Aside from that, the possibility of death was further exacerbated by the vehicle not reacting to the impact.


Or swerved perhaps? I think the Uber employee developed a false sense of security with respect to the car's capabilities.


Regarding how the LIDAR did not catch that, there are 4 possibilities I can think of:

1. A software bug failed to recognize the obstacles, or misclassified them, or it fell below some probability threshold.

2. LIDAR didn’t work at the time, and the car did not shutdown.

3. The victim's clothing absorbs the LIDAR's wavelength pretty much completely, such that it appeared as a "black hole" and was ignored by the algorithm, since this occurs commonly. Unlikely though, since the bike itself would surely have registered?

4. It's hard to see on the video, but is the car going up a slope? In that case, if the LIDAR didn't look up far enough, it could have failed to see the victim for optical reasons.


For years, I've been told by AV technooptimists that these are all "never happens" scenarios. I have a feeling we have a Therac-class failure here.


Another option (related to your #1) could be disagreement between the visual field cameras and the LIDAR. Which could result in a lower confidence of an object being a pedestrian.


4) That portion in particular is a slight down slope


It seems like this perspective may come from the idea that processing camera input is a formality. But the best estimates of the practical computing power of the brain are based on its visual processing capacity, because we know that's a hard problem. CAPTCHAs all depend on humans' ability to process images semantically faster than a computer (spambot). While it probably isn't unsolvable, I don't think it's surprising that this is consistently a challenge.


> Yet this crash proves that all the equipment in the world didn't catch a very obvious obstruction.

It's a bit too early to make that conclusion. For all we know, the equipment was malfunctioning. Which I guess technically leads to your point, but we'll have to wait for the investigation to actually know what failed vs. what met expectations (I worry that expectations and tolerances, as set by the car companies, will be revealed to not be as comfortable as we might assume).


It was also a bit too early for the police to release a statement less than 24 hours after the incident saying that it appears that Uber/the car/observer was not at fault.


Jaywalkers are at fault in Arizona in the case of an accident, so it doesn't seem too early.

On the subject: a lady I used to know hit someone who ran out in front of her and started freaking out (thinking she was in serious trouble) until the police told her, "you're fine, they were jaywalking".


> It's clear from the video that a human driver actually would've had more trouble since the pedestrian showed up in the field of view right before the collision, yet that's in the visible spectrum.

This was taken by a video camera - which has a much lower range of detectable brightness than the human eye. The pitch-black spots in the video are almost certainly not pitch-black if you were to look at them in person.


" ... since the pedestrian showed up in the field of view right before the collision"

Either the woman had just said the words "Beam me down, Scotty" and materialised there like the video footage implies - or she'd been in view for quite some time - at least enough time for a person pushing a bicycle to cross an entire lane. If Uber's tech is only capable of detecting her as she "showed up in the field of view right before the collision", their tech is not fit for purpose and they should be held 100% at fault here. (Not that doing so will help her family or friends, but it might help stop Uber and their competitors from doing it again...)


This car should have slowed down below the speed limit if it cannot safely stop or maneuver upon seeing something enter its field of view.


This is drummed into students during the motorcycle training syllabus here (Sydney, Australia): "Do not ride beyond your field of view. If you can't see beyond a curve, crest, fog, rainstorm, queue of traffic or whatever - make sure you're going slow enough that you can stop before you get to the end of where you can see."

I always explain it to friends starting out: "you need to assume that just around every corner there's a stationary shipping container that's fallen off a truck. If you can't stop in time by the time you see it - it's your fault for going too fast."


Many people don't seem aware that the reduced speeds at curves aren't because your car can't take the curve at that speed (most can) but because you can't tell if there's an obstruction from a sufficient distance.


Also, it should have been using its brights to increase the visible range, to provide backup for the LIDAR that should've caught this.


This does not seem reasonable. Should it drive at < 35 MPH on highways at night? That's probably even more dangerous.


A car should not ever be driven faster than conditions allow. If the driver cannot see (from rain, snow, darkness, etc.), then they need to slow down. To do otherwise is to put people on the roads at severe risk of injury or death.


People shouldn't be "on the roads" (outside of intersections) anymore than cars should be "on the sidewalks".


a) There are people on the roads inside those metal boxes on wheels, y'know. b) "Shouldn't be there" is not a blank cheque for "run them over", at least in the civilized world.


The point is that if an autonomous car can't drive faster than 35 MPH and still detect objects, it shouldn't be on the road.


> It's clear from the video that a human driver actually would've had more trouble since the pedestrian showed up in the field of view right before the collision, yet that's in the visible spectrum.

Actually, a human driver would be expected to have less visual trouble in this case. People's eyes are far more adaptable to low-light conditions than a camera's video. If you've ever tried to take a picture at night with your phone, you've seen this effect.

> When I argue for automated driving (as a casual observer), I tell people about exactly this sort of stuff (a computer can look in 20 places at the same time, a human can't. a computer can see in the dark, a human can't).

Except that the computer did not do that in this case. This car also uses LIDAR and should have noticed the pedestrian long before the accident occurred.

> Yet this crash proves that all the equipment in the world didn't catch a very obvious obstruction.

Either the sensor equipment or the software was defective, otherwise the pedestrian would have been detected.


I'd like to see an Uber SDV drive on Waymo's test tracks (where they have the employees pop all the shit at it). And just see what it does. I'm guessing it will be ridiculous and nightmarish.


In addition to that, even if we were limited to the "last moment", there was about half a second or a second time to react. Correct me if I'm wrong, but that should be enough for the car to at least try something.

Isn't the car supposed to brake to minimise the collision, if the swerving is too dangerous (and it wasn't in this case, as the road wasn't too busy)?


Mercedes has used thermal imaging commercially for years: https://www.youtube.com/watch?v=d03SuJ0TVcY

I would be very surprised if there is no thermal imaging in autonomous vehicles.


The human driver appears to be reading. I know that's a pretty hefty accusation, but I can't shake off the impression.


I would think the car would have been able to at least do something in this situation? It looks like it didn't react at all.


> It's clear from the video that a human driver actually would've had more trouble since the pedestrian showed up in the field of view right before the collision, yet that's in the visible spectrum.

I'm not sure; please look at this pic https://imgur.com/a/VfBck - you can clearly see there are at least 10-15 meters between them right at the time when she pops up. Now I don't know the speed of the car, but I'd wager a human driver (if s/he was alert) would have attempted braking at that moment.


At 38 MPH, the car would cover that distance in 0.7 seconds. That is on the low end of human reaction times for braking, so an average person might not have time.


I don't work on self-driving cars or even vision, but I have heard that when you are on a highway you start by filtering out small stationary reflections, since they are almost certainly not cars. (Which is maybe why the Tesla didn't see the big red firetruck - to the system, it was not visible.) It's not a big leap to imagine that Uber's LIDAR ignored the bike because it was not recognized as a person, was moving perpendicularly at a low speed, and so got pre-filtered as a puddle. We can only guess, and Uber will report whatever they want.
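Speculating further, a naive pre-filter of that kind is easy to write down (every threshold here is invented for illustration); anything small and slow simply never reaches the planner:

    # Hypothetical pre-filter of the kind described above: discard returns
    # that are small and nearly stationary, on the theory that they are
    # debris, puddles or sensor noise rather than traffic. Thresholds invented.

    def prefilter(obstacles, min_size_m=0.5, min_speed_ms=2.0):
        return [o for o in obstacles
                if o["size_m"] >= min_size_m and o["speed_ms"] >= min_speed_ms]

    obstacles = [
        {"label": "car ahead",          "size_m": 1.8, "speed_ms": 17.0},
        {"label": "pedestrian w/ bike", "size_m": 0.7, "speed_ms": 1.4},  # walking pace
    ]

    # The slow, perpendicular-moving pedestrian falls below the speed cut-off
    # and vanishes from the obstacle list - exactly the failure mode speculated about.
    print(prefilter(obstacles))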


It’s what high beams are for. It would have been easy to avoid.


How is it in America? Over here (North Europe), one may not use high beams when there is street lighting (even though in this case the lighting seemed not very good).


Speculating: IR interference might have jammed this vehicle's LIDAR for an instant as it sampled the vicinity of the hazard. Any power in the passband of the detector might have been sufficient to saturate it and make it less sensitive to the returning LIDAR signal. This could come from another LIDAR unit in the vicinity. Other possible jammers might include IR laser pointers, TV remotes, or IR camera illuminators.


And this is why I have retroreflective tape on my bike’s forks. It’s legally required while riding at night too.


> Yet this crash proves that all the equipment in the world didn't catch a very obvious obstruction.

Because the software is still critically flawed, of course...this only represents a present-day failing, not some sort of permanent obstacle for the future.


Serious question: What is the argument for the need for autonomous driving? I haven't really pushed this question so figured I'd start here.


The car also had its headlights on, and I think they should give enough light to see a pedestrian 20-30 meters ahead even without street lights.


>> How did LIDAR and IR not catch that?

The LIDAR can catch anything it wants. If the car's AI doesn't know how to deal with it, it won't.


Check the article below and the embedded video; it talks about the Uber ADAS: https://www.buzzfeed.com/nicolenguyen/not-too-fast-not-too-f...


The code isn’t perfect obviously. The software is probably programmed to avoid anything in the opposite lane.


IIRC IR is ITAR controlled and not available above abysmal resolution. I am curious if this limitation applies to self-driving cars, too.

If so, then it's really time to do what was done for GPS and declassify it for use by the general public. It's a public safety/public good issue.


No. ITAR controlled, in the context of IR imaging, means that you can't export any thermal imaging systems above 9fps. You can use whatever you want inside the US, and 60fps systems are available for consumers today.


You can get 320x240 60hz for $2000 as a consumer[1]. On the one hand I wonder what kind of resolution the military grade version is, but I also wonder if the bandwidth/dollar is actually better. I imagine Raytheon basically just makes up a number when they're setting the price of a helicopter gun cam.

[1]: eg https://www.amazon.com/FLIR-Systems-III-320-Thermal-Detector...


lidar would've, but Uber had to trash all lidar sensors since they were stolen from waymo!

or so it would seem... ;)


Totally agree. I've seen crash detection catch an impending crash two cars ahead of the car doing the watching.

If that crash detection can detect shit going on two cars ahead, why couldn't LIDAR see this?

Having said that, wth was she doing?


Maybe we just don’t have the tech yet to make it work. At the very least, Uber surely doesn’t. There’s no way the car didn’t see her, but it didn’t react, which means it failed to recognize what it was seeing.

This is pretty much the worst case scenario.


You are the same type of person that would have halted the industrial revolution at the first report of a human casualty or injury in a machine.


Please don't cross into personal attack on HN.


The bike --rofl



