
This is pretty much the experience I have with my dash cam, a Yi. In its recorded video, its automatic exposure control makes it look like everything outside of the headlight cone is pitch black, but it is actually not. I have seen deer and possums by the side of the road, and debris etc., that did not show up when I later checked the video for the same period. There is enough spillover light from modern headlights that a human whose eyes are dilated and adjusted to dark conditions will see a pedestrian standing on the median, stepping off it, and crossing the inner lane towards the car's current lane. More than enough time to begin to brake and possibly swerve. I have dodged animals in a situation similar to this.



Yep that exposure control / sensor quality of the dash cam in the video was rubbish. My own Blackvues produce far, far better results than that. Just look at how nothing is illuminated by street lights; this clearly has the effect of making the poor rider appear "out of nowhere". I also agree it appeared the driver was on a smartphone most of the time, thus not in control of the vehicle, and so had no business being on the road, as these are systems UNDER TEST.

If that's the best Uber can produce then they ought to hang their heads in shame. Unless it was doctored... as I find it hard to believe they'd put such rubbish-quality cameras in their trials.


Do you trust Uber to provide all the data, or would they selectively produce data favorable to them?

Do you trust Uber to provide unedited raw video, or would they process it to increase contrast, make it appear that nothing was visible in the dark areas of the frame, reduce the resolution, drop frames, etc.?


It's funny how the internal camera which shows how distracted the driver was has way better night vision than the external road camera...


The key here is contrast; plus, an IR light at 2 feet works great, but at 60 feet... not so much (by the inverse-square law, the illumination at 60 feet is roughly (2/60)^2, i.e. about 1/900th of what it is at 2 feet).


The internal camera (let's be honest and call it the scapegoat camera, because that's the only practical use for human "safety drivers" when they are not permanently engaged) must take almost all its light from IR, because we don't see anything of the smartphone screen glare that the eye movement so clearly hints at.


I don't think the driver is looking at her smartphone. I think she's checking the car's monitor (as in a computer screen). Although to be fair, that should be showing the car's view of its surroundings so I don't know what's going on there.

Edit: Nevermind. Someone posted a picture of the car's interior below, and there's no computer screen.


Link?


Sorry - I can't find it. This thread has grown rather.


Ok, so this is getting old now, but I just came across the following, which shows what I'd expect the roads to look like, and geez was Uber ever full of crap to release their video, which pretty much had the effect of exonerating them.

https://arstechnica.com/cars/2018/03/police-chief-said-uber-...

Please check the videos out.


Yep, exactly.


> Yep that exposure control / sensor quality of the dash cam in the video was rubbish.

Is that the same cam used by the AI to detect obstacles?

I would expect a safe self-driving car to include IR cameras so that it can be more cautious about moving warm-blooded creatures.

Surely some more detailed telemetry data would reveal whether the main issue is with the sensors or with the algorithm.


I highly doubt that camera is part of the perception pipeline.


> I have seen deer and possums by the side of the road

Both of those have eyes that act as reflectors and you can see their eyes well before you can actually see the whole animal.

This[0] suggests that the total time required for a human to avoid an incident like this is 3.6s (at 35 mph; casual googling suggests the car was doing 40). Even if we add 1 second of extra time to deal with it, I'm not sure that makes the cut.

0) http://www.visualexpert.com/Resources/pedestrian.html
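
Just to put the 3.6s figure in distance terms, here is a minimal back-of-the-envelope sketch (Python; the 3.6s and the 40 mph are simply the figures above, nothing measured):

    MPH_TO_FPS = 5280 / 3600  # 1 mph ~= 1.47 ft/s

    def distance_covered(speed_mph, seconds):
        """Feet traveled at a constant speed over the given time."""
        return speed_mph * MPH_TO_FPS * seconds

    for mph in (35, 40):
        print(f"{mph} mph covers {distance_covered(mph, 3.6):.0f} ft in 3.6 s")
    # 35 mph -> ~185 ft, 40 mph -> ~211 ft, i.e. the pedestrian would need to
    # be picked out well over 200 ft ahead for the full 3.6 s to be available.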


Other people in the thread have pointed out the woman stepped out in a darker area between where the street lights are placed. Reflecting eyes are not the only way to detect an object. A person watching the road would have seen her dark silhouette contrasting against the next patch of light.

Also remember she was not a stationary object. She was in the act of crossing the road. Human eyes/brains are good at detecting motion in low light even if we can't 100% make out what the object is.

I have lived in Tempe and know that part of town well. There are apartments, gas stations, hotels, strip malls, fast food restaurants and a strip club. It's not a pitch black country road.


I know what you're talking about with the eyes, I spend a lot of time driving rural WA highways at night, but no. I have seen deer that had their heads facing the other way and were standing in the shoulder/ditch area, in conditions where I could definitely make out the shape of the deer and its location but the dash cam sensor missed it entirely.

Your last paragraph is a valid calculation if this were a case of a person stepping directly off a curb into the lane of traffic. However, it appears that they were probably standing on the median looking to cross, then stepped off into the left-most lane of traffic, an empty lane, and proceeded across that lane towards the lane in which the car was traveling. In this sort of situation human intuition will recognize that a person standing on the median of a high-speed highway is likely to do something unusual. Particularly when you observe the visual profile of, as media has reported, a homeless person using a bicycle with numerous plastic bags hanging off it to collect recycling.


The driver didn't see this person because the driver was occupied with a smartphone, only occasionally glancing up.

Also, has anyone here talked about the effect on the eyes of watching a (typically) bright white screen vs letting them adjust to the light of the night yet? This point deserves to be brought up.

Perhaps the video was intentionally darkened to simulate this effect. :P


>Also, has anyone here talked about the effect on the eyes of watching a (typically) bright white screen vs letting them adjust to the light of the night yet? This point deserves to be brought up.

Using bright interior lighting at night is something that we've known not to do for more than a century. If the driver couldn't be expected to see the pedestrian because the interior lighting or UX was too bright, that does not reflect favorably upon Uber.


I wonder if the driver is liable.


That's their only purpose. Nobody in their right mind could expect human observers to stay as alert as an actual driver when cruising for days with an AI that is good enough to not require interventions all the time. Passengers add nothing to safety, and an almost reliable AI will make anyone a passenger after a short while.


Completely agreed, but the law needs to take this into account. Human psychology can't just be ignored on this.


I'd like to have an interior view of what the driver was actually looking at. It couldn't have been a FLIR monitor, for sure... it seems more likely to be a phone held in the right hand? Bit hard to tell with the quality of the footage, but the driver looked rather tired to boot.

If so (a hand held phone), in Australia that driver would be going to jail for culpable driving causing loss of life.


It could have been anything readable. I got the feeling it was either a Kindle or something like that, or maybe even a hardcopy of something printed or written on paper. This was just a hunch, but I think it's being validated in my mind by the fact that there was no light seeming to shine on the driver's face; then again, that's probably due to the night vision camera not picking up that type of light? I don't really know. My mind is filling in a lot of gaps here, I realize.

EDIT: Upon re-watching the video a third time and really paying attention to this, I don't think there is any real way for us to know without confirmation from the driver themselves or an official report on the incident. My mind was definitely deciding things that just aren't discoverable from the video itself.


Here we have it, I believe:

"Uber also developed an app, mounted on an iPad in the car’s middle console, for drivers to alert engineers to problems. Drivers could use the app anytime without shifting the car out of autonomous mode. Often, drivers would annotate data at a traffic light or a stop, but many did so while the car was moving"

https://mobile.nytimes.com/2018/03/23/technology/uber-self-d...

The whole project seemed designed for an outcome like this, e.g. allowing the app to be used whilst on the move, after reducing from two operators to one. Culpability ought to lie with Uber.


Here's a picture of the Uber car from inside. No FLIR, just GPS:

https://cdn.geekwire.com/wp-content/uploads/2018/02/Front-_-...

A different picture from that article shows that under the GPS is the gear stick, an emergency button, and a cellphone charger.


According to the filename, those are iPads, which implies they could have been displaying anything (not just GPS).


I think he is. At least, I've never heard of any law that removes responsibility from a driver if driving a self-driving car. I think this will also apply to empty cars: if they get into an accident, the owner is liable.


If I recall correctly the military has done a lot of studies on this.


I compare it to the backup camera in my car. While it is good close up at night, if something or someone is a short distance away I can barely make them out. However, looking in my mirrors I can see them, or at least make out that someone or something is there.


A camera can have pretty good dynamic range at night, but it needs a big sensor and a huge lens to operate with a fast shutter speed. In the video, you can already see the motion blur, indicating the shutter speed is slower than what it needs to be to identify nearby objects in low light.
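
To put rough numbers on the motion blur point, here is a minimal sketch (Python; the 40 mph and the shutter times are assumptions for illustration, not the dash cam's actual settings):

    MPH_TO_FPS = 5280 / 3600  # 1 mph ~= 1.47 ft/s

    def blur_inches(speed_mph, shutter_s):
        """Inches the car travels while the shutter is open."""
        return speed_mph * MPH_TO_FPS * shutter_s * 12

    for denom in (30, 60, 500):
        print(f"1/{denom} s shutter -> {blur_inches(40, 1/denom):.1f} in of travel")
    # 1/30 s -> ~23 in, 1/60 s -> ~12 in, 1/500 s -> ~1.4 in of forward travel
    # per frame; the faster shutter needs roughly 16x more sensor sensitivity.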

Autonomous cars are never going to be viable. Just look at the cost of the high-end SLR sensors and lenses you'd need to match human-eye dynamic range and you're already looking at an expensive setup, before we even get to things like 360-degree vision and IR/LIDAR/hyperspectral imaging. And that's in addition to all the compute problems.

Sorry Silicon Valley tech-bros, but it's a fantasy you're chasing that's never going to happen. The quicker we can end this scam industry, the better.

People really need to be told "no".

Probably better to chase after flying cars..


I think you’re comparing object detection to high quality photography though. There are plenty of options that can detect objects at night. Even cheap infrared technology, I would think, would be sufficient for picking up moving objects at night.


Wetware is astonishing stuff. All the propaganda to anthropomorphize machines is showing here... cheap IR sensors are not the issue. AI is not intelligent and inanimate objects have no self.

They should pivot to augmenting drivers, not attempting to drive for them. I would happily utilize a properly designed HUD (meaning I have source access) connected to a fast MerCad or bolometer array.


Sorry for the lack of input or varied discussion, but I just had to stop and say how goddamn friggin cool it would be to have bolometers hooked up to a smart HUD that didn't interfere with your vision of the road. Something really translucent that smartly blended its color scheme so as not to interfere with the coloration of signs and details beyond your view on the road / around the road.

But you are right, though. I think augmenting drivers sounds like a great idea in the sense you talk about. The kind of augmenting drivers I don't want are those stupid headbands you'd wear that beep like crazy if your head starts tilting in a way that resembles falling asleep. If you are in danger of falling asleep at the wheel and need a device like that, I think it's pretty obvious one should take a nap on the side of the road or in a free parking lot, haha. Hopefully if we do wind up headed in that direction, the people doing the inventing will have a similar way of thinking.


> Wetware is astonishing stuff.

It really is. The eye can detect a single photon. Fingertips can detect 13nm bumps, smaller than a transistor on Coffee Lake CPUs.

We're better off acknowledging machine limits to work on other problems instead.


High-quality photography exists because the human eye is that sensitive and discerning. And there aren't plenty of options that can detect objects at night. IR isn't any cheaper, and then you have to figure out what IR bands you want to detect.


> A camera can have pretty good dynamic range at night

No. A camera's dynamic range is pretty much fixed. If you can capture low-light objects it means high-light objects are completely blown out.


I've read (see [1]) that humans have a low-light ability that approximates ISO 60,000, a pretty large value and larger than simple video cameras provide. However, very high-end pro/enthusiast SLRs go considerably higher; see this real-time astrophotography with the Sony a7S at ISO 409,600 (YouTube video [2]). The same Sony will work great in full sunlight too.

The Canon ME20F-SH is a video camera that reaches ISO 4,000,000. This camera has a dynamic range of 12 stops and is available at B&H for $20,000. [4]

Of course, raw sensitivity isn't exactly the challenge that cameras face when assessing a scene: the dynamic range happens within a single scene, all at the same time. Wide dynamic range (WDR) is the term I've seen used in describing video cameras that can handle both bright and dim areas within the same scene.

[1] http://lightartacademy.com/blog/tutorials/camera-vs-the-huma...

[2] https://www.youtube.com/watch?v=ZRzXgSMbBu0

[3] https://www.cambridgeincolour.com/tutorials/dynamic-range.ht...

[4] https://www.bhphotovideo.com/c/product/1187825-REG/canon_100...


The extreme numbers are for static cameras mounted on a tripod with a slow shutter speed. They won't do a tenth of that in a moving car at 45 mph.


No, that's not how ISO works. The Canon ME20F-SH shoots high-definition video at professional video shutter speeds and has an available ISO range of 800 to 4,560,000. At $20,000 I'm not suggesting that this exact camera would be appropriate for use in autonomous vehicles, but I am pointing out that video systems can now exceed the capabilities of human eyes.

There are a number of video samples shot on the Canon ME20F-SH on YouTube. In these one can see that under low-light situations the camera is shooting at ordinary video speed (the camera supports frame rates from 24 to 60 fps). I'm not trying to push the Canon ME20F-SH; I don't have any association with Canon. The manual for this camera is available online if you'd like to read up on it: [1].

The actual exposure of a video frame or image depends upon the f-stop of the camera's lens (aperture), the shutter speed, and the ISO of the image sensor. See [2].

Basically, each doubling or halving of the shutter speed corresponds to one "full stop" in photography. Each full stop of exposure doubles or halves the amount of light reaching the sensor. Changing the aperture of the camera's lens by full stops also doubles or halves the amount of light reaching the sensor. Full stops for camera lenses are designated as f1, f1.4, f2, f2.8, f4, f5.6, etc. The light sensitivity of the film or sensor is also customarily measured in full stops.

Very slow fine-grained color film is ISO 50 and is usually used in full sunlight. ISO 100 is a bit more flexible, and ISO 400 used to be considered a "fast" film for situations where more graininess would be acceptable in exchange for low-light capability. Each doubling of the ISO number corresponds to a full stop. So a photo taken with ISO 400 at f2 with a 1/1000 second shutter would have the same "brightness" as a picture taken at ISO 100 at f2.8 with a 1/125 second shutter (less 2 stops ISO, less 1 stop aperture, and plus 3 stops shutter speed).

Naturally, other factors come into play: the behavior of film or digital sensors at extremely slow or extremely fast shutter speeds isn't linear, and there are color differences and noise issues too. See [3] if you are interested in more about how photography works.
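
To make that stop arithmetic concrete, here's a minimal sketch of the same equivalence check (Python; plain exposure-value math, nothing taken from the Canon manual):

    import math

    # Exposure value referenced to ISO 100: EV100 = log2(N^2 / t) - log2(ISO / 100).
    # Two settings give the same image brightness when their EV100 values match.
    def ev100(f_number, shutter_s, iso):
        return math.log2(f_number**2 / shutter_s) - math.log2(iso / 100)

    print(ev100(2.0, 1/1000, 400))  # ISO 400, f/2,   1/1000 s -> ~10.0
    print(ev100(2.8, 1/125,  100))  # ISO 100, f/2.8, 1/125 s  -> ~9.9
    # The two agree to within a twentieth of a stop (the marked f/2.8 is a
    # rounding of sqrt(8)), i.e. the same overall exposure as described above.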

[1] https://www.usa.canon.com/internet/portal/us/home/products/d...

[2] https://photographylife.com/what-is-iso-in-photography

[3] https://www.amazon.com/Negative-Ansel-Adams-Photography/dp/0...


They could have two cameras -- one for low light, one for high light -- fairly cheaply.


1. A good start would have been to put a good camera in the first place because that one is absolute crap.

2. I'm not sure you can fix the exposure on "fairly cheap" dashcams.

3. "high-light" in night settings are likely much lower than standard daylight, let alone bright daylight.

Though I guess you could have a standard auto-exposure dashcam and add a low-light one which is only active in low-light conditions.
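
If you did pair the two feeds, merging them in software is well-trodden ground; a minimal sketch using OpenCV's Mertens exposure fusion (the filenames are placeholders, and real frames would also need to be time- and pixel-aligned):

    import cv2

    dark = cv2.imread("short_exposure.png")    # exposed to protect streetlights/headlights
    bright = cv2.imread("long_exposure.png")   # exposed to lift the shadows

    # Mertens fusion blends the well-exposed parts of each frame; no exposure
    # metadata or tone mapping is needed.
    fused = cv2.createMergeMertens().process([dark, bright])  # float32, roughly 0..1

    cv2.imwrite("fused.png", (fused * 255).clip(0, 255).astype("uint8"))

That's an illustration of the idea only, not how any production perception stack does it.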


Which is the same as with our eyes. There are ways around that limitation.


IR can help with that. Headlights are limited by the disincentive of blinding oncoming drivers...

Now that I think about it, self-driving cars may be paralyzed by other self-driving cars running IR-boosted headlights.




