
That's how night mode works on Pixel phones, right? I believe it takes several images in rapid succession and takes advantage of the noise being random, so with some signal processing you can get a high-quality image out of a noisy sensor.
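
Roughly the idea, as a toy NumPy sketch (hypothetical numbers, not the actual Night Sight pipeline): averaging N frames whose noise is independent shrinks the noise standard deviation by about sqrt(N) while the signal stays put.

  import numpy as np

  rng = np.random.default_rng(0)

  # Hypothetical "true" scene and per-frame sensor noise level.
  scene = rng.uniform(0.0, 1.0, size=(64, 64))
  noise_sigma = 0.2
  n_frames = 16

  # Capture N noisy frames of the same scene in rapid succession.
  frames = [scene + rng.normal(0.0, noise_sigma, scene.shape) for _ in range(n_frames)]

  # Averaging keeps the signal but shrinks the random noise by ~sqrt(N).
  stacked = np.mean(frames, axis=0)

  print("single-frame noise:", np.std(frames[0] - scene))  # ~0.2
  print("stacked noise:     ", np.std(stacked - scene))    # ~0.2 / sqrt(16) = 0.05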


It can also let you identify positions within the image at a greater resolution than the pixels, or even the light itself, would otherwise allow.

In microscopy, this is called 'super-resolution'. You take many images of the same emitter, and while the spot of light itself is hundreds of nanometers across, you can calculate the centroid of whatever is producing that light with far finer resolution than the size of the spot.

https://en.wikipedia.org/wiki/Super-resolution_imaging
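
A toy sketch of that localization step (made-up numbers, nothing like real microscopy software): the emitter's diffraction-limited blur spot covers many pixels, but its intensity-weighted centroid, averaged over many noisy frames, pins the position down to a small fraction of a pixel.

  import numpy as np

  rng = np.random.default_rng(1)

  # Toy point-spread function: a Gaussian blob centered at a sub-pixel position.
  true_x, true_y = 20.37, 23.81           # "true" emitter position in pixel units
  yy, xx = np.mgrid[0:48, 0:48]
  psf = np.exp(-((xx - true_x) ** 2 + (yy - true_y) ** 2) / (2 * 3.0 ** 2))

  estimates = []
  for _ in range(200):                     # many noisy acquisitions of the same emitter
      img = rng.poisson(psf * 500) / 500.0     # shot noise on each frame
      total = img.sum()
      cx = (img * xx).sum() / total        # intensity-weighted centroid
      cy = (img * yy).sum() / total
      estimates.append((cx, cy))

  print("true position:", (true_x, true_y))
  print("mean estimate:", np.mean(estimates, axis=0))  # agrees to well under a pixel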


Is that hundreds-of-nanometers spot of light larger than the perturbations caused by Brownian motion?

This oldish link indicates that lead inclusions in aluminum at 330°C move within 2 nm in 1/3 s but may be displaced by hundreds of nanometers over time:

https://www2.lbl.gov/Science-Articles/Archive/MSD-Brownian-m...


Integrating over a longer time to get more accurate light measurements of a scene has always been a principal feature of photography. You need to slow down the shutter and open up the aperture in dark conditions.

Combining multiple exposures is not significantly different from a single longer exposure; the key innovation is combining motion data with digital image stabilization, which allows smartphones to approximate long exposures without the need for a tripod.
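
As a hedged sketch of that idea (made-up integer pixel shifts standing in for the gyro/optical-flow data a real phone would use): undo each short exposure's shift, then average, and the stack behaves much like one long, tripod-stable exposure.

  import numpy as np

  rng = np.random.default_rng(2)

  scene = rng.uniform(0.0, 1.0, size=(64, 64))
  shifts = [(0, 0), (1, -2), (3, 1), (-2, 2)]   # hypothetical hand-shake per frame

  # Each short exposure is the scene shifted by the hand-shake, plus sensor noise.
  frames = [np.roll(scene, s, axis=(0, 1)) + rng.normal(0.0, 0.1, scene.shape)
            for s in shifts]

  # Digital stabilization: undo each frame's measured shift, then average.
  aligned = [np.roll(f, (-s[0], -s[1]), axis=(0, 1)) for f, s in zip(frames, shifts)]
  stacked = np.mean(aligned, axis=0)

  print("unaligned stack error:", np.abs(np.mean(frames, axis=0) - scene).mean())
  print("aligned stack error:  ", np.abs(stacked - scene).mean())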


I agree with you wholeheartedly and just want to add one more aspect: it also allows you to handle the case where the subject is moving slowly relative to the camera. An easy example is taking long exposures of the moon from a tripod. If you just open the shutter for 30 seconds, the moon itself is going to move enough to cause motion blur; if instead you take a series of much faster photos and use image processing techniques to stack on the subject (instead of just naively stacking all of the pixels 1:1), you can get much better results.
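
Something like this toy version of the tracking stack (assuming simple integer drift and a basic FFT cross-correlation to measure it, far cruder than real stacking software): register every frame to the first one, then average.

  import numpy as np

  def estimate_shift(ref, img):
      # FFT cross-correlation: the peak location gives the integer shift
      # of img relative to ref (shifts past the midpoint wrap to negative).
      cross = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img)))
      peak = np.unravel_index(np.argmax(np.abs(cross)), cross.shape)
      return tuple(int(p) if p <= s // 2 else int(p - s)
                   for p, s in zip(peak, cross.shape))

  rng = np.random.default_rng(3)
  moon = rng.uniform(0.0, 1.0, size=(128, 128))    # stand-in for the subject

  frames = []
  for k in range(8):                               # subject drifts between frames
      drift = (2 * k, -k)
      frames.append(np.roll(moon, drift, axis=(0, 1))
                    + rng.normal(0.0, 0.05, moon.shape))

  ref = frames[0]
  aligned = [np.roll(f, estimate_shift(ref, f), axis=(0, 1)) for f in frames]

  print("naive stack error:  ", np.abs(np.mean(frames, axis=0) - moon).mean())
  print("aligned stack error:", np.abs(np.mean(aligned, axis=0) - moon).mean())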


For bright stuff like the moon, my understanding is that the best way is to take really high-speed video, hundreds of frames per second, then pick out the frames that have the least atmospheric distortion and stack those.

So not only can you compensate for unwanted motion of the camera rig, but also for external factors like the atmosphere.
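
A rough sketch of that frame-selection step (using variance of a Laplacian as a stand-in sharpness metric; real lucky-imaging tools have their own scoring and also align the frames): rank the video frames by sharpness, keep the best few percent, and stack only those.

  import numpy as np

  def sharpness(frame):
      # Variance of a simple Laplacian response: blurrier frames score lower.
      lap = (-4.0 * frame
             + np.roll(frame, 1, axis=0) + np.roll(frame, -1, axis=0)
             + np.roll(frame, 1, axis=1) + np.roll(frame, -1, axis=1))
      return lap.var()

  def lucky_stack(frames, keep_fraction=0.1):
      # Keep only the least-distorted frames, then average them.
      scores = np.array([sharpness(f) for f in frames])
      cutoff = np.quantile(scores, 1.0 - keep_fraction)
      best = [f for f, s in zip(frames, scores) if s >= cutoff]
      return np.mean(best, axis=0)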

For faint deep-sky objects, IIRC you really do want long exposures to overcome sensor noise. At least in the comparisons I've seen using the same total integration time, a few long exposures had much more detail and color than lots of short exposures.

That said, lots of short exposures might be all you can do if you're limited by equipment or the like, and that's certainly way better than nothing.


This is how we reduce noise in filmmaking. My de-noise node in DaVinci has two settings: spatial and temporal. Temporal references 3 frames either side of the subject frame.
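
In spirit, the temporal half is something like this bare-bones sketch (not what the DaVinci node actually does internally; among other things it can do motion estimation so moving subjects don't smear, which this ignores): each output frame is an average of the subject frame and up to three neighbours on either side.

  import numpy as np

  def temporal_denoise(frames, radius=3):
      # Average each frame with up to `radius` frames on either side of it.
      out = []
      for i in range(len(frames)):
          lo, hi = max(0, i - radius), min(len(frames), i + radius + 1)
          out.append(np.mean(frames[lo:hi], axis=0))
      return out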


Some phones shine an IR floodlight, too.



