Before I read the paper, I was adamant that "more light is better." But consider what the paper actually says:
"We show that a linear increase in the resolution of images under each microlens results in a linear increase in the sharpness of the refocused photographs. This property allows us to extend the depth of field of the camera without reducing the aperture, enabling shorter exposures and lower image noise."
You're right that you still need good, small sensors to enable good, small lenses, but my ultimate point is that digital camera sensors scale with advances in silicon. Lens technology is much, much slower to advance. The more of this we can do in software (and thus in silicon), the better.
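To make the quoted claim concrete, here is a minimal sketch of shift-and-add synthetic refocusing (my own illustration, not code from the paper): each sub-aperture view recorded under the microlenses is translated in proportion to its position (u, v) on the aperture, and the shifted views are averaged. The names refocus, subaperture_views, uv_coords and the alpha parameterization are assumptions for the example; the point is that more angular samples per microlens means more views in the average and finer shift steps, which is the sharpness gain the paper describes.

    import numpy as np
    from scipy.ndimage import shift  # sub-pixel shifts via interpolation

    def refocus(subaperture_views, uv_coords, alpha):
        """Shift-and-add synthetic refocusing (illustrative sketch).

        subaperture_views : (N, H, W) array, one image per (u, v) sample
                            under the microlens array (assumed layout).
        uv_coords         : (N, 2) array of aperture-plane positions.
        alpha             : refocus parameter; alpha = 1 keeps the
                            original focal plane.
        """
        acc = np.zeros(subaperture_views[0].shape, dtype=float)
        for view, (u, v) in zip(subaperture_views, uv_coords):
            # Each view is translated in proportion to its aperture position;
            # averaging the shifted views mimics a lens focused at a new depth.
            dy = v * (1.0 - 1.0 / alpha)
            dx = u * (1.0 - 1.0 / alpha)
            acc += shift(view.astype(float), (dy, dx), order=1)
        return acc / len(subaperture_views)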
Related to this, and presumably not common knowledge: there was recently a breakthrough in camera CMOS sensor technology.
A few companies now offer scientific cameras (price tag 10,000 USD, for example http://www.andor.com/neo_scmos) that allow readout at 560 MHz with 1 electron per pixel of readout noise, as opposed to 6 electrons per pixel in the best CCD chips at 10 MHz.
This means one can use such a CMOS sensor in low-light conditions at an extremely fast frame rate (the above camera delivers 2560 x 2160 at 100 fps). You will actually see the Poisson noise of the photons.
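As a sanity check on the frame-rate figure: 2560 x 2160 pixels x 100 frames/s is about 553 Mpixel/s, consistent with the quoted 560 MHz readout. And a quick back-of-the-envelope simulation (my own numbers, not the vendor's) shows why the read-noise difference matters at low light: with around 10 photons per pixel, 6 electrons of read noise roughly doubles the total noise, while 1 electron leaves the Poisson statistics clearly visible.

    import numpy as np

    rng = np.random.default_rng(0)
    photons_per_pixel = 10        # assumed low-light signal level
    n_pixels = 1_000_000

    # Photon arrivals are Poisson distributed (shot noise).
    signal = rng.poisson(photons_per_pixel, n_pixels).astype(float)

    # Read-noise figures from the comment above: ~1 e- (sCMOS) vs ~6 e- (best CCD).
    for read_noise in (1.0, 6.0):
        measured = signal + rng.normal(0.0, read_noise, n_pixels)
        print(f"read noise {read_noise:.0f} e-  ->  SNR ~ {photons_per_pixel / measured.std():.2f}")

With 1 e- of read noise the total is dominated by the sqrt(10) ~ 3.2 e- of shot noise; with 6 e- the read noise swamps it.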
Unfortunately, the representatives of those companies (the few I spoke with) don't seem too eager to bring these sensors to mobile phones.
"We show that a linear increase in the resolution of images under each microlens results in a linear increase in the sharpness of the refocused photographs. This property allows us to extend the depth of field of the camera without reducing the aperture, enabling shorter exposures and lower image noise."
You're right that you still need good, small sensors to enable good, small lenses, but my ultimate point is that digital camera sensors scale with advances in silicon. Lens technology is much, much slower to advance. The more of this we can do in software (and thus, silicon) the better.