Surprisingly well faked, given that there's no real 3D information in the pictures.

There simply cannot be new 'pixels' appearing that were hidden (i.e. before applying the effect) behind objects in the camera's line of sight. This is evident on closer inspection (look at edges around the approximate middle of the DOF), where the trompe-l'œil falls apart a bit.

Interestingly, I did not notice this with the iOS 7 background parallax; I wonder why. Specially chosen images, or a stricter constraint on movement?




It's exactly that - a single image, and a depth map calculated from a series of shots made upwards. Both are bundled into a fake-bokeh image. There are obviously no extra pixels. However... if you choose your subject wisely, the effect can be pretty believable using a simple displacement map.
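To make that concrete, here's a minimal sketch of the displacement-map idea in Python (numpy/PIL assumed; depthy itself does this in a WebGL shader, so this is just the concept, not their code, and the file names are placeholders):

    import numpy as np
    from PIL import Image

    def parallax_shift(img, depth, dx, dy):
        # Shift each pixel in proportion to its depth (here 0 = far,
        # 1 = near, an assumed convention). Near pixels move more,
        # which is what sells the parallax. This backward warp only
        # reuses existing pixels, so disoccluded areas get stretched
        # edges -- exactly the artifact discussed above.
        h, w = depth.shape
        ys, xs = np.mgrid[0:h, 0:w]
        sx = np.clip(xs + dx * depth, 0, w - 1).astype(int)
        sy = np.clip(ys + dy * depth, 0, h - 1).astype(int)
        return img[sy, sx]

    photo = np.asarray(Image.open("photo.jpg"))
    depth = np.asarray(Image.open("depth.png").convert("L")) / 255.0
    frame = parallax_shift(photo, depth, 8, 4)  # 8 px max shift
    Image.fromarray(frame).show()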

On iOS, though, it's something different. There's no depth map in there, just a flat image moving counter to your hand movements.


It sounds like there are two issues here. One is how the depth map is generated, and the other is how the resulting image file is formatted. For the former, several still images are collected while the camera moves, which provides parallax that can be used to generate the depth map. For the latter, I don't know, but it would certainly be possible to bundle both the depth map and multiple "layers" of photograph that could be used to actually expose new pixels in the background when the foreground moves.
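The depth-map generation is essentially stereo triangulation: depth falls out of how far a feature shifts between shots. A toy version (all numbers are illustrative assumptions, not Google's actual pipeline):

    # Depth from disparity: depth = focal_length * baseline / disparity.
    focal_px = 3000.0    # focal length in pixels (assumed)
    baseline_m = 0.05    # camera moved ~5 cm between shots (assumed)

    def depth_from_disparity(disparity_px):
        return focal_px * baseline_m / disparity_px  # metres

    print(depth_from_disparity(10.0))  # a 10 px shift -> 15.0 m away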


There is an app for iOS - seene.co. The amount of movement you have to do to capture enough pixels is prohibitive for my taste. I think Google has nailed it - it's super simple.

As for storing the layers - you would only have the "from above" pixels, and only a few of them. That's probably why there is only Lens Blur in their app in the first place.

If you just want a small displacement effect like on depthy, then the key is to have no sharp edges in the depth map. We will try to tackle this in one of the upcoming updates...
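For anyone wondering what "no sharp edges" means in practice: presumably something like a blur pass over the depth map before the displacement step, so the disocclusion gap is spread over several pixels instead of tearing at one edge. A sketch (scipy assumed; sigma is a guess to tune per image):

    import numpy as np
    from PIL import Image
    from scipy.ndimage import gaussian_filter

    depth = np.asarray(Image.open("depth.png").convert("L")) / 255.0
    # Soften hard depth discontinuities so the edge stretching after
    # displacement is spread out and far less noticeable.
    depth_soft = gaussian_filter(depth, sigma=3.0)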


Now that you've explained the iOS 7 trick, it's obvious (given the environment: a background-filling image responding to physical device movement, as opposed to an image on a website).


On iOS you have the icons as a foreground plane - that's what makes the trick work.

If you have an Android device around, try depthy on it - it's way better this way imo.


There are more pixels captured compared to a single-shot image, as you said: "series of shots made upwards". So it captures some pixels that are hidden when the camera moves upwards, but if the simulated parallax is bigger than the original camera movement, there will still be missing pixels. This could probably be improved by making bigger movements with the camera, as with other 3D reconstruction software.


Indeed -- there are a bunch of spots where, if you look closely, you can see the foreground texture being "repeated" in the background texture, when you've moved your cursor to the edge. I wonder if a better kind of pixel-filling-in algorithm could make this more believable, or if the problem is with the inherent inaccuracies of the depth map.
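One candidate for the filling-in step is classical inpainting; OpenCV ships a couple of algorithms that could plausibly paper over those repeated-texture strips (a sketch of the idea, not what depthy actually does; file names are placeholders):

    import cv2

    img = cv2.imread("warped.png")      # frame after the parallax warp
    mask = cv2.imread("holes.png", 0)   # nonzero where no pixel was ever seen
    # Telea's fast-marching method fills the disoccluded strip from
    # surrounding pixels instead of repeating the foreground texture.
    filled = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
    cv2.imwrite("filled.png", filled)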

But still, as a whole it's remarkably effective. I'd like to be able to appreciate it without moving my mouse -- I wonder what kind of camera movement function, at what speed, would make for the most unobtrusive effect that would still give the perception of depth?
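A slow elliptical sweep of the virtual viewpoint seems like an obvious candidate; something like this (amplitudes and period are pure guesswork for an "unobtrusive" pace):

    import math

    def camera_offset(t, amp_x=6.0, amp_y=3.0, period_s=8.0):
        # Returns the (dx, dy) parallax offset in pixels at time t
        # seconds: a gentle ellipse, strongest depth cue horizontally.
        phase = 2 * math.pi * t / period_s
        return amp_x * math.cos(phase), amp_y * math.sin(phase)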


I'd like to be able to appreciate it without moving my mouse

Oculus Rift?

Not just different left/right angles for each eye, but rotating the view as you tilt your head. That would be a spectacular way to view still photos in VR.


Given that it only takes panning the image slightly and adjusting the parallax a bit, I think that's probably doable. It would be the modern-day version of this: http://en.wikipedia.org/wiki/Stereoscopy


People in the machine learning community have been working on this sort of problem for a while, and the results are often very impressive:

http://make3d.cs.cornell.edu/


From my understanding, there actually is 3D information reconstructed via SfM (structure from motion) from the hand jitter. Only a few points, and of questionable quality, but enough for foreground extraction (for blurring).


Yeah, it appears to be a regular JPG with a depth map added to the metadata. Google originally used this for refocusing, but the parallax is a neat trick.
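Google documents the format: the depth map is a base64-encoded image stored under the GDepth XMP namespace. A rough extraction sketch in Python -- the naive byte scan below works when the XMP fits in one JPEG segment; larger depth maps are split across extended-XMP chunks and need reassembly first:

    import base64, re

    data = open("lensblur.jpg", "rb").read()  # placeholder file name
    # GDepth:Data holds the base64-encoded depth image (usually PNG).
    m = re.search(rb'GDepth:Data="([^"]+)"', data)
    if m:
        open("depth.png", "wb").write(base64.b64decode(m.group(1)))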


There actually is 3D information, in the form of a depth map included in the images, put there by the Google Camera app.


Yeah.

Also, as the depth map only has a single depth value per pixel, you can get aliasing around the edges of the objects in focus, so they have a halo around them.



