Hacker News

You could hack up something very roughly like it with a Kinect, but it wouldn't work as well; it'd be a neat hack, but not actually usable for anything serious.

Kinect's image capture is very low resolution (even by today's cellphone camera standards), and it doesn't give you depth information at per-pixel resolution. Even ignoring those issues, in addition to depth information you also need a source image with critically sharp focus across the entire viewing range (you can't selectively focus in software what was captured out of focus on a standard digital sensor), which means using a very small aperture (a large f-stop number). That makes it very difficult to capture anything but still-life images: the small aperture forces a long exposure time, and thus motion blur if anything moves. Granted, this is less of an issue with the Kinect itself, because its sensor is so tiny that out-of-focus areas aren't much of a concern, but the cost of that is that the image resolution is also atrocious.
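To put numbers on why a small aperture buys sharpness across the whole range, here's a sketch of the standard thin-lens hyperfocal-distance and depth-of-field formulas (the function names and the 0.03 mm circle-of-confusion value, typical for full-frame, are my own illustrative choices, not from the comment):

```python
def hyperfocal(f_mm, N, coc_mm):
    """Hyperfocal distance in mm for focal length f_mm, f-number N, and
    circle of confusion coc_mm. Focused here, everything from H/2 to
    infinity is acceptably sharp."""
    return f_mm ** 2 / (N * coc_mm) + f_mm

def dof_limits(f_mm, N, coc_mm, s_mm):
    """Near/far limits (mm) of acceptable sharpness when focused at s_mm."""
    H = hyperfocal(f_mm, N, coc_mm)
    near = s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm)
    far = s_mm * (H - f_mm) / (H - s_mm) if s_mm < H else float("inf")
    return near, far

# For a 50 mm lens with a 0.03 mm circle of confusion:
# hyperfocal(50, 2, 0.03)  ~ 41.7 m at f/2
# hyperfocal(50, 16, 0.03) ~ 5.26 m at f/16
```

Stopping down from f/2 to f/16 shrinks the hyperfocal distance by roughly 8x, so the in-focus range grows enormously; but f/16 also admits 64x less light than f/2, hence the long exposures.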

Once you get up to usable sensor resolutions, if you're already limited to taking long exposures of still-life images on a tripod, you might as well skip the IR depth sensing entirely: take a series of wider-aperture pictures at different focus distances, focus-stack the results, and preprocess the image series for blur levels to work out the relative depth of the in-focus parts of each source image. At least that way you can use a DSLR and get quality photos.
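That focus-stacking idea can be sketched roughly like this, using the squared Laplacian response as a per-pixel sharpness measure and taking the sharpest slice at each pixel (a common textbook approach; the names and the lack of any smoothing are simplifications, not a production implementation):

```python
import numpy as np

def laplacian(img):
    """4-neighbour Laplacian with replicated edges; high magnitude
    indicates locally sharp (in-focus) detail."""
    p = np.pad(img, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] +
            p[1:-1, :-2] + p[1:-1, 2:] - 4 * p[1:-1, 1:-1])

def focus_stack(stack):
    """stack: (n, h, w) float array, same scene at n focus distances.
    Returns (fused, depth): fused is the all-in-focus composite, and
    depth[y, x] is the index of the sharpest slice at that pixel, i.e.
    a crude relative depth map."""
    sharpness = np.stack([laplacian(s) ** 2 for s in stack])
    depth = np.argmax(sharpness, axis=0)
    fused = np.take_along_axis(stack, depth[None], axis=0)[0]
    return fused, depth
```

Real focus-stacking software additionally smooths the sharpness maps and blends across slice boundaries; the depth map here is only as fine-grained as the number of focus steps you shoot.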

Neither of these is a true replacement for what they are doing here, though.



