Very fun. The "map brightness to Z" trick would seem to be too simplistic, but it totally fooled me into thinking this was doing some kind of crazy computer vision trick. I'm almost sad now that I read that detail; now it sounds easy to implement. :(
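In case anyone wants to try it, the core of the trick really is tiny (a minimal sketch, assuming the webcam frame has already been drawn into a canvas and read back as ImageData; the real demo drives a textured 3D mesh, but the depth itself is just per-pixel luminance):

```ts
// Map each webcam pixel's brightness to a Z displacement.
function brightnessToZ(frame: ImageData, zScale = 50): Float32Array {
  const { data, width, height } = frame;
  const z = new Float32Array(width * height);
  for (let i = 0; i < width * height; i++) {
    const r = data[i * 4], g = data[i * 4 + 1], b = data[i * 4 + 2];
    // Standard Rec. 601 luminance weights, normalized to 0..1.
    const lum = (0.299 * r + 0.587 * g + 0.114 * b) / 255;
    z[i] = lum * zScale; // brighter pixel => pushed toward the camera
  }
  return z;
}
```

Each value then displaces the matching mesh vertex along Z, and that's the whole "depth camera".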
I will say, though, that I hate the mouse UI: you can't use relative offsets to make changes with a device that might disappear from one edge of your control and reappear on any other. It's very confusing to the user. Please use a drag for that.
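Something like this is what I mean (sketch only; `canvas` stands in for whatever element the demo listens on). Accumulate deltas only while the button is held, so where the pointer leaves and re-enters doesn't matter:

```ts
// Drag-based relative control: only pointer deltas while dragging
// move the view, so re-entry position is irrelevant.
const canvas = document.querySelector('canvas')!;
let dragging = false;
let lastX = 0, lastY = 0;
let offsetX = 0, offsetY = 0; // feeds the mesh rotation/translation

canvas.addEventListener('pointerdown', (e) => {
  dragging = true;
  lastX = e.clientX;
  lastY = e.clientY;
  canvas.setPointerCapture(e.pointerId); // keep receiving moves off-canvas
});
canvas.addEventListener('pointermove', (e) => {
  if (!dragging) return;
  offsetX += e.clientX - lastX;
  offsetY += e.clientY - lastY;
  lastX = e.clientX;
  lastY = e.clientY;
});
canvas.addEventListener('pointerup', (e) => {
  dragging = false;
  canvas.releasePointerCapture(e.pointerId);
});
```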
>The "map brightness to Z" trick would seem to be too simplistic, but it totally fooled me into thinking this was doing some kind of crazy computer vision trick.
That got me wondering about what sort of tricks might be out there.
For human users, you could use a face detection algorithm to find faces in the image and then place smoothed face meshes (or even just blobs) under them, and you'd have a fake depth-camera that would be pretty convincing until someone put their hand up.
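Roughly like this (a sketch, with `detectFaces` as a hypothetical stand-in for whatever face detector is available; the "blob" here is just a Gaussian bump per detected face):

```ts
interface Face { cx: number; cy: number; radius: number; } // assumed detector output

// Hypothetical stand-in for a real face-detection call.
declare function detectFaces(frame: ImageData): Face[];

// Fake depth map: flat background plus a smooth bump under each face.
function fakeDepth(frame: ImageData, headDepth = 80): Float32Array {
  const { width, height } = frame;
  const depth = new Float32Array(width * height); // 0 = background plane
  for (const face of detectFaces(frame)) {
    for (let y = 0; y < height; y++) {
      for (let x = 0; x < width; x++) {
        const d2 = (x - face.cx) ** 2 + (y - face.cy) ** 2;
        const bump = headDepth * Math.exp(-d2 / (2 * face.radius ** 2));
        const i = y * width + x;
        if (bump > depth[i]) depth[i] = bump; // keep the tallest overlap
      }
    }
  }
  return depth;
}
```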
Googling around, I found that there's a lot of research being done in depth inference from video. This paper: http://people.csail.mit.edu/celiu/pdfs/ECCV12-autostereo.pdf is particularly impressive, although given their one minute/frame benchmarks, we'll be waiting 15 years or so before Moore's law can bring this technique to a JS demo.
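The 15-year figure is just Moore's-law arithmetic (assuming ~30 fps counts as an interactive demo and a doubling every 18 months):

```ts
const secondsPerFrameNow = 60;          // the paper's ~1 min/frame benchmark
const secondsPerFrameTarget = 1 / 30;   // interactive demo territory
const speedup = secondsPerFrameNow / secondsPerFrameTarget; // 1800x
const doublings = Math.log2(speedup);   // ~10.8 doublings
const years = doublings * 1.5;          // ~16 years at 18 months per doubling
```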
Really great demo, quite effective! It's a nice trick using various kinds of noise: the visible mesh hides some of the inaccuracy, and the Perlin noise is a nice way to add motion that shows off the depth. Here's a quick video of me frozen in the white carbonite wall behind me: http://vine.co/v/bn3JJA7bX5d
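That noise layer is basically a time-varying offset added to each vertex's Z. A minimal sketch, using a trimmed-down Perlin-style gradient noise (only four diagonal gradients, and seeded from Math.random rather than the classic fixed permutation table):

```ts
// Build a shuffled permutation table.
const p = Array.from({ length: 256 }, (_, i) => i);
for (let i = 255; i > 0; i--) {
  const j = Math.floor(Math.random() * (i + 1));
  [p[i], p[j]] = [p[j], p[i]];
}
const perm = new Uint8Array(512);
for (let i = 0; i < 512; i++) perm[i] = p[i & 255];

const fade = (t: number) => t * t * t * (t * (t * 6 - 15) + 10);
const lerp = (a: number, b: number, t: number) => a + t * (b - a);
// Pick one of four diagonal gradient directions from the hash.
const grad = (h: number, x: number, y: number) =>
  ((h & 2) ? -x : x) + ((h & 1) ? -y : y);

function perlin2(x: number, y: number): number {
  const X = Math.floor(x) & 255, Y = Math.floor(y) & 255;
  const xf = x - Math.floor(x), yf = y - Math.floor(y);
  const u = fade(xf), v = fade(yf);
  const aa = perm[perm[X] + Y],     ba = perm[perm[X + 1] + Y];
  const ab = perm[perm[X] + Y + 1], bb = perm[perm[X + 1] + Y + 1];
  return lerp(
    lerp(grad(aa, xf, yf),     grad(ba, xf - 1, yf),     u),
    lerp(grad(ab, xf, yf - 1), grad(bb, xf - 1, yf - 1), u),
    v,
  );
}

// Add a gently drifting ripple on top of the brightness-derived Z values.
function animateZ(z: Float32Array, width: number, t: number, amp = 5): void {
  for (let i = 0; i < z.length; i++) {
    const x = i % width, y = (i / width) | 0;
    z[i] += amp * perlin2(x * 0.05 + t, y * 0.05);
  }
}
```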
Fun. Some years ago I wrote a Processing hack with almost the same name and intent, but the effect is different (I rebuild a wireframe based on the points recognized in the image).
http://www.airtightinteractive.com/2012/08/webcammesh-demo/