Also known as... an animation! A 2D movie is not a "3D picture" merely because you alternate two frames of it. You can get a sense of depth from any 2D sequence where the camera is moving. A more realistic approach is to put two such frames side by side and focus one eye on each.
That's true, but this is a way to perceive depth for people who can't view stereo images (e.g., people blind in one eye). You do experience some amount of depth even though each eye is receiving the same information.
Wiggling works for the same reason that a translational pan (or tracking shot) in a movie provides good depth information: the visual cortex is able to infer distance information from motion parallax, the relative speed of the perceived motion of different objects on the screen. Many small animals bob their heads to create motion parallax (wiggling) so they can better estimate distance prior to jumping.
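To put a rough number on that (my own back-of-the-envelope illustration, not from the article): for an observer translating sideways at speed v, a point at distance d roughly abeam sweeps across the visual field at about v/d radians per second, so nearer objects appear to move proportionally faster.

```typescript
// Apparent angular speed (rad/s) of a point at distance d (m), for an
// observer translating laterally at speed v (m/s). Small-angle
// approximation for a point roughly abeam of the direction of motion.
function apparentAngularSpeed(v: number, d: number): number {
  return v / d;
}

// A head bob of ~0.1 m/s makes a branch 1 m away sweep ten times faster
// across the visual field than a tree 10 m away:
console.log(apparentAngularSpeed(0.1, 1));  // 0.1 rad/s
console.log(apparentAngularSpeed(0.1, 10)); // 0.01 rad/s
```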
True, but with a strong emphasis on the word "some." If you wiggle a 1D line, you experience "some" sort of 2D. If you flicker a 3D scene back and forth for a second, you experience "some" sort of time. But these things are woefully poor mirages that only seem impressive for a limited time.
No... It's not just animation. The alternating images trick your brain. BUT this gives me major headaches almost immediately.
While, yes, animation does sort of give the perception of 3D, the images here really do appear to have depth on the monitor. It's a cool trick. Now I'm guessing we'd need to play this at a rate of 30fps per image (thus 60fps) or so, so that we don't "see" the non-3D image alternations but experience it nevertheless, without a headache I might add.
There are a bunch of comments about what effect stereoscopic vision actually has on depth perception, so let me explain:
Depth perception from stereoscopic vision primarily matters for things which are very close to you. I have amblyopia (1) and no particular problems in most of my day to day activities. The main place that I lose out is in sports, where there are frequently small fast moving objects moving towards you and a small amount of time getting to them. If I'm moving to pick up a glass of water things are going slow enough, and I have enough other feedback mechanisms, that I can determine the distance by the time my hand reaches the glass.
My favourite story about depth perception was when I was playing ping pong with a friend of mine in university. I'm not a great player, but we were going back and forth okay. Then on one of the returns he very slightly changed the speed that the ball was coming at me. I completely failed to hit it, and it became clear that I had been using timing as a mechanism to help me compensate for not knowing where in space the ball was.
It's unclear whether it's because of choosing activities based on what my condition lets me be good at, but I don't find the lack of 3D vision to be my biggest issue. My biggest issue is that when doing things I need to rotate my head to focus on things that are on my left. This is a really big deal when I'm biking, because I am frequently out in traffic. Since it still functions as peripheral vision I am aware of what's going on on my left side, but I have to turn my head way over to get more detail. This is unsafe since when I turn my head I'm more blinded on the right side than I was on the left when I was facing straight.
(1) Amblyopia causes the image from one eye to be suppressed. In my case it means that I'm primarily looking out of my right eye, and my left eye functions almost solely as peripheral vision. I can also force myself to switch which eye I'm looking out of, but I default back to the right eye if I'm not focussing.
The page says that the brain should integrate both pictures and show a 3D view. I don't think the 'integrate' part works for me; the rapidly changing pictures are quite annoying and I'm unable to look at them for very long. What's your experience?
The moderately-fast, subtle ones are probably the most effective. Some are pretty bad (the Jurassic Park one in particular, the change is too drastic).
Try just looking, not staring. Relax and move through them at a decent pace. You may be trying too hard, preventing the illusion from working (similar to how thinking / staring too hard with some optical illusions diminishes the effect).
Also, the two separate images probably won't integrate into a single 3D percept (it will still look like alternating viewpoints), but your spatial perception will be stimulated enough to give you a fixed, coherent sense of depth, i.e. even though the image is clearly changing, your depth perception of the scene remains constant.
That's a pretty effective algorithm for blending the two... And I love how there are different ways to show the image, exactly what I was hoping to find.
Nice site :D Shall get plenty of browsing from me for at least a while.
I have seen this kind of thing before, but the specific images on this post were well worth seeing. You get a much more intimate sense of the environment, especially in the third and fourth images.
A little while ago, I did an animation in HTML5 that uses various 3D hinting techniques to create an illusion of depth
http://seanmcbeth.110mb.com/stereo.html
That's neat looking, though it isn't doing anything for me. What are the things you tried to do? Can someone else with stereoscopic vision tell me if it's doing anything for them?
I'm really interested in 3D techniques which work for people without the brain mechanisms to fuse the images of their eyes (unsurprisingly).
== Stereopsis: in this case, simple linear transformation in X-axis proportional to Z-axis as an approximation to a real 3D transformation
== Occlusion of one object by another: basic Z sorting, i.e. the "painter's algorithm"
== Subtended visual angle of an object of known size: I scale the object size proportional to its distance in the Z-axis
== Haze, desaturation, and a shift to bluishness: I interpolate the object color proportional to its distance in the Z-axis
If you can get the effect to work well, it goes from being a fairly incomprehensible jumble of grey circles (too information-dense for 2D display) to a box of bouncing balls where you can easily track individual objects as they move around.
EDIT: looking at the code, it seems I forgot about a technique that I used! I also threw in the "Vertical position" bullet point from the Wikipedia article, but without a frame of reference in the rendered scene you wouldn't be able to tell.
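For anyone curious how those cues stack up in code, here's a minimal canvas sketch in the same spirit (not the actual source; the constants and names are made up):

```typescript
interface Ball { x: number; y: number; z: number; r: number; }

// Made-up tuning constants; the real page presumably uses different values.
const EYE_SHIFT = 8;           // max horizontal offset between the two views
const HAZE = [190, 205, 230];  // bluish haze color to fade toward with distance

function draw(ctx: CanvasRenderingContext2D, balls: Ball[], eye: -1 | 1): void {
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  // Painter's algorithm: sort far-to-near so nearer balls occlude farther ones.
  const sorted = [...balls].sort((a, b) => b.z - a.z);
  for (const b of sorted) {
    const depth = Math.min(b.z / 100, 1);      // normalize: 0 = near, 1 = far
    const scale = 1 / (1 + depth);             // subtended angle shrinks with distance
    const shift = eye * EYE_SHIFT * depth;     // X offset proportional to Z ("stereopsis")
    // Haze: interpolate from a dark gray toward a desaturated bluish tone with distance.
    const [r, g, bl] = HAZE.map(c => Math.round(60 + (c - 60) * depth));
    ctx.fillStyle = `rgb(${r},${g},${bl})`;
    ctx.beginPath();
    ctx.arc(b.x + shift, b.y, b.r * scale, 0, Math.PI * 2);
    ctx.fill();
  }
}
```

Calling draw twice per frame with eye = -1 and eye = +1 on two side-by-side canvases gives the cross-eye pair; on a single canvas, alternating the two calls over time gives the wiggle version.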
This particular example requires that you cross your eyes to 'overlap' the two images, so those without good sight in both eyes will not perceive any depth.
That works for me, nice job. You could improve it by faking a light source and having them cast shadows. Shooting 3D photos, I've found that shadows really enhance the feeling of depth.
Edit, kind of related, here's a "virtual" stereo where I only used shadows to create the effect: http://www.flickr.com/photos/miura-ori/294022551
It works best if you use the large version of the photo.
Unfortunately, the rendering speed just isn't there. The bottleneck actually seems to be the timer for running the animation, because reducing the number of objects doesn't seem to have a significant effect on the performance. I'll probably end up reworking this as an OpenGL application.
I look forward to hearing how that goes. Too bad the HTML version didn't work out — it's pretty slow on my iPad too.
I vaguely remember a company that sold a plastic parallax barrier you could put onto touchscreen phones to do glasses-free stereoscopy. I'll post it if I find it.
"The most significant factor which contributes to depth perception is bionocular disparity"
There's no way this is true. How impaired are people with only one functional eye? Not very--their diminished peripheral vision is much more impairing than their lack of binocular disparity. I don't have any evidence to back it up, but I believe that motion parallax is the single most important source of depth information. Notice that one-eyed people and non-3D films don't suffer from any lack of motion parallax.
You say that peripheral vision is more impairing than lack of binocular disparity. But peripheral vision has zero impact on depth perception, so the existence of the peripheral vision problem does not mean that binocular disparity can't be the biggest factor in depth perception, which was your initial assertion.
Also, motion parallax is useless for stationary objects seen by a stationary observer. You make it sound like lacking one eye is no trouble at all; I don't buy it.
> You say that peripheral vision is more impairing than lack of binocular disparity. But peripheral vision has zero impact on depth perception
Ah, but he didn't say "more impairing with respect to depth perception." He was talking about impairment in general. As in, the loss of peripheral vision that comes with having only one eye is probably a bigger deal than the loss of binocular depth perception that comes with having only one eye, because binocular depth perception doesn't confer that much of a perceptual advantage over the ability to process motion parallax.
That last part is certainly debatable, and really depends on the context. Your tennis example brings to mind the fact that binocular depth perception is really helpful for hunters (of many species), presumably because it gives them more precise depth information with less orthogonal movement. Wider peripheral vision is probably better for avoiding predators and other dangers. (I was just reading about animals with binocular 360 degree vision, after wondering why 360 degree vision wasn't more prevalent.)
For a modern human, which sort of visual impairment is more impairing likely depends a lot on lifestyle and environment.
I'm sorry I wasn't clear. Like dieterrams said, I just meant that when you lose sight in one eye, loss of peripheral vision is probably more impairing overall than the loss of binocular parallax.
If your depth perception were truly devastated by losing one eye (as the article implied by calling binocular disparity the most important faculty for depth perception), losing an eye would be a lot more handicapping than it is in my experience (I know several people with only one eye, one of whom is a pretty good casual volleyball player).
Simple trick: Take a pen with a cap and pull the cap off. Close one eye. Hold the pen in one hand and the cap in the other at some distance - as if you were holding them just above the desk while sitting at it. Holding only the end of the cap and the end of the pen (without letting your hands touch), try to put them back together. It's actually not that easy and I miss almost every time. Moving your head a bit doesn't help that much in this situation.
So yeah - for big objects, scenery, etc. it works. For some manual actions, two points of vision are much more useful than the parallax you get from moving.
I have dubious depth perception; two modestly nearsighted eyes. I just took your challenge. I did it perfectly, first time. That's probably because my brain's depth perception mechanisms are used to adding in more cues than brains that can lean on perfect binocular vision do.
People tend to get really romantic about how brains work, so I should probably make something clear. When I say that my brain is more used to integrating multiple other environmental clues to get depth perception than brains with perfect stereoscopic vision, that's actually a bad thing. The resulting depth perception is clearly worse than good stereoscopic vision. I am lucky in that if I wear glasses, which I do not routinely, I can get true stereoscopic vision back; I do not lack the brain mechanisms entirely as some people with bad vision do. And that's clearly better. My brain is better adapted to degraded input, but it remains degraded input. Brains prefer stereoscopic vision when available because they naturally gravitate to the best sources of data.
Rolling back around to my main point, depth perception is a great deal more complicated than most people give it credit for. There are really two distinct depth perceptions: one using stereoscopic vision that only works out to a few tens of feet regardless of how good your vision is, and one that works on the rest of the world using motion and occlusion cues and a variety of other things. These flashing pictures manage to fire some of my depth perception based on motion cues, but completely lack the precise depth perception I get from stereoscopic vision. There's really no such thing as a monolithic "depth perception"; it's a lot of things that get integrated in the brain.
You ever tried to play tennis with one eye covered?
I'm not saying your life is ruined with only one eye, but don't downplay their plight. You can still see in 3D, it's true, but (in my experience) you lose a lot of accuracy.
I wear glasses and my left eye is pretty useless - my brain pretty much ignores any signal from my left eye unless I cover my right. So unfortunately I can't really tell how much importance there is in things like this. You're probably right for people who have really good/equal eyes.
Lots of comments here assume you have to have two images. (And how do we reduce the flickering, etc.)
Nope. Nothing to do with stereo here; it is all about simulating a moving viewpoint within a 3-D scene. Three cues help the brain figure out 3-D: stereovision, focusing, and the effect of movement on a scene.
This is doing the last one above. Now, the simplest thing to do is move rapidly back & forth between 2 viewpoints, but there are other things one could do. For example, move the viewpoint smoothly from one point to another and then back. Or you could move the viewpoint smoothly in a small circle. Etc. Want to reduce flicker? Then do one of those. Of course, they require more than 2 frames.
BTW, my favorite example of the "movement helps with 3-D" effect is the map room scene in Star Trek Generations. (Probably on YouTube, but I can't find it.) If you watch it, notice how the scene being viewed continually moves around, even when it doesn't really need to.
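A rough sketch of the smooth version (renderScene is a stand-in for whatever actually draws the scene, and the numbers are arbitrary):

```typescript
// Stand-in for whatever actually projects and draws the 3-D scene from a
// viewpoint offset by (dx, dy) from the nominal camera position.
function renderScene(dx: number, dy: number): void {
  // ...build the view from the offset eye position and draw the frame...
}

const AMPLITUDE = 0.03;     // how far the viewpoint swings (scene units, arbitrary)
const WIGGLES_PER_SEC = 2;  // full back-and-forth cycles per second

function animate(timeMs: number): void {
  const t = (timeMs / 1000) * WIGGLES_PER_SEC * 2 * Math.PI;
  // Smooth horizontal sweep; pass Math.cos(t) * AMPLITUDE as dy instead of 0
  // to move the viewpoint in a small circle rather than along a line.
  renderScene(Math.sin(t) * AMPLITUDE, 0);
  requestAnimationFrame(animate);
}
requestAnimationFrame(animate);
```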
However, if it's actually useful, why wasn't it later added to OpenGL / taught in standard graphics courses?
Given that OpenGL already has the 3D model, and has to redraw everything 30 or 60fps anyway, there's no performance hit to slightly change the camera angle.
What are the limitations / side effects of this technique that cause it to not be used in practice?
The side effect is that people made to endure this more than a couple of minutes as a novelty will want to murder you. They'll then plead temporary insanity and likely get off with a light scolding and a medal.
Doing multiple passes with slightly different camera settings is becoming more common, but you would only use it with something like the NVidia 3D system, or any other 3D display technology. You would never use it to do something like this.
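Conceptually, a multi-pass stereo render just offsets the eye along the camera's right vector and draws once per eye; something like this sketch (not any particular vendor's API, all names are made up):

```typescript
type Vec3 = [number, number, number];

const add = (a: Vec3, b: Vec3): Vec3 => [a[0] + b[0], a[1] + b[1], a[2] + b[2]];
const scale = (v: Vec3, s: number): Vec3 => [v[0] * s, v[1] * s, v[2] * s];

// Stand-in for the real renderer (OpenGL, WebGL, ...): draws the scene from
// `eye` toward `target` into the buffer for the given eye.
function renderPass(eye: Vec3, target: Vec3, which: 'left' | 'right'): void {
  // ...build a view matrix from eye/target and draw the frame...
}

// `separation` is the interaxial distance; roughly 1/30 of the distance to
// the subject is a common rule of thumb in stereo photography.
function renderStereoFrame(eye: Vec3, target: Vec3, right: Vec3, separation: number): void {
  renderPass(add(eye, scale(right, -separation / 2)), target, 'left');
  renderPass(add(eye, scale(right, +separation / 2)), target, 'right');
}
```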
I feel that it would be useful if browsers supported 3D image formats, displaying them in this manner on most monitors, while supported monitors would allow better 3D images.
It wouldn't work properly since both eyes would be receiving both views. You'd end up getting a shimmer type thing going on even if you couldn't figure out exactly what was going on.
I now know that these are stereoscopic image examples. My question still stands though: what would be the easiest way to go about reproducing this effect? It seems like mounting two Flip Minos could be really cool if this would work.
Anyone know of any really smooth examples of this technology?
I work as a software developer for a company that develops autostereoscopic displays (For seeing 3D without glasses - like the Nintendo 3DS will do) and stereoscopic software. We are currently developing a line of products for iOS devices to watch videos in 3D and see images and read comics (eventually) in 3D. So at some point this year when it all gets launched hopefully you can pick one up for like $15 for your iPod/iPhone/even iPad (although I don't know that the iPad version will end up being $15).
Edit: I should add that it appears the Nintendo 3DS is using a parallax barrier (but I don't know how true that is - I've heard it could be using a different system too, haven't seen one though). Our technology is using lenticular mainly, so the images will be quite different from both. Lenticular, for example, preserves the brightness of the image much better than parallax barrier.
Sounds like it could be called a "novelty" app to me, and one I'd really like to see happen. Regardless, what other methods are available for getting "3D" out of iOS devices? "Cross-eyed" 3D seems even more of a "novelty" to me.
EDIT: Perhaps "a line of products" means hardware in addition to software. I read it initially as meaning a line of apps.
You print vertical black lines on a transparent sheet of plastic, and attach it to a 10mm thick piece of transparent plastic, and attach all that to your monitor for a cheap and easy DIY parallax barrier. From the proper distance, your left eye sees a different set of pixels than your right eye. Add photoshop, and voila, 3D.
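The Photoshop step is basically column interleaving: alternate vertical pixel columns from the left and right views so that, through the barrier, each eye sees only its own set. A rough sketch (it assumes the barrier pitch lines up with every other pixel column, which in practice needs calibration):

```typescript
// Interleave two equally-sized views column by column: even pixel columns
// from the left image, odd columns from the right. With a matching barrier
// in front of the screen, each eye sees only "its" columns.
function interleaveColumns(left: ImageData, right: ImageData): ImageData {
  const { width, height } = left;
  const out = new ImageData(width, height);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const src = (x % 2 === 0 ? left : right).data;
      const i = (y * width + x) * 4;
      out.data[i] = src[i];         // R
      out.data[i + 1] = src[i + 1]; // G
      out.data[i + 2] = src[i + 2]; // B
      out.data[i + 3] = 255;        // fully opaque
    }
  }
  return out;
}
```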