Thursday, August 5, 2010

Why 3D Makes Your Brain Cry

Far more eloquent and better-informed people than I have weighed in on the 3D fad in movies and video games. I'm going to talk about how your eyes and brain actually process 3D, and why current 3D technology just isn't good enough to replicate the effect with complete fidelity.

The first myth about 3D vision that needs to die a horrible, flaming, burning death is the misconception that closing one eye will destroy 3D vision. Human beings perceive depth in many ways, most of which are monocular--they require only one eye. Closer things appear more detailed than faraway things. Linear perspective, the same visual effect that makes railroad tracks converge in the distance, is another way to tell depth, and so is motion parallax--the way nearby objects seem to sweep past faster than distant ones as you move. Relative size of objects and objects overlapping one another round out the list--these are all monocular cues of depth perception, and they work just fine with one eye shut.

However, there is a benefit to binocular vision; this kind of seeing in 3D is called stereoscopic vision. When the eyes focus on a particular object--let's say, a tree--the tree falls on the center of the retina of each eye. But because the two eyes occupy different horizontal positions in the head, slightly different images of the scene surrounding the tree are projected onto the two retinas. Some neurons in the striate cortex, part of the brain's visual processing system, become extremely active when a visual stimulus produces retinal disparity--images falling on different parts of the retina in each eye. It's not that each eye sees a two-dimensional scene and the brain then combines them into a three-dimensional one. It's that each eye sees a slightly DIFFERENT view of the scene, and the difference between what the eyes "see" is what produces the perception of depth.
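The geometry behind this can be made concrete. In machine stereo vision, the horizontal offset (disparity) between where an object lands in the left and right images converts directly into distance by triangulation: depth = focal length x baseline / disparity. A minimal Python sketch, where the eye-spacing and focal-length numbers are illustrative assumptions rather than measurements of the human visual system:

```python
# Stereo triangulation: the bigger the disparity between the two
# images, the closer the object. Numbers here are illustrative.

def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Distance to an object given the horizontal offset (disparity)
    between its positions in the left- and right-eye images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px

baseline = 0.065   # ~6.5 cm between human eyes (typical interpupillary distance)
focal = 800.0      # arbitrary "focal length" in pixels for the sketch

for d in (40.0, 10.0, 2.0):  # big disparity = near, small disparity = far
    print(f"disparity {d:5.1f} px -> depth {depth_from_disparity(focal, baseline, d):6.2f} m")
```

Note how depth falls off as 1/disparity: very distant objects produce nearly zero disparity, which is why stereoscopic vision contributes most at close and middle range.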

When watching a 2D movie, both eyes receive the same image, so only monocular cues are available. There are monocular cues for depth within the flat plane of the screen, but no stereoscopic vision. Almost all 3D technology works by fooling your eyes into producing retinal disparity. 3D movies project two images onto the screen and filter the light of each image into a different polarization. Viewers wear 3D glasses whose polarized lenses match the polarities of the two projected images.

For example, let's say that there are two images of a tree on the screen. The left image may be polarized to setting X, and the right image may be polarized to setting Y. The left lens of the glasses is also polarized X, so only the left image reaches the left eye, and the right lens is polarized Y so only the right image reaches the right eye. This is, of course, quite a simplification, but that's the essential idea. Bada bing--you have two different images going to your retinas, and hey presto, those little neurons in the striate cortex are fooled into responding to retinal disparity.
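For linearly polarized systems, the filtering follows Malus's law: a polarizing lens passes a fraction cos² of the angle between the light's polarization and the lens's axis. A quick sketch, with the angles chosen for illustration ("X" as 0 degrees and "Y" as 90 degrees):

```python
import math

def transmitted(intensity, light_angle_deg, lens_angle_deg):
    """Malus's law: I = I0 * cos^2(theta), where theta is the angle
    between the light's polarization and the lens's filter axis."""
    theta = math.radians(light_angle_deg - lens_angle_deg)
    return intensity * math.cos(theta) ** 2

# Left image polarized at 0 deg ("X"), right image at 90 deg ("Y").
# Through the left lens (aligned at 0 deg):
print(transmitted(1.0, 0, 0))              # left image passes fully: 1.0
print(round(transmitted(1.0, 90, 0), 10))  # right image is blocked: 0.0

# Tilt your head 20 degrees and both lenses rotate with it:
print(round(transmitted(1.0, 90, 20), 3))  # ~0.117 of the wrong image leaks in
```

That last line is the catch with linear polarization: a 20-degree head tilt lets roughly 12% of the wrong eye's image leak through as crosstalk, which is why systems like RealD moved to circular polarization, where head tilt matters far less.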

Except not really. First of all, the polarization only applies to light being projected onto the screen. Looking at other objects doesn't make them appear "more 3D than before," and there isn't a screen yet--unless you're sitting right under an IMAX screen--that eats up your entire visual field. You'll still be able to see the theatre, the walls, your fellow moviegoers, and the visual cues, monocular and binocular alike, from these more mundane things won't mesh with the stereoscopic cues you're receiving from the screen. There's no way that giant tree can be that detailed, and also occupying the same space as the guy sleeping through the movie in front of you. Linearly polarized glasses even require you to keep your head level: tilting your head rotates the lenses out of alignment with the projected images and wrecks the illusion.

In addition, stereoscopic glasses do nothing to improve monocular cues of depth--things like shadows, lighting, and edges. Ever notice how 3D in movies resembles a pop-up book more than it does real life? The edges are too sharp and clean, and the transitions between objects and backgrounds are too abrupt. This is especially noticeable in live-action movies that have been converted to 3D after filming. In real life, our brains combine monocular and binocular cues into a single impression of depth, but that synergy simply doesn't work on the silver screen. Take detail: most movies are shot or animated in crisp focus throughout, so while the image may "pop" thanks to the stereoscopic effect, the monocular cue of a softer, less detailed background may be missing entirely. The result is a "fake"-looking 3D experience in which monocular and binocular cues don't match up.
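That missing background softness can even be quantified with the thin-lens "circle of confusion" formula photographers use for depth of field: it gives the diameter of the blur spot for an object that sits away from the focus distance. The lens and distances below (a hypothetical 50 mm lens at f/2.8, focused at 3 m) are illustrative assumptions:

```python
def blur_diameter_mm(focal_mm, f_number, focus_dist_mm, subject_dist_mm):
    """Blur-spot (circle of confusion) diameter for an out-of-focus
    object, from the thin-lens model:
        c = A * f * |S2 - S1| / (S2 * (S1 - f))
    where A is the aperture diameter, f the focal length,
    S1 the focus distance, and S2 the object's distance."""
    aperture = focal_mm / f_number        # aperture diameter A = f / N
    s1, s2 = focus_dist_mm, subject_dist_mm
    return aperture * focal_mm * abs(s2 - s1) / (s2 * (s1 - focal_mm))

# Camera focused on an actor 3 m away; scenery 30 m away behind them:
c = blur_diameter_mm(50.0, 2.8, 3000.0, 30000.0)
print(f"background blur spot: {c:.2f} mm")  # ~0.27 mm: visibly soft on film
```

A blur spot that size is an unmistakable "that's far away" signal to one eye alone. When a 3D production keeps everything in crisp focus, that monocular signal says "flat," while the glasses say "deep," and the mismatch is what reads as pop-up-book 3D.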

I'm not saying there's anything WRONG with 3D. It's an interesting technology, and it showcases some of what we understand about visual processing in the eyes and brain. But it in no way captures and seamlessly combines all the cues we use in the real world to see depth.

1 comment:

  1. This is very interesting. I do know the only time I've ever felt nauseated from motion is watching a first-person rollercoaster in "3D" ... probably due to the fact that the motion on the screen doesn't match my inner-ear-perceived motion. This doesn't happen to me when watching things in 2D for some reason, even playing flight sim style games where I do a bunch of looping and stuff.

    Otherwise, most of the time I don't seem to "notice" the 3D ... but then I'm always wearing glasses, so perhaps the visual disparity and sharp edges are there for me in real life. I remember there was a transition period for me to get used to seeing things with the glasses on. I remember it looked like everything was much flatter and that it was harder to tell depth because things farther away didn't blur out as much. But that may be a personal peculiarity. Still, I often can't remember whether I saw a movie in 3D or 2D. There's a lot of visual trickery that goes on in movies anyway, especially with shot sequence, so stereoscopic just seems to be another layer of that. But then again, I spend most of my day looking at a flat screen, so what do I know? ;)
