The best 3D theater is the one you have with you . . . 

 
 

     Well, one of the best.  Big-screen 3D cinema is good for spectacle, and a large 3D TV screen can bring some of that home, but there is a third type of 3D theater, as yet hardly noticed, that you can carry with you anywhere. 


     Right now it’s a small two-lensed viewer into which you place your phone or other portable media player.  This sort of player-and-viewer combination was not a viable alternative to other forms of 3D until 2010, when high-resolution screens of more than 300 pixels per inch appeared on mobile phones and media players.  When lenses magnify these screens, pixellation is hardly noticeable.


    In the near future, we will have affordable eyeglasses-like wearable screens that will bring a stereo world right up to our eyes.  This means that now is the time to start making and watching movies for portable 3D viewers, because all the necessary elements are in place and the field is wide open.  If you want to get started working and playing in this new medium, this website is dedicated to you, and to the idea of the VSS—the Very Small Screen.  See this page for some homebrew, made-for-the-portable-theater movies.


Differences compared to 3D TV, computer, and cinema screens

    The VSS viewer presents its pictures to the eyes with no loss of brightness from filters or LCD “shutters.”  Instead there is an unobstructed screenful of light going straight through a lens to the retina of each eye.


     The moving images arise within a dark box.  It is easy to overlook the significance of this isolation from the outer world, and there will be more to say about it in the Articles on this website.  For now, it should not be a great leap to imagine that the brain has a more direct and intimate connection to these pictures than it does when watching a screen “out there” in space, as in a theater or on a home screen, or even on a glasses-free 3D screen that you hold in your hand, like the recent stereocam cell phones and game platforms.


    The left and right views sit side by side on the screen, separated by a divider, a septum, that isolates each completely within a dark chamber under its own lens.  With wearable screens, you get a separate screen for each eye.  In either case, each lens focuses one eye on its own view, about as clean and noise-free an optical path as one could imagine.  The result is that, psychologically, the brain has no alternative but to treat the moving pictures as if they were views of the real world seen through the lenses and retinae of the eyes.
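The side-by-side layout is simple enough to manipulate yourself.  As a minimal sketch in plain Python (frames represented as lists of pixel rows; the function names here are our own, not from any particular library), splitting a frame into its left and right views, or swapping them to convert between parallel and cross-eyed arrangements, is just a slice:

```python
def split_sbs(frame):
    """Split a side-by-side stereo frame (a list of pixel rows) into left and right views."""
    half = len(frame[0]) // 2
    left = [row[:half] for row in frame]
    right = [row[half:] for row in frame]
    return left, right

def swap_views(frame):
    """Swap the two halves, e.g. to turn parallel-view footage into cross-view, or back."""
    left, right = split_sbs(frame)
    return [r + l for l, r in zip(left, right)]

# A toy 2x4 "frame": the left half is all "L" pixels, the right half all "R".
frame = [list("LLRR"), list("LLRR")]
left, right = split_sbs(frame)  # left rows are ["L", "L"], right rows are ["R", "R"]
```

In practice you would apply the same slicing to each video frame (for instance with NumPy array indexing), optionally re-inserting a black strip down the middle for viewers that expect a wide septum divider.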


     The final step, the fusion of the two views in the brain, happens with ease and great effect, as long as the photography and editing have been done with proper care.  There are no noise artifacts introduced by projection or glasses filtering, no loss in resolution or crosstalk from sharing the horizontal pixels or the refresh rate between left and right views.  There is instead a mysterious gain in resolution upon fusion, and together with the smaller screen, this makes it possible to use cheaper, lower-resolution cameras that would be unacceptable for enlarged or projected TV and cinema images.  The dark-box video stereoscope, both hand-held and head-mounted, can thus be seen as an accessible, low-budget laboratory for examining our ability to conjure up pictures of three-dimensional space and depth using two parallaxed image sources.


But it’s not just about space and depth

     In the various clips in the sampler movies above, texture, contour, tactile detail, and surface qualities like color, gloss, or sheen are all revealed with more “presence” when seen in fused stereo pictures.  (You can test this easily by closing one eye and then opening it to see the difference; some shots are also presented in 2D and then 3D for comparison.)  You will notice that sharpness improves dramatically in the fused view over the separate 2D views.  So eager is the brain to make use of parallax for fusion that it appears to subtract or filter out off-focus “noise” when it constructs the final stereo picture.


     One way to look at this is to see that stereo photography is about capturing the is-ness of everything photographed.  Position in space is only one aspect of is-ness; the skin, the bark, the outer membrane, the light-reflectivity of every object, the stuff things are made of as you would feel it if you touched them, all co-define space and depth.  We can’t have space without objects-in-space, and vice versa.  3D photography is about the shape and surface of things as they are, and 3D video is about how these things move or reflect movement.


     Also of interest is the density of elements making up a pictorial object.  The space within the fur of a squirrel tail is defined by the brush hairs making up the tail.  To stereo-videograph the squirrel tail is to photograph the air-space between the hairs, allowing us to see into its nature as a sort of holographic array, an impressionistic projection of a bushy tail.  In a similar perceptual feat, it is the comparative binocular filtering of tangled edge-perceptions that allows us to see into a thicket of grassy stalks as a three-dimensional space that we perceive to be enterable or separable.


       On the other hand, the gloss of chrome, polished aluminum, enamel, and varnish, and the sheen of water, leaf, petal, and feather, are all suitable subjects for the 3D cameras.  Moving reflections turn out to be surprisingly tangible in 3D.  Is it because of novelty?  Will we get used to this?


Stereo imperatives

       So we have here a new medium for capturing and presenting reality.  While stereo photography has been with us for 150 years, our ability to do it in moving color pictures (with stereo sound as well), to make it ourselves, and to have it with us all the time is new, and so unexpected as to be under nearly all media-watchers’ radar.  Clearly this medium has significant implications, which we explore in some detail in the Articles on this site.  Here are a few highlights:


      —As mentioned above, the dark enclosure around the two views, and the “mainline” pathway to the eyes and the brain, mean that this is by nature a more intimate and personal form of 3D. Cinema-style spectacle is perhaps an absurd thing to reach for.  Rather we may see a trend toward new kinds of documentary, more probing kinds of news or travelogue, more lifelike performance recording, more penetrating instructional podcasts—not only because they are in 3D, but also because the person viewing the picture is invited into the scene in a way not possible when the screen is out in space, in front of and separate from his or her neural pathways. 


    —Essentially the entire world will want to be explored and recaptured, this time in moving stereo. The addition of spatial and textural information in this sort of photography places a new demand on the aesthetic grammar of motion pictures. Stereo photographers and editors who understand this will be needed in great numbers.  Depth mapping, texture grabbing, atmosphere and space conjuring, all will be whole new artistic occupations. The whole scene is the thing in a way not possible before—endless wonders beckoning the stereographer, all the better that hardly anyone sees it coming.


    —3D anywhere, anytime, in your pocket, is one thing; having it connected to the world is another.  After all, we are talking about mobile phones and wi-fi connected media players.  As bandwidth becomes more available and dual video cameras become common, we are compelled to ask whether this always-available 3D medium will lead to new communication forms, perhaps powerful enough to amount to a revolution in global knowledge, if not understanding.  Post-linguistic, post-symbolic communication?  Where still and 2D photography and video lack the force of intimacy, moving 3D is compelling enough to be watched for its own sake, and the entire world can be the subject.  Crowd sourcing?  How about planetary sourcing?


    —When two cameras are used to capture a scene in depth, rather than just one camera for 2D, we are in a new realm of sensory extension, and we have removed a very great barrier to personal immersion in any given scene.  In effect, we are placing our eyes outside ourselves and into another location—remote viewing, for real this time—rather than training a lens on a scene.  No longer a “window on the world,” tele-vision finally comes into its own as remote seeing, remote consulting, virtual visiting.  The prematurely applied term “telepresence” will have a real future beyond 2D talking heads.


     To help you get started, we will be offering videos on this site to pass on what we’ve learned over the last several years preparing for this new delivery system.  As mentioned above, our first two sample videos are available here.


     More will be coming soon, including how to take and edit your own movies, some surprising cameras you can use, the software we use, and so on, so please stay tuned.

 

A 3D video sampler.  Both natural and artificial things are sharpened and enlivened when massaged by your brain’s own stereo vision pathways.  Let your brain do the final Photoshopping: the first part of this movie is for cross-eyed viewing; the second part, at the halfway point, is for parallel viewing.  To learn how to do one or the other, see the freeview a 3D movie page.  The wide center divider in these movies is the format needed for the Hasbro My3D.  See the page on the video stereoscope.




. . . in your pocket.