When shooting a 3D image, two cameras capture separate images of the same object from slightly different angles at a single viewpoint. When played back on a plano-stereoscopic display, the left image is shown only to your left eye and the right image only to your right eye; the brain then fuses these two images to give a perception of depth. A pair of matched cameras, typically spaced at roughly the adult ‘interocular’ (eye-to-eye) distance, is used to capture the image. This horizontal offset produces a binocular disparity which, together with other cues in the scene, such as the relative size of objects, occlusion, shadows and relative motion, is processed by the brain to create depth perception. The distance between the left and right cameras is called the ‘interaxial’ (Fig 1.3). By adjusting the interaxial distance between the cameras, we can dynamically increase or decrease the depth in a scene.
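The way disparity scales with interaxial separation can be sketched with a simple parallel pinhole-camera model. This is a hypothetical illustration, not from the text: the function name, parameters and units are assumptions.

```python
def sensor_disparity(interaxial_m, focal_length_m, object_dist_m):
    """Horizontal disparity (metres, on the sensor) between the left and
    right images of a point, for two parallel pinhole cameras.
    Disparity is directly proportional to the interaxial distance, which
    is why widening the rig increases the perceived depth."""
    return focal_length_m * interaxial_m / object_dist_m

# Doubling the interaxial doubles the disparity of every object in frame:
d_narrow = sensor_disparity(0.065, 0.035, 5.0)  # roughly eye-width rig
d_wide   = sensor_disparity(0.130, 0.035, 5.0)  # rig twice as wide
```

In this model distant objects produce less disparity than near ones, which matches the intuition that depth flattens out towards the horizon.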
Fig 1.3: interaxial distance
The convergence point determines where the object appears in relation to the screen.
Convergence can be adjusted by toeing-in the cameras (angling them inwards) or by horizontal image translation (H.I.T.) in post-production. Simultaneously manipulating the convergence and the interaxial, as shown in Fig 1.4, gives control over both the depth and the placement of objects within that 3D space.
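Under the same parallel-camera model, converging by H.I.T. amounts to shifting the two images horizontally until a chosen distance has zero parallax: objects at that distance sit on the screen plane, nearer objects appear in front of it and farther ones behind it. A minimal sketch, with names and units assumed for illustration:

```python
def screen_parallax(interaxial_m, focal_length_m, convergence_dist_m, object_dist_m):
    """Signed parallax (sensor units) after converging parallel cameras by
    horizontal image translation. Zero at the convergence distance,
    negative in front of the screen plane, positive behind it."""
    return focal_length_m * interaxial_m * (1.0 / convergence_dist_m
                                            - 1.0 / object_dist_m)

conv = 4.0  # converge on an object 4 m from the rig
on_screen = screen_parallax(0.065, 0.035, conv, 4.0)  # zero: at the screen plane
in_front  = screen_parallax(0.065, 0.035, conv, 2.0)  # negative: in front of screen
behind    = screen_parallax(0.065, 0.035, conv, 8.0)  # positive: behind the screen
```

Changing the convergence distance slides the whole scene forwards or backwards through the screen plane without altering the overall depth range, whereas changing the interaxial stretches or compresses that range.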