3D Cinematography

When shooting a 3D image, two cameras capture separate images of the same object from slightly different angles at a single, fixed viewpoint. When the footage is played back on a plano-stereoscopic display, the left image is shown only to the viewer's left eye and the right image only to the right eye; the brain then fuses the two images to produce a perception of depth. A pair of matched cameras, typically spaced at roughly the adult 'interocular' distance, is used to capture the image, and this horizontal offset produces a binocular disparity. The brain processes this disparity, together with other information in the scene, including the relative size of objects, occlusion, shadows and relative motion, to create depth perception. The distance between the left and right cameras is called the 'interaxial' distance (Fig 1.3). By adjusting the interaxial distance between the cameras, the depth in a scene can be dynamically increased or decreased.



Fig 1.3: Interaxial distance
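
As a rough illustration of how the interaxial distance translates into on-screen disparity, and hence perceived depth, the short Python sketch below uses a simple converged pinhole-camera approximation (disparity ≈ focal length × interaxial × (1/convergence distance − 1/object distance)). The function name and the numeric values are illustrative assumptions, not figures taken from this report.

    # Minimal sketch: how interaxial distance affects disparity under a
    # simple converged pinhole-camera model. All names and numbers are
    # illustrative assumptions, not values used in this report.

    def image_disparity(interaxial_m, focal_mm, convergence_m, object_m):
        """Approximate on-sensor disparity (mm) of a point at object_m metres.

        Zero at the convergence distance; positive values appear behind the
        screen plane, negative values appear in front of it.
        """
        return focal_mm * interaxial_m * (1.0 / convergence_m - 1.0 / object_m)


    if __name__ == "__main__":
        # Widening the interaxial from 65 mm to 90 mm increases disparity,
        # and therefore the perceived depth, for the same scene.
        for b in (0.065, 0.090):                 # interaxial (m)
            for z in (1.0, 2.0, 5.0):            # object distance (m)
                d = image_disparity(b, focal_mm=35.0, convergence_m=2.0, object_m=z)
                print(f"interaxial={b*1000:.0f} mm, object at {z:.0f} m -> "
                      f"disparity {d:+.3f} mm")

Running the sketch shows zero disparity at the 2 m convergence distance, negative disparity for nearer objects and positive disparity for farther ones, with all values scaling up as the interaxial is widened.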

The convergence point determines where an object appears in relation to the screen: objects at the convergence distance appear on the screen plane, while objects nearer or farther than it appear in front of or behind the screen.



Convergence can be adjusted either by toeing-in the cameras (angling them inwards) or by horizontal image translation (H.I.T.) in post-production. Manipulating the convergence and the interaxial simultaneously, as shown in Fig 1.4, gives control over the overall depth and over the placement of objects within that 3D space.
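
To make the H.I.T. step concrete, the following is a minimal Python/NumPy sketch of re-converging a stereo pair in post-production by cropping the left and right views in opposite directions, which changes every point's screen parallax by a fixed number of pixels. The function name, frame size and shift amount are illustrative assumptions rather than part of any specific post-production tool.

    # Sketch of horizontal image translation (H.I.T.): cropping the two views
    # in opposite directions shifts the zero-parallax (convergence) plane.
    import numpy as np

    def horizontal_image_translation(left, right, shift_px):
        """Re-converge a stereo pair by cropping, changing every point's
        screen parallax by shift_px pixels.

        shift_px > 0 adds positive parallax (the scene recedes behind the
        screen); shift_px < 0 adds negative parallax (the scene comes
        forward). Both views are returned at the same, slightly narrower width.
        """
        h = abs(shift_px)
        if h == 0:
            return left, right
        if shift_px > 0:
            # Drop h columns from the left edge of the left view and the
            # right edge of the right view: parallax grows by h pixels.
            return left[:, h:], right[:, :-h]
        # Mirror of the above: parallax shrinks by h pixels.
        return left[:, :-h], right[:, h:]


    if __name__ == "__main__":
        # Two dummy 1080x1920 RGB frames stand in for the captured pair.
        left = np.zeros((1080, 1920, 3), dtype=np.uint8)
        right = np.zeros((1080, 1920, 3), dtype=np.uint8)
        l2, r2 = horizontal_image_translation(left, right, shift_px=12)
        print(l2.shape, r2.shape)   # (1080, 1908, 3) (1080, 1908, 3)

Unlike toeing-in the cameras on set, this translation is purely a horizontal shift applied after capture, which is why it can be revisited freely in post-production.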



