Factors required for neurosurgical endoscopy

  1. Illumination is not usually a problem: a bright xenon arc lamp illuminates the scene via a fiber-optic bundle.

  2. High depth of field (DOF) is essential: objects must be in focus within a range from a few millimeters to several centimeters (a rough numerical estimate is sketched after this list).

  3. Very wide field of view (FOV), from 70° to 140°.

  4. Smallest possible camera diameter: to minimize the incision needed for camera insertion and to reduce interference with the surgical tools.

  5. Video images are used as part of a feedback loop: the surgeon acts on the information seen in real time. The most important information is the anatomical and pathological markers, as well as the position of the surgical instruments relative to the tissue.
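As a rough illustration of the depth-of-field requirement in item 2, the standard thin-lens DOF relations can be used to estimate the in-focus range of a miniature objective. The sketch below is illustrative only; the focal length, f-number, circle of confusion, and working distance are assumed values, not the specifications of any particular endoscope.

```python
# Rough depth-of-field (DOF) estimate for a miniature endoscope objective using
# the standard thin-lens DOF formulas. All numerical values (focal length,
# f-number, circle of confusion, working distance) are illustrative assumptions.

def depth_of_field(focal_mm, f_number, coc_mm, subject_mm):
    """Return the (near, far) limits of acceptable focus, in millimetres."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = hyperfocal * subject_mm / (hyperfocal + (subject_mm - focal_mm))
    if subject_mm >= hyperfocal:
        far = float("inf")  # beyond the hyperfocal distance everything stays sharp
    else:
        far = hyperfocal * subject_mm / (hyperfocal - (subject_mm - focal_mm))
    return near, far

# Assumed miniature objective: 1.5 mm focal length, f/15, 6 um circle of
# confusion, focused at a 15 mm working distance.
near, far = depth_of_field(focal_mm=1.5, f_number=15.0, coc_mm=0.006, subject_mm=15.0)
print(f"in focus from about {near:.0f} mm to {far:.0f} mm")
# -> in focus from about 10 mm to 31 mm
```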

    3D limitations of earlier endoscopes

  1. First-generation 3D systems, based on dual CCD cameras or rapidly alternating views, had poor resolution and other negative effects on the surgeon's senses (e.g., headache, dizziness, disorientation).

  2. The use of dual systems or scopes increases the diameter. Angled endoscopes are also lacking.

  3. The surgeons need to wear 3D glasses, which give a 3D sensation only at certain viewing angles.

    "Insect-eye" technology in neurosurgery

The larger diameter and low resolution of the first-generation systems make them unsuitable for neurosurgery. The "insect-eye" technology is a promising new technique for neurosurgical applications because of its high resolution and use of miniature cameras.

The idea of a stereoscopic camera based on a lenticular array ("integral fly-eye") was first proposed by the French physicist G. Lippmann. The Visionsense adaptation of this stereoscopic plenoptic camera is shown in Fig 3.4. The imaging objective is represented by a single lens (L) with two pupil openings at the front focal plane (P). This setup creates a telecentric objective, in which all the rays passing through the center of each pupil emerge as a parallel beam behind the lens. The CCD chip is covered by a lenticular array (LA), an array of cylindrical microlenses whose zero-power axis is perpendicular to the paper plane. Each lenticule covers exactly two pixel columns.

Rays that pass through a point at the left aperture (l) emerge as a parallel beam (dashed lines in the drawing) after the imaging lens. These rays are focused by the lenticular array onto the pixels on the right side under the lenslets (designated by dark rectangles). Similarly, rays that pass through the right aperture (r) (dashed-dotted lines) are focused by the lenslets onto the left ("dark") pixels. Thus a point O on the object is imaged twice: once through the upper aperture, generating an image on pixel o1, and once through the lower pupil, generating an image on pixel o2. The pixels o1 and o2 are the upper and lower views (in the real world, the left and right views) of the point O on the object. The distance between a pixel of the left view and the corresponding pixel of the right view (the disparity) is a function of the distance of the corresponding point from the camera.




Fig 3.4: Visionsense stereoscopic camera scheme
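Since the disparity is a function of the distance of the point from the camera, a depth estimate can be recovered from it. Below is a minimal sketch using the standard pinhole stereo relation Z = f * B / d; the focal length, pupil separation (baseline), and pixel pitch are assumed illustrative values, not Visionsense parameters.

```python
# Depth from disparity for a dual-aperture stereo camera, using the standard
# pinhole stereo relation  Z = f * B / d.  The focal length, baseline (pupil
# separation) and pixel pitch below are illustrative assumptions only.

FOCAL_MM = 2.0          # assumed effective focal length of the objective
BASELINE_MM = 0.8       # assumed separation between the two pupil openings
PIXEL_PITCH_MM = 0.003  # assumed 3 um pixels

def depth_from_disparity(disparity_px):
    """Distance (mm) of an object point, given its left/right disparity in pixels."""
    disparity_mm = disparity_px * PIXEL_PITCH_MM
    return FOCAL_MM * BASELINE_MM / disparity_mm

for d in (8, 16, 32, 64):
    print(f"disparity {d:2d} px -> depth {depth_from_disparity(d):5.1f} mm")
# Larger disparities correspond to points closer to the camera.
```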
A plenoptic camera can take advantage of the small pixel sizes enabled by modern VLSI design rules to encode a stereoscopic image on the image sensor. The surplus sampling frequency is thus converted into an essential depth cue for the surgeon. In the plenoptic camera there are two imaging systems:

  1. The imaging objective, which has a high f-number (in the range of 8 to 20) and therefore a large point spread function (PSF), several pixels wide.

  2. The lenticular array, which has a low f-number (on the order of 1.5), enabling it to focus the light into a single pixel (compare the spot-size estimates sketched below).
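The difference between these two f-numbers can be made concrete with the diffraction-limited spot-size estimate d = 2.44 * wavelength * f-number (the Airy disk diameter). The wavelength and pixel pitch below are assumed values chosen only to illustrate why the objective's PSF spans several pixels while a lenslet concentrates the light within a single pixel.

```python
# Diffraction-limited spot diameter (Airy disk), d = 2.44 * wavelength * f-number,
# compared against an assumed pixel pitch. Wavelength and pixel pitch are
# assumptions chosen for illustration only.

WAVELENGTH_UM = 0.55   # green light
PIXEL_PITCH_UM = 3.0   # assumed sensor pixel pitch

def airy_diameter_um(f_number):
    return 2.44 * WAVELENGTH_UM * f_number

for name, n in [("imaging objective, f/8  ", 8.0),
                ("imaging objective, f/20 ", 20.0),
                ("lenticular lenslet, f/1.5", 1.5)]:
    spot = airy_diameter_um(n)
    print(f"{name}: spot ~{spot:4.1f} um  (~{spot / PIXEL_PITCH_UM:.1f} pixels)")
```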

The light passes through the pupils (P_left and P_right) and is focused by the imaging objective. The PSF of the imaging objective (dotted line) is significantly wider than a lenslet, as shown in Fig 3.5. In comparison, the focus of a lenslet (which is in fact the image of a pupil) is smaller than a pixel width. Thus a portion of the image that is projected across a lenticule (two pixels wide) is refocused into a single pixel column of the sensor. The visual data from the sensor are then processed by software that reconstructs the stereoscopic image.


Fig 3.5: Point spread functions in the Visionsense stereoscopic camera
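Because each two-pixel-wide lenticule sends light from one pupil to one of its pixel columns and light from the other pupil to the neighbouring column, the raw frame interleaves the two views column by column. The reconstruction software therefore has to separate the two views before any rectification and matching. Below is a minimal sketch of that de-interleaving step, assuming even columns belong to one view and odd columns to the other; the actual Visionsense processing pipeline is proprietary and more involved.

```python
import numpy as np

def split_interleaved_frame(raw):
    """Separate a column-interleaved plenoptic frame into two half-width views.

    Assumes each lenticule covers exactly two pixel columns, with even columns
    receiving light from one pupil and odd columns from the other. The actual
    column assignment and any calibration are device specific.
    """
    view_a = raw[:, 0::2]   # columns under one side of each lenticule
    view_b = raw[:, 1::2]   # columns under the other side
    return view_a, view_b

# Example with a synthetic 480 x 640 frame
raw_frame = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
left_view, right_view = split_interleaved_frame(raw_frame)
print(left_view.shape, right_view.shape)   # (480, 320) (480, 320)
```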
The Visionsense plenoptic visualization system provides high-resolution stereo images from a compact device. The small, single-camera, dual-aperture device provides true stereo vision with efficient use of the image-sensor area. This technology is advantageous in applications that demand high-quality images from miniature cameras, which is why this visualization technique is used in neurosurgical applications.
