Bionic vision can be adjusted to the scene type: different scenes and visual aid devices can be matched to increase their effectiveness. Before a scene can be translated by an algorithm, it must first be categorized. Some automatic scene-categorization models exist; Chernyak and Stark, for instance, created one using Bayes' theorem, in which segment features such as aspect ratio and average color are extracted from a test image. Once the scene has been accurately determined, the authors propose applying context-dependent importance weighting to the image. Importance mapping, which identifies the most important objects in an image, is already used in several applications such as lossless compression, military target detection and advertising. In an experiment testing whether scene-weighted processing can improve perception of low-quality images, Figure 2 was presented to 20 normally sighted people. The authors concluded that scene-dependent importance mapping is a useful tool for the automatic optimization of low-quality images. Their work builds on importance-map methods, which bring together aspects of an image known to influence the attention of a human viewer, such as shape, size and contrast.
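As a rough illustration of the Bayes'-theorem categorization step described above, the following sketch classifies a scene from coarse segment features (aspect ratio and average color). The scene classes, feature bins, and probability tables here are invented for illustration; a real system like Chernyak and Stark's would estimate them from training data.

```python
# Hypothetical sketch of Bayes'-theorem scene categorization from segment
# features (aspect ratio, average color). All classes and numbers below
# are made up for illustration, not taken from the cited model.
import math

# P(feature value | scene), discretised into coarse bins.
LIKELIHOODS = {
    "indoor":  {"aspect_wide": 0.2, "aspect_tall": 0.8,
                "color_warm": 0.7, "color_cool": 0.3},
    "outdoor": {"aspect_wide": 0.7, "aspect_tall": 0.3,
                "color_warm": 0.3, "color_cool": 0.7},
}
PRIORS = {"indoor": 0.5, "outdoor": 0.5}

def classify(features):
    """Return the scene class maximising P(scene | features) via Bayes'
    theorem, assuming the segment features are conditionally independent."""
    scores = {}
    for scene, prior in PRIORS.items():
        log_post = math.log(prior)
        for f in features:
            log_post += math.log(LIKELIHOODS[scene][f])
        scores[scene] = log_post
    return max(scores, key=scores.get)

print(classify(["aspect_wide", "color_cool"]))  # → outdoor
```

Working in log-probabilities keeps the products of many small likelihoods from underflowing; the class with the highest posterior score would then select the context-dependent importance weighting to apply.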
Figure 2 Contact Lenses
These contact lenses, combined with a computer chip, can connect to a wireless device and project into the wearer's field of vision the data that would normally be displayed on that device. High-resolution images such as video games, text and music would be displayed before the user. Researchers at the University of Washington believed they could build a very small functional device that could be embedded into a contact lens. A metal circuit and light-emitting diodes are placed in a polymer-based lens that is biologically compatible with the eye, and ultra-thin antennas, a few nanometers thick, send information wirelessly to devices.
Prototype of the contact lens fused with computer chip
3.41 Adaptive Optics
Adaptive optics is a technique originally developed to sharpen images for military surveillance devices and astronomical telescopes. Applied to the human eye, adaptive optics allows people to see at high resolution, and it also works in reverse, allowing researchers to capture detailed images of the eye's retina. David R. Williams of the University of Rochester has developed an approach to obtain this vision. The long-term goal of the technology is to improve human vision, but the present focus is on preventing vision loss and correcting eyesight problems. Light waves are collected with a deformable mirror that can be shaped to compensate for distortions in an image; the mirror is then coupled with a high-resolution camera that takes images of the retina, so that the distortions produced by the imperfections in the eye can be corrected. The method was intended to help cure bad vision, but when people with normal vision used the adaptive optics they experienced up to a six-fold improvement in sight.

The EyeTap differs from devices such as head-mounted displays (HMDs). HMDs are normally used to provide or add information to what a user perceives. However, “In EyeTap devices, the diverter places the centre of projection of the camera at the centre of projection of the lens of an eye of the wearer. When no computer mediation is used, EyeTap video can be displayed to the user in such a way that the user perceives what he/she would otherwise have in the absence of the device. EyeTap mediates a portion of the user's vision, in such a way that it is integrated with the un-mediated portion of the user's field of view, without any mismatch between the mediated area and the real world.”
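The deformable-mirror correction described in the adaptive-optics paragraph above can be sketched as a simple closed loop: the mirror is repeatedly reshaped to cancel the wavefront error the sensor measures. The gain, iteration count, and one-dimensional aberration model below are assumptions for illustration, not Williams's actual system.

```python
# Minimal sketch (assumed, simplified) of the adaptive-optics idea:
# drive a deformable mirror toward the negative of the measured
# aberration so the residual wavefront error shrinks each iteration.
import numpy as np

def correct_wavefront(aberration, gain=0.5, iterations=20):
    """Iteratively reshape the mirror to cancel the measured residual;
    returns the final residual error (aberration + mirror shape)."""
    mirror = np.zeros_like(aberration)
    for _ in range(iterations):
        residual = aberration + mirror   # what the wavefront sensor sees
        mirror -= gain * residual        # reshape mirror to cancel it
    return aberration + mirror           # final residual error

# Example: a static aberration over a 1-D pupil cross-section.
aberration = np.sin(np.linspace(0, np.pi, 8))
residual = correct_wavefront(aberration)
print(np.max(np.abs(residual)))  # residual shrinks toward zero
```

With a loop gain below 1, the residual decays geometrically by a factor of (1 - gain) per iteration, which is why the retinal images taken through the corrected path appear free of the eye's own distortions.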
 Jennifer Anderson, “Bionic Vision: Rare Operation Brings Sight to Blind Woman” Ergonomics Today, 2005.
 Steve Mann, “Continuous lifelong capture of personal experience with EyeTap,” Proceedings of the 1st ACM workshop on Continuous archival and retrieval of personal experiences, p.1-21, October 15-15, 2004, New York, New York, USA
 Alexi Mostrous, “And next- the contact lens that lets e-mail really get in your face,” TimesOnline, 2008.
 Boyle, J.; Maeder, A.; Boles, W., "Scene specific imaging for bionic vision implants," Image and Signal Processing and Analysis, 2003. ISPA 2003. Proceedings of the 3rd International Symposium, pp. 423-427 Vol.1, 18-20 Sept. 2003
 Gregg J. Suaning, Nigel H. Lovell, Klaus Schindhelm, Minas T. Coroneo, “The bionic eye (electronic visual prosthesis): A review,” Clinical and Experimental Ophthalmology, 1998.
 Brendan Z. Allison, Elizabeth Winter Wolpaw, Jonathan R. Wolpaw. (2007) Brain–computer interface systems: progress and prospects. Expert Review of Medical Devices 4:4, 463-474. Online publication date: 1-Jul-2007.
 Lotfi B. Merabet, Joseph F. Rizzo, Alvaro Pascual-Leone, Eduardo Fernandez. (2007) ‘Who is the ideal candidate?’: decisions and issues relating to visual neuroprosthesis development, patient testing and neuroplasticity. Journal of Neural Engineering 4:1, S130-S135. Online publication date: 1-Apr-2007.
 Normann R, Maynard E, Rousche P, Warren D, A neural interface for a cortical vision prosthesis, Vision Research 39(15), pp. 2577-2587, 1999.
 Suaning G, Lovell N, CMOS Neurostimulation System with 100 Electrodes and Radio Frequency Telemetry, Inaugural Conference of the IEEE EMBS (Vic), Melbourne, pp. 37-40, Feb 1999.
 Walker, B. N., & Lindsay, J. (2006). Navigation performance with a virtual auditory display: Effects of beacon sound, capture radius, and practice. Human Factors, 48(2), 265-278.
 Ninad Thakoor, “Aiming to help the Blind See,” University of Texas Arlington College of Engineering.
 No author, “EyeTap: The eye itself as display and camera,” EyeTap.