
Future Directions in Sensation and Perception


As discussed earlier in this chapter, recent discoveries in the behavioral neurosciences have greatly affected research in sensation and perception. An appreciation of the extent to which sensory systems process information in parallel required us to reconceptualize how sensory systems build our perceptual experiences. These advances have led a substantial number of researchers to focus their work on explaining perception at this level of analysis. Today, research in sensation and perception often involves using psychophysical data to make inferences about the underlying neuroanatomy and neurophysiology. This theoretical approach to understanding sensation and perception, called neuroreductionism, has enjoyed great success. Many of the simpler perceptual phenomena can be explained by (or related to) the structure or function of underlying neural structures.

We have every reason to believe that this trend will continue and that other equally stunning discoveries are just around the corner. However, we need to recognize that the neuroreductionist approach may eventually yield diminishing returns. Many of the perceptual phenomena that are amenable to simpler neuroreductionist explanations have already been described, and the extent to which this approach can provide similarly satisfactory explanations of higher-level perceptual phenomena remains to be demonstrated.

Additionally, as a theoretical approach to understanding perception, neuroreductionism has its share of detractors. William Uttal (1998), in his book Toward a new behaviorism: The case against perceptual reductionism, argues that researchers in sensation and perception should resist the very seductive enterprise of trying to link psychophysical data to specific underlying neural mechanisms. One theoretical problem with this approach, according to Uttal, is demonstrating that neural events are equivalent to mental events (what he terms "psychoneural equivalence"); he argues that it will always be difficult, if not impossible, to satisfactorily relate the two, and that psychologists should not feel compelled to attempt it. Uttal further argues that "...if perceptual psychology is to survive, and not be inappropriately absorbed into the neural or computational sciences in the next millennium, it will have to return to its behaviorist, positivist roots" (p. xii).

Although Uttal's (1998) argument is a compelling one, such a large shift in theoretical approach is unlikely. First, the neuroreductionist approach continues to provide us with valuable insights into sensory mechanisms. Second, the advent of superior brain imaging techniques, such as fMRI, has opened new avenues for researchers to visualize perception's neurological substrates in the awake, behaving human.

However, despite today's neuroreductionist leanings, we still recognize that our individual perceptions are best described as gestalts, where the whole is far more than the sum of the anatomical or physiological parts. Many psychologists would also agree that an understanding of these higher-order, emergent properties will do more to enhance our understanding of human sensation and perception than will neuroreductionist explanations involving the activity of small groups of neurons. Psychophysics, the set of methodological tools that sensory psychologists possess, is ideally suited to investigating perception at this level of analysis. Perhaps the unique promise of sensory psychology, relative to the other perceptual sciences, is that it provides explanations of behavior at the molar rather than the molecular level. In a sense, this may be our "scientific niche."

We believe, then, that some of the most dramatic and important contributions psychology will make in the near future will be to our understanding of higher-level perceptual mechanisms. We will see more studies of phenomenology, perception and action, and sensory integration. A good example of the intriguing research techniques and issues on the horizon for perceptual psychology is virtual reality (VR).


Virtual Reality


VR is the development of artificial environments that can be navigated directly. These environments can be relatively simple or very complex, and they fall into two general categories: window on the world and immersion. In window on the world, the user views the environment as if through a window: the monitor screen serves as the window, and what appears on the screen provides the visual information about the world. This is the type of VR involved in most video games, and a very primitive version is illustrated in Figure 21. The more compelling and interesting type of VR occurs when the person is immersed in the environment. This type of VR uses helmet-mounted displays to generate the visual information, often has an integrated sound system, and occasionally provides tactile feedback. It is this latter type of VR that holds the most interest, but research is proceeding on both types.


Figure 21. Here is a very primitive virtual environment of the window on the world type. Use the arrows to navigate to the right and the left and observe the movement of the two objects in the display. [Figure 21 description]
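The depth impression in a window-on-the-world display like Figure 21 comes largely from motion parallax under perspective projection: when the viewpoint shifts sideways, near objects are displaced across the screen more than far objects. The sketch below is a minimal illustration of that geometry, not the code behind Figure 21; the depths, positions, and focal length are illustrative values.

```python
# A minimal sketch of the geometry behind motion parallax in a window on the
# world display (not the actual code for Figure 21). Under perspective
# projection, a sideways shift of the viewpoint displaces near objects across
# the screen more than far objects. Depths, positions, and the focal length
# below are illustrative values.

def screen_x(world_x, depth, camera_x, focal_length=1.0):
    """Horizontal screen position of a point at (world_x, depth) for a camera at camera_x."""
    return focal_length * (world_x - camera_x) / depth

near_square = {"world_x": 0.0, "depth": 2.0}   # the closer (red) square
far_square = {"world_x": 0.0, "depth": 8.0}    # the farther (blue) square

for camera_x in (0.0, 0.5, 1.0):               # pressing an arrow = shifting the viewpoint
    near = screen_x(**near_square, camera_x=camera_x)
    far = screen_x(**far_square, camera_x=camera_x)
    print(f"viewpoint at {camera_x:+.1f}: near square at {near:+.3f}, far square at {far:+.3f}")
```

Running the sketch shows the nearer square sweeping across the screen about four times as far as the distant one for the same viewpoint shift, which is the cue that makes the red square in Figure 21 appear closer.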

VR is both a research technique, because of its ability to provide sensory input to multiple sensory systems in a controlled manner, and a research area, because it is also an application in which sensory knowledge will be fundamental for success. Examples of applications of VR relevant to psychology include clinical psychology (Huang, Himle, & Alessi, 2000; Jang, Ku, Shin, Choi, & Kim, 2000; Roessler, Mueller-Spahn, Baehrer, & Bullinger, 2000), neuropsychological evaluation (Kesztyues et al., 2000), memory research (Gamberini, 2000), and education and training (Cromby, Standen, & Brown, 1996; Mohler, 2000).

One research advantage of the most advanced VR systems is that they can provide controlled inputs to the visual, auditory, and tactile systems. To date, the vast majority of studies in sensation and perception have investigated the senses separately. However, many experiences are based upon inputs from multiple sensory systems, such as hearing and seeing a bat hit a ball. Consider how jarring it is to sit so far away that the sound and the sight are not integrated.
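To put a rough number on the baseball example, light reaches the eye essentially instantaneously while sound travels at roughly 343 m/s in air, so the lag between seeing and hearing the hit grows with viewing distance. The sketch below is a back-of-the-envelope illustration; the distances are arbitrary.

```python
# A back-of-the-envelope illustration of the audiovisual lag in the baseball
# example. Light arrives essentially instantly; sound travels at roughly
# 343 m/s in air (at about 20 degrees C). The viewing distances are arbitrary.

SPEED_OF_SOUND_M_PER_S = 343.0

for distance_m in (10, 50, 100, 150):
    lag_ms = distance_m / SPEED_OF_SOUND_M_PER_S * 1000
    print(f"{distance_m:4d} m from the batter: sound lags the sight by about {lag_ms:.0f} ms")
```

At 150 m the sound arrives nearly half a second after the sight of the hit, which is why the two signals no longer feel like a single event from the cheap seats.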

The study of body orientation, for example, focuses on a fundamentally integrated sensory system. A series of recent studies suggests that visual information alone may be sufficient for determining whole-body translation (linear movement) in a virtual environment, but that feedback from the tactile systems may be needed for accurate determination of rotation (Chance & Loomis, 1997; Richardson, Hegarty, & Montello, 1997).

Richardson et al. (1997) found that going around a staircase in a virtual building led to larger errors in participants' judgments of their location relative to their starting point than either learning the environment from maps or actually moving through the real version of the environment. Chance and Loomis (1997) studied perception of direction in individuals moving in virtual environments, with or without tactile feedback. They found that if a person actually rotates but translates via the virtual environment, thus receiving the tactile input from the rotation, the person keeps a sense of direction far better. We know that visual input is suppressed during the saccadic eye movements that accompany body rotations. Perhaps the orienting system, not expecting good visual input during physical rotation, has developed a tendency to rely more on tactile input (Krantz & White, 1989; Volkmann, 1986). The need to rely on tactile input may also reflect the fact that during the illusion of rotation in a VR environment the vestibular system is not activated, an illustration of the importance of understanding the integration of different sensory systems (Cohn, Dizio, & Lackner, 2000). The VR environment is especially suited to studying multisensory and sensory-motor integration.

Sensory research is also proving to be helpful to engineers working on VR systems. In a recent paper, Cutting (1997) reviews the visual information needed for VR applications, including how space perception and the use of depth cues can assist VR engineers in developing appropriate visual inputs. An important feature of Cutting's work is his quantitative approach to VR. Just as it was necessary to develop equations for color matching before it could be used in monitors and printing, so it will be necessary to provide quantitative functions for other visual capabilities before they can be applied to VR. Thus, research in sensation and perception may well take the form of taking well-understood phenomena and developing quantitative models for application. This research may also reveal new visual functions that need to be explored. One such issue is the location of the center of projection relative to the person's eye height. Dixon, Wraga, Proffitt, and Williams (2000) found that the relationship between a subject's eye height and the center of projection profoundly affected the perception of size.
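One well-known route by which eye height can scale perceived size is the horizon-ratio relation: on level ground the horizon intersects every object at the observer's eye height, so an object's height can be recovered in eye-height units. The sketch below is a hedged illustration of that general principle, not Dixon et al.'s (2000) model, and shows how a center of projection that implies a different eye height could shift the recovered size; the heights used are arbitrary example values.

```python
# A hedged illustration of eye-height scaling via the horizon-ratio relation,
# not Dixon et al.'s (2000) model. On level ground the horizon intersects every
# object at the observer's eye height, so an object's height can be recovered
# in eye-height units. If the display's center of projection implies a different
# eye height than the observer's own, sizes recovered this way scale with it.
# The heights below are arbitrary example values.

def height_from_horizon_ratio(eye_height_m, fraction_below_horizon):
    """Object height implied by the fraction of the object lying below the horizon."""
    return eye_height_m / fraction_below_horizon

fraction = 0.8                 # the horizon cuts the object 80% of the way up
true_eye_height_m = 1.6        # observer's actual eye height
simulated_eye_height_m = 1.2   # eye height implied by the display's center of projection

print(height_from_horizon_ratio(true_eye_height_m, fraction))       # 2.0 m
print(height_from_horizon_ratio(simulated_eye_height_m, fraction))  # 1.5 m
```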

Another interesting question related to VR, both as a research question and as an application issue, is the difference between the two forms of VR. The experience of VR with immersion techniques is far more immediate than with window on the world. What are the features that make this so? One difference is that the field of view tends to be far more restricted in window on the world (Dichgans & Brandt, 1978), though Dixon et al. (2000) found that an immersion technique with a restricted field of view showed as strong a relationship between eye height and perceived size as did a full immersion technique. The window on the world condition in the same paper showed no effect of the relationship between eye height and center of projection. These results suggest that the difference between the two forms of VR is more than just a difference in the size of the field of view. All in all, VR is a fruitful field for psychological research into sensation and perception, and vice versa. In fact, it appears that the development of VR and the use of VR as a research tool in sensation and perception may be tightly intertwined.

Summary


Today, research in sensation and perception continues to identify interesting and important perceptual phenomena that contribute to our understanding of sensory system structure and function, the nature of perceptual processes, and the human mind. The discovery of additional cone photoreceptor types in the retina is one recent example of a contribution to our fundamental understanding of the visual system. Studies of sensation and perception have also made important contributions to applied fields such as human factors and neuropsychology. In the case of human factors, research in sensation and perception has helped in the design of machines that better fit human capabilities. In the case of neuropsychology, this research has contributed to our understanding of a variety of neuropathological disorders and has shown promise as a tool for their early identification. Future research in sensation and perception will continue to involve studies of fundamental sensory processes as well as complex perceptual mechanisms. One area that shows particular promise is the study of integrated perceptual experiences such as those provided by VR.

References

Alpern, M., & Wake, T. (1977). Cone pigments in human deutan colour vision defects. Journal of Physiology, 266, 595-612.

American Psychiatric Association. (1994). Diagnostic and statistical manual of mental disorders (4th ed.). Washington, DC: American Psychiatric Association.

Ballew, H., Brooks, S., & Annacelli, C. (2001, March). Children with dyslexia have impaired low-contrast visual acuity. Presented at the 47th Annual meeting of the Southeastern Psychological Association, Atlanta.

Bassi, C., Solomon, K., & Young, D. (1993). Vision in aging and dementia. Optometry and Vision Science, 70, 809-813.

Blanks, J., Hinton, D., Sadun, A., & Miller, C. (1989). Retinal ganglion cell degeneration in Alzheimer's disease. Brain Research, 501, 364-372.

Boring, E. (1950). A history of experimental psychology (2nd ed.). New York: Appleton Century Crofts.

Borsting, E., Ridder, W., Dudeck, K., Kelley, C., Matsui, L., & Motoyama, J. (1996). The presence of a magnocellular defect depends on the type of dyslexia. Vision Research, 36, 1047-1054.

Brunswick, N., & Rippon, G. (1994). Auditory event-related potentials, dichotic listening performance and handedness as indices of lateralization in dyslexic and normal readers. International Journal of Psychophysiology, 18, 265-275.

Carpenter, R. H. S. (1977). Movements of the Eyes. London: Pion.

Chance, S., & Loomis, J. (1997). From sensory inputs to knowledge of spatial layout. A paper presented at the 38th annual meeting of the Psychonomic Society, Philadelphia, PA.

Chino, Y., Smith, E., Hatta, S., & Cheng, H. (1997). Postnatal development of binocular disparity sensitivity in neurons of the primate visual cortex. The Journal of Neuroscience, 17, 296-307.

Cohn, J. V., Dizio, P., & Lackner, J. R. (2000). Reaching during virtual rotation: Context specific compensations for expected coriolis forces. Journal of Neurophysiology, 83, 3230-3240.

Cohn, S., Emmerich, D., & Carlson, E. (1989). Differences in the responses of heterozygous carriers of colorblindness and normal controls to briefly presented stimuli. Vision Research, 29, 255-262.

Collett, T., & Harkness, L. (1982). Depth perception in animals. In Ingle, Goodale, & Mansfield (Eds.) Analysis of visual Behavior (pp. 111-176). Cambridge: MIT Press.

Cornelissen, P., Hansen, P., Hutton, J., Evangelinou, V., & Stein, J. (1998). Magnocellular visual function and children's single word reading. Vision Research, 38, 471-482.

Cornelissen, P., Richardson, A., Mason, A., Fowler, S., & Stein, J. (1995). Contrast sensitivity and coherent motion detection measured at photopic luminance levels in dyslexics and controls. Vision Research, 35, 1483-1494.

Cromby, J. J., Standen, P. J., & Brown, D. J. (1996). The potentials of virtual environments in the education and training of people with disabilities. Journal of Intellectual Disability Research, 40, 489-501.

Cronin-Golomb, A., Corkin, S., Rizzo, J., Cohen, J., Growdon, J., & Banks, K. (1991). Visual dysfunction in Alzheimer's disease: Relation to normal aging. Annals of Neurology, 29, 41-52.

Cronin-Golomb, A., Suguira, R., Corkin, S., & Growdon, J. (1993). Incomplete achromatopsia in Alzheimer's disease. Neurobiology of Aging, 14, 471-477.

Cutting, J. (1997). How the eye measures reality and virtual reality. Behavior Research Methods Instruments and Computers, 29, 27-36.

Dartnall, H., Bowmaker, J., & Mollon, J. (1983). Human visual pigments: Microspectrophotometric results from the eyes of seven persons. Proceedings of the Royal Society of London B., 220, 115-130.

DeValois, R. L., & DeValois, K. K. (1975). Neural coding of color. In E. C. Carterette and M. P. Friedman (eds.), Handbook of Perception, vol 5. (pp. 117-166). New York, NY: Academic Press.

DeYoe, E. A., & Van Essen, D. C. (1988). Concurrent processing streams in monkey visual cortex. Trends in Neuroscience, 11, 219-226.

Dichgans, J., & Brandt, T. (1978). Visual-vestibular interaction: effects on self-motion perception and postural control. In H. W. Leibowitz & H. Teuber (Eds.) Handbook of sensory physiology, vol. VIII: Perception, (pp. 755-804). Heidelberg, Germany: Springer.

Dixon, M. W., Wraga, M., Proffitt, D. R., & Williams, G. G. (2000). Eye height scaling of absolute size in immersive and nonimmersive displays. Journal of Experimental Psychology: Human Perception & Performance, 26, 582-593.

D'Zmura, M. (1991). Color in visual search. Vision Research, 31, 951-966.

D'Zmura, M., Lennie, P., & Tiana, C. (1997). Color search and visual field segregation. Perception and Psychophysics, 59, 381-388.

Eden, G., Stein, J., Wood, M., & Wood, F. (1995). Verbal and visual problems in reading disability. Journal of Learning Disabilities, 28, 272-290.

Eden, G., VanMeter, J., Rumsey, J., Maisog, J., Woods, R., & Zeffiro, T. (1996). Abnormal processing of visual motion in dyslexia revealed by functional brain imaging. Nature, 382, 66-69.

Edwards, V., Hogben, J., Clark, C., & Pratt, C. (1996). Effects of a red background on magnocellular functioning in average and specifically disabled readers. Vision Research, 36, 1037-1046.

Enroth-Cugell, C., & Robson, J. G. (1966). The contrast sensitivity of retinal ganglion cells of the cat. Journal of Physiology, 187, 517-552.

Evans, B., Drasdo, N., & Richards, I. (1994). An investigation of some sensory and refractive visual factors in dyslexia. Vision Research, 34, 1913-1926.

Farmer, M., & Klein, R. (1995). The evidence for a temporal processing deficit linked to dyslexia: A review. Psychonomic Bulletin and Review, 2, 460-493.

Fisher, D., & Tanner, N. (1992). Optimal symbol set selection: A semiautomated procedure. Special Issue: Safety and mobility of elderly drivers II. Human Factors, 34, 79-95.

Friedman-Hill, S., & Wolfe, J. (1995). Second-order parallel processing: Visual search for the odd item in a subset. Journal of Experimental Psychology: Human Perception and Performance, 21, 531-551.

Gamberini, L. (2000). Virtual reality as a new research tool for the study of human memory. CyberPsychology & Behavior, 3, 337-342.

Geisler, W., & Chou, K. (1995). Separation of low-level and high-level factors in complex tasks: Visual Search. Psychological Review, 102, 356-378.

Gilchrist, A., Humphreys, G., Riddock, M., & Neumann, H. (1997). Luminance and edge information in grouping: A study using visual search. Journal of Experimental Psychology: Human Perception and Performance, 23, 464-480.

Gilmore, G., Wenk, H., Naylor, L., & Koss, E. (1994). Motion perception and Alzheimer's disease. Journal of Gerontology: Psychological Sciences, 49, P52-P57.

Gilmore, G., & Whitehouse, P. (1995). Contrast sensitivity in Alzheimer's disease: A 1-year longitudinal analysis. Optometry and Vision Science, 72, 83-91.

Hinton, D., Sadun, A., Blanks, J., & Miller, C. (1986). Optic-nerve degeneration in Alzheimer's disease. The New England Journal of Medicine, 315, 485-487.

Hof, P., & Morrison, J. (1990). Quantitative analysis of a vulnerable subset of pyramidal neurons in Alzheimer's disease: II. Primary and secondary visual cortex. Journal of Comparative Neurology, 301, 55-64.

Huang, M. P., Himle, J., & Alessi, N. E. (2000). Vivid visualization in the experience of phobia in virtual environments: Preliminary results. CyberPsychology & Behavior, 3, 315-320.

Hubel, D., & Wiesel, T. (1962). Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. Journal of Physiology, 160, 106-154.

Hubel, D., & Wiesel, T. (1968). Receptive fields and functional architecture of monkey striate cortex. Journal of Physiology, 195, 215-243.

Jacobs, G. (1998). Photopigments and seeing-Lessons from natural experiments: The Proctor Lecture. Investigative Ophthalmology and Visual Science, 39, 2205-2216.

Jang, D. P., Ku, J. H., Shin, M. B., Choi, Y. H., & Kim, S. I. (2000). Objective validation of the effectiveness of virtual reality psychotherapy. CyberPsychology & Behavior, 3, 369-374.

Jorgenson, A., Philip, J., Raskind, W., Matsushita, M., Christensen, B., Dreyer, V., & Motulsky, A. (1992). Different patterns of X inactivation in MZ twins discordant for red-green color-vision deficiency. American Journal of Human Genetics, 51, 291-298.

Kesztyues, T. I., Mehlitz, M., Schilken, E., Weniger, G., Wolf, S., Piccolo, U., Irie, E., & Rienhoff, O. (2000). Preclinical evaluation of a virtual reality neuropsychological test system: Occurrence of side effects. CyberPsychology & Behavior, 3, 343-349.

Kiyosawa, M., Bosley, T., Chawluk, J., Jamieson, D., Schatz, N., Savino, P., Sergott, R., Reivich, M., & Alavi, A. (1989). Alzheimer's disease with prominent visual symptoms. Ophthalmology, 96, 1077-1086.

Kolb, B., & Whishaw, I. Q. (1996). Fundamentals of human neuropsychology (4th ed.). New York: W. H. Freeman and Company.

Krantz, J. H. (2000). A Computational Model of the Retina. Presented at the 30th annual Conference of the Society for Computers in Psychology. New Orleans, LA.

Krantz, J. H., Silverstein, L. D., & Yeh, Y. (1992). Visibility of transmissive liquid crystal displays under dynamic lighting conditions. Human Factors, 34, 615-632.

Krantz, J. H., & White, K. D. (1989). Postural stability during saccadic eye movements. Presented at the Annual Meeting of the Association for Research in Vision and Ophthalmology, Sarasota, Florida.

Kuffler, S. W. (1953). Discharge patterns and functional organization of mammalian retina. Journal of Neurophysiology, 16, 37-68.

Lajunen, T., Hakkarainen, P., & Summala, H. (1996). The ergonomics of road signs: explicit and embedded speed limits. Ergonomics, 39, 1069-1083.

Lappin, J., Doner, J., & Kottas, B. (1980). Minimal conditions for the visual detection of structure and motion in three dimensions. Science, 209, 717-719.

Lennie, P., Trevarthen, C., Van Essen, D., & Waessle, H. (1990). Parallel processing of visual information. In L. Spillman & J. S. Werner (eds.). Visual perception: The neurophysiological foundations (pp. 103-129). Orlando: Academic Press.

Liu, Y. (1996). Interactions between memory scanning and visual scanning in display monitoring. Ergonomics, 39, 1038-1053.

Livingston, M., Rosen, G., Drislane, F., & Galaburda, A. (1991). Physiological and anatomical evidence for a magnocellular deficit in developmental dyslexia. Proceedings of the National Academy of Sciences USA, 88, 7943-7947.

Lovegrove, W., Garzia, R., & Nicholson, S. (1990). Experimental evidence for a transient system deficit in specific reading disability. Journal of the American Optometric Association, 61, 137-146.

Lubow, R., & Kaplan, O. (1997). Visual search as a function of type of prior experience with target and distractor. Journal of Experimental Psychology, 23, 14-24.

Marr, D. (1985). Vision. San Francisco, CA: Freeman.

Merigan, W., & Maunsell, J. (1993). How parallel are the primate visual pathways? Annual Review of Neuroscience, 16, 369-402.

Merzenich, M., Jenkins, W., Johnston, P., Schreiner, C., Miller, S., & Tallal, P. (1996). Temporal processing deficits of language-learning children ameliorated by training. Science, 271, 77-81.

Mittenburg, W., Malloy, M., Petrick, J., & Knee, K. (1993). Impaired depth perception discriminates Alzheimer's dementia from aging and major depression. Archives of Clinical Neuropsychology, 9, 71-79.

Miyahara, E., Pokorny, J., Smith, V., Baron, R., & Baron, E. (1998). Color vision in two observers with highly biased LWS/MWS cone ratios. Vision Research, 38, 601-612.

Mohler, J. L. (2000). Desktop virtual reality for the enhancement of visualization skills. Journal of Educational Multimedia & Hypermedia, 9, 151-165.

Mollon, J. (1992). Worlds of difference. Nature, 356, 378-379.

Mollon, J., Cavonius, C., & Zrenner, E. (1998). Special issue: Proceedings of the International Colour Vision Society. Vision Research, 38.

Muller, H., & Found, A. (1996). Visual search for conjunctions of motion and form: Display density and symmetry reversal. Journal of Experimental Psychology: Human Perception and Performance, 22, 122-132.

Nathans, J., Piantanida, T., Eddy, R., Shows, T., & Hogness, D. (1986). Molecular genetics of inherited variation in human color vision. Science, 232, 203-210.

Neitz, J., & Jacobs, G. (1986). Polymorphism of the long-wavelength cone in normal human colour vision. Nature, 323, 623-625.

Neitz, J., & Jacobs, G. (1990). Polymorphism in normal human color vision and its mechanism. Vision Research, 30, 620-636.

Neitz, M., Neitz, J., & Grishok, A. (1995). Polymorphism in the number of genes encoding long-wavelength sensitive cone pigments among males with normal color vision. Vision Research, 35, 2395-2407.

Neitz, J., Neitz, M., & Jacobs, G. (1993). More than three cone pigments among people with normal color vision. Vision Research, 33, 117-122.

Neitz, M., Neitz, J., & Jacobs, G. (1995). Genetic basis of photopigment variations in human dichromats. Vision Research, 35, 2095-2104.

Newton, I. (1730/1979). Opticks. New York, NY: Dover.

Nothdurft, H. (1993). The role of features in preattentive vision: Comparison of orientation, motion, and color cues. Vision Research, 33, 1937-1958.

O'Toole, A., & Walker, C. (1997). On the preattentive accessibility of stereoscopic disparity: Evidence from visual search. Perception and Psychophysics, 59, 202-218.

Palmisano, S. (1996). Perceiving self-motion in depth: The role of stereoscopic motion and changing size cues. Perception and Psychophysics, 58, 1168-1176.

Pettigrew, J., & Konishi, M. (1976). Neurons selective for orientation and binocular disparity in the visual wulst of the barn owl (Tyto alba). Science, 193, 675-678.

Proctor, R. W., & Van Zandt, T. (1994). Human factors in simple and complex systems. Boston, MA: Allyn and Bacon.

Richardson, A. E., Hegarty, M., & Montello, D. R. (1997). Spatial learning from maps and from navigation in real and virtual environments. A paper presented at the 38th annual meeting of the Psychonomic Society, Philadelphia, PA.

Roessler, A., Mueller-Spahn, F., Baehrer, S., & Bullinger, A. H. (2000). A rapid prototyping framework for the development of virtual environments in mental health. CyberPsychology & Behavior, 3, 359-367.

Sacks, O., & Wasserman, R. (1987). The painter who became color blind. New York Review of Books, 34, 25-33.

Schultz, D., & Schultz, S. (1996). A history of modern psychology (5th ed.). Ft. Worth: Harcourt Brace Jovanovich.

Shaywitz, S., Escobar, M., Shaywitz, B., Fletcher, J., & Makuch, R. (1992). Evidence that dyslexia may represent the lower tail of a normal distribution of reading ability. The New England Journal of Medicine, 326, 145-150.

Shaywitz, S., Shaywitz, B., Pugh, K., Fulbright, R., Constable, R., Mencl, W., Shankweiler, D., Liberman, A., Skudlarski, P., Fletcher, J., Katz, L., Marchione, K., Lacadie, C., Gatenby, C., & Gore, J. (1998). Functional disruption in the organization of the brain for reading in dyslexia. Proceedings of the National Academy of Sciences, 95, 2636-2641.

Silverstein, L. D., Krantz, J. H., Gomer, F. E., Yeh, Y., & Monty, R. W. (1990). The effects of spatial sampling and luminance quantization on the image quality of color matrix displays. Journal of the Optical Society of America, Part A, 7, 1955-1968.

Silverstein, L. D., & Merrifield, R. M. (1985). The development and evaluation of color systems for airborne applications: Phase I-Fundamental visual, perceptual, and display systems considerations (Tech. Report DOT/FAA/PM085019). Washington, DC: Federal Aviation Administration.

Stein, J., & Walsh, V. (1997). To see but not to read: The magnocellular theory of dyslexia. Trends in Neuroscience, 20, 147-152.

Summala, H., Pasanen, E., Rasanen, M., & Sievanen, J. (1996). Bicycle accidents and driverís visual search at left and right turns. Accident Analysis and Prevention, 28, 147-153.

Tallal, P. (1980). Auditory temporal perception, phonics, and reading disabilities in children. Brain and Language, 9, 182-198.

Todd, J., & Norman, J. (1991). The visual perception of smoothly curved surfaces from minimal apparent motion sequences. Perception and Psychophysics, 50, 509-523.

Treisman, A., & Souther, J. (1985). Search asymmetry: A diagnostic for preattentive processing of separable features. Journal of Experimental Psychology: General, 114, 285-310.

Troy, J. B., & Enroth-Cugell, C. (1989). Dependence of center radius on temporal frequency for the receptive fields of X retinal ganglion cells of cat. Journal of General Physiology, 94, 987-995.

Uttal, W. (1998). Toward a new behaviorism: The case against perceptual reductionism. Mahwah, NJ: Lawrence Erlbaum Associates.

Volkmann, F. C. (1986). Human visual suppression. Vision Research, 26, 1401-1416.

Vortac, O., Edwards, M., Fuller, D., & Manning, C. (1993). Automation and cognition in air traffic control: An empirical investigation. Special Issue: Practical aspects of memory: The 1994 Conference and beyond. Applied Cognitive Psychology, 7, 631-651.

de Vries, S., Kappers, A., & Koenderink, J. (1993). Shape from stereo: A systematic approach using quadratic surfaces. Perception and Psychophysics, 53, 71-80.

Wang, Q., Cavanagh, P., & Green, M. (1994). Familiarity and pop-out in visual search. Perception and Psychophysics, 56, 495-500.

Winderickx, J., Lindsey, D., Sanocki, E., Teller, D., Motulsky, A., & Deeb, S. (1992). Polymorphism in red photopigment underlies variation in colour matching. Nature, 356, 431-433.

Wolfe, J. (1998). What can 1 million trials tell us about visual search? Psychological Science, 9, 33-39.

Wolfe, J. (1992). The parallel guidance of visual attention. Psychological Science, 1, 124-128.

Wong-Riley, M. T. T. (1979). Changes in the visual system of monocularly sutured or enucleated cats demonstrable with cytochrome oxidase histochemistry. Brain Research, 171, 11-28.

Woods, C., & Oross, S. (1998, March). Recognition of contrast and texture defined letters in individuals with developmental disabilities. Presented at the 1998 Gatlinburg Conference on Research and Theory on Mental Retardation and Developmental Disabilities, Charleston, SC. Abstract published in the Abstract Book, 1998 Gatlinburg Conference.

Yu, C., & Levi, D. (1997). Cortical end-stopped perceptive fields: Evidence from dichoptic and amblyopic studies. Vision Research, 37, 2261-2270.

Zeki, S. (1990). A century of cerebral achromatopsia. Brain, 113, 1721-1777.

Zeki, S. (1993). A vision of the brain. London: Blackwell Scientific Publications.

Zihl, J., von Cramon, D., & Mai, N. (1983). Selective disturbance of movement vision after bilateral brain damage. Brain, 106, 313-340.





Charles "Barrie" Woods received his bachelor's degree in Psychology from the University of Wyoming and his master's and Ph.D. in Experimental Psychology from the University of Florida. Woods' graduate work was in the area of visual perception. He is presently Associate Professor of Psychology at Austin Peay State University in Clarksville, TN.

Woods is a strong believer in the importance of undergraduate research experiences, which he works hard to support. He has received grants from the National Institutes of Health and the National Science Foundation to help support undergraduate research activities. Additionally, he writes a great deal of software for use in class demonstrations, lab course experiments, and independent student research projects. 



Away from campus, Woods is a cycling enthusiast. He is fond of one-day club rides and weeklong state tours, and one summer he rode coast to coast from San Francisco to Maine. He has recently decided to try his hand at restoring vintage racing bicycles.

The author may be reached at woodsc@apsu.edu.





John Krantz did his undergraduate work at St. Andrews Presbyterian College and his graduate work at the University of Florida, where he received a National Science Foundation Fellowship. After graduate school he worked in industry at Honeywell on the visual factors related to cockpit displays in commercial aircraft. In 1990, he returned to academia, taking a position at Hanover College. He has done research in vision, human factors, and the use of the web as a medium for psychological research. He has been the program chair (1996) and president (1999) of the Society for Computers in Psychology. He has also been a faculty associate for The Psychology Place, developing both interactive learning activities and their best-of-the-web listing. In addition, he has been elected a member of the Guild of Scholars of the Episcopal Church. His current research involves modeling the activity of the retina, and he is writing a textbook in sensation and perception.

The author may be reached at krantzj@hanover.edu.



These are descriptions to accompany the figures found in Chapter 9, Sensation and Perception.

Visual Neuroscience Section

Figure 1 is an illustration of the differences between serial and parallel processing. The top part of the figure illustrates serial processing, using arrows to go from step one to step two, and so on. The bottom part of the figure illustrates parallel processing, with three lines running horizontally the length of the figure. None of the processes requires any of the other processes to complete before it can start. [End of description; Use your browser's BACK button to return to section you were reading.]

Figure 2 illustrates the response patterns of X and Y cells. The figure is a plot with time on the horizontal x axis and how fast the neurons are firing on the vertical y axis. Below the x axis is a plot showing when the stimulus is on. The stimulus is on for the middle three-quarters of the figure. The plot for the X cell rises when the stimulus comes on and, while it varies moment by moment, stays high until the stimulus goes off, when it returns to its baseline firing rate. The plot for the Y cell rises when the stimulus comes on, stays high for a brief period of time, and then falls back to the baseline firing rate. The firing rate for the Y cell goes up briefly again when the stimulus goes off. [End of description; Use your browser's BACK button to return to section you were reading.]

Figure 3 graphically represents a cross section of the cortex as it appears in the striate cortex. There are six horizontal layers parallel to the surface of the cortex. Layer 4 is the thickest here and is stained darker to show the stripe that gives this region its name, striate cortex. [End of description; Use your browser's BACK button to return to section you were reading.]

Figure 4 graphically represents the connections between the Lateral Geniculate Nucleus and the striate cortex. On the left portion of the figure is a schematic LGN, with the bottom two layers in black representing the magnocellular layers of the LGN. The top four layers are gray, representing the parvocellular layers. Both types of layers are labeled. On the right are the words blobs and interblobs, and the same representation of the cortex as in Figure 3 with a portion of Layer 4 indicated, layer 4b. Lines go from the parvocellular label to both the blobs and interblobs labels. Lines go from the magnocellular label to both the blobs and the layer 4b labels. [End of description; Use your browser's BACK button to return to section you were reading.]

Figure 5 indicates the upward connections from V1 to the other regions of the cortex. V1 is broken into three sections (blobs, interblobs, and layer 4b). V2 is broken into sections called thick stripes, interstripes, and thin stripes. The other sections indicated are V3, V4, and V5. The blobs show arrows connecting to the interstripe and V4 regions. The interblobs also connect to the interstripe and V4 regions. Layer 4b connects to the thin stripes, thick stripes, V5, and V3. V3 is interconnected with V4 and V5. V4 is interconnected with V3 and V5. V5, likewise, is interconnected with V3 and V4. [End of description; Use your browser's BACK button to return to section you were reading.]

Take a look at the figure below
There is a central medium gray square that does not change. Surrounding that is a larger square that goes from black to white. As it does, the brightness of the central square changes. When the larger square is light, the central square looks darker. When the larger square is dark, the central square looks lighter. [End of description; Use your browser's BACK button to return to section you were reading.]

Figure 6 is a traditional representation of simultaneous contrast. On the left is a gray square surrounded by a larger black square. On the right, the same gray square is surrounded by a larger white square. The gray square on the left looks brighter than the identical gray square on the right. [End of description; Use your browser's BACK button to return to section you were reading.]

Figure 7 shows a plot of a model of the output of the center-surround receptive fields of ganglion cells responding to Figure 6. Only the edges of the two inner squares show up; all other cells respond at the background level. For the square on the left, the responses at the edge next to the square are higher than at the center of the square, and the responses at the edge next to the black surround are lower than in the rest of the black region. For the square on the right, the responses at the edge next to the square are lower than at the center of the square, and the responses at the edge next to the white surround are higher than in the rest of the white region. [End of description; Use your browser's BACK button to return to section you were reading.]

[There is no Figure 8. It was removed late in the editorial process and, for convenience's sake, the numbers of the following figures were retained.]



Human Color Vision section.

Figure 9 shows the traditional spectral sensitivity of a three cone photopigment system. Wavelength is on the X-axis (horizontal) and sensitivity is on the Y-axis (vertical). Each curve resembles an upside down "U". One curve has its peak in the short wavelength part of the spectrum, one in the middle wavelength part of the spectrum, and one in the long wavelength end of the spectrum. [End of description; Use your browser's BACK button to return to section you were reading.]

Figure 10, like Figure 9, shows the spectral sensitivity of cone photopigments. This figure contains 5 curves, however, representing the newly discovered cone types. Shown are an additional curve in the middle wavelengths and an additional curve in the long wavelength end of the spectrum. The two added spectral sensitivity curves are very nearly identical in shape and placement to the two original middle wavelength and long wavelength cone photopigments. [End of description; Use your browser's BACK button to return to section you were reading.]

Figure 11 shows several rows of colored rectangles. Each rectangle represents a single color vision gene: red rectangles symbolize genes encoding the long wavelength cone type; green rectangles symbolize the middle wavelength cone type; and half red / half green rectangles represent hybrid genes, which are linked to "unusual" spectral sensitivity. Different rows, representing different individuals, often contain multiple copies of the different gene types, indicating diverse gene arrangements. [End of description; Use your browser's BACK button to return to section you were reading.]

Figure 12 shows a graph of the color matches of 94 different men. Shown on the X-axis (horizontal) are the individuals, numbered 1 to 94, and on the Y-axis (vertical) is their color match. This color match, called the Rayleigh match, is one where red and green light must be mixed in some proportion to make yellow. The Y-axis (vertical) shows this proportion. Each individual is represented by a vertical bar that shows all the proportions of red and green that would make an acceptable match to yellow. Longer bars, therefore, represent poorer color discrimination. The Figure shows many individuals with good color discrimination and several with poor color discrimination. [End of description; Use your browser's BACK button to return to section you were reading.]

Figure 13 shows a representation of the distribution of photoreceptors in the retina. Red, green, and blue filled circles represent the different cone types. The ratio of R / G / B cone types varies: The long wavelength cones are the most prevalent; short wavelength cones the least prevalent in the retina. [End of description; Use your browser's BACK button to return to section you were reading.]



Visual Search section.

Figures 14 to 17 are animated demonstrations of visual search tasks. These figures demonstrate that the ability to search a visual display for a single target item among a group of distracter items depends on certain factors. Some targets are quickly found regardless of the number of distracters present in the display. Other targets, however, are not as easily found and increasing the number of distracter items increases the length of time required to successfully find the target. [End of description; Use your browser's BACK button to return to section you were reading.]



Human Factors Section.

Figure 18 shows the CIE chromaticity diagram. The x value of the color is on the x axis (horizontal) and the y value is on the y axis (vertical). Inside the axes is a figure that represents all colors that it is possible to see. Around the curved portion of the outside are the labels for the wavelengths in the spectrum of visible light (from 380 to 750 nanometers). A straight line connects the 380 nm and 750 nm points to indicate the mixtures of these two extreme colors in the spectrum. Inside the range of possible colors is a triangle. The three points represent typical primaries on a color CRT, and the region inside the triangle represents all of the colors a CRT can reproduce. Many of the possible colors fall outside this triangle. [End of description; Use your browser's BACK button to return to section you were reading.]



Neuropsychology section.

Figure 19 shows a representation of the stimuli used to test low-contrast visual acuity. The panel on the left shows rows of high contrast black "E" stimuli on a white background. The "E" can be pointing up, down, left, or right and the size of the "E" gets progressively smaller from top to bottom in the panel. The panel on the right is similar except the "E" stimuli are low contrast: a very light gray on a white background and are much more difficult to see. [End of description; Use your browser's BACK button to return to section you were reading.]

Figure 20 shows the data collected on children with and without dyslexia with visual stimuli similar to those in Figure 19. Each panel shows Snellen acuity on the X-axis (20/20, etc) and proportion correct identification on the Y-axis. The panel on the left shows the data when children with and without dyslexia are tested with high contrast "E" stimuli. Both groups show similar visual acuity. The panel on the right shows the data when these same children are tested with low contrast "E" stimuli. When tested with these stimuli the children with dyslexia perform more poorly than children without dyslexia. [End of description; Use your browser's BACK button to return to section you were reading.]

Virtual Reality Section.

Figure 21 is a very simple virtual reality illustration. It has a blue square about three-quarters of the way to the top, a larger red square about one-third of the way to the top, and two arrows at the bottom. One arrow points to the left and the other to the right. When you press on these arrows, the two squares move in that direction. The red square moves much more and, when in motion, gives the impression of being closer to you. [End of description; Use your browser's BACK button to return to section you were reading.]

