


Chapter 9
Sensation and Perception


Charles Woods, Austin Peay State University and
John Krantz, Hanover College

Introduction


Studies of sensation and perception have historically been the starting points for the scientific study of the mind. An interest in the structure of sensory systems and the nature of human perception predates psychology. In fact, research in this area in the early and middle 1800s was instrumental in creating the academic climate that gave rise to psychology as a distinct scientific field.

The year 1879 is often cited as the date of the founding of psychology, marked by the establishment of Wundt's experimental laboratory. The year 1860 is probably a better choice, however. This is the year that Fechner published his book Elements of Psychophysics (Boring, 1950). Fechner (below) described a set of methods for studying and quantifying the relationship between sensory stimuli and perceptual experiences. The realization that the relationships between stimulus events and mental events might be reducible to simple laws apparently occurred to Fechner while he lay in bed on the morning of October 22nd, 1850 (Schultz & Schultz, 1996). This early work on the relationship between sensation and perception, and on the accuracy of the perceptual representations of sensory stimuli, made up a substantial portion of the bedrock of early experimental psychology. Scientists now celebrate Fechner Day (October 22nd) each year with scientific meetings and other events in remembrance of this contribution to our field.



Gustav Fechner (1801-1887)
Image from the International Society for Psychophysics
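Fechner's best-known result, not spelled out in this chapter but implicit in his "simple laws," is that sensation grows with the logarithm of stimulus intensity, so that equal stimulus ratios produce equal perceptual steps. The following numerical sketch is our illustration of that idea, not material from Fechner's text:

```python
import math

def fechner_sensation(intensity, threshold, k=1.0):
    """Fechner's law: perceived magnitude grows with the logarithm
    of the ratio of stimulus intensity to the absolute threshold."""
    return k * math.log(intensity / threshold)

# Doubling a weak stimulus and doubling a strong one add the same
# amount of sensation: equal ratios give equal perceptual steps.
weak_step = fechner_sensation(2, 1) - fechner_sensation(1, 1)
strong_step = fechner_sensation(200, 1) - fechner_sensation(100, 1)
print(round(weak_step, 4), round(strong_step, 4))  # both equal log(2)
```

The constant k here is an arbitrary scaling factor; psychophysicists estimate it empirically for each sensory dimension.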

The intervening 150 years of research on sensation and perception has led to technological advances in experimental apparatus, the discovery of new research methods for gathering observations, and the addition of advanced quantitative methods for describing and analyzing psychophysical data.

In the very early days, researchers in sensation and perception had fairly similar training and interests. Quite rapidly, however, work in the area diverged to include a broad set of research areas and scientific approaches. Today, the general area of sensation and perception is composed of researchers with vastly different training, and psychologists make up only a small percentage of these individuals. As sensory psychologists approach the millennium, they are re-examining their theoretical approaches to the traditional problems of the field, choosing research problems that are heavily influenced by work in related fields (e.g., neuroscience), and becoming more applied.

Depth perception is a good example of the diversity of training and theoretical approach of today's researchers in sensation and perception. Some sensory psychologists study how we use depth cues to help perform object recognition (de Vries, Kappers, & Koenderink, 1993) or to perform a visual search (O'Toole & Walker, 1997). Others study how we recover 3-D shapes from motion cues (Lappin, Doner, & Kotas, 1980; Todd & Norman, 1991) or how we use depth cues to move through natural environments (Palmisano, 1996) or virtual ones (Cutting, 1997). In addition, neuroscientists study disparity sensitive cortical cells (Chino, Smith, Hatta & Cheng, 1997), optometrists study binocular function in the presence of strabismus or amblyopia (Yu & Levi, 1997), and biologists study depth perception in animals (Pettigrew & Konishi, 1976; Collett & Harkness, 1982).

Although scientific heterogeneity defines our field today, advances in the neurosciences heavily influence research in sensation and perception. This impact is understandable; we have made great strides in our understanding of the brain and we have developed exciting new research methods for investigating it. Only a handful of sensory psychologists would classify themselves as behavioral neuroscientists. However, most psychologists today are unlikely to investigate perceptual phenomena without at some point considering how their psychophysical data relate to the underlying sensory physiology or to brain organization.

Sensation and Perception Today


In the late 1990s sensation and perception consists of many exciting research areas. These areas represent an expansive continuum from studies directed at the earliest, "low-level" stages of sensory processing to those directed at later "high-level" perceptual mechanisms.

In this chapter we consider examples of what we feel are exciting areas of research in sensation and perception from both ends of this continuum, with an emphasis on the current state of the discipline. These examples, then, are primarily from the areas of vision and visual perception and represent both basic and applied science. We conclude with a brief discussion of possible future directions for the discipline, using as an example research in the area of virtual reality. Because it currently drives a great deal of the research and thinking in sensation and perception, we begin with current explorations in the area of visual neuroscience.



Visual Neuroscience


The study of the biological basis of sensation and perception is a fascinating area for two reasons. The first reason is that the pace of discovery is breathtaking. It seems that every few weeks a new set of findings challenges our conceptions of how the brain processes sensory information. The second reason is interrelated with the first. In many ways, the understanding of how the brain processes sensory and perceptual information is used as a basis for understanding how the cerebral cortex, as a whole, operates. The cerebral cortex is believed to be where our most advanced functions reside. The most extreme version of this approach is seen in Zeki's (1993) A Vision of the Brain. Zeki unapologetically uses knowledge of the architecture and function of the visual areas of the cortex to develop some general ideas about the cerebral cortex. While most researchers will not go quite so far, it is certain that some general insights about the cerebral cortex will come out of the study of the sensory cortices. A brief review of the current knowledge of the visual system and how it has changed in the last 30 years will illustrate these points.

The idea that the brain operates with many parallel elements is the major emerging theme in our present understanding of the neural bases of the sensory systems; different pathways and different modules of the sensory systems operate to extract unique features of the information in the sensory stimulus. Throughout the sensory systems, there is evidence of parallel (that is, simultaneous) operations, both with parallel pathways and parallel targets for sensory information.



Sensory information processing was first viewed as predominantly serial (that is, sequential), based upon the groundbreaking work of Hubel and Wiesel (1962, 1968). However, even in these early studies researchers began to find evidence of parallel processing. For example, Hubel and Wiesel uncovered the existence of cortical columns. In a cortical column, all cells process the same feature of the environment from the same location on the retina. For example, one column processes oriented stimuli of a certain width from a certain location on the retina; the next column processes stimuli with a slightly different orientation. Each column is organized in a parallel fashion, processing information simultaneously.

Figure 1. An illustration of the difference between serial and parallel processing. Serial processing is in stages and parallel processing is different processes proceeding at the same time. [Figure 1 description]



Another important early finding that foreshadowed today's emphasis on parallel processing was the study of cat retinal ganglion cells by Christina Enroth-Cugell and John Robson (1966). Enroth-Cugell and Robson identified two types of cells in the feline retina, named X and Y cells, that responded very differently to specific features of light stimuli. X cells tend to respond consistently throughout the time that the stimulus is presented, have relatively small receptive fields, and require fairly high contrast. Y cells, in contrast, tend to respond primarily to stimulus onsets and offsets and do not respond well to a stimulus that does not change over time (see Figure 2). Also, the receptive fields of Y cells tend to be larger and to require less contrast. These findings, which have been replicated and extended to primates, clearly indicate that there is something fundamentally parallel in the processing of visual information. Studies by Hubel and Wiesel (1968) found a similar functional segregation in the cortex between their simple and complex cells, although at the time they interpreted these cells as sequentially linked.

Figure 2. The response patterns of X versus Y cells. [Figure 2 description]
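The sustained versus transient distinction can be caricatured with a toy simulation (our illustration, not Enroth-Cugell and Robson's actual analysis): an X-like cell tracks the stimulus level for as long as it is on, while a Y-like cell responds only to changes in it.

```python
# Toy sketch of sustained (X-like) vs. transient (Y-like) responses.
# The stimulus steps on at t=3 and off at t=7 (arbitrary time units).
stimulus = [0, 0, 0, 1, 1, 1, 1, 0, 0, 0]

# X-like: the response follows the stimulus throughout its presentation.
x_response = [s for s in stimulus]

# Y-like: responds only when the stimulus changes (onsets and offsets).
y_response = [0] + [abs(b - a) for a, b in zip(stimulus, stimulus[1:])]

print("X:", x_response)  # sustained through the whole presentation
print("Y:", y_response)  # brief bursts at onset and offset only
```

Real ganglion cells fire noisy spike trains rather than clean step functions, but the qualitative contrast in Figure 2 is the same.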

At the lateral geniculate nucleus (LGN) in the thalamus, for example, the visual system is still divided into two main pathways. The LGN is composed of six layers in the primate. Two of the layers have relatively large cell bodies and are called magnocellular layers, while the other four layers have relatively small cell bodies and are called parvocellular layers (Lennie, Trevarthen, Van Essen and Waessle, 1990). The -cellular suffix is often dropped and the two types of cells are called magno and parvo, or even M and P. The magno and parvo cells show very different response patterns. In fact, the response patterns are very similar to those of the X and Y ganglion cells seen in the cat retina. In addition, the parvo pathways carry color information. There is a tendency to apply these distinctions, based initially upon the anatomy of the LGN, to the retinal ganglion cells. There is a lot of functional similarity between the magno and Y systems and the parvo and X systems. For example, the magno and Y systems have larger receptive fields and transient response patterns. Similarly, both the parvo and X systems have smaller receptive fields and more sustained response patterns. The Y cells even have relatively large cell bodies just like the magno cells of the LGN, and X cells have relatively small cell bodies like the parvo cells of the LGN. The overlap of function and anatomy between the retinal and LGN systems allows this distinction to be nicely applied. See Table 1 for a summary of the distinction between the X/parvo and Y/magno systems.

Table 1
A Summary of the X/Parvo and Y/Magno Systems of the Retina and LGN

___________________________________________________________
Feature                  X/Parvo            Y/Magno
___________________________________________________________
Response pattern         Sustained          Transient
Receptive field size     Smaller            Larger
Contrast sensitivity     Relatively low     Relatively high
Color sensitivity        Yes                No
___________________________________________________________

The first stop for visual information in the cortex is the striate cortex (V1) in the occipital lobe at the very back of the brain. A little geometry of the cortex is helpful here. First, all parts of the neocortex are composed of six layers, and each layer is involved in the same type of function regardless of where on the cortex it lies. For example, the fourth layer from the surface of the brain, Layer 4, always receives input from sensory systems (see Kolb and Whishaw, 1996, for a more detailed review of the organization of the cortex). In the sensory areas of the brain, such as the visual cortex, this layer is very thick. It also stains very darkly, which gives this first part of the visual cortex its name, the striate cortex (see Figure 3).

Brain Surface


Inside of the Brain

Figure 3. An illustration of the layers of the cortex as might be seen in V1. Layer 4 is where the input from the LGN arrives and is the thickest layer here. It stains darkly, giving this region one of its names, the striate cortex. [Figure 3 description]

Within the striate cortex (V1) the parvo and magno pathways are now segregated into three separate units that function independently: blobs, interblobs, and Layer 4b (Wong-Riley, 1979). Blobs receive inputs from both the magno and parvo systems and seem to play an important role in the processing of color. Interblobs receive inputs from only the parvo system and seem to process fine patterns in the stimulus. Layer 4b is a subpart of one of the six layers of the neocortex. Its input is solely from the magno system, and these cells seem to respond to motion and very low contrast. Thus, the two parallel pathways have now divided into three. These pathways from the LGN to the striate cortex are summarized in Figure 4.



Figure 4. The relationship of the layers in the LGN to the different functional parts of V1, the striate cortex. [Figure 4 description]



In the primate, five visual areas have been identified and given the clever names of V1, V2, V3, V4 and V5. Each area is anatomically distinct and is thought to perform different functions relevant to our perception of the world. A simplified view of how these regions are connected is shown in Figure 5. The current evidence suggests that V3 is involved with the processing of form, V4 with color constancy, and V5 with complex motion processing. Some of these findings have been supported by the study of lesions in the occipital cortex. For example, a person with a lesion of the human region analogous to primate V4 on the left side of the brain would report having no color vision in the right half of their visual world (Zeki, 1993).

Figure 5. A simplified diagram of the interconnections between different regions of the cortex. [Figure 5 description]

In addition to all of the connections from V1 and V2 to V3, V4 and V5, each of these regions connects back to V1 and V2. These seemingly backward or reentrant connections are not well understood, but one possible role for them is to index the processing in V3, V4 and V5 to the more precise visual maps found in V1 and V2. Although each of the visual regions has a map of the visual world, these maps are not nearly as precise and detailed as those found in V1 and V2. In other words, their receptive fields are much larger, which results in poorer localization of a stimulus or object. For example, although V4 locates a color region in space, the receptive field is rather large and might not indicate whether this color belongs to the cup or the book on the table in front of you. These reentrant connections, feeding back to the precise maps of V1 and V2, may provide a mechanism that allows the visual system to assign the color precisely to the appropriate location and object, say a part of the cover of the book in front of you.

Our present understanding of the visual system is very different from the serial model Hubel and Wiesel initially proposed. There are separate functional modules that operate relatively independently, and information, instead of flowing in one direction, flows in both directions. Thus, later levels do not simply receive information and send it forward, but are in intimate two-way communication with other modules.

This emerging modular and parallel view of the visual cortex has helped us better understand two terrible but fortunately rare disorders of the visual system: achromatopsia and akinetopsia. Achromatopsia refers to a selective loss of color vision, usually as a result of stroke (Zeki, 1990). It is important to note that the loss seems to be selective to color vision. These patients can read, recognize patterns, and respond to motion. In fact, current research indicates that little if any other visual function is lost, with the exception of occasional but temporary loss of form vision immediately following the injury (Sacks & Wasserman, 1987). These patients describe the world as being very drab and gray. It might be like permanently viewing the world on an old black and white TV set, and apparently not a very bright TV set either. With this modular view of the brain, it is possible to understand how a patient might lose color vision only: loss of the human equivalent of V4 would remove color vision selectively. PET scans of these patients, which allow doctors to examine the living, functioning brain, find that the lesions in achromatopsia are located in V4 (Sacks & Wasserman, 1987).

Akinetopsia is basically the same type of syndrome, only the person loses the perception of motion (Zihl, von Cramon, & Mai, 1983). The patient sees objects only as stationary. They might be perceived in different places at different times, but they are not perceived to be moving; at each place they are perceived to be still. Imagine the danger of crossing the street with this condition. This syndrome is associated with damage to visual area V5, mentioned above. As with achromatopsia, other visual functions are spared. The person sees in color and can read, for example. Again, our developing modular view of the brain allows these syndromes to be clearly understood.

Where does the study of the biological basis of the brain go from here? Probably in two or three directions. One direction certainly will be the further subdivision of the different sensory regions. Subregions will likely be identified within each of the visual areas discovered so far, and new regions will probably be discovered; some have already been found. For example, while we have a sense of how the brain processes form, color, and motion, we do not yet know clearly how it processes depth and texture. A second direction for future research will be how information from these regions is used by the rest of the brain. The neural pathways to the parietal and temporal lobes have been described, but little is known about how sensory information is integrated with more cognitive functions. A third possible path would be to better understand how these separate modules lead to unified perceptions. While color can be selectively lost, color is not usually perceived separately from the object. Somehow, that fact, and other integrative perceptual experiences, needs to be a part of our understanding of the brain.

The discovery of the extent of parallel processing in sensory systems has greatly changed how we conceptualize sensory information processing. This work has been coupled with a growth in the use of the computer to understand the operation of the visual system.


Computational Approaches to Sensation and Perception


Along with the rapid development of visual neuroscience, there has been an equally rapid development of computational approaches to understanding perception. These approaches range from large-scale models, such as Marr's (1985) effort at a unified computational theory, to more limited efforts, such as Krantz, Silverstein, and Yeh's (1992) model of the visibility of displays under dynamic lighting. In this section, one computational model being developed by one of the authors is briefly presented to illustrate some of the power of these models (Krantz, 2000).

The motivation for this effort comes largely from teaching some of the concepts in the preceding section on visual neuroscience. Basically, the effort is to examine the impact and function of the receptive fields of the retinal ganglion cells, particularly the X cells.

Take a look at the figure here. [Description of visual display]

The center does not change in any way. Does it look that way? The same is true for the two center squares below: they are identical.



Figure 6. Simultaneous Contrast [Figure 6 description]



This effect is called simultaneous contrast. The model that has been developed plots the output of many of these cells in a regular array, like an x and y grid. Figure 7 shows the output of this model for the same simultaneous contrast image as in Figure 6. The receptive fields illustrated are small and would be very good at resolving fine details. The intersections of lines on the figure show where the center of a receptive field was located, so the x and y axes, defining the horizontal plane, represent a set of cells. The height of the surface in the figure represents how fast that cell would respond according to the model: the higher the point on the graph, the faster the cell responds.

Figure 7. The model output for a simultaneous contrast image. [Figure 7 description]

There are several interesting and important features of this figure. The most noticeable feature of the model's output is the edges. If the receptive field does not have an edge in it, then the cell has about the same output whether the input is bright (white), moderately bright (gray), or dark (black). Light filling the entire receptive field is not an effective stimulus for the cell (Kuffler, 1953). So edges are excellent stimuli for cells, and that shows up in the model output in Figure 7 (Troy & Enroth-Cugell, 1993). This finding that only the edges are sent back to the brain for processing agrees with the general finding that vision fills in uniform areas. This same process seems to play a role in why we do not see our blind spots.

Now, look at the edges a little more closely. Notice that the hills are next to brighter regions and valleys are next to darker regions. In particular, look at the big hills and valleys where the dark background meets the light background (Figures 6 and 7). The square on the left has a hill next to it, suggesting it is light, and the square on the right has a valley right next to it, suggesting a dark region. This is exactly the illusion. I offer a more complete description of this model, its development, and how it helps us to understand how we see in Krantz (2000).

This model is barely at the beginning of its development. There are many ways to expand this model in the future. So far only X cell responses in the fovea have really been examined. Some of the possible extensions are to model Y cells, add color responses (DeValois & DeValois, 1975), and make the modeled eye jiggle just a little bit all the time like it does in real life (Carpenter, 1977).
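The flavor of this kind of model can be conveyed with a stripped-down, one-dimensional sketch (our simplification for illustration only; Krantz's actual model uses two-dimensional arrays of receptive fields). Each model cell has an excitatory center and an inhibitory surround, so uniform light produces little response while edges produce the hills and valleys described above:

```python
def dog_response(luminance, i):
    """Center-surround cell centered at sample i: excited by its center
    sample, inhibited by the average of four flanking surround samples."""
    n = len(luminance)
    center = luminance[i]
    surround = sum(luminance[max(0, i - 2):i] + luminance[i + 1:min(n, i + 3)])
    return center - surround / 4.0  # balanced: uniform light -> ~0 response

# A step edge: a dark field (0.2) meeting a light field (0.8).
profile = [0.2] * 10 + [0.8] * 10
responses = [dog_response(profile, i) for i in range(2, len(profile) - 2)]

# Far from the edge the response is near zero; at the edge there is a
# valley on the dark side and a hill on the light side -- the same
# pattern that produces the simultaneous contrast illusion.
print(min(responses), max(responses))
```

The luminance values and surround width here are arbitrary; the qualitative result, strong responses only at edges, does not depend on them.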

While visual neuroscience and computational approaches seem to play a role in revealing new phenomena, reexamination of what are thought to be well established findings can also yield new insights. A recent advance in studies of the earliest stages of sensory information processing was the stunning discovery that the human eye contains a greater variety of cone photoreceptors than previously thought.


Human color vision and the cone photoreceptors


Color is a fundamental visual feature that imparts a richness to the man-made and natural world. Color vision has long fascinated those with an interest in sensation and perception. Isaac Newton (1730/1979), in his famous experiments using prisms to produce and mix colored lights, deduced that "color" is not a property of light wavelengths per se, but rather a property of the eye. This raises an interesting question: Do all eyes perceive colors similarly? The answer is no.

Many of the interesting research questions in this area have been directed at individual differences among humans and inter-species differences. The rationale for this is straightforward: An understanding of "how" and "why" these differences exist will tell us a great deal about how color vision is achieved by the eye and brain.

One of the most fascinating features of color vision is the fact that, more than any other aspect of perception, color vision shows large individual differences both across and within species. Not all animals, in fact not even all humans, share the same color experiences. For many species, including humans, there are even differences between males and females. Oddly enough, there are species of monkey where most of the females have normal color vision but all the males are color-deficient (Jacobs, 1998). We consider below some exciting recent advances in our understanding of human color vision.

A long-standing and fundamental property of human vision is trichromacy. The trichromatic theory explains our color perceptions and color discriminations. The anatomical basis of trichromacy begins with the complement of cone photoreceptors in the retina. For over one hundred years researchers thought that the color-normal eye contained three cone types, whose photopigments were later psychophysically estimated to have peak spectral sensitivities near 440, 540, and 560 nanometers. A figure showing the spectral sensitivities of these pigments across the visible spectrum is shown below:



Figure 9. The relative sensitivity to different light wavelengths is shown for each cone photopigment. 


Note the overlap in sensitivity of the middle wavelength sensitive and long wavelength sensitive cone types. [Figure 9 description]
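To get a rough feel for this overlap, the pigment curves can be caricatured as Gaussian curves centered on the peaks given above (a crude illustration only; real photopigment spectra are not Gaussian, and the bandwidth below is invented):

```python
import math

def sensitivity(wavelength_nm, peak_nm, width_nm=45.0):
    """Crude Gaussian caricature of a cone photopigment's sensitivity."""
    return math.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)

PEAKS = {"S": 440, "M": 540, "L": 560}  # approximate peaks from the text

# At 550 nm, midway between the M and L peaks, the two pigments respond
# almost equally -- the overlap visible in Figure 9 -- while the S
# pigment barely responds at all.
m_cone = sensitivity(550, PEAKS["M"])
l_cone = sensitivity(550, PEAKS["L"])
s_cone = sensitivity(550, PEAKS["S"])
print(round(m_cone, 3), round(l_cone, 3), round(s_cone, 3))
```

This overlap is why color vision must work by comparing cone outputs: no single cone's response identifies a wavelength on its own.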

Over the years, however, psychologists questioned whether subtle variations may exist in normal color vision based on small individual differences in the spectral sensitivities of the photopigments (Alpern & Wake, 1977; Neitz & Jacobs, 1986). The findings of the early studies were viewed with some skepticism, however, because of the difficulty in ruling out measurement error and confounding factors. As the psychophysical evidence grew, researchers began to investigate this possibility from many angles.



Today, psychophysical (Neitz & Jacobs, 1990; Mollon, 1992), microspectrophotometric (Dartnall, Bowmaker, & Mollon, 1983), and molecular genetic studies (Nathans, Piantanida, Eddy, Shows, & Hogness, 1986; Winderickx et al., 1992) provide evidence of substantial variation in the number and spectral sensitivity of the cone types in the color-normal eye (also see Mollon, Cavonius, and Zrenner, 1998). The evidence now suggests the presence of three families of normally occurring cone photopigments. There is thought to be only one photopigment with a peak spectral sensitivity in the short wavelengths (blue), but there is now evidence that there are multiple middle-wavelength (green) photopigments and multiple long-wavelength (red) photopigments. The difference in spectral sensitivity among the middle-wavelength pigments or among the long-wavelength pigments has been estimated to be approximately 5-7 nm (Neitz, Neitz, & Jacobs, 1995). A representation of all these pigments is shown in the figure below:

Figure 10. A representation of all the identified cone photopigments. 


It is easy to see how two avoided detection for so long--they are nearly identical to two others. [Figure 10 description]

Molecular genetic analyses show that individuals may inherit a surprisingly large number of different X-linked, recessive genes that encode the production of these photopigments (Neitz, Neitz, & Grishok, 1995). A representation of different gene arrangements is shown below. An obvious question is why we have so many color vision genes. The genes that encode the middle- and long-wavelength sensitive pigments reside near the end of one of the arms of the X chromosome, and they have very similar DNA sequences. In fact, a DNA change in a photopigment gene that substitutes a single amino acid in the pigment is sufficient to cause a change in the spectral sensitivity of that photopigment and in our color perceptions. The location and similarity of these genes makes them susceptible to the kinds of genetic errors that produce multiple gene copies, as well as hybrid genes that are genetic composites of the original ones (Nathans, et al., 1986).



Figure 11. Each rectangle represents a single color vision gene; each row of genes represents an observed gene arrangement. The half red / half green genes represent hybrid genes; linked to "unusual" spectral sensitivity. [Figure 11 description]

At present, it appears that normal color vision results from inheriting at least one cone type from each cone "class" (short, middle, and long). It is unclear, however, which complements of genes and cone types result in specific types of color vision deficiency. There is a great deal of genetic variation among individuals with the same type of color defect, making this work difficult. However, it appears that both the type and severity of a color vision defect can be linked to the complement of different cone types in the retina. Hybrid genes, which have been associated with small differences in the spectral sensitivity of the photopigments, are thought to be involved.

These findings lead to an interesting question: if humans possess more than three cone types in their retina, do they still have trichromatic vision? The answer appears to be yes, presumably because the outputs of the different middle- or long-wavelength cone photoreceptors are summed together before leaving the retina. The resulting signals differ to a small but significant degree across individuals, though, and these differences affect color perception in some situations. Individuals with different complements of cone pigments will not accept each other's color matches in the long-wavelength end of the spectrum, and they will disagree on color names for certain wavelengths of light (Neitz, Neitz, & Jacobs, 1993). For example, a particular mixture of red and green light might appear a perfect yellow to your eye, but appear greenish-yellow or slightly orange to someone else. This type of color vision assessment, called the Rayleigh match, is the most accurate method for measuring color discrimination and diagnosing the congenital color vision defects. Some Rayleigh match data are shown below.



Figure 12. The Rayleigh matches of 94 men. The length of the "error bars" represent all the different ratios of red to green light that can be mixed to make an acceptable match to a comparison yellow. Longer bars, therefore, represent poorer color discrimination. The individuals on each end of the figure have color vision deficiencies. [Figure 12 description]
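The logic of the Rayleigh match can be sketched with the same kind of Gaussian caricature of cone sensitivities (entirely illustrative numbers; a real anomaloscope uses calibrated primaries and real pigment spectra, and the wavelengths and bandwidth below are invented for the toy calculation). Shifting the simulated L-pigment peak by 5 nm shifts the red/green ratio that the observer accepts as matching yellow:

```python
import math

def sens(wavelength_nm, peak_nm, width_nm=45.0):
    """Gaussian caricature of a cone pigment's spectral sensitivity."""
    return math.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)

def rayleigh_ratio(l_peak, m_peak=540.0):
    """Fraction of 620 nm 'red' in a red + 546 nm 'green' mixture for
    which the mixture's L:M cone excitation ratio equals that of a
    589 nm 'yellow' (found by bisection on the mixture fraction)."""
    target = sens(589, l_peak) / sens(589, m_peak)
    lo, hi = 0.0, 1.0
    for _ in range(60):
        r = (lo + hi) / 2
        l_exc = r * sens(620, l_peak) + (1 - r) * sens(546, l_peak)
        m_exc = r * sens(620, m_peak) + (1 - r) * sens(546, m_peak)
        if l_exc / m_exc < target:
            lo = r  # mixture looks too green: add more red
        else:
            hi = r
    return (lo + hi) / 2

# Two simulated observers whose L pigments peak 5 nm apart accept
# different match points, so each rejects the other's "perfect yellow."
print(round(rayleigh_ratio(560), 3), round(rayleigh_ratio(565), 3))
```

The point of the sketch is qualitative: a few nanometers of pigment difference is enough to move the accepted match, which is exactly what the spread in Figure 12 reflects.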

Whereas most of the research to date has examined the color vision of men, the situation for women, especially women who have a color vision deficiency or who have a family history of color deficiency, is even more interesting. Women, by virtue of having two X-chromosomes, inherit two sets of color vision genes. The traditional view proposed that inheriting the normal complement on one or both X-chromosomes led to normal color vision while failing to inherit a normal complement of genes on at least one X-chromosome led to defective color vision (see Mollon, 1992).

However, we now know that early in development women may alternatively express genes from one or the other X-chromosome and that later in development one of the X-chromosomes is inactivated. Consequently, a woman's retina may have regions where the cones reflect the expression of genes on one X-chromosome and other areas where the cones reflect the expression of genes on the other chromosome. A representation of a normal distribution of cone types is shown below:



Figure 13. A representation of the distribution of photoreceptors in the retina. The ratio of R / G / B cone types varies, but the long wavelength cones are the most prevalent; short wavelength cones the least prevalent in the retina. [Figure 13 description]

Women who are heterozygous for the normal complement of color vision genes, therefore, may have a "mosaic" retina: a patchwork of color-normal and color-deficient regions (Cohn, Emmerich, & Carlson, 1989). The nature of this mosaic depends on the inherited complement of color vision genes and on the point in development at which X-chromosome inactivation occurred. That is, some women heterozygous for these genes may develop a color vision deficiency while others may develop normal color vision (Miyahara, Pokorny, Smith, Baron, & Baron, 1998). And, in fact, there are reports in the literature of identical (monozygotic) twins in which one twin has normal color vision and the other is color-deficient (Jorgenson et al., 1992).
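The developmental lottery described above can be caricatured in a few lines of code. The sketch below is deliberately crude and its parameters are invented: each retinal "patch" descends from a progenitor cell in which one X-chromosome was randomly silenced, so the fraction of the retina expressing the normal gene complement varies from woman to woman, and that variation is larger when inactivation happens early (few progenitor cells) than late (many).

```python
import random

def normal_fraction(n_patches, p_normal_x_active=0.5, rng=None):
    """Fraction of retinal patches expressing the normal color vision genes,
    assuming each patch independently kept one X-chromosome active.
    n_patches stands in for how many progenitor cells existed at inactivation."""
    rng = rng or random.Random()
    kept_normal = sum(rng.random() < p_normal_x_active for _ in range(n_patches))
    return kept_normal / n_patches

def spread(fracs):
    """Population variance of a list of fractions."""
    m = sum(fracs) / len(fracs)
    return sum((f - m) ** 2 for f in fracs) / len(fracs)

rng = random.Random(1)
early = [normal_fraction(10, rng=rng) for _ in range(500)]    # early inactivation
late = [normal_fraction(1000, rng=rng) for _ in range(500)]   # late inactivation
```

The early-inactivation mosaics scatter widely around 50% color-normal tissue, which is one way to picture how two heterozygous women, or even identical twins, could end up with different color vision.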

In light of these current findings, sensory psychologists and other perception researchers are probably designing psychophysical tasks to try to tease apart the nature of color processing in the eyes of individuals with different complements of cone photoreceptors. The challenge will then fall to neuroscientists, molecular biologists, and others to support or refute our findings at the cellular level.

Future work for sensory psychologists will also involve investigating the extent to which these individual differences in color vision affect our interactions with the world. This knowledge is important because our society uses color to code information in a variety of settings, including education and transportation. In many occupations color discrimination is critical, for example, when distinguishing color-coded electrical wiring or signal lights. While these individual differences are small, they may prove problematic in some settings.

In contrast to the exciting research directed at the earliest stages of sensory processing, today there is also substantial research interest at the other end of the S&P continuum: research directed at higher-level perceptual processes and phenomena in the gray area where perception and cognition meld. One area of intense research effort today is the study of visual search: the ability to scan our visual world, quickly distinguish among a variety of objects or forms, and locate and identify a specific target.


Visual Search


Studies of visual search investigate the process of looking for and identifying the presence or absence of a specific visual stimulus (a target) embedded among other items (distracters). Most research in this area has been driven by the very interesting finding that visual searches for some features are much easier and faster than searches for others. Two natural questions emerge: which visual features are found quickly and easily, and which are not? And, of course, why are some visual searches easier than others? The answers to both questions, and an understanding of the mechanisms involved in visual search, will contribute to our understanding of the active nature of visual information processing and of the perceptual organization of our visual world.

How does this perceptual mechanism work? Although low-level stimulus features such as color or size play a major role in determining the efficiency (speed) of a visual search (Geisler & Chou, 1995), other factors, such as familiarity, matter as well (Lubow & Kaplan, 1997; Wang, Cavanagh, & Green, 1994).



As mentioned above, one fascinating aspect of our ability to perform a visual search is that some physical characteristics of stimuli allow for easy and efficient searches, whereas other stimuli result in difficult and time-consuming searches. This property is often referred to as salience, and objects that have high salience are perceived to "pop out" from their surroundings. It appears that visual scenes can be processed in parallel, that is, simultaneously and pre-attentively, for these items. Low-salience objects, on the other hand, require lengthy searches. A search for low-salience objects seems to take place in a serial format in which item-by-item processing is required. Sometimes simply changing which stimulus is the target and which stimuli are the distracters changes the quality of the search from one mode to the other. For example, Treisman & Souther (1985) found that searching for a "Q" among "O"s was an easy (parallel) task but that searching for an "O" among "Q"s was a more difficult (serial) task, as we demonstrate below.

Figure 14. This is an example of searching for a "Q" among a small array of "O"s. Clicking on the button labeled "Start" will present the stimulus array for a brief 100 msec. This is an example of a parallel search: the "Q" pops out and you have no difficulty seeing it despite the very short duration of the display. Now try the demonstration below: [Figure 14 description]



Figure 15. Same as above, but now you are searching for a "Q" in an array of 36 other items. Again, click "Start" to flash the array. Here, despite a 9-fold increase in the number of search items, you still find the "Q" quickly; there is little if any influence of the number of items. OK, now try the demonstrations below. In the first example you will be searching for an "O" among "Q"s in an array of 4 items. [Figure 15 description]



Figure 16. Again, you have no difficulty identifying the presence of the target, an "O" in this case, when the array size is small. [Figure 16 description] Now try the following:



Figure 17. Ouch! Were you lucky enough to see it? Not likely. In an array of items such as this, the presentation must be considerably longer to allow people enough time to identify the presence or absence of a target "O". It does not pop out. [Figure 17 description]
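The hallmark that separates these demonstrations, the effect of display size, can be caricatured in a toy reaction-time model. The base time, per-item cost, and noise below are invented round numbers, not fitted data; the point is only the qualitative signature: a parallel search stays flat across set sizes, while a serial self-terminating search slows as items are added.

```python
import random

def search_time_ms(set_size, mode, base=400.0, per_item=40.0, rng=None):
    """Toy reaction time for one search trial (all parameters invented).
    'parallel': the target pops out, so set size does not matter.
    'serial': items are checked one by one until the target is found,
    which takes about half the display on average."""
    rng = rng or random.Random()
    noise = rng.gauss(0.0, 20.0)
    if mode == "parallel":
        return base + noise
    items_checked = rng.randint(1, set_size)  # self-terminating scan
    return base + per_item * items_checked + noise

def mean_rt(set_size, mode, rng, trials=500):
    """Average simulated reaction time over many trials."""
    return sum(search_time_ms(set_size, mode, rng=rng) for _ in range(trials)) / trials

rng = random.Random(7)
parallel_4 = mean_rt(4, "parallel", rng)
parallel_36 = mean_rt(36, "parallel", rng)
serial_4 = mean_rt(4, "serial", rng)
serial_36 = mean_rt(36, "serial", rng)
```

Plotting mean reaction time against set size for the two modes reproduces the classic flat versus sloped search functions that the demonstrations above let you experience directly.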

Some of the additional visual features that pop out are brightness (Gilchrist, Humphreys, Riddock, & Neumann, 1997), color (D'Zmura, 1991), and motion (Nothdurft, 1993). Quite often, though, we are searching for objects that can only be identified by the simultaneous presence of two or more stimulus features. These so-called conjunction searches have been studied for color and form (D'Zmura, Lennie, & Tiana, 1997), motion and form (Muller & Found, 1996), color and orientation (Friedman-Hill & Wolfe, 1995), and for the conjunction of two colors or two sizes (Wolfe, 1992). Some conjunction searches are very difficult and require serial processing. A good example is the children's game "Where's Waldo?". In this popular game, children search drawings of crowded social events for one particular individual (Waldo), who characteristically wears a red-and-white-striped sweater and cap and glasses and has dark hair. Waldo must be distinguished from a crowd of "distractor" individuals who possess some but not all of these features.

Although there has been extensive research on the topic of visual search over the last decade, it is evident that there is still much to be learned about the basic processes involved. The results of dozens of visual search experiments (studying many different types of visual features and feature conjunctions) have shown us that a sharp distinction between serial and parallel processing may be too simplistic (Wolfe, 1998). The allocation of attention in visual search probably lies along a continuum, where stimulus features and context determine search efficiency. This raises interesting questions for future research. For example, what situations result in optimal search efficiency? How could the search context for a specific target be manipulated to maximize search efficiency?

The answers to these questions have very practical value in applied settings, that is, outside the laboratory in the real world. What we have learned about visual search has quickly been applied to many real-world situations such as air traffic control (Vortac, Edwards, Fuller, & Manning, 1993), driving (Lajunen, Hakkarainen, & Summala, 1996; Summala, Pasanen, Rasanen, & Sievanen, 1996), visual display design (Fisher & Tanner, 1992), and the monitoring of visual displays in the workplace (Liu, 1996). Today, questions regarding sensation and perception are increasingly being applied to problems outside the laboratory.



