Psychologists examine how our perceptual systems make sense of the world
Your alarm rouses you, and you open your eyes to shadows stretching across the ceiling. The coffee pot gurgles in the kitchen, birds chirp outside the window, and the dog runs circles around your feet. You open the cabinet and scan for your favorite mug before pouring the coffee.
What sounds like an ordinary morning is really extraordinary when you consider how your senses lead you through it.
For most of us, our eyes register electromagnetic radiation and our ears register sound waves. Neurons fire and we transform these stimuli into information, seemingly effortlessly. We don’t step on the dog, and we can distinguish the birdcall from the coffee pot gurgling.
But how do we do it? That question is at the heart of the research at the Center for Perceptual Systems at The University of Texas at Austin.
The group of researchers spans the areas of psychology, neurobiology, computer and electrical engineering, computer sciences, speech and biology. Working in cross-disciplinary teams, they tackle some of the most mysterious workings of human existence.
“Everybody’s interested in the perceptual systems, because they are how you gain information from the world and make sense of it; they allow you to interact with your environment,” says Wilson Geisler, director of the center and David Wechsler Professor in the Department of Psychology. “Your perceptual systems are your window on the world.”
The center began as the Center for Vision and Image Sciences and expanded in 2001 to become the Center for Perceptual Systems.
Today researchers study everything from the energy fields of electric fish to how our visual system enables us to navigate around pedestrians on a crowded sidewalk to the information contained in the tiny sounds your ear makes that you can’t hear.
It is complex work. The perceptual systems probably take up about 50 percent of all the gray matter in the brain, Geisler says.
“One of the things that perception gives you is the illusion that it’s simple because it happens pretty rapidly,” Geisler says. “You quickly recognize the objects in a room and can see their distances and shapes and colors. It’s really very complicated, and no one really knows in any kind of detail how we do it, although rapid progress is being made.”
Technology is helping. In fact, the technological advances of the past 20 years have entirely transformed the study of perceptual systems.
For example, Mary Hayhoe, psychology professor, used to study eye movement with subjects lying on a table while their eyes were observed. Today she can observe subjects walking around one of the best-known virtual reality labs in the country. Or, she can ask them to move through their day wearing a backpack system in which a camera can record their eye movements and the world they traverse simultaneously.
The technology, almost counter-intuitively, allows Hayhoe to take subjects out of the world constructed for the experiment and put them in the natural world. Researchers at the center are most interested in how our senses function in real-world situations instead of in arbitrarily constrained laboratory experiments.
The Center for Perceptual Systems is poised to become the best in the world for natural systems analysis, or the rigorous study of perception under natural conditions.
A Quarter in the Grass
It’s happened to all of us: We’ve dropped a quarter in the grass, or lost that post-it note on a desk stacked with papers, or scanned the crowd for the face of the friend we’ve come to meet. Our frustration mounts because it seems to take forever. There has to be a better way.
Wilson Geisler, who recently was elected to the National Academy of Sciences, set out to find out if there is. “The process of visual search is a natural task that we do all the time,” Geisler says, “and it’s really important for survival. But how human beings have solved the problem of using the eyes to find something is really complicated.”
The human visual system has evolved through compromise. We need to have a large field of view, and yet we also need to be able to see very fine detail. To see fine detail over a large field of view would require an optic nerve the size of an arm. Ours is the size of a pinky.
We’ve had to settle for an eye that couples low-resolution sight over a large field of view with higher resolution over only a very small central field of view.
Geisler took these known facts of human sight to the question of visual search and asked how human beings should move their eyes if they want to find something. What is the optimal way? What could we do to locate an object as quickly as possible? He and Jiri Najemnik, a graduate student researcher, answered the question by creating a mathematical algorithm for optimizing eye movements during visual search.
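As a rough, hypothetical illustration of what such an algorithm might look like (it is not the researchers' actual model), the sketch below keeps a probability distribution over possible target locations, assumes a made-up exponential falloff of visibility with distance from the current fixation, and greedily picks the fixation that maximizes the chance of detecting the target on the next glance.

```python
import numpy as np

# Hypothetical ideal-searcher sketch for a target hidden at one of N candidate
# locations along a line. Visibility falls off with distance from the current
# fixation (a stand-in for retinal eccentricity). The searcher keeps a
# posterior over target location and greedily fixates wherever the chance of
# detecting the target on the next glance is highest.

N = 50
locs = np.arange(N)
posterior = np.full(N, 1.0 / N)          # uniform prior over target location

def visibility(fixation, locations, scale=8.0):
    """Assumed detectability of a target at each location, given the fixation."""
    return np.exp(-np.abs(locations - fixation) / scale)

def choose_next_fixation(posterior):
    """Greedy one-step rule: maximize the probability of detecting the target
    on the next fixation, i.e. the sum of P(target there) * visibility."""
    scores = [np.sum(posterior * visibility(f, locs)) for f in locs]
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
target = 37                               # hidden target location

for step in range(10):
    fixation = choose_next_fixation(posterior)
    vis = visibility(fixation, locs)
    if rng.random() < vis[target]:        # detection is more likely near the fixation
        print(f"step {step}: fixated {fixation}, target detected at {target}")
        break
    # Bayesian update after a miss: locations that were highly visible and
    # still yielded nothing become less probable.
    posterior *= 1.0 - vis
    posterior /= posterior.sum()
    print(f"step {step}: fixated {fixation}, no detection")
```

The one-step-ahead rule is a simplification; the point of the sketch is only that where you should look next depends on both where the target is likely to be and how well you can see at each eccentricity.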
Then Geisler turned to human subjects to see how people perform a visual search in real life. The results surprised him.
“We found out that people are almost perfect,” Geisler says. “We can find things just about as fast as physically possible.” Geisler’s study reveals a lot about the complexity of the visual process and the ways we develop to optimize it. Going through our natural learning process, we seem to discover the best way to use our sensory system. And it may bring some relief the next time you’re frantically searching for that lost item that won’t come into sight. Rest assured, you really are doing the best that you can.
A Virtual Stroll
You know what they say about walking and chewing gum? It turns out there's a lot of truth to it: we really cannot attend to much more than one thing at a time. Mary Hayhoe is working to understand how we decide what to attend to.
“Most of the time we manage walking down the street just fine,” Hayhoe says. “We’re asking how it is you manage to attend to the right thing at the right time.”
She is answering that question by tracking the direction of the gaze. We distribute our gaze across natural scenes in ways that keep us from bumping into things, stepping off things, and otherwise hurting ourselves or others. We learn to do this as we develop during childhood.
“That learning is really critical,” Hayhoe says. “The visual system doesn’t just magically direct your eye to the right place. With young children, you get the sense that you have to be their attention, be there to say ‘Watch out!’ They’re still learning what the world is like.”
To understand how we adults, who have learned what the world is like, use our eyes to navigate, Hayhoe puts subjects on the sidewalk. In her case, it is a virtual sidewalk. Hayhoe operates one of the most advanced virtual reality labs in the country, alongside Dana Ballard, professor of computer science. In one of the many experiments in the lab, subjects don a virtual reality head mount and walk around the room.
An observer may think the subject is just making a long oval across the tile floor, but in the virtual world the subject is walking on a sidewalk, with a curb, fellow pedestrians and objects to steer around. The images the subject sees change rapidly, just as they would in the real world. A small camera tracks the subject’s left eye—the pupil and the corneal reflection—to see where he or she is looking at any time. LEDs in the ceiling track the subject’s head position, which allows the program to update the images the subject sees.
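As a hypothetical sketch (not the lab's actual software), the loop below shows the general shape of such a system: each frame reads the tracked head pose and the gaze direction estimated from the eye camera, redraws the virtual sidewalk from the new viewpoint, and logs where the subject is looking. The sensor-reading functions are stand-ins that return synthetic data.

```python
import math
import random
from dataclasses import dataclass

# Hypothetical per-frame loop for a head- and eye-tracked VR walking
# experiment. The sensor functions below are stand-ins returning synthetic data.

@dataclass
class HeadPose:
    x: float          # position on the lab floor (meters)
    y: float
    heading: float    # facing direction (radians)

def read_head_pose() -> HeadPose:
    """Stand-in for the ceiling LED tracker reporting head position and orientation."""
    return HeadPose(x=random.uniform(0.0, 5.0),
                    y=random.uniform(0.0, 5.0),
                    heading=random.uniform(0.0, 2.0 * math.pi))

def read_gaze_angle() -> float:
    """Stand-in for the eye camera: gaze direction relative to the head,
    estimated in real systems from the pupil center and corneal reflection."""
    return random.uniform(-0.3, 0.3)

def render_virtual_sidewalk(pose: HeadPose) -> None:
    """Stand-in for redrawing the virtual scene from the new viewpoint."""
    pass

def log_gaze(pose: HeadPose, gaze_angle: float) -> None:
    """Record where in the virtual world the subject is looking."""
    world_angle = pose.heading + gaze_angle
    print(f"gaze along {math.degrees(world_angle):6.1f} deg "
          f"from ({pose.x:4.2f}, {pose.y:4.2f})")

# Each frame: update the display from the tracked head pose, then log gaze.
for frame in range(5):
    pose = read_head_pose()
    gaze = read_gaze_angle()
    render_virtual_sidewalk(pose)
    log_gaze(pose, gaze)
```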
The experiment offers a complex view of how an individual uses his or her eyes.
“It’s like you’re inside her head,” Hayhoe says. “You get a lot of information about what a person is trying to do. She looks at the path, figures out where to put her feet, looks at pedestrians.”
One finding is that when your expectations of a scene change, your gaze behavior changes almost immediately. If a pedestrian suddenly acts unruly and bumps into you, for example, you’ll spend more time monitoring other pedestrians for a while afterward. Why use virtual reality instead of a real street?
“We want to do both,” Hayhoe says. “We’ve conducted experiments tracking people walking around the room with a real person trying to run into them. But you have less control in the real world, so we get ideas from it and then take them into a virtual environment where we can conduct more controlled experiments.”
Hayhoe’s research is critical to understanding the basic scientific issues about the neural underpinnings of visually guided behavior, and its implications are numerous. From creating new ways to teach driver education to working with stroke patients, it offers previously unavailable information about how we move and use our eyes.
Your Ears Reveal More Than You Think
Obviously the ears hear sounds, but would you have guessed they make sounds as well? Otoacoustic emissions, or OAEs, are tones that emanate from the inner ear and are measurable with a sensitive microphone placed in the ear canal. They may be spontaneous, present even in a sound-deadened room, or they may be produced in response to brief sounds. People generally don’t notice their OAEs.
“It’s interesting that most of us are unaware of those sounds,” says Dennis McFadden, Ashbel Smith Professor of Psychology. “But then again they’ve been around since birth, and maybe the higher centers of the brain have just nulled them out.”
Scientists, however, are finding that OAEs yield some very interesting information. OAEs are stronger and more numerous in females than in males, and that difference exists even at birth. Evidence suggests this is due to a difference in prenatal exposure to androgens (certain kinds of hormones). Females are exposed to weaker levels of androgens than males are.
Other differences in OAEs are even more interesting. Take boys with Attention-Deficit/Hyperactivity Disorder (ADHD). For years, clinicians have been arguing that there are two subtypes of ADHD—the inattentive type and the combined type (in which the individual has both inattention and hyperactivity symptoms). It turns out the ears support this distinction.
McFadden, working with colleagues in clinical psychology, found that the OAEs of boys with the combined type of ADHD were not different from those of boys without ADHD. But boys with the inattentive type of ADHD had weaker OAEs than either group. This suggests they were exposed to higher levels of androgens before birth. OAEs also appear to be different between females who have female twins and females who have male twins, as well as between heterosexual and homosexual females.
As scientists explore these differences, it becomes clear that the auditory system has the capacity to serve as a window into prenatal development and sexual differentiation. The finding is surprising, but for McFadden it is just one of the many mysteries of the auditory system that have kept him engaged during his 40 years of doing research at the university.
“The cochlea, or inner ear, is an extraordinarily complicated and therefore interesting structure,” McFadden says. “The way it breaks down sound waves so that they can be processed by the brain and then reconstitutes them to give us the rich experience of hearing—music, speech, environmental sounds—is fascinating to those of us who study hearing.”
Duck!
What goes on in the brain when we see a moving object? Lawrence Cormack and Alex Huk teamed up to find out.
Cormack, associate professor of psychology, has a background studying depth perception. Huk, assistant professor of neurobiology, has studied motion perception. Together they are examining how the brain responds to things that move through three dimensions. In other words, they’re studying how things happen in the real world. That may make sense, but it has not always been an easy thing to do. Traditionally, experiments on motion perception placed a subject in front of a flat television screen or monitor. This was all well and good, as long as life happened on a flat screen. Life, however, happens in three dimensions.
Today, research can happen in three dimensions as well. The scientists put subjects in what they described as a “souped-up MRI scanner.” The machine isn’t much different from the one used to create images of your knee or shoulder, except it has been upgraded to take images of blood flow and oxygenation in the brain.
“Neurons, like the rest of us, need more when they’re active,” Cormack says. “So after neurons fire up there’s an increase in blood flow. We can look at that and tell what part of your brain’s been active.”
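A toy example of that logic, with a simple assumed block design and simulated data (this is not the lab's analysis pipeline): voxels whose blood-oxygenation signal rises and falls with the stimulus get flagged as active.

```python
import numpy as np

# Toy illustration of "active neurons draw more blood flow": flag voxels
# whose simulated signal tracks the stimulus timing. All values are made up.

rng = np.random.default_rng(1)
n_timepoints, n_voxels = 120, 1000

# Assumed block design: the stimulus alternates off/on every 10 scans.
stimulus = np.tile(np.repeat([0.0, 1.0], 10), 6)

# Simulated recording: most voxels are noise; 20 of them follow the stimulus.
bold = rng.normal(size=(n_timepoints, n_voxels))
truly_active = rng.choice(n_voxels, size=20, replace=False)
bold[:, truly_active] += 0.8 * stimulus[:, None]

# Correlate each voxel's time series with the stimulus and threshold.
stim_z = (stimulus - stimulus.mean()) / stimulus.std()
bold_z = (bold - bold.mean(axis=0)) / bold.std(axis=0)
correlation = (bold_z * stim_z[:, None]).mean(axis=0)
flagged = np.flatnonzero(correlation > 0.3)

print(f"{flagged.size} voxels flagged as active, "
      f"{len(set(flagged) & set(truly_active))} of them truly active")
```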
While lying in the scanner, subjects are presented stimuli in three dimensions, similar to a 3-D movie, but more advanced. They watch objects moving across the horizon and objects moving toward and away from the body. And their brains register the difference.
The brain is far more active when objects move through depth, toward and away from the head, than when they move across a flat computer screen. Asked what that means in everyday life, Cormack calls out, “Duck!” “I say that somewhat flippantly, yet it ties in with some other little fun facts,” Cormack explains.
Among those fun facts is that the muscles attached to the eyeball that are responsible for crossing the eyes (or following approaching objects) are larger and stronger than the muscles that move the eyes up and down.
“The brain’s paying a lot of attention, developing a robust response to things coming at you so that you can react to them,” Cormack says. “Essentially it says something’s happening that’s terribly important. Get busy.”
Sound Systems
Randy Diehl, psychology professor and dean of the College of Liberal Arts, is used to people looking at him quizzically when he says he studies speech perception. “What’s the problem there?” they ask. “We perceive vowels and consonants, so what? There doesn’t seem to be any mystery.”
In fact, there’s plenty of mystery, Diehl says, and his lab has been working to decipher it for years. “It’s a difficult problem because the task of listeners is to extract words from the speech signal,” Diehl says. “Words are series of distinctive sounds; think of them as sequences of vowels and consonants. It turns out that every talker produces these things in different ways.”
In other words, words themselves can be very different acoustically depending on who is saying them. A speaker with a large vocal tract is different from one with a small vocal tract. Men and women speak differently, as do children and adults, people with different types of speech pathologies or dialects, people who speak quickly or slowly, with food in their mouths or through a tube.
And yet we understand them.
“We have to apply adjustments on-the-fly in interpreting the acoustic signal to take into account all of the sources of lawful variability,” Diehl says. “And we do that without knowing we’re doing it. It’s as though the listener by a certain age has implicitly acquired all the rules for transforming those variations into a uniform message.”
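One standard, textbook way to model part of that adjustment (it is not Diehl's own model) is per-talker normalization of vowel formants, for example Lobanov z-scoring, which removes overall vocal-tract-size differences while preserving the relative positions of a talker's vowels. The formant values in the sketch below are made up for two hypothetical talkers.

```python
import numpy as np

# Lobanov z-score normalization of vowel formants, applied separately to each
# talker. The values are made-up F1/F2 pairs (Hz) for the same three vowels
# produced by a talker with a long vocal tract and one with a short vocal tract.

speakers = {
    "large_vocal_tract": np.array([[300, 2200], [700, 1200], [350, 800]], float),
    "small_vocal_tract": np.array([[380, 2900], [900, 1500], [450, 1000]], float),
}

def lobanov(formants):
    """Z-score each formant (column) within a talker, removing overall
    vocal-tract-size differences while keeping the vowels' relative positions."""
    return (formants - formants.mean(axis=0)) / formants.std(axis=0)

for name, raw_hz in speakers.items():
    print(name, np.round(lobanov(raw_hz), 2))
# After normalization, corresponding vowels from the two talkers land in
# nearly the same region of normalized F1/F2 space, even though the raw
# frequencies differ substantially.
```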
For the past 20 years, Diehl’s lab has focused much of its work on understanding auditory processing: how the brain converts the acoustic signals it receives into neural representations. To do so, Diehl has looked at the sound systems of various languages.
Though there are somewhere between 4,000 and 6,000 languages worldwide, the sound systems of those languages have a lot in common. Given the range of sounds that the human vocal tract can produce, there has to be a reason that certain patterns are so popular.
Working with Björn Lindblom of Stockholm University, Diehl set out to create an inventory of sounds that would be optimal for language, based on the degrees of freedom in the human vocal tract and the premise that optimal sounds will be sufficiently audible and distinctive from each other, enabling listeners to distinguish them from noise. Running a representative sampling of sounds through a computer model, Diehl was able to select an optimal vowel system. If that system consisted of just five vowels, it would be the sounds ee, ah, oo, ay, oh. This corresponds perfectly with the vowel systems in Spanish, Japanese, Hawaiian and a host of other languages. In fact, it is the most common vowel system among the world’s languages overall.
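A toy version of that dispersion idea, with made-up formant values and a plain Euclidean distance rather than the perceptually motivated measure the actual model used: exhaustively score every five-vowel subset of a candidate set and keep the one whose closest pair of vowels is farthest apart.

```python
import numpy as np
from itertools import combinations

# Toy sketch of the dispersion principle behind optimal vowel inventories:
# from a set of producible vowels, pick the five whose worst-case pairwise
# separation in formant space is largest, i.e. the most mutually distinctive
# subset. The candidate F1/F2 values (Hz) below are illustrative only.

candidates = {
    "i (ee)": (280, 2250), "e (ay)": (400, 2000), "ae": (660, 1700),
    "a (ah)": (730, 1100), "o (oh)": (500, 900),  "u (oo)": (320, 850),
    "schwa":  (500, 1500),
}

def min_pairwise_distance(names):
    """Smallest distance between any two vowels in a proposed system."""
    pts = [np.array(candidates[n], float) for n in names]
    return min(np.linalg.norm(p - q) for p, q in combinations(pts, 2))

# Exhaustively score every five-vowel system and keep the most dispersed one.
best = max(combinations(candidates, 5), key=min_pairwise_distance)
print("most dispersed five-vowel system:", best)
```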
This means that languages have evolved to take advantage of the optimal ways our auditory system can interpret sound. Diehl says this finding was just a starting point. He has taken advantage of state-of-the-art technology to trace how the auditory system codes the frequency of vowels, allowing more complex vowel systems to evolve.
And his recent work is breaking new ground in predicting the optimal ways that humans classify phonemes, allowing them to distinguish between similar sounds, like buh and puh.
All of this work is geared toward solving the problem of how we are able to sit down with someone we’ve never met and understand each other.
“It’s really a great problem,” he says, “because it intersects with so many classic philosophical and psychological questions about the nature of perception, the nature of language and the nature of memory and how knowledge is used to interpret our world.”
The Mind’s Eye: Psychologist’s Insights into Brain Could Restore Sight
Medical researchers have an impressive history of innovation, pushing the human body’s capacity to heal beyond what most could imagine. Reconstructive surgery, heart transplants and remarkably functional cochlear implants give patients with severe injuries or critical impairments a second chance.
However, modern technology has had a limited impact on the visual system, and doctors do not have many tools to treat severe afflictions. Associate professor of psychology Eyal Seidemann hopes to change that. Seidemann studies brain activity in the primary visual cortex, the first and largest processing stage of visual information in the cerebral cortex (“gray matter”).
The primary visual cortex, together with several dozen subsequent visual cortical areas, is responsible for forming our visual perception. Seidemann hopes his research will lead to the development of a visual prosthesis, allowing people with severe eye damage to see.
“The brain uses its own language of electrical signals to represent our environment,” Seidemann says. “Everything that we perceive has to be represented in our brains using this complex neural code. Our ultimate goal is to break this code. If we could understand the brain’s inner language, we may one day be able to bypass the eyes and insert the electrical signals that represent the current visual scene directly into the relevant neurons in the patient’s visual cortex, thereby restoring normal vision.”
Most researchers in the field study vision by measuring the electrical signals of single neurons or by looking at the activity in large regions in the brain. Seidemann studies a critical intermediate spatial scale of small groups of neurons. His goal is to understand how groups of neurons behave together, which offers a more complete picture of how the brain represents information.
Seidemann says an exciting aspect of the research is that fully understanding the primary visual cortex likely will lead to a more complete understanding of other parts of the brain.
“The primary visual cortex is just one part of the cerebral cortex,” he explains. “Each region of the cortex is responsible for a different function (such as perception, memory, thought or movement planning) but each region’s architecture is very similar. This similarity suggests that the neural language is likely to be shared among the sensory, cognitive and motor parts of our brain.”
Seidemann says understanding the neural code in the visual system, therefore, is likely to have profound consequences for our ability to treat many other disorders of the brain.
Lost in Translation: Anthropologist Preserves Dying Sign Languages
During the 1960s, when linguists and anthropologists began studying the languages of deaf people, research focused on national sign languages, with scant attention paid to indigenous ones.
Today, Angela Nonaka, assistant professor of anthropology who specializes in linguistic anthropology, is preserving endangered and undocumented sign languages in Thailand.
By examining and documenting rare phonological forms (hand configurations), color terminology (using three basic colors) and baby talk, she analyzes Ban Khor Sign Language and compares its variations with American Sign Language (ASL) and other languages. Understanding the origins of indigenous sign languages, which spontaneously arise in small rural villages around the world, allows researchers to answer questions about language complexity and evolution that would otherwise go unanswered once these unique communication methods disappear, Nonaka explains.
For example, when ASL was introduced in Thailand during the 1950s, local sign languages began to dissipate. Now, many of these sign languages are in danger of becoming extinct within the next couple of generations.
“The true extent of the country’s linguistic diversity has yet to be fully recognized or appreciated because an entire class of sign languages remains largely unexplored,” Nonaka says.
Her research extends beyond linguistic documentation. She takes a holistic anthropological approach, investigating how and why native sign languages form, spread and disappear. By examining the local sign language communities, she works to preserve and revitalize the culture, as well as the language.
“The study of sign languages enriches our collective knowledge of linguistics and anthropology, underscoring the true linguistic and cultural diversity in the world,” Nonaka says.