Human Brain's Method of Combining Information Depends on How Many Senses Supply Input
PHILADELPHIA – When the human brain is presented with conflicting information about an object from different senses, it finds a remarkably efficient way to sort out the discrepancies, according to findings reported in the Nov. 22 issue of the journal Science.
Scientists from the University of Pennsylvania; the University of California, Berkeley; New York University; and the Max-Planck Institute for Biological Cybernetics found that when sensory cues from the hands and eyes differ from one another, the brain effectively splits the difference to produce a single percept. The researchers describe the middle ground as a "weighted average" because in any given individual one sense may have more influence than the other. When the discrepancy is too large, however, the brain reverts to information from a single cue – from the eyes, for instance – to make a judgment about what is true.
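The release does not spell out the weighting rule, but the "weighted average" it describes is conventionally formalized as a reliability-weighted combination, in which each cue's weight is inversely proportional to the variance (noisiness) of its estimate. The sketch below is a minimal illustration of that idea under those assumptions; the function name, the variance parameters, and the simple discrepancy threshold used for the fall-back behavior are illustrative choices, not code or values from the study.

```python
def combine_cues(estimate_a, var_a, estimate_b, var_b, max_discrepancy=None):
    """Reliability-weighted average of two sensory estimates.

    Each cue's weight is inversely proportional to its variance
    (a noisier cue gets a smaller weight). If the two estimates
    differ by more than `max_discrepancy`, fall back to the single
    more reliable cue instead of fusing them.
    """
    # Weights proportional to reliability (1 / variance), normalized to sum to 1.
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    w_b = 1.0 - w_a

    if max_discrepancy is not None and abs(estimate_a - estimate_b) > max_discrepancy:
        # Discrepancy too large: rely on whichever single cue is more reliable.
        return estimate_a if var_a <= var_b else estimate_b

    # Otherwise split the difference in proportion to each cue's reliability.
    return w_a * estimate_a + w_b * estimate_b
```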
"We rely upon our senses to tell us about the surrounding environment, including an object's size, shape and location," said lead author Jamie M. Hillis, a postdoctoral researcher in Penn's Department of Psychology and former graduate student at UC Berkeley. "But sensory measurements are subject to error, and frequently one sensory measurement will differ from another."
In a series of experiments, the researchers divided 12 subjects into two groups. One group received two different types of visual cues, while the other received visual and haptic (touch) cues. The visual-haptic group assessed three horizontal bars; two appeared equally thick to the eye and hand in all instances, while the third bar alternately appeared thicker or thinner to the eye or hand. The group with two visual inputs assessed surface orientation, with two surfaces appearing equally slanted according to two visual cues, while a third appeared more slanted according to one cue and less slanted according to the other.
To manipulate the sensory cues, the researchers used force-feedback technology to simulate touch and shutter glasses to present 3-D visual stimuli. Participants in the visual-haptic group inserted their thumb and forefinger into the force-feedback device to "feel" an object that was projected onto a monitor. Through the devices, they saw and felt the same virtual object.
"We found that when subjects grasped an object that felt 57 millimeters thick but looked as if it were 55 millimeters thick, their brains interpreted the object as being somewhere in between," Hillis said.
"If the brain is taking in different sensory cues and combining them to create one representation, then there could be an infinite number of combinations that the brain is perceiving to be the same," said co-author Martin S. Banks, professor of optometry and psychology at UC Berkeley. "The brain perceives a block to be three inches tall, but was it because the eyes saw something that looked four inches tall while the hands felt something to be two inches tall? Or, was it really simply three inches tall? We wanted to know how much could we push that."
The researchers found that pushing the discrepancies too far caused the brain to default to signals from either the hands or the eyes, depending on which seemed more accurate. That means the brain maintains three separate representations of the object's properties: one from the combined visual and haptic cues, a second from the visual cues alone, and a third from the haptic cues alone.
What surprised the researchers was that the same rule did not hold when the brain received discrepant cues from within the same sense. In tests where participants used only their eyes, researchers presented conflicting visual cues about the degree of slant in surfaces appearing before them. One cue – binocular disparity – made the surface appear to slant in one direction, while the other cue – texture gradient – indicated a different slant. The participants consistently perceived the "weighted average" of the visual signals no matter how far the two cues differed.
"If the discrepant cues were both visual, the brain essentially threw the two individual estimates away, keeping only the single representation of the object's property," Hillis said.
"There are many instances where a person will be looking at one thing and touching another, so it makes sense for the brain to keep the information from those two sensory cues separate," Banks said. "Because people can't look at two different objects at the same time, the brain can more safely discard information from individual visual cues after they've been combined into one representation. The brain is efficient in that it doesn't waste energy maintaining information that it will not likely need in real life."
Hillis and Banks' co-authors are Marc O. Ernst, now at the Max-Planck Institute, who conducted the visual-haptic experiments, and Michael S. Landy of NYU. Their work was supported by the Max-Planck Society, the Air Force Office of Scientific Research, the National Institutes of Health and the National Science Foundation, with an equipment grant from Silicon Graphics.