New research from the University of Pennsylvania, the Scuola Internazionale Superiore di Studi Avanzati (SISSA), and KU Leuven details the time scales of visual information processing across different regions of the brain. Using state-of-the-art experimental and analytical techniques, the researchers found that deeper regions of the brain encode visual information slowly and persistently, which helps explain how the brain accurately identifies fast-moving objects and images. The findings were published in Nature Communications.
Understanding how the brain works is a major research challenge, with many theories and models developed to explain how complex information is processed and represented. One area of particular interest is vision, which accounts for a large share of neural activity; in humans, for example, there is evidence that around half of the neurons in the cortex are involved in vision.
Researchers are eager to understand how the visual cortex processes and retains information about objects in motion, allowing people to take in dynamic scenes while still recognizing the objects around them.
“One of the biggest challenges of all the sensory systems is to maintain a consistent representation of our surroundings, despite the constant changes taking place around us. The same holds true for the visual system,” says Davide Zoccolan, director of SISSA’s Visual Neuroscience Laboratory. “Just look around us: objects, animals, people, all on the move. We ourselves are moving. This triggers rapid fluctuations in the signals acquired by the retina, and until now it was unclear whether the same type of variations apply to the deeper layers of the visual cortex, where information is integrated and processed. If this was the case, we would live in tremendous confusion.”
Experiments using static stimuli, such as photographs, have found that information from the sensory periphery is processed in the visual cortex according to a finely tuned hierarchy. Deeper regions of the brain then translate this information about visual scenes into more complex shapes, objects, and concepts. But how this process works in more dynamic, real-world settings is not well understood.
To shed light on this, the researchers analyzed neural activity patterns in multiple visual cortical areas in rodents while the animals were shown dynamic visual stimuli. “We used three distinct datasets: one from SISSA, one from a group at KU Leuven led by Hans Op de Beeck, and one from the Allen Institute for Brain Science in Seattle,” says Zoccolan. “The visual stimuli used in each were of different types. At SISSA, we created dedicated video clips showing objects moving at different speeds. The other datasets were acquired using various kinds of clips, including from films.”
Next, the researchers analyzed the signals recorded in different areas of the visual cortex using a combination of sophisticated algorithms and models developed by Penn’s Eugenio Piasini and Vijay Balasubramanian. They built a theoretical framework connecting the images in the movies to the activity of specific neurons, in order to determine how neural signals evolve over different time scales.
“The art in this science was figuring out an analysis method to show that the processing of visual images is getting slower as you go deeper and deeper in the brain,” says Balasubramanian. “Different levels of the brain process information over different time scales; some things could be more stable, some quicker. It’s very hard to tell if the time scales across the brain are changing, so our contribution was to devise a method for doing this.”
Using these state-of-the-art experimental data and an innovative analysis approach, the researchers found that deeper regions of the brain process information over longer time scales and represent it more stably. The brain appears to have developed a system that dampens overly rapid fluctuations without discarding potentially valuable information, allowing the visual cortex to retain information about objects in motion and maintain a consistent picture of the surroundings.
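To make the idea of an area’s processing time scale concrete, the sketch below illustrates one common way such quantities are estimated: fit an exponential decay to the autocorrelation of a neural response and read off the decay constant. This is a minimal illustration on synthetic data, assuming a simple exponential model; it is not the authors’ published pipeline, and the area names and parameters in it are hypothetical.

```python
# Minimal sketch, not the authors' published analysis: estimate a neural
# "time scale" by fitting an exponential decay to the autocorrelation of a
# response trace. Deeper, slower areas should yield longer fitted time scales.
import numpy as np
from scipy.optimize import curve_fit

def autocorrelation(signal: np.ndarray, max_lag: int) -> np.ndarray:
    """Normalized autocorrelation of a 1-D response for lags 0..max_lag."""
    x = signal - signal.mean()
    ac = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(max_lag + 1)])
    return ac / ac[0]

def fit_timescale(ac: np.ndarray, dt: float) -> float:
    """Fit ac(lag) ~ exp(-lag / tau) and return tau in seconds."""
    lags = np.arange(len(ac)) * dt
    (tau,), _ = curve_fit(lambda t, tau: np.exp(-t / tau), lags, ac, p0=[0.1])
    return tau

# Synthetic illustration: two "areas" driven by the same dynamic stimulus but
# integrating it over different windows (fast, V1-like vs. slow, deeper area).
rng = np.random.default_rng(0)
dt = 0.01                                  # 10 ms time bins
stimulus = rng.standard_normal(5000)       # stand-in for a dynamic movie

for name, window in [("fast, V1-like area", 5), ("slow, deeper area", 50)]:
    kernel = np.exp(-np.arange(5 * window) / window)      # exponential filter
    response = np.convolve(stimulus, kernel / kernel.sum(), mode="same")
    tau = fit_timescale(autocorrelation(response, max_lag=200), dt)
    print(f"{name}: fitted time scale ~ {tau * 1000:.0f} ms")
```

Run as-is, the slower simulated area yields a fitted time scale roughly an order of magnitude longer than the fast one, which is the kind of gradient the study reports along the cortical hierarchy.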
Another surprising finding concerned the similarities and differences among three brain states: anesthesia, active wakefulness, and an awake but inactive state (akin to a meditative state). The researchers’ analysis shows that the brains of anesthetized animals more closely resemble those of actively awake animals than those of awake but inactive ones, a result that could influence the way that researchers think about brain states in the future.
In the future, follow-up experiments will provide an opportunity to test theories more thoroughly and to formulate new hypotheses about how the brain works. “We’re looking for good theoretical frameworks that can compactly explain a variety of phenomena,” says Piasini. “The idea of the brain as a machine that adapts and learns to exploit the structure of the world is one of the most promising ones, but it lacks sufficient causal tests. Our work helps to establish the groundwork for carrying out such tests.”
The complete author list: Eugenio Piasini and Vijay Balasubramanian from the University of Pennsylvania; Liviu Soltuzu, Paolo Muratore, Riccardo Caramellino, and Davide Zoccolan from the Scuola Internazionale Superiore di Studi Avanzati (SISSA); Kasper Vinken from Harvard University; and Hans Op de Beeck from KU Leuven. Piasini and Soltuzu are co-first authors.
This research was supported by European Research Council Consolidator Grant 616803-LEARN2SEE and National Institutes of Health grants R01EY07977 and R01NS113241.
Vijay Balasubramanian is the Cathy and Marc Lasry Professor in the Department of Physics and Astronomy in the School of Arts & Sciences at the University of Pennsylvania.
Eugenio Piasini is a post-doctoral researcher in the Department of Physics and Astronomy in the School of Arts & Sciences at Penn.