The human brain uses more energy than any other organ in the body, requiring as much as 20% of the body’s total energy. While this may sound like a lot, the amount of energy would be even higher if the brain were not equipped with an efficient way to represent only the most essential information within the vast, constant stream of stimuli taken in by the five senses. The hypothesis for how this works, known as efficient coding, was first proposed in the 1960s by vision scientist Horace Barlow.
Now, new research from the Scuola Internazionale Superiore di Studi Avanzati (SISSA) and the University of Pennsylvania provides evidence of efficient visual information coding in the rodent brain, adding support to this theory and its role in sensory perception. Published in eLife, these results also pave the way for experiments that can help understand how the brain works and can aid in developing novel artificial intelligence (AI) systems based on similar principles.
According to information theory—the study of how information is quantified, stored, and communicated—an efficient sensory system should only allocate resources to how it represents, or encodes, the features of the environment that are the most informative. For visual information, this means encoding only the most useful features that our eyes detect while surveying the world around us.
Vijay Balasubramanian, a computational neuroscientist at Penn, has been working on this topic for the past decade. “We analyzed thousands of images of natural landscapes by transforming them into binary images, made up of black and white pixels, and decomposing them into different textures defined by specific statistics,” he says. “We noticed that different kinds of textures have different variability in nature, and human subjects are better at recognizing those which vary the most. It is as if our brains assign resources where they are most necessary.”
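The pipeline Balasubramanian describes can be sketched in miniature. The snippet below is an illustrative simplification, not the study's actual analysis: it thresholds a grayscale image at its median to produce black-and-white (±1) pixels, then summarizes the result with a few pairwise pixel correlations. The real work uses a much richer family of multipoint texture statistics; the function names here are invented for illustration.

```python
import random
import statistics

def binarize(image):
    """Threshold a grayscale image (a list of rows of floats) at its
    median, mapping each pixel to +1 (white) or -1 (black)."""
    flat = [p for row in image for p in row]
    med = statistics.median(flat)
    return [[1 if p > med else -1 for p in row] for row in image]

def pair_correlation(binary, dr, dc):
    """Average product of pixel pairs offset by (dr, dc).
    Ranges from +1 (pairs always identical) to -1 (always opposite);
    0 means the pair carries no statistical structure."""
    rows, cols = len(binary), len(binary[0])
    pairs = [binary[r][c] * binary[r + dr][c + dc]
             for r in range(rows - dr) for c in range(cols - dc)]
    return sum(pairs) / len(pairs)

# Demo on a synthetic random "image" standing in for a natural scene.
random.seed(0)
img = [[random.random() for _ in range(64)] for _ in range(64)]
bw = binarize(img)
stats = {"horizontal": pair_correlation(bw, 0, 1),
         "vertical": pair_correlation(bw, 1, 0),
         "diagonal": pair_correlation(bw, 1, 1)}
```

Measuring how much each such statistic varies across thousands of natural images is what lets the researchers ask whether perceptual sensitivity tracks that variability.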
To determine how this process works in rodents, SISSA’s Riccardo Caramellino, the study’s first author, together with Andrea Buccellato and Anna Carboncino, trained rodents to discriminate binary textured images, mirroring how such studies are conducted in people. The researchers then analyzed their results using a mathematical model of an “ideal observer” developed by Eugenio Piasini, co-first author of the paper, who carried out this research as a postdoctoral fellow in Penn’s Computational Neuroscience Initiative and is now an assistant professor at SISSA.
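The ideal-observer idea can be illustrated with a standard signal-detection sketch, under the simplifying assumption that each texture family yields a single Gaussian-distributed statistic with equal spread and equal priors; the paper's actual model is considerably richer, and the function names below are hypothetical.

```python
import math

def ideal_observer_choice(observed_stat, mean_a, mean_b, sigma):
    """Optimal decision between two texture families, each assumed to
    produce a Gaussian-distributed statistic with the same spread:
    choose the family under which the observation is more likely
    (with equal spreads, this is simply the nearer mean)."""
    ll_a = -(observed_stat - mean_a) ** 2 / (2 * sigma ** 2)
    ll_b = -(observed_stat - mean_b) ** 2 / (2 * sigma ** 2)
    return "A" if ll_a > ll_b else "B"

def predicted_accuracy(mean_a, mean_b, sigma):
    """Percent correct the ideal observer can achieve: Phi(d'/2),
    where d' = |mean_a - mean_b| / sigma is the standard
    signal-detection sensitivity index."""
    d_prime = abs(mean_a - mean_b) / sigma
    return 0.5 * (1 + math.erf(d_prime / (2 * math.sqrt(2))))
```

For example, with means of +1 and -1 and a spread of 1, d' = 2 and the predicted accuracy is about 84%. Comparing an animal's actual performance against this optimal benchmark is what reveals which texture statistics its visual system encodes well.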
The researchers found that rats are most sensitive to the textures that are most variable in nature, indicating that brains adapt to environmental conditions and are primed to detect the differences that are naturally more prevalent. “We have found, in rodents, a pattern of perceptual sensitivity for visual textures that is consistent with efficient coding and is the same as the one previously observed in humans, despite the phylogenetic distance between these species. This result suggests that efficient texture coding may be a universal principle in vision,” says SISSA’s Davide Zoccolan.
The researchers say that these findings could pave the way for new types of experiments to better understand the neuronal mechanisms behind this fundamental process, and could also support the development and training of AI and artificial vision systems.
The authors are Riccardo Caramellino, Andrea Buccellato, Anna Carboncino, and Davide Zoccolan from SISSA and Eugenio Piasini and Vijay Balasubramanian from Penn.
This research was supported by the European Research Council (Consolidator Grant 616803-LEARN2SEE), the National Science Foundation (Grant 1734030), the National Institutes of Health (Grant R01NS113241), and the Computational Neuroscience Initiative of the University of Pennsylvania.