A new theory for what’s happening in the brain when something looks familiar

This novel concept from the lab of neuroscientist Nicole Rust brings the field one step closer to understanding how memory functions. Long-term, it could have implications for treating memory-impairing diseases like Alzheimer’s.

How can the brain distinguish between something new and something familiar? Research from the Visual Memory Lab, led by Nicole Rust, proposes a new theory, replacing one long held by the field. (Image: Julia Kuhl)

When a person views a familiar image, even having seen it just once before for a few seconds, something unique happens in the human brain.

Until recently, neuroscientists believed that vigorous activity in a visual part of the brain called the inferotemporal (IT) cortex meant the person was looking at something novel, like the face of a stranger or a never-before-seen painting. Less IT cortex activity, on the other hand, indicated familiarity.

But something about that theory, called repetition suppression, didn’t hold up for University of Pennsylvania neuroscientist Nicole Rust. “Different images produce different amounts of activation even when they are all novel,” says Rust, an associate professor in the Department of Psychology. Beyond that, other factors—an image’s brightness, for instance, or its contrast—result in a similar effect.

In a paper published in the Proceedings of the National Academy of Sciences, she and postdoctoral fellow Vahid Mehrpour, along with Penn research associate Travis Meyer and Eero Simoncelli of New York University, propose a new theory, one in which the brain understands the level of activation expected from a sensory input and corrects for it, leaving behind the signal for familiarity. They call it sensory referenced suppression.

The visual system

Rust’s lab focuses on systems and computational neuroscience, which combines measurements of neural activity and mathematical modeling to figure out what’s happening in the brain. One aspect relates to the visual system. “The big central problem of vision is how to get the information from the world into our heads in an interpretable way. We know that our sensory systems have to break it down,” she says.

Nicole Rust is an associate professor in the Department of Psychology in the School of Arts & Sciences at the University of Pennsylvania. She is also director of the Visual Memory Lab, co-director of the Computational Neuroscience Initiative, and MindCORE’s executive director for research. (Image: Courtesy Nicole Rust)

It’s a complicated process, greatly simplified here for clarity: Information comes into the eye via the rods and cones. It travels neuron by neuron through a stack of brain areas that make up the visual system and finally to a visual brain area called the IT cortex. Its 16 million neurons activate in different patterns depending on what’s being viewed, and the brain must then interpret the patterns to understand what it’s seeing.

“You get one pattern for a specific face. You get a different pattern for ‘coffee cup.’ You get a different pattern for ‘pencil,’” Rust says. “That’s what the visual system does. It builds the world back up to help you decipher what you’re looking at.”

In addition to its role in vision, activation of the IT cortex is also thought to play a role in memory. Repetition suppression, the old theory, relies on the idea that there's an activation threshold that gets crossed: More neural activity tells the brain the image is novel; less indicates one that has been seen before.

Because several factors affect the total amount of neural activity, measured as spikes, in the IT cortex, the brain can't discern what's specifically causing the reaction. It could be memory, image contrast, or something else altogether, Mehrpour says. "We propose a new idea that the brain corrects for the changes caused by these other factors, in our case contrast," he says. After that calibration, what remains is the isolated brain activation for familiarity. In other words, the brain understands when it is viewing something that it has previously seen.
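To make the correction concrete, here is a minimal sketch in Python. It is not the team's actual model: the population size, firing-rate numbers, and the simple subtraction standing in for the brain's correction are all illustrative assumptions. The toy simulates spike counts driven jointly by contrast and familiarity, then subtracts the activation expected at that contrast, leaving only the familiarity signal.

```python
import numpy as np

rng = np.random.default_rng(0)

def population_response(contrast, familiar, n_neurons=100):
    """Toy IT-like population: spike counts scale with contrast
    and are suppressed when the image is familiar."""
    base = rng.uniform(5, 15, n_neurons)       # per-neuron baseline rates
    gain = 1.0 + 2.0 * contrast                # contrast multiplies activity
    suppression = 0.8 if familiar else 1.0     # familiarity lowers activity
    return rng.poisson(base * gain * suppression)

def expected_activation(contrast):
    """The activation 'expected' at this contrast, familiarity aside:
    here, the average response to novel images at the same contrast."""
    return np.mean([population_response(contrast, familiar=False)
                    for _ in range(200)])

for contrast in (0.2, 0.9):
    for familiar in (False, True):
        raw = population_response(contrast, familiar).mean()
        corrected = raw - expected_activation(contrast)
        print(f"contrast={contrast}, familiar={familiar}: "
              f"raw={raw:5.1f} spikes, corrected={corrected:+5.1f}")
```

In the raw totals, a familiar high-contrast image can evoke more spikes than a novel low-contrast one, so no single activity threshold separates new from old. After the contrast correction, novel images land near zero and familiar ones fall reliably below it.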

Long-term implications

To draw this conclusion, the researchers presented sequences of grayscale images to two adult male rhesus macaques. Every image appeared exactly twice, the first time as novel, the second time as familiar, in a range of high- and low-contrast combinations. Each viewing lasted precisely half a second. The animals were trained to use eye movements to indicate whether an image was new or familiar, disregarding the contrast levels.

As the macaques performed this memory task, the researchers recorded neural activity in the IT cortex, measuring spikes from hundreds of individual neurons, an approach that differs from methods that measure proxies of neural activity averaged across some 10,000 firing neurons. Because Rust and colleagues wanted to understand the neural code, they needed information for individual neurons.

Using a mathematical approach, they deciphered the patterns of spikes that accounted for how the macaques could distinguish memory from contrast. This ultimately confirmed their hypothesis. “Familiarity and contrast both change the overall firing rate,” Rust says. “What we’re saying is the brain can tease apart and isolate one from the other.”
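The flavor of that analysis can be illustrated with a generic linear read-out. This is a hedged sketch, not the paper's actual decoding procedure; the tuning weights, trial counts, and the use of scikit-learn's LogisticRegression are all assumptions made for the example. Because each simulated neuron mixes contrast and familiarity with its own weights, the pattern across neurons carries information that the summed firing rate alone does not.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_neurons = 100

# Each neuron weighs the two signals differently, so the population
# pattern can disentangle what the overall rate confounds.
contrast_w = rng.uniform(0.5, 2.0, n_neurons)   # contrast sensitivity
memory_w = rng.uniform(0.1, 0.6, n_neurons)     # familiarity suppression

def trial(contrast, familiar):
    rate = 10 * contrast * contrast_w
    if familiar:
        rate = rate * (1.0 - memory_w)
    return rng.poisson(rate)

def make_trials(n):
    X, y = [], []
    for _ in range(n):
        c = rng.choice([0.3, 1.0])              # low or high contrast
        f = bool(rng.integers(2))               # novel or familiar
        X.append(trial(c, f))
        y.append(int(f))
    return np.array(X), np.array(y)

X_train, y_train = make_trials(500)
X_test, y_test = make_trials(500)

# A linear read-out of familiarity from the population pattern,
# trained and tested across both contrast levels.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("familiarity decoding accuracy:", clf.score(X_test, y_test))
```

A read-out trained this way typically reports familiarity with high accuracy at both contrast levels in this toy, echoing the finding that the two signals, though mixed in the overall firing rate, can be teased apart at the population level.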

In the future, better understanding this process could have applications for artificial intelligence, Mehrpour says. “If we know how the brain represents and rebuilds information in memory in the presence of changes in sensory input like contrast, we can design AI systems that work in the same way,” he says. “We could potentially build machines that work in the same way that our brain does.”

Beyond that, Rust says that down the line the findings could have implications for treating memory-impairing diseases like Alzheimer’s. “By understanding how memory in a healthy brain works, you can lay the foundations to develop preventions and treatments for the memory-related disorders plaguing an aging population.”

But for any of this to come to pass, it will be crucial to keep digging, she says. “To get this right, we have to understand the memory signal that’s driving behavior.” This work brings neuroscientists one step closer.

Funding for this research came from the Simons Foundation (grants 543033 and 543047), National Eye Institute of the National Institutes of Health (Grant R01EY020851), National Science Foundation (CAREER Award 1265480), and Howard Hughes Medical Institute.

Vahid Mehrpour is a postdoctoral fellow in the Visual Memory Lab at the University of Pennsylvania.

Travis Meyer is a research associate in the Visual Memory Lab at the University of Pennsylvania.

Nicole Rust is an associate professor in the Department of Psychology in the School of Arts & Sciences at the University of Pennsylvania. She is also director of the Visual Memory Lab, co-director of the Computational Neuroscience Initiative, and MindCORE’s executive director for research.

Eero Simoncelli is a professor of neural science, mathematics, data science, and psychology in the College of Arts & Science at New York University. He is also founding director of the Center for Computational Neuroscience at the Simons Foundation's Flatiron Institute.