Why are some images more memorable than others?
Summary: A new study reveals that the brain prefers to remember images that are more difficult to explain. The researchers used a computational model and behavioral experiments to show that scenes that were difficult for the model to reconstruct were better remembered by the participants.
This finding helps explain why certain visual experiences stick in our memory. The study could also inform the development of AI memory systems.
Key facts:
- Memory formation: The brain tends to remember images that are difficult to interpret or explain.
- Computational model: The researchers used a model that compresses and then reconstructs visual signals.
- AI implications: The findings could help build more efficient memory systems for artificial intelligence.
Source: Yale
The human brain filters the flood of experiences and creates specific memories. Why do some experiences become memorable in this flood of sensory information, while most are discarded by the brain?
A computational model and behavioral study developed by Yale researchers suggest a new clue to this age-old question, they report in the journal Nature Human Behaviour.
“The mind prefers to remember things that it can’t explain very well,” said Ilker Yildirim, an assistant professor of psychology in Yale’s Faculty of Arts and Sciences and senior author of the paper. “If the scene is predictable and not surprising, it can be ignored.”
For example, a person may be briefly confused by the presence of a fire hydrant in a remote natural environment, making the image more difficult to interpret and therefore more memorable. “Our study explored the question of which visual information is memorable by pairing a computational model of scene complexity with a behavioral study,” Yildirim said.
For the study, led by Yildirim and John Lafferty, the John C. Malone Professor of Statistics and Data Science at Yale, the researchers developed a computational model that looked at two steps in memory formation—the compression of visual signals and their reconstruction.
Based on this model, they designed a series of experiments in which people were asked whether they could remember specific images from a sequence of natural images displayed in rapid succession. The Yale team found that the more difficult it was for the computational model to reconstruct the image, the more likely the participants were to remember the image.
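To make the compress-and-reconstruct logic concrete, here is a minimal sketch in Python. It is not the authors' model: a simple PCA-style linear compression stands in for their actual method, and the random "feature embeddings" are placeholders for real image features. The point is only the shape of the computation the study describes: compress each image's representation, reconstruct it, and score the image by its reconstruction residual.

```python
# Toy sketch of the compress -> reconstruct -> residual pipeline.
# NOT the authors' model: PCA-style compression stands in for theirs,
# and the random embeddings are placeholders for real image features.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in feature embeddings for 100 images (e.g., from a vision network).
features = rng.normal(size=(100, 512))

# Fit a low-dimensional linear basis on the embeddings (the lossy "compression").
centered = features - features.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
basis = vt[:32]                       # keep 32 components

# Compress and reconstruct each embedding, then score by residual error.
codes = centered @ basis.T            # compression step
reconstructed = codes @ basis         # reconstruction step
residuals = np.linalg.norm(centered - reconstructed, axis=1)

# The study's finding: images with larger residuals were remembered better.
hardest_to_reconstruct = np.argsort(-residuals)[:10]
print(hardest_to_reconstruct)
```

On the paper's account, the images at the top of this ranking, the ones the model reconstructs worst, are the ones participants should be most likely to remember.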
“We used an artificial intelligence model to try to shed light on human perception of scenes—an understanding that could help develop more efficient memory systems for artificial intelligence in the future,” said Lafferty, who is also director of the Center for Neurocomputation and Machine Intelligence at the Wu Tsai Institute at Yale.
Former Yale graduate students Qi Lin (psychology) and Zifan Lin (statistics and data science) are co-first authors on the paper.
About this visual memory research news
Author: Bill Hathaway
Source: Yale
Contact: Bill Hathaway—Yale
Image: The image is credited to Neuroscience News
Original Research: Closed access.
“Images with harder-to-reconstruct visual representations leave stronger memory traces” by Ilker Yildirim et al. Nature Human Behaviour
Abstract
Images with harder-to-reconstruct visual representations leave stronger memory traces
Much of what we remember is not due to deliberate choice, but simply a byproduct of perception.
This raises a fundamental question about the architecture of the mind: how does perception connect with and influence memory?
Here, inspired by a classic proposal relating perceptual processing to memory durability, the levels-of-processing theory, we present a sparse coding model for compressing feature embeddings of images and show that reconstruction residuals from this model predict how well images are encoded into memory.
In an open dataset of scene-image memorability, we show that reconstruction error explains not only memory accuracy but also response latency during retrieval, with the latter capturing all of the variance explained by powerful vision-only models. We also confirm a prediction of this account using “model-driven psychophysics”.
This work establishes reconstruction error as an important signal linking perception and memory, possibly through adaptive modulation of perceptual processing.
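The sparse coding step named in the abstract can be sketched briefly as well. Everything below is an illustrative assumption rather than a component of the paper: the dictionary is random instead of learned, and the embeddings and sparsity penalty are arbitrary. The sketch only shows how L1-penalized sparse coding yields the per-image reconstruction residuals the abstract treats as a memory signal.

```python
# Hedged sketch of sparse coding with reconstruction residuals.
# Dictionary, alpha, and embeddings are illustrative assumptions only.
import numpy as np
from sklearn.decomposition import SparseCoder

rng = np.random.default_rng(1)

# Random unit-norm dictionary atoms; real work would learn these from data.
n_atoms, dim = 64, 128
dictionary = rng.normal(size=(n_atoms, dim))
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

embeddings = rng.normal(size=(50, dim))   # stand-in image feature embeddings

# Sparse codes via L1-penalized regression; alpha controls sparsity.
coder = SparseCoder(dictionary=dictionary,
                    transform_algorithm="lasso_lars",
                    transform_alpha=0.5)
codes = coder.transform(embeddings)

# Per-image reconstruction residual: the signal said to predict memory.
residuals = np.linalg.norm(embeddings - codes @ dictionary, axis=1)
print(residuals[:5])
```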