Brain regions that build panoramic memory identified

New York, Sep 9 (IANS) Neuroscientists have identified two brain regions that are involved in creating panoramic memories and help us to merge fleeting views of our surroundings into a seamless, 360-degree panorama.

As we look at a scene, visual information flows from our retinas into the brain, which has regions that are responsible for processing different elements of what we see, such as faces or objects.

“Our understanding of our environment is largely shaped by our memory for what’s currently out of sight,” said lead author Caroline Robertson, a postdoctoral researcher at the Massachusetts Institute of Technology (MIT) in the US.

The study identified the hubs in the brain where memories of the panoramic environment are integrated with the current field of view.

The researchers suspected that areas involved in processing scenes — the occipital place area (OPA), the retrosplenial complex (RSC), and parahippocampal place area (PPA) — might also be involved in generating panoramic memories of a place such as a street corner.

Brain scans conducted on study participants revealed that when participants saw two images that they knew were linked, the response patterns in the RSC and OPA regions were similar.

However, this was not the case for image pairs that the participants had not seen as linked.

This suggests that the RSC and OPA, but not the PPA, are involved in building panoramic memories of our surroundings, the researchers said.

“Our hypothesis was that as we begin to build memory of the environment around us, there would be certain regions of the brain where the representation of a single image would start to overlap with representations of other views from the same scene,” Robertson added.

For the study, the team used immersive virtual reality headsets, which allowed them to show people many different panoramic scenes. The researchers showed participants images from 40 street corners in Boston’s Beacon Hill neighbourhood.

The images were presented in two ways. Half the time, participants saw a 100-degree stretch of a 360-degree scene, while the other half of the time they saw two noncontinuous stretches of a 360-degree scene.

After showing participants these panoramic environments, the researchers then showed them 40 pairs of images and asked if they came from the same street corner.

Participants were much better at determining whether pairs came from the same corner if they had seen the two scenes linked in the 100-degree image than if they had seen them unlinked, according to the paper, published in the journal Current Biology.
