London: Scientists have created the first-ever map of how the brain organises the thousands of images that flood in through our eyes every day.
A team at the University of California, Berkeley, found that the brain is wired to organise all the categories of objects and actions that we see.
To illustrate their findings, they have created the first map of how the brain organises these categories across the cortex, the 'Daily Mail' reported.
The result, achieved through computational models of brain imaging data collected while test subjects watched hours of video clips, is what researchers call a 'continuous semantic space'.
The UC Berkeley team has mapped this data across the human cortex to show which areas of the brain deal with which categories of objects we see in the world around us.
The team used functional magnetic resonance imaging (fMRI) to record the brain activity of participants as they watched two hours of film clips.
Researchers then analysed the readings to find correlations in the data and build a model showing how each of 30,000 subdivisions in the cortex responded to the 1,700 categories of objects and actions shown.
Next, they used principal component analysis, a statistical method that can summarise large data sets, to find the 'semantic space' that was common to all the study subjects.
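The two analysis steps described above, a per-voxel model of category responses followed by principal component analysis, can be sketched in miniature. This is an illustrative simulation, not the study's actual data or code: the voxel and category counts are shrunk, the weight matrix is random, and the choice of four components is only an assumption for the example.

```python
import numpy as np

# Hedged sketch: simulate a matrix of fitted model weights, one row per
# cortical voxel and one column per stimulus category. (The study used
# roughly 30,000 cortical subdivisions and 1,700 categories; far smaller
# numbers are used here purely for illustration.)
rng = np.random.default_rng(0)
n_voxels, n_categories = 3000, 170
weights = rng.standard_normal((n_voxels, n_categories))

# Principal component analysis: centre the columns, then take the top
# right singular vectors via SVD. Each component is a direction in
# "category space" capturing variance shared across voxels -- one
# dimension of a common semantic space.
centred = weights - weights.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
explained = (S ** 2) / (S ** 2).sum()  # variance explained per component

n_components = 4  # assumed small, since the space is compact
semantic_space = Vt[:n_components]           # components in category space
voxel_coords = centred @ semantic_space.T    # each voxel's position in it
print(semantic_space.shape, voxel_coords.shape)
```

Projecting every voxel onto the leading components is what allows the resulting coordinates to be painted across a cortical map, as the researchers did.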
"Our methods open a door that will quickly lead to a more complete and detailed understanding of how the brain is organised," said Alexander Huth, lead author of the study.
Researchers found that the brain efficiently represents the diversity of categories in a compact space. Instead of a distinct brain area devoted to each category, as previous work had identified for some but not all types of stimuli, the researchers found that brain activity is organised by the relationships between categories.
"Humans can recognise thousands of categories. Given the limited size of the human brain, it seems unreasonable to expect that every category is represented in a distinct brain area," said Huth.
A clearer understanding of how the brain organises visual input can help with the diagnosis and treatment of brain disorders. The findings may also be used to create brain-machine interfaces, particularly for facial and other image recognition systems.
"Our discovery suggests that brain scans could soon be used to label an image that someone is seeing, and may also help teach computers how to better recognise images," said Huth. The study was published in the journal Neuron.