Scientists can now use brain activation patterns to identify complex thoughts like “The witness shouted during the trial.”
The research uses machine-learning algorithms and brain-imaging technology to “mind read.”
The findings indicate that the mind’s building blocks for constructing complex thoughts are formed by the brain’s various sub-systems and are not word-based. Published in Human Brain Mapping, the study offers new evidence that the neural dimensions of concept representation are universal across people and languages.
“One of the big advances of the human brain was the ability to combine individual concepts into complex thoughts, to think not just of ‘bananas,’ but ‘I like to eat bananas in the evening with my friends,’” says Marcel Just, professor of psychology in Carnegie Mellon University’s Dietrich College of Humanities and Social Sciences.
“We have finally developed a way to see thoughts of that complexity in the fMRI signal. The discovery of this correspondence between thoughts and brain activation patterns tells us what the thoughts are built of.”
Previous work by Just and his team showed that thoughts of familiar objects, like bananas or hammers, evoke activation patterns that involve the neural systems that we use to deal with those objects. For example, how you interact with a banana involves how you hold it, how you bite it, and what it looks like.
The new study demonstrates that the brain’s coding of 240 complex events (sentences like the shouting-during-the-trial scenario) uses an alphabet of 42 meaning components, or neurally plausible semantic features, such as person, setting, size, social interaction, and physical action. Each type of information is processed in a different brain system, which is also how the brain processes information for objects. By measuring the activation in each brain system, the program can tell what types of thoughts are being contemplated.
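To make that representation concrete, here is a minimal sketch of how one sentence might be encoded over such features. The feature names and values are illustrative assumptions; the article does not enumerate the study’s actual 42-feature alphabet.

```python
# Illustrative sketch: a sentence as a vector of neurally plausible
# semantic features. The feature names and values below are hypothetical;
# the study's full 42-feature alphabet is not listed in this article.
SEMANTIC_FEATURES = ["person", "setting", "size", "social_interaction",
                     "physical_action"]  # ... 42 features in the real model

# "The witness shouted during the trial."
sentence_features = {
    "person": 1.0,              # a witness is present
    "setting": 1.0,             # a courtroom setting
    "size": 0.0,                # no salient size information
    "social_interaction": 1.0,  # shouting directed at others
    "physical_action": 1.0,     # the act of shouting
}

# The model works with a fixed-order numeric vector per sentence.
feature_vector = [sentence_features[f] for f in SEMANTIC_FEATURES]
print(feature_vector)  # [1.0, 1.0, 0.0, 1.0, 1.0]
```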
For seven adult participants, the researchers used a computational model to assess how the brain activation patterns for 239 of the sentences corresponded to the neurally plausible semantic features that characterized each sentence. The program was then able to decode the features of the 240th, left-out sentence. The researchers repeated this with each of the 240 sentences left out in turn, a procedure known as leave-one-out cross-validation.
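A rough sketch of that leave-one-out procedure, assuming a ridge regression from activation patterns to feature vectors (the article does not name the study’s actual model, and the data here are random stand-ins):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_sentences, n_voxels, n_features = 240, 5000, 42

# Random stand-ins for the real data: one fMRI activation pattern and one
# semantic-feature vector per sentence.
activations = rng.standard_normal((n_sentences, n_voxels))
features = rng.standard_normal((n_sentences, n_features))

correct = 0
for i in range(n_sentences):
    train = np.delete(np.arange(n_sentences), i)  # leave sentence i out
    model = Ridge(alpha=1.0).fit(activations[train], features[train])
    predicted = model.predict(activations[i:i + 1])[0]

    # One plausible scoring scheme: count the prediction as correct when
    # the left-out sentence's true feature vector is the nearest neighbor
    # (by cosine similarity) of the prediction among all 240 sentences.
    sims = features @ predicted / (
        np.linalg.norm(features, axis=1) * np.linalg.norm(predicted))
    correct += int(np.argmax(sims) == i)

print(f"decoding accuracy: {correct / n_sentences:.0%}")
```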
The model was able to predict the features of the left-out sentence with 87 percent accuracy, despite never having been exposed to its activation before. It was also able to work in the other direction: to predict the activation pattern of a previously unseen sentence, knowing only its semantic features.
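That reverse, encoding direction amounts to a separate regression from features to activation. Under the same assumptions as the sketch above, with random stand-in data, it might look like this:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
features = rng.standard_normal((240, 42))       # semantic feature vectors
activations = rng.standard_normal((240, 5000))  # fMRI activation patterns

# Train on 239 sentences, then synthesize the activation pattern the
# left-out sentence should evoke, knowing only its semantic features.
encoder = Ridge(alpha=1.0).fit(features[:-1], activations[:-1])
predicted_activation = encoder.predict(features[-1:])  # shape (1, 5000)
```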
“Our method overcomes the unfortunate property of fMRI to smear together the signals emanating from brain events that occur close together in time, like the reading of two successive words in a sentence,” Just says. “This advance makes it possible for the first time to decode thoughts containing several concepts. That’s what most human thoughts are composed of.”
He adds, “A next step might be to decode the general type of topic a person is thinking about, such as geology or skateboarding. We are on the way to making a map of all the types of knowledge in the brain.”
Funding for the work came from the Intelligence Advanced Research Projects Activity (IARPA).
Source: Carnegie Mellon University