How does the human brain use neural activity to create and represent the meanings of words, phrases, sentences, and stories? One way to study this question is to give people text to read while scanning their brains. We have been doing such experiments with fMRI (1 mm spatial resolution) and MEG (1 msec temporal resolution) brain imaging, and developing novel machine learning approaches to analyze the resulting data. As a result, we have learned answers to questions such as "Are the neural encodings of word meaning the same in your brain and mine?", "Are neural encodings of word meaning built out of recognizable subcomponents, or are they randomly different for each word?", "What sequence of neurally encoded information flows through the brain during the half second in which it comprehends a word?", and "How are the meanings of multiple words combined when reading phrases, sentences, and stories?" This talk will summarize our machine learning approach, some of what we have learned, and newer questions we are currently studying.
Posted by: Nathan Galli