Saturday, August 1, 2015

Seeing Speech in the Brain

An fMRI study of brain dynamics during speech production.

fMRI is now a standard method for looking at brain activity, though its resolution in both time and space is poor. It detects only changes in blood oxygenation and flow, which are an indirect, if regular, response to neuronal activity. Additionally, the (living) brain is always on, churning away at its own fantasies and unconscious processing, so the differences between resting and active states can be quite hard to detect, especially at this gross level of resolution.

A recent study took a stab at detecting speech production in the brain by fMRI, analyzing the data by correlation between regions. They broke the brain up into 212 regions and measured the mutual correlation between each pair of them under three tasks given to 14 subjects. The first task was the resting state, with the subject's eyes closed and instructions to have no specific thoughts. Not the greatest method, from my experience of meditation, actually(!) The other two tasks were, first, making simple eeee and iiii vowel sounds, and, second, repeating simple but complete English sentences. The respective brain correlation networks are shown below.

Correlation maps among the best-correlated of 212 nodes (brain volumes) of fMRI activity, averaged over 14 subjects. Nodes are colored "hotter" the more connected they are to other nodes, and the lines between them are colored "hotter" by strength of correlation. The A images show the complete maps; the B, C, D, and E graphs are sub-selections of the data by node "hotness", with the most connected (red) nodes in the B maps, and so on.
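For the mechanics-minded, the network construction is simple at heart: correlate every region's time course with every other's, then keep only the strongest links as graph edges. A minimal sketch in Python/numpy, with random stand-in data and an assumed selection cutoff (the study's actual preprocessing and statistics differ):

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_timepoints = 212, 300               # 212 ROIs; timepoint count assumed
ts = rng.standard_normal((n_regions, n_timepoints))  # stand-in for real fMRI ROI signals

corr = np.corrcoef(ts)                           # 212 x 212 pairwise correlation matrix
np.fill_diagonal(corr, 0.0)                      # ignore trivial self-correlation

# Keep only the strongest few percent of correlations as edges,
# echoing the "best-correlated" selection in the figure (cutoff assumed).
cutoff = np.percentile(np.abs(corr), 95)
adjacency = np.abs(corr) >= cutoff
print("edges kept:", int(adjacency.sum()) // 2)  # symmetric matrix counts each edge twice
```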


While the complexity of these images resists interpretation, they show several notable things. First is the overall lower connectivity in the resting state, which is dominated by green-level correlations; there are only about 3 "hot" nodes, as shown in the B sub-graph, in red. In contrast, during speaking of either syllables or sentences, far more nodes (15 and 11, respectively) are highly connected, and the overall graph is heavily weighted towards higher connectivity. The identity of the "hot" nodes is difficult to pin down in functional terms, but they seem to be dominated by premotor and motor cortex regions, labeled by their Brodmann area numbers, 6 and 4, in the graphs.
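The node "hotness" in these maps is, in essence, a ranking by total connection strength. A toy version of that ranking, assuming hotness is the summed absolute correlation per node and that the tiers split evenly (the paper's exact measure and cutoffs may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
corr = np.corrcoef(rng.standard_normal((212, 300)))  # stand-in correlation matrix
np.fill_diagonal(corr, 0.0)

strength = np.abs(corr).sum(axis=1)   # summed correlation weight per node
order = np.argsort(strength)[::-1]    # most-connected ("hottest") nodes first

# Bin the ranking into four tiers, like the B-E sub-graphs (cutoffs assumed).
for label, tier in zip("BCDE", np.array_split(order, 4)):
    print(f"tier {label}: hottest nodes {tier[:5].tolist()} ...")
```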

Secondly, while the syllable graphs have even higher connectivity in some respects than the sentence graphs, they show much less involvement of the frontal cortex, which is entirely absent from their orange- and green-level correlation graphs, in contrast to the sentence case. This makes, on the face of it, complete sense, and supports the idea that we are seeing aspects of speech processing through this method. The emphasis in the syllable case is on motor processing, while the full-sentence case adds executive complexity as well as some cerebellar fine sequencing control. That the resting state shows some frontal cortex activity and connectivity also stands to reason, if one assumes that uttering plain syllables leads to utter boredom, while letting the mind run free with "no thoughts" is likely to lead, frequently, to some relatively complex thoughts.

Did they find anything interesting? Long lists of brain areas with slight differences were offered, but these seem to deserve much deeper (future) analysis before anything approaching a comprehensive theory of how speech production circuitry looks and works can emerge. This is really only a first peek at the issue, using crude imaging and advanced analysis methods.

The researchers try to further differentiate the brain states by an analysis of their modularity. The claim is that the speech-producing brain shows tighter and more intensely local activity than the resting state, whose weaker correlations are more widely and loosely spread. The speaking brain also generates more active modules, such as one reaching into the cerebellum. Unfortunately, despite some extremely snazzy graphics, I found the data impossible to interpret, and not, on the face of it, compelling, though reasonable enough. It is going to take a great deal more work, and perhaps better technology, to map the speech circuitry in a useful way. Without much better spatial and especially temporal resolution, one can't capture such dynamic and fine-scale activities.
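For what a modularity analysis amounts to: partition the thresholded graph into communities and score how much denser the within-community links are than chance would predict. A hedged sketch using networkx community detection; the algorithm and cutoff here are assumptions for illustration, not the study's own pipeline:

```python
import numpy as np
import networkx as nx
from networkx.algorithms import community

rng = np.random.default_rng(0)
corr = np.corrcoef(rng.standard_normal((212, 300)))  # stand-in for real ROI signals
np.fill_diagonal(corr, 0.0)

# Threshold to the strongest few percent of correlations (cutoff assumed).
cutoff = np.percentile(np.abs(corr), 95)
G = nx.from_numpy_array((np.abs(corr) >= cutoff).astype(int))
G.remove_nodes_from(list(nx.isolates(G)))            # drop unconnected regions

# Greedy modularity maximization; Q near 0 means no real community structure.
modules = community.greedy_modularity_communities(G)
Q = community.modularity(G, modules)
print(f"{len(modules)} modules, modularity Q = {Q:.2f}")
```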

An example quote points to the possible functions underlying what was imaged. But the differentiation from the resting state was not very strong, since the resting state showed plenty of connections / correlations as well, just slightly weaker.
"The SPN [speech production network] core sensorimotor hub network established connections with the brain regions responsible for sound perception and encoding (auditory cortex), phonological and semantic processing (parietal cortex), lexical decisions and narrative comprehension (middle/posterior cingulate cortex), motor planning and temporal processing of auditory stimuli (insula), control of learned voice production and suppression of unintended responses (basal ganglia), and modulation of vocal motor timing and sequence processing (cerebellum)."


  • A brief poster of the above work.
  • Speech on the A-train.
  • Institutions of fiction- money, heaven, companies, rights, states.
  • "Sneering at religion is juvenile."
  • Just what is addiction?
  • Race conscious, or race unconscious?
  • Echoes of the Ruhr... Greece to run 3.5% surpluses forever? Don't bet on it- those debts are dead. Also, a bonus podcast on Greece, which will need balance-of-trade discipline one way or another, inside or outside the Euro.
  • State of the GOP race.
  • Cringely on the OPM disaster and who was running their systems.
  • Friedman created not only his own economic fantasies, but fantasy versions of his opponents.
  • Coal is still the big problem for climate change.
  • Economic graph of the week: inflation is dead. So is US currency volatility.

