Saturday, January 11, 2014

Sympathetic vibrations: speech waves and brain waves

Brain waves sync up with perceived speech, pointing to possible functions.

What do brain waves do? They are a prominent feature of live, working brains, and change markedly under different conditions, especially sleep and epilepsy. They seem like a natural analog to the CPU clocking that is so essential in artificial computers, but are clearly more chaotic, slower, and more diverse. They seem to make up a moderately important dimension of brain processing, combining with the more fundamental dimensions of anatomical organization and electrical/chemical pathway conduction to make up brain activity.

A recent paper makes the comment that.. "A large number of invasive and non-invasive neurophysiological studies provide converging evidence that cortical oscillations play an important role in gating information flow in the human brain, thereby supporting a variety of cognitive processes including attention, working memory, and decision-making."

So what does "gating" mean? That is a bit hard to say. In artificial computers, the clock cycle is essential to quantize computation so that each transistor and each operation gets a chance to do its thing in a defined time, then rests so that other elements can catch up, keeping the whole computational process in logical register. Brains may need a similar service, but it is clearly far messier, since individual neurons take orders from no one: they seem to fire almost chaotically. While rhythmicity is a property of individual neurons, brain waves (aka cortical or electrical oscillations) are very much a mass phenomenon, only biasing the behavior of individual neurons, not ruling them outright.

An attractive place to look for their function is in auditory cognition, especially speech recognition, since both speech and cortical oscillations have a hierarchical, multi-frequency structure, though the range of sound frequencies (~30 Hz to ~15,000 Hz) is substantially wider than the range of electrical brain oscillations (a few Hz to maybe 150 Hz). Maybe they map to each other in some discernible way? As the authors state:
"The similarity in the hierarchical organisation of cortical oscillations and the rhythmic components of speech suggests that cortical oscillations at different frequencies might sample auditory speech input at different rates. Cortical oscillations could therefore represent an ideal medium for multiplexed segmentation and coding of speech. The hierarchical coupling of oscillations (with fast oscillations nested in slow oscillations) could be used to multiplex complementary information over multiple time scales for example by separately encoding fast (e.g., phonemic) and slower (e.g., syllabic) information and their temporal relationships."

Basically, the authors had subjects (22 of them) listen to about seven minutes of speech, played either forward or backward, and at the same time used magnetoencephalography (MEG), i.e. a ginormous machine that detects slight magnetic fields emanating from the head, to track superficial brain waves. MEG is somewhat more sensitive than EEG, which is done with electrodes pasted onto the scalp. Then they fed both data streams into a correlating procedure (below), and looked for locations where the two oscillations were related.

Procedure of analysis: each waveform stream was deconstructed and correlated, to find locations in the brain where electromagnetic surface waves reflect speech waves.
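For flavor, here is a minimal sketch of the kind of phase-entrainment measure such an analysis involves: band-pass both the speech envelope and an MEG channel into the theta range, extract their instantaneous phases, and ask how consistent the phase difference is. This is a generic phase-locking value, not the authors' actual pipeline (which works on source-localized data with surrogate statistics); the function names and band edges are my own.

```python
# Generic phase-locking sketch: how consistently does the phase of a theta-band
# MEG signal track the phase of the theta-band speech envelope?
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_phase(x, lo, hi, fs, order=4):
    """Band-pass filter, then return instantaneous phase via the Hilbert transform."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return np.angle(hilbert(sosfiltfilt(sos, x)))

def phase_locking(speech_env, meg_channel, fs, band=(3.0, 7.0)):
    """Mean resultant length of the phase difference: 0 = no locking, 1 = perfect."""
    dphi = band_phase(speech_env, *band, fs) - band_phase(meg_channel, *band, fs)
    return np.abs(np.mean(np.exp(1j * dphi)))

# Hypothetical usage, with both signals resampled to a common rate:
# plv = phase_locking(speech_envelope, meg_sensor, fs=200)
```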

They found several instances of correlation. Two were in the low-frequency (1-2 and 4-8 Hz) delta and theta rhythms, whose phase directly entrains with the speech rhythm. Two more were in the 20 and 50 Hz range, where the amplitude of these gamma rhythms correlated with the phase of the lower-frequency speech rhythms, a somewhat more indirect correlation. The locations of these brain wave correlations were, naturally, over the auditory and speech centers of the brain:

Location of brain waves, of various frequency bands, that correlated with speech patterns. This is a map of significant results, shown for each hemisphere (right hemisphere on the right). Note the significant differences between the hemispheres.

"In sum, this comprehensive analysis revealed two distinct speech tracking mechanisms in the brain. First, low-frequency speech modulations entrain (that is, align the phase of) delta and theta oscillations in the auditory cortex. Second, low-frequency speech modulations also entrain the amplitude dynamics of gamma oscillations."


Speech trace (A) shown with a superimposed dotted line (cosine) representing the theta brain wave of the listener. In B, the brain is shown, with locations of 3-7 Hz entrainment labeled in red, specifically entrainment that differed significantly between the forward and backward speech renditions. C shows the overall cross-correlation data for both hemispheres, with signals at 20 and 48 Hz in at least one hemisphere. This analysis tracked not overall speech, but speech starts and stops, showing close phase tracking.

The phase entrainment shifted when successive speech elements (starts/stops of sentences and words) arrived, showing that the system tracks the input quite closely.

Most intriguingly, the authors found that backward speech was significantly less correlated with brain waves than forward speech. This suggests some top-down control, in which the intelligibility of the speech stream is broadcast back to lower levels of the auditory processing apparatus, fine-tuning expectations of the next event via stronger rhythmic alignment.

They also found differences between the hemispheres, with the low-frequency correlations stronger in the right hemisphere, and the gamma-wave correlations stronger in the left, which contains the primary language areas in most people (such as Broca's and Wernicke's areas).

"Our study supports emerging models of speech perception that emphasise the role of brain oscillations. Hierarchically organised brain oscillations may sample continuous speech input at rates of prominent speech rhythms (prosody, syllables, phonemes) and represent a first step in converting a continuous auditory stream to meaningful internal representations."

One can imagine that brain waves assist processing in several ways. When unified over large areas of the brain, they might enforce regimented processing (i.e. transfer of neuronal signals from one cell / layer / module to the next, in ways that constitute signal processing from raw to more abstract representations), which could make that processing more efficient and better able to influence other brain systems, such as those supporting consciousness. In auditory processing, the advantage of lining up processing with the actual signal should be clear enough. Brain waves could also reduce the background chatter that seems universal in other brain studies. Do they "carry" signals themselves? Not really, just as the computer clock cycle doesn't tell us what the computer happens to be doing, but facilitates the detailed processing flowing through its innumerable wires and junctions.


  • A better review of the same paper.
  • Test your hearing.
  • Religion, tribalism, hate, love, etc. etc...
  • But some still insist upon religion. And "definitively refute" atheism. And finish up with C. S. Lewis. Hmmm. 
  • The Onion refutes it a little better.
  • And becoming an atheist.. not so easy.
  • Economic wreckers and capitalist running dogs in our midst.
  • Turns out, Republicans do favor redistribution, after all.
  • Managing the job guarantee.
  • 4K TVs work wonders as monitors.
  • The India diplomatic row is an example of why worker protections and minimum wage protections are so important... the system worked.
  • Satanists.. performing a public service.
  • Yes, he is a bully.
  • Inheritance is increasingly significant, so death taxes are more important than ever.
  • Economists have no idea what they are doing.
  • Economic graph of the week, on unemployment.
