Saturday, January 14, 2017

Why Have Brain Waves?

A theory about the function of electrical brain oscillations.

The phenomenon of brain waves has been the topic of many posts here, because they are such a tempting target for brain-wide information synchronization and management. Disparate analogies come to mind, such as radio broadcasting and the clocking of CPU chips. Yet the phenomenon is complicated, with lots of noise and a variety of active frequencies spanning a ten-fold range. There have been many clues about their function, such as correlations with various mental states (attention, sleep, resting non-attention), but no coherent theory of what they do has yet emerged.

A recent paper tries to rectify that by dialing back expectations of what brain waves are doing and coming at the problem from a very basic level. Information, after all, is not carried directly by these waves at all; they are too variable and weak for that. Information in the brain is carried by individual cell activations, in the context of their anatomical connections, which together form patterns that dynamically model variable information states.

One problem for this system is that neuron firing needs to be sparse in order to be useful. If every neuron fires at once, you have epilepsy, not information modeling and transfer. Inhibitory neurons help with this, dampening feedback loops and preventing runaway activation. But most phenomena that one wants to model are stable over time, or vary relatively slowly. If you are looking at a scene, little changes from one 50 millisecond frame to the next, which is why our audio and video compression technologies work so well. Modeling stable phenomena with sparse, randomly firing neurons leads to quite a bit of error, as shown in the authors' panel b, below.

Theory of the usefulness of partial neuron synchronization for accurate data encoding. Panel b shows what happens when neurons (black slashes, one for each firing) are unsynchronized while representing a constant stimulus (signal, blue). The cumulative representation (black line) is not an optimal rendering of the original signal (blue). In contrast, if the neurons still fire sparsely but are clocked to a global rhythm, even a very rough rhythm (yellow) gives overall accuracy as good as that of the fully randomly firing ensemble, and shorter time intervals offer the possibility of greater accuracy (salmon, green). Panel d represents conceptually the tradeoff between random firing and synchronized firing, as measured by data reconstruction error. The optimum is somewhere in the middle.
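The panel-b effect is easy to reproduce in a toy simulation. Below is a minimal sketch (my own illustration, not the authors' code): each of N neurons fires once per cycle, with spike times either drawn at random or staggered evenly, as if entrained to a shared clock. Summing exponential synaptic kernels gives a population readout, whose ripple around the constant target is measured by RMS error.

```python
import numpy as np

def readout_rms(spike_phases, n_cycles=40, period=0.05, dt=0.001, tau=0.01):
    """Sum exponential kernels from periodic spikes; return the RMS ripple
    of the population readout around its mean (a constant target signal)."""
    t = np.arange(0, n_cycles * period, dt)
    rate = np.zeros_like(t)
    for phase in spike_phases:                       # one spike per neuron per cycle
        for s in phase + period * np.arange(n_cycles):
            mask = t >= s
            rate[mask] += np.exp(-(t[mask] - s) / tau)
    keep = t >= 5 * period                           # discard the initial transient
    return np.sqrt(np.mean((rate[keep] - rate[keep].mean()) ** 2))

rng = np.random.default_rng(0)
n_neurons = 50
random_phases = rng.uniform(0, 0.05, n_neurons)             # unsynchronized firing
staggered_phases = np.arange(n_neurons) * 0.05 / n_neurons  # clocked, staggered firing

rms_random = readout_rms(random_phases)
rms_clocked = readout_rms(staggered_phases)
print(rms_clocked < rms_random)  # clocked firing yields a smoother readout
```

The same total number of spikes is used in both conditions; only the timing differs, so the accuracy gain comes purely from coordination, which is the paper's basic point.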

The observation actually generalizes to all data, whether stable or not. Some synchronization provides a more accurate data representation than a completely random ensemble of neurons, especially if the neurons are firing sparsely enough that (as in panel a, above) none fire at exactly the same time. This is a very significant point, and by itself it predicts that neural oscillations will happen roughly as they are observed: widely enough to be detected and to entrain much of the neuron firing that happens, but not strongly enough to cause epileptic-like mass synchronized firing.

It turns out that there is even more room for improvement, however. Ironically, adding a little noise can also help signal reconstruction. The network has to include inhibitory neurons to dampen overall feedback and to prevent simultaneous spiking of nearby neurons, but those neurons cause an additional degradation of the final representation, especially since they, like the activating neurons, respond with a delay. Despite the presence of inhibitory neurons, they cannot always act fast enough to dampen spike trains, which tend to run away a bit before inhibition arrives. Modelling all this out, the authors find that adding a bit of noise to the system helps prevent excess synchronization, with quite beneficial effects, seen in the next figure.
"Thus, optimal coding was achieved when the balance between excitation and inhibition was the tightest. Further, at the optimal level of noise, the spiking CV [coefficient of variation] value was near unity, implying irregular (near-Poisson) single cell responses."

The population firing rate power (panel h) shows most clearly the dangers of the low- or no-noise regime. Adding just a little noise (blue) significantly dampens runaway spike trains, while also (panels c, e) improving data reconstruction accuracy. In each of panels d, e, and f, the stable dashed line is the original data to be reconstructed. CV = coefficient of variation, exc. = excitatory neurons, in. = inhibitory neurons, rms = root mean square error, an inverse measure of the match between the original signal and the reconstructed signal. Lower numbers (error) are better.
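The panel-h danger, and the rescue by noise, can be illustrated with a toy model. This is my own hedged sketch, far simpler than the authors' spiking excitatory/inhibitory network: a population of identical phase oscillators started in perfect synchrony stays locked and produces huge population-rate spikes, while a little private phase noise diffuses the spike times apart and flattens the population rate.

```python
import numpy as np

def population_rate_variance(noise_std, n_neurons=100, freq=20.0,
                             t_max=2.0, dt=0.001, bin_steps=5, seed=1):
    """Phase oscillators spiking once per cycle; return the variance of the
    binned population spike count (high variance = pathological synchrony)."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_max / dt)
    phases = np.zeros(n_neurons)              # start perfectly synchronized
    spikes = np.zeros(n_steps)
    for i in range(n_steps):
        # common drift plus private noise for each neuron
        phases += (2 * np.pi * freq * dt
                   + noise_std * np.sqrt(dt) * rng.standard_normal(n_neurons))
        fired = phases >= 2 * np.pi
        spikes[i] = fired.sum()
        phases[fired] -= 2 * np.pi
    counts = spikes.reshape(-1, bin_steps).sum(axis=1)   # 5 ms bins
    return counts.var()

var_silent = population_rate_variance(noise_std=0.0)  # no private noise
var_noisy = population_rate_variance(noise_std=5.0)   # a little noise per neuron
print(var_noisy < var_silent)  # noise breaks up mass-synchronized firing
```

The noise level (5.0 rad per root-second) and other parameters are arbitrary choices for illustration; the qualitative point, that private noise destroys excess synchrony while leaving each cell's firing rate intact, is what matches the paper's finding.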

Taken together, this work argues strongly that neural oscillations (aside from sleep spindles and other slow-wave phenomena that have distinct maintenance purposes) play a role loosely analogous to clock cycles in computers. They do not themselves convey any data, but facilitate better data modelling. Their strengthening during attention, motor activities, and the like would then be a sign of weak synchronization, which may be significant over large areas of the brain for assembling mental constructs, but not a sign of anything like information broadcasting. I would take this as the current leading theory of their function.
"Neural oscillations have been hypothesised to fulfill a number of different functional roles, including feature binding (Singer, 1999), gating communication between different neural assemblies (Fries, 2005; Womelsdorf et al., 2007; Akam and Kullmann, 2010), encoding feed-forward and feed-back prediction errors (Arnal et al., 2011; Arnal and Giraud, 2012; Bastos et al., 2012) and facilitating ‘phase codes’ in which information is communicated via the timing of spikes relative to the ongoing oscillation cycle (Buzsáki and Chrobak, 1995). 
Many of these theories propose new ways in which oscillations encode incoming sensory information. In contrast, in our work network oscillations do not directly code for anything, but rather, are predicted as a consequence of efficient rate coding, an idea whose origins go back more than 50 years (Barlow, 1961)."
