Saturday, October 12, 2019

Thinking Ahead in Waves

A computational model of brain activity following simple and realistic Bayesian methods of internal model development yields alpha waves.

Figuring out how the brain works remains an enormous and complicated project. It does not seem susceptible to grand simplifying theories, but has instead been a mountain climbed by thousands, in millions of steps. A great deal of interest revolves around brain waves, which are tantalizingly accessible and reflective of brain activity, yet still not well understood. They are definitely not carrying information the way radio stations do, whether AM or FM. But they do seem to represent synchronization between brain regions that are exchanging detailed information through their anatomical, i.e. neuronal, connections. A recent paper and review discuss a theory-based approach to modeling brain computation, one that has the pleasant side effect of generating alpha waves- one of the strongest and most common of the brain wave types, at around 10 Hz, or 10 cycles per second- automatically, and in ways that explain some heretofore curious features.

The model follows the modern view of sensory computation as a Bayesian modeling system. Higher levels make models of what reality is expected to look/sound/feel like, and the lower-level sensory systems, after processing their inputs in massively parallel fashion, send up only error signals about what differs from the model (or expectation). This is highly efficient: boring scenery is hardly processed or noticed at all, while surprises form the grist for higher-level attention. The model is then updated and rebroadcast back down to the lower level, in a constant looping flow of data. This expectation/error cycle can happen at many levels, creating a cascade or network of recurrent data flows. So when such a scheme is modeled with realistic neuronal communication speeds, what happens?
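To make the efficiency point concrete, here is a minimal sketch- not from the paper; the array sizes, the error threshold, and the single "surprising" pixel are invented for illustration- of what "sending only error signals" means. When the higher level's prediction matches the input, almost nothing needs to travel up the hierarchy:

```python
import numpy as np

rng = np.random.default_rng(0)

frame = rng.random((8, 8))     # current sensory input (a tiny "image")
prediction = frame.copy()      # higher level's model of the scene...
prediction[0, 0] += 0.5        # ...wrong about exactly one pixel

# Predictive coding: the lower level transmits only where the input
# deviates from the prediction by more than some tolerance.
residual = frame - prediction
surprising = np.abs(residual) > 0.1

# Only the mismatched locations generate an error signal.
print(int(surprising.sum()), "of", frame.size, "units signal an error")
```

A camera-like scheme would transmit all 64 values every time step; here, a static, well-predicted scene transmits almost nothing, and attention can be spent on the one location that violated the model.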

A super-simple schematic of what this paper implements. The input node sends data, in the form of error reports (x(t)), to the next, higher-level node. In return, it gets back data (y(t)) indicating what that next level is expecting- its model of how things stand at time t.

The key parameters are the communication speed in each direction (set at 12 milliseconds), the processing time at each level (set at 17 milliseconds), and a decay or damping factor accounting for how long neurons take to return to their resting level in the absence of input (set at 200 milliseconds). This last parameter seems most suspect, since the other parameters assume instantaneous activation and de-activation of the neurons involved. Damping behavior is surely realistic on general principles, but one wonders why a special term would be needed if the neurons were modeled realistically- which is to say, mostly excitatory and responsive to inputs. Such a system would naturally fall to inactivity in the absence of input. On the other hand, a recurrent network is at risk of feedback amplification, which may necessitate a slight damping bias.
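As a rough sketch of how delays plus damping yield a rhythm, here is a toy two-unit loop. It is not the paper's actual model- the coupling gain and the exact update equations are invented- but it keeps the 12 ms conduction delay and the ~200 ms damping, and the delayed error/prediction exchange rings at roughly alpha frequency on its own, with no oscillator built in:

```python
import numpy as np

# Toy delayed predictive-coding loop (1 ms per simulation step).
# Only the 12 ms delay and 200 ms damping come from the paper's setup;
# GAIN and the update rules are invented for this sketch.
DELAY = 12                    # one-way communication delay, ms
DECAY = np.exp(-1 / 200.0)    # leak toward rest, 200 ms time constant
GAIN = 0.06                   # coupling strength (hypothetical)
T = 2048                      # simulate ~2 s

def simulate(stimulus):
    """Lower unit emits error x(t); higher unit returns prediction y(t)."""
    x = np.zeros(T)           # feedforward error signal
    y = np.zeros(T)           # feedback prediction signal
    for t in range(T):
        y_delayed = y[t - DELAY] if t >= DELAY else 0.0
        x_delayed = x[t - DELAY] if t >= DELAY else 0.0
        # Error unit: input minus the (delayed) prediction from above.
        x[t] = stimulus[t] - y_delayed
        # Prediction unit: leaky accumulation of the (delayed) error.
        y[t] = DECAY * y[t - 1] + GAIN * x_delayed
    return x, y

x, y = simulate(np.ones(T))   # step input standing in for a stimulus

# Dominant frequency of the error signal, after dropping the onset
# transient and the mean, via a simple FFT peak search.
sig = x[256:] - x[256:].mean()
spectrum = np.abs(np.fft.rfft(sig))
freqs = np.fft.rfftfreq(len(sig), d=1e-3)   # 1 ms steps -> Hz
band = (freqs > 3) & (freqs < 40)
peak_hz = freqs[band][np.argmax(spectrum[band])]
print(f"dominant frequency: {peak_hz:.1f} Hz")  # lands near the alpha band
```

The point of the sketch is the blog post's own: neither unit oscillates by itself, but the round-trip delay in a negative-feedback (prediction-cancels-input) loop sets a natural ringing period of several times the one-way delay.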

The authors run numerical models of a white-noise visual field processed by a network with these parameters, and examine two conditions. First is the normal case of change in the visual field, generating forward inputs from lower to higher levels. Second is a lack of new visual input, generating stronger backward waves of model information. In both cases the oscillation period comes out at about 8 times the communication delay, or roughly 100 milliseconds, right in the alpha range. Not only did such waves appear in 2-layer models with just one pair of interacting units, but when multiple modules were modeled in a two-dimensional field, traveling alpha waves appeared.

When modeled with multiple levels and two dimensions, the outcome, aside from data processing, is an alpha wave that travels in one direction when inputs predominate (forward) and in the opposite direction when inputs are low and backward signals predominate.

Turning to actual humans, the researchers looked more closely at real alpha waves, and found the same thing happening. Alpha waves have long been considered odd in that, while generally the strongest of all brain waves, and the first to be characterized, they are strongest while resting (awake, not sleeping), and become less powerful when the brain is actively attending, watching, etc. Now it turns out that what had been considered the "idle" brain state is not idle at all, but a sign of existing models / expectations being propagated in the absence of input- the so-called default mode network. Tracking the direction of alpha waves, the researchers found that they travel up the visual hierarchy when subjects view screens of white noise. But when the subjects' eyes were closed, the alpha waves traveled in the opposite direction. The authors argue that standard measurements of alpha waves fail to properly account for the power of forward waves, which may be hidden in the more general abundance of backward, high-to-low-level alpha waves.

Example of EEG from human subjects, showing the directionality of alpha wave travel, in this case forward from input (visual cortex in the back of the brain) to higher brain levels.
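How does one tell which way a wave is traveling across the scalp? A minimal sketch, on synthetic signals rather than the paper's analysis pipeline (the five "electrodes" and the 5 ms per-site lag are invented): if an alpha oscillation sweeps across a row of sites, each site's phase at 10 Hz lags the previous one, so the sign of the phase gradient across sites gives the direction of travel.

```python
import numpy as np

# Synthetic "electrode" recordings: a 10 Hz wave sweeping across 5 sites,
# each site lagging the previous by 5 ms (hypothetical values).
fs = 1000                      # samples per second
t = np.arange(0, 1.0, 1 / fs)  # 1 s of data
f_alpha = 10.0                 # Hz
lag = 0.005                    # seconds of lag per site: wave runs 0 -> 4

signals = np.array([np.cos(2 * np.pi * f_alpha * (t - i * lag))
                    for i in range(5)])

# Phase of each site at the 10 Hz FFT bin (1 s window -> 1 Hz bins,
# so bin index 10 is exactly 10 Hz).
bin10 = 10
phases = np.angle(np.fft.rfft(signals, axis=1)[:, bin10])
dphase = np.unwrap(phases)
print("phase at each site:", np.round(dphase, 2))
# Monotonically decreasing phase across sites => wave travels site 0 -> 4.
```

Flipping the sign of `lag` reverses the gradient- the backward-traveling case the authors see with eyes closed. Real EEG analyses must of course handle noise, 2-D electrode layouts, and phase wrapping, but the underlying logic is this phase-gradient reading.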

"Therefore, we conjecture that it may simply not be possible for a biological brain, in which communication delays are non-negligible, to implement predictive coding without also producing alpha-band reverberations. Moreover, a major characteristic of alpha-band oscillations—i.e., their propagation through cortex as a traveling wave—could also be explained by a hierarchical multilevel version of our predictive coding model. The waves predominantly traveled forward during stimulus processing and backward in the absence of inputs. ...  Importantly though, in our model none of the units is an oscillator or pacemaker per se, but oscillations arise from bidirectional communication with temporal delays."

Thus brain waves are increasingly understood as a side effect of functional synchronization- in this case, intrinsically associated with the normal back-and-forth of data processing, which looks nothing like the stream of data from a video camera, but is something far more efficient, using a painstakingly built internal database of reality to keep tabs on, and detect deviations from, newly arriving sensations. It remains to ask what exactly this model function is that the authors term y(t)- the data sent from higher levels to lower levels. Are higher levels of the brain modeling their world in some explicit way and sending back a stream of imagery which the lower levels compare with their new data? No- the language must be far more localized. The rendition of full scenery would be reserved for conscious consideration. The interactions considered in this paper are small-scale and progressive, as data is built up over many small steps. Thus each step would contain a pattern of what is assumed to be the result of the prior step. Yet what exactly gets sent back, and what gets sent onwards, is not intuitively clear.
