Saturday, February 4, 2012

Neurons poised in meditation

Cortical neurons show exquisite balance between inhibitory and excitatory inputs, typically summing to zero.

Life wouldn't be quite as sweet without homeostatic mechanisms. From blood chemistry to the hedonic treadmill, our biology keeps things stable so we can sail through life on an even keel. As yet we have only a glimmer of appreciation for the many homeostatic mechanisms in the brain: for mood, activity, weight, temperature, introversion, sleep, and many more. While people differ in their settings for many of these systems, those settings tend to be extremely stable through life.

It has gradually become apparent that the basic workings of the brain rely on another homeostatic mechanism, at the level of neurons and neural networks. The ability to shift attention, superimposed on a baseline brain rhythm / hum, indicates that the baseline condition of much of the brain is quasi-stability. Such shifts show up in functional brain scanning as metabolic variations (higher blood flow, among other signs), so real physiological change underlies what subjectively seem like effortless shifts of mental state.

So most of the brain operates at a knife's-edge default / zero state, which can be roped into active, attentive, ever-changing coalitions of neurons from all over the cortex to bring us the many and various conditions of thought, sensation, vision, emotional engagement, planning, etc. that are well known to engage specific parts of the brain.

For this important property of nerve cells in the brain, a recent paper provides a cellular rationale: all information-carrying neurons in the brain get automatically balanced inhibitory and excitatory inputs, through a feedback mechanism operating on inhibitory synapses. The idea is that each neuron connects as much to inhibitory interneurons as to excitatory, information-carrying neurons, and those inhibitory neurons adjust their outputs over time in response to what is going on in the target cell, so that its activity stays mostly quiescent. Most neurons thus spend most of their time in a calm, meditative state.

The paper leads with simulation experiments, and follows with supporting evidence from actual cells and networks. The hypothesis appears to go substantially beyond current knowledge, where inhibitory neurons are a well known component, but not a functionally well-understood part of the neural / cognitive landscape. The hypothesis provides a rationale, based on a relatively mindless cellular mechanism, for why the brain doesn't explode with activity, but rather keeps humming within relatively tight bounds, and as noted above, with the sort of knife-edge stability that lets us think with some degree of both focus and roving attention.

The story starts with Donald Hebb, who came up with perhaps the key insight into how neurons can learn as a network. The rule is commonly expressed as "cells that fire together, wire together". Which is to say that in a neural network, cells that fire coincidentally (one to the next) within some window of time (say <10 milliseconds) engage in a molecular process that strengthens their synaptic connection, so that in the future the downstream cell responds more strongly to firing of that upstream cell.
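As a toy sketch of the coincidence idea (my own illustrative window, learning rate, and spike times, not from the paper):

```python
# Toy sketch of Hebb's rule with a coincidence window: a synapse is
# strengthened only when pre- and postsynaptic spikes fall within ~10 ms.
# All numbers here are illustrative, not taken from the paper.

def hebbian_update(w, t_pre_ms, t_post_ms, window_ms=10.0, lr=0.05):
    """Return the updated synaptic weight after one pre/post spike pair."""
    if abs(t_post_ms - t_pre_ms) < window_ms:
        return w + lr  # coincident firing: "wire together"
    return w           # no coincidence: leave the synapse alone

w = 1.0
w = hebbian_update(w, t_pre_ms=2.0, t_post_ms=8.0)   # within the window: strengthened
w = hebbian_update(w, t_pre_ms=2.0, t_post_ms=50.0)  # too far apart: unchanged
```

Left unchecked, repeated application of a rule like this only ever ratchets weights upward, which is exactly why the countervailing inhibition described next is needed.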

This basically mechanical process is the soul of associative learning, where a bell, say, is associated with the appearance of food. But lest everything become associated with everything else, countervailing mechanisms are needed to inhibit connections that are not active and to keep overall activity at a low baseline, so that only unusual occurrences create signals. In this general respect, the role of inhibitory neurons (which generally use the neurotransmitter GABA at their synapses, in contrast to excitatory neurons, which typically use glutamate) has been appreciated for some time.

The present authors take the extra step of simulating in detail specific rules of Hebbian learning and inhibition that seem to accurately account not only for the actual sensitivity of cortical neurons but, on a larger scale, for the operation of memory as an example of how all this adds up to collective neuronal & brain function.

The secret is a rule that generates a detailed balance of excitatory and inhibitory inputs to each cell over time, leaving the downstream (target) neuron mostly quiescent (a state the authors call asynchronous irregular activity, or AI). The rule operates on inhibitory interneurons that receive the same upstream signals as the excitatory neurons, adjusting the strengths of their synapses onto the common downstream neuron.

A: Diagram of the inhibitory neuron (light gray), receiving excitatory inputs and acting in parallel with excitatory stimulation on a balanced target (green). B: Diagram of the simulations, where 25 inhibitory and 100 excitatory neurons, with distinct signal trains, feed into a target neuron. C: Time dependence of the learning rule. Only coincident firing (of inhibitory and target neurons) within ~20 milliseconds strengthens synapses.

What's the rule?
∆w = µ (pre × post − p0 × pre)
w is the synaptic efficacy, pre is the presynaptic activity, post is the postsynaptic activity, p0 is a constant that sets the postsynaptic neuron's target of low average activity (zero most of the time), and µ is the learning rate: the key factor for how quickly coincident firing strengthens inhibitory synapses.
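In code, a rate-based reading of this rule (my own sketch, not the authors' spiking implementation) is just one line:

```python
def inhibitory_update(w, pre, post, p0=0.1, mu=0.05):
    """One step of the rule: dw = mu * (pre*post - p0*pre) = mu * pre * (post - p0).
    If the target fires above its set-point p0, the inhibitory synapse
    strengthens; if the target is below p0, the synapse weakens.
    p0 and mu here are placeholder values, not the paper's constants."""
    return w + mu * (pre * post - p0 * pre)
```

Factoring the rule as pre × (post − p0) makes the homeostatic logic explicit: the inhibitory weight stops changing exactly when the target's average activity sits at p0.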

Course of the simulations, where the net membrane current on the target neuron is green, the excitatory current is black, and the inhibitory current on the target cell (from inhibitory synaptic firing) is gray. Note that after learning, the target ends up at zero most of the time.

Clearly this is a flexible rule, where constants can be plugged in to approximate what is empirically observed. Yet its simplicity, once set up, is very impressive. A time course of simulation samples is shown above. The inhibitory activity starts at zero (before), and due to frequent co-firing of the inhibitory and target neurons (driven by their common excitatory input), their connections progressively strengthen until the net firing of the target cell drops close to zero (after) for all but unusual excitatory inputs, those that vary faster than the rule-based learning rate can follow.
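That settling can be seen in a minimal rate-based loop (my own toy, with arbitrary units and constants, not the paper's spiking model): a single inhibitory weight, starting at zero, grows under the rule until the target's output hovers near its low set-point.

```python
# Minimal sketch: one steady excitatory drive, one plastic inhibitory synapse.
# All constants are arbitrary choices of mine for illustration.

exc, pre_inh = 1.0, 1.0   # excitatory drive and inhibitory input rate
p0, mu = 0.1, 0.05        # target set-point activity and learning rate
w_inh = 0.0               # inhibitory weight starts at zero ("before")

for _ in range(500):
    post = max(0.0, exc - w_inh * pre_inh)  # target's net output rate
    w_inh += mu * pre_inh * (post - p0)     # the balancing rule

# After learning ("after"), the target sits near its low set-point p0,
# and the inhibitory weight has grown to cancel most of the excitation.
```

Only a transient excitatory change faster than this weight adjustment would now drive the target appreciably above p0, which is the sparse, knife's-edge responsiveness described above.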

Different values of the learning rate µ make relatively little difference to system behavior after a perturbation of the excitatory input (red line), compared to the background activity (black line). (Apologies: I use the Greek mu in the text where the paper uses eta.)

As the authors put it: "In the detailed balanced state, the response of the cell was sparse and reminiscent of experimental observations across many sensory systems. Spikes were caused primarily by transients in the input signals, during which the faster dynamics of the excitatory synapses momentarily overcame inhibition." Thus from a very simple cellular structure is born a sophisticated information processing system.

Comparison of simulation (lines) to experiments (blocks), cited from other researchers' work on rat auditory cortex, showing single-neuron learning curves in response to a shift in sensory (sound) signal frequency. The X-axis is in minutes, and the Y-axis is the excitatory : inhibitory current ratio, taken from two different cells: a cell tuned to the prior frequency (blue) and a cell tuned to a newly introduced frequency (red).

Lastly, the researchers turn to what this kind of circuit / rule can do in a larger-scale neural system, using memory as an example. They simulate a matrix of 100 x 100 cells, made up of mostly excitatory cells (Ex in the figure below) and 25% inhibitory cells (In). Unlike the fixed circuits simulated above, here the cells connect randomly to 2% of the other cells in the population. The researchers assumed that enough inhibitory connections would be sprinkled throughout that, given enough learning time, the network as a whole would behave in the balanced way they expect.
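A sketch of just that wiring setup (scaled down for illustration; the 25% inhibitory and 2% connectivity figures are from the description above, everything else is my guess at a minimal version):

```python
import random

random.seed(1)
N = 400                     # scaled down from the paper's 100 x 100 grid
P_INH, P_CONN = 0.25, 0.02  # 25% inhibitory cells, 2% random connectivity

# Randomly designate ~25% of cells as inhibitory.
is_inhibitory = [random.random() < P_INH for _ in range(N)]

# Each cell projects to a random ~2% of the other cells; the outgoing
# synapses of the inhibitory cells are the ones the balancing rule
# would then adjust during the simulated hour of learning.
targets = {i: [j for j in range(N) if j != i and random.random() < P_CONN]
           for i in range(N)}

mean_out_degree = sum(len(t) for t in targets.values()) / N
```

With no hand-designed circuit at all, the bet is simply that random sprinkling puts an inhibitory cell close enough, in connectivity terms, to every excitatory pathway.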

The graphs above follow the evolution of this system in time, with A showing the initial state, where all the excitatory cells fire at full blast and the inhibitory cells fire randomly. After a simulated hour of internal learning (B), the network has indeed settled down to a baseline balanced, or AI, state of very low output activity, shown by the dark snowy pattern, despite the excitatory cells still firing at the original rate.

At time C, the researchers introduce a permanent five-fold increase in excitatory synapse strength within two patches of the matrix (red and blue squares in A; the unaffected control patch is black). The graphs underneath sample a few random cells from either the red or the black (control) areas of the matrix. The transient activation / recognition of the introduced signal is clearly apparent (C). Yet after another simulated hour (D), the inhibitory circuits have adjusted, and the overall network is back to a baseline, quiescent state. The patches of prior high activity are completely invisible.

No measure of ambient neural activity would detect a memory engram here. Yet when a quarter of the red patch's excitatory neurons were driven with extra activity, the full red patch lit up again (E), showing that a full memory could be re-activated from a partial input. The cleanliness of this process, not overlapping the blue patch at all, indeed forming a slight negative image over it, is astonishing. This constitutes an extremely interesting and promising model for how information can be tucked away in, and later retrieved from, our brains: a loosely constructed tangle of neurons and their ~100 trillion synapses.

What can I add? Perhaps that simulation is an increasingly essential element in biology, as in so many other fields. We are dealing with such complexity that it is hopeless to formulate comprehensible representations of this reality in prose, or even in graphs, charts, or other tools of presentation. To get at the dynamics of complex systems, critical and simple insights like Hebb's, and the inhibitory balancing rule promoted here, remain essential. But to demonstrate what such insights really mean for complex systems, mental extrapolation is not enough; computer simulation is needed.
