
Saturday, June 26, 2021

Tuning Into the Brain

Oscillations as a mechanism of binding, routing, and selection of thoughts.

The brain is plastic, always wiring and rewiring its neurons, even generating new neurons in some areas. But this plasticity comes nowhere near the speed and scale needed to manage brain function at the speed of thought. From that perspective, our brains are static lumps, which somehow manage to dynamically assemble thoughts, visions, impressions, actions, and so much more as we go about our frenetic lives. It has gradually become clear that neural oscillations, or electrical brain waves, which are such a prominent feature of active brains, serve centrally to facilitate this dynamic organization, joining up some areas via synchrony for focused communication, while leaving others in the dark, unattended and waiting for their turn in the spotlight.

The physical anatomy of the brain is obviously critical to organizing all the possible patterns of neural activity, from the most unconscious primary areas of processing and instinct to whatever it is that constitutes consciousness. But that anatomy is relatively static, so a problem arises: how do some brain areas take precedence for some thoughts and activities, while others take over at other times? We know that, very roughly, the whole brain tends to be active most of the time, with only slight upticks during intensive processing, as seen in the induced blood flow detected by methods such as fMRI. How are areas needed for a particular thought brought into a dynamic coalition, which is soon after dissolved with the next thought? And how do such coalitions contribute to information flow, and ultimately, thought?

Oscillations seem to provide much of this selectivity, and the last decade of research has brought an increasingly detailed appreciation of their role in particular areas, and of their broad applicability to dynamic selection and binding issues generally. A recent paper describes a set of simulations and related analytical work on one aspect of this issue: how oscillations work at a distance, when there is an inevitable lag in communication between two areas.

Originally, theories of neural oscillation simply assumed synchrony and did not bother too much with spatial separations or time delays. Synchrony clearly offers the opportunity of activating one area of the brain based on the activity of a separate driving area. For instance, primary visual areas might synchronize rhythmically with downstream areas and thus drive their processing of a sequence of signals, generating higher-level activations in turn that ultimately constitute visual consciousness. Adding in spatial considerations increases complexity, since various areas of the brain exist at widely different separations, potentially making a jumble of the original idea. On the other hand, feedback is a common, even universal, phenomenon in the brain, and requires some amount of delay to make any sense. Feedback needs to be (and anatomically must be) offset in time to avoid instant shut-down or freezing.

Perhaps one aspect of anatomical development is to tune the brain so that certain coalitions can form with useful sequential delays, while others cannot, setting in anatomical concrete a certain time-delay characteristic for each anatomically connected group. Indeed, it is known that myelination, the process of white matter development during childhood and early adulthood, speeds up axonal conduction, thus greatly altering the delay characteristics of the brain. Keeping these delays tuned to produce usable coalitions for thought could be a significant hurdle as this development proceeds, and could explain some of the deep alterations of cognition that accompany it. The opportunity to assemble more wide-ranging coalitions of entrained neurons is obviously beneficial to complex thought, but just how flexible are such relations? Could the speeding up of one possible coalition destroy a range of others?

The current paper simply makes the case that delays are perfectly conducive to oscillatory entrainment, and also that regions with higher frequencies tend to more effectively drive downstream areas with slightly lower intrinsic frequencies, though other relationships can also exist. Both phenomena contribute to asymmetric information flow, from one area to the next, given oscillatory entrainment. The computer simulations the authors set up were relatively simple: populations of a hundred neurons, a minority inhibitory and the rest excitatory, all behaving as closely as possible to natural neurons, modestly inter-connected, with some connections to a second, similar population, and each given certain stable or oscillatory inputs. Each population showed a natural oscillation, given the normal behavior of neurons (with inhibitory feedback) and a near-white-noise input baseline injected into each population. On top of that, the authors injected oscillatory inputs as needed to drive each population's oscillations to perform the experiment, at particular frequencies and phases.


The authors manipulated the phase delay between the two populations (small delta), and also the frequency mismatch between them (called de-tuning, big delta). This led to a graph like the one shown above, where each has its own axis, leading to regimes (red) of high entrainment (B) and information transfer (C). The degree of entrainment is apparent in the graphs in D, taken from the respective black points in B, with the driver population in red and the receiver population in blue, as diagrammed in A. In this case, practically all the points are in red zones, and thus show substantial entrainment.
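
The core phenomenon, entrainment of a receiver by a slightly faster driver despite a conduction delay, can be sketched with a toy model far simpler than the authors' spiking simulations: two phase oscillators with delayed one-way coupling. All parameter values here are illustrative inventions, not taken from the paper.

```python
import numpy as np

def simulate_entrainment(f_driver=42.0, f_receiver=40.0, coupling=25.0,
                         delay_s=0.004, t_max=2.0, dt=0.0005):
    """Driver runs freely; the receiver is pulled toward the driver's
    phase as it was delay_s seconds ago (delayed one-way coupling).
    Returns the phase-locking value (1 = perfect entrainment)."""
    n = int(t_max / dt)
    lag = int(delay_s / dt)
    theta_d = np.zeros(n)  # driver phase (rad)
    theta_r = np.zeros(n)  # receiver phase (rad)
    for t in range(1, n):
        theta_d[t] = theta_d[t - 1] + dt * 2 * np.pi * f_driver
        pull = np.sin(theta_d[t - 1 - lag] - theta_r[t - 1]) if t > lag else 0.0
        theta_r[t] = theta_r[t - 1] + dt * (2 * np.pi * f_receiver + coupling * pull)
    # phase-locking value over the second half, after transients die out
    diff = theta_d[n // 2:] - theta_r[n // 2:]
    return abs(np.mean(np.exp(1j * diff)))

plv_coupled = simulate_entrainment(coupling=25.0)
plv_uncoupled = simulate_entrainment(coupling=0.0)
```

With the coupling switched off, the 2 Hz de-tuning makes the phase difference drift continuously and the locking value collapses toward zero; with coupling on, the receiver locks to the faster driver at a roughly constant phase offset, set in part by the delay.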

While this simulation method appears quite powerful, the paper was not well-written enough, and the experiments not clear enough, to make larger points out of this work. It is one small piece of a larger movement to pin down the capabilities and exact nature and role of neural oscillations in the brain- a role that has been tantalizing for a century, and is at last starting to be appreciated as central to cognition.

Based on the following articles:


Saturday, June 12, 2021

Mitochondria and the Ratchet of Doom

How do mitochondria escape Muller's ratchet, the genetic degradation of non-mating cells?

Muller's ratchet is one of the more profound concepts in genetics and evolution. Mutations build up constantly, and are overwhelmingly detrimental. So a clonal population of cells which simply divide and live out their lives will all face degradation, and no matter how intense the selection, will eventually end up mutated in some essential function or set of functions, and die out. This gives rise to an intense desire for organisms to exchange and recombine genetic information. This shuffling process can, while producing a lot of bad hands, also deal out some genetically good hands, purifying away deleterious mutations and combining beneficial ones.
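
The ratchet itself is easy to demonstrate in a toy simulation (all parameters are arbitrary illustrations, not measurements): a finite clonal population's minimum mutation load only ever climbs, while even a crude mechanism for regenerating low-load genotypes halts the climb.

```python
import random

def muller_ratchet(pop_size=200, mut_prob=0.3, s=0.05,
                   generations=300, gene_conversion=False, seed=1):
    """Track the minimum deleterious-mutation count in a clonal
    population. Each generation: fitness-proportional resampling,
    then new mutations. With no way to regenerate low-load genotypes,
    the least-loaded class is eventually lost to drift and can never
    be rebuilt -- the ratchet clicks."""
    random.seed(seed)
    pop = [0] * pop_size  # mutation count per individual
    for _ in range(generations):
        weights = [(1 - s) ** m for m in pop]            # selection
        pop = random.choices(pop, weights=weights, k=pop_size)
        pop = [m + 1 if random.random() < mut_prob else m
               for m in pop]                             # mutation
        if gene_conversion:
            # crude stand-in for recombination / gene conversion:
            # copy the current best genotype over the worst few,
            # so the least-loaded class is never lost
            pop.sort()
            pop[-20:] = [pop[0]] * 20
    return min(pop)

load_asexual = muller_ratchet(gene_conversion=False)
load_recombining = muller_ratchet(gene_conversion=True)
```

The asexual population ends with every lineage carrying a substantial load, while the "recombining" one keeps its least-loaded class indefinitely, which is the essential logic of genetic exchange.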

This is the principle behind the meiotic sex of eukaryotes with large genomes, and also the widespread genetic exchange done by bacterial cells, via conjugation and other means. In this way, bacteria can stave off genetic obsolescence, and also pick up useful tricks like antibiotic resistance. But what about our mitochondria? These are also, in origin and essence, bacterial cells with tiny genomes which are critically essential to our well-being. They are maternally inherited, which means that the mitochondria from sperm cells, which could have provided new genetic diversity, are, without the slightest compunction, thrown away. This seriously limits opportunities for genetic exchange and improvement, for a genome that is roughly 16 thousand bases long and codes for 37 genes, many of which are central to our metabolism.

One solution to the problem has been to move genes to the nucleus. Most bacteria have a few thousand genes, so the 37 of the mitochondrial genome are a small remnant, specialized to keep local regulation intact, while the vast majority of needed proteins are encoded in the nucleus and imported through rather ornate mechanisms to take their places in one of the organelle's various locations: inner matrix, inner membrane, intermembrane space, or outer membrane.

The more intriguing solution, however, has been to perform constant and intensive quality control (with recombination) on mitochondria via a fission and fusion cycle. It turns out that mitochondria are constantly dividing and re-fusing into large networks in our cells. And there are a lot of them, typically thousands per cell. Mitochondria are also capable of recombination and gene conversion, where parts of one DNA molecule are over-written by copying another. This allows a modicum of gene shuffling among the mitochondria in our cells.

The fusion and fission cycle of mitochondria, where fissioned mitochondria are subject to evaluation for function, and disposal.

Lastly, there is a tight control process, called mitophagy, that eliminates poorly functioning mitochondria. Since mitochondria function like little batteries, their charge state is a fundamental measure of health. A nuclear-encoded protein called PINK1 is normally imported into mitochondria and degraded; if the charge state is poor, import stalls and PINK1 accumulates on the outer membrane, where it recruits other proteins, including parkin and ubiquitin, which jointly mark the defective mitochondrion for degradation through mitophagy. That means it is engulfed in an autophagosome and fused with a lysosome, one of the garbage disposal / recycling centers of the cell, with its acidic conditions and degradative enzymes.

The key point is that during the fission / fusion cycle of mitochondria, which happens over tens of minutes, the fissioned state allows individual or small numbers of genomes to be evaluated, and if defective, disposed of. Meanwhile, the fused state allows genetic recombination and shuffling, recreating genetic diversity from the ambient mutation rate. Since mitochondria are the centers of metabolism, especially redox reactions, they are particularly prone to high rates of mutation, so this surveillance is essential. If all else fails, the whole cell may be disposed of via apoptosis, which is also quite sensitive to the mitochondrial state.
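
A back-of-the-envelope model shows how much purifying power this cycle can have. The sketch below is an invented caricature, not the actual biology: genomes are randomly partitioned into small units (fission), units carrying too many mutant genomes fail a charge-proxy test and are degraded (the PINK1-like check), and the survivors re-fuse and replicate back to the original population size.

```python
import random

def purge_by_mitophagy(n_mito=1000, mutant_frac=0.3, cycles=30,
                       genomes_per_mito=10, threshold=0.4, seed=2):
    """One round per cycle: fission randomly partitions the genome
    pool into small mitochondria; units whose mutant load exceeds a
    charge-proxy threshold are degraded (mitophagy); survivors re-fuse
    and replicate back to the original pool size (biogenesis)."""
    random.seed(seed)
    pool_size = n_mito * genomes_per_mito
    n_mut = int(pool_size * mutant_frac)
    pool = [1] * n_mut + [0] * (pool_size - n_mut)  # 1 = mutant genome
    for _ in range(cycles):
        random.shuffle(pool)                        # fission
        survivors = []
        for i in range(0, pool_size, genomes_per_mito):
            unit = pool[i:i + genomes_per_mito]
            if sum(unit) / genomes_per_mito <= threshold:
                survivors.extend(unit)              # passed the check
        pool = random.choices(survivors, k=pool_size)  # fusion + regrowth
    return sum(pool) / pool_size  # final mutant fraction

final_load = purge_by_mitophagy()
```

Even though the check only sees whole mitochondria, never individual genomes, repeatedly culling the worst units pulls the overall mutant fraction well below its starting value, illustrating how a crude charge-based cut can act as purifying selection.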

In oocytes, mitochondria appear to go through a particularly stringent period of fission, allowing a high level of quality control at this key point. Additionally, mitochondria then go through exponential growth and energy generation as the oocyte matures, at which point further quality control discards the oocytes that are not up to snuff.

All this adds up to a pretty thorough method of purifying selection. Admittedly, little or no genetic material comes from outside the clonal maternal genetic lineage, but mutations are probably common enough that beneficial mutations arise occasionally, and one can imagine that there may be additional levels of selection for more successful mitochondria over less successful ones, in addition to the charge-dependent rough cut made by this mitophagy selection.

As the penetrating reader may guess, parkin is related to Parkinson's disease, as one of its causal genes, when defective. Neurons are particularly prone to mitochondrial dysfunction, due to their sprawled-out geography. First, the nuclear genes needed for mitochondria are expressed only in the cell body / nucleus, and their products (either as proteins, or sometimes as mRNAs) have to be ferried out to the axonal and dendritic periphery to supply their targets with new materials. Neurons have very active transport systems to do this, but it is still a significant challenge. Second, the local population of mitochondria in outlying processes of neurons is going to be small, making the fission/fusion cycle much less effective and less likely to eliminate defective genes and individual mitochondria, or make up for their absence if they are eliminated, leading to local energetic crises.

Cross-section of a neuronal synapse, with a sprinkling of mitochondria available locally to power local operations.

Papers reviewed here:


  • Get back to work. A special, CEO-sponsored cartoon from Tom Tomorrow.
  • They are everywhere.
  • Shouldn't taxes be even a little bit fair?
  • The economics of shame.

Saturday, April 24, 2021

Way Too Much Dopamine

Schizophrenia and associated delusions/hallucinations as a Bayesian logic defect of putting priors over observations, partly due to excess dopamine or sensitivity to dopamine.

It goes without saying that our brains are highly tuned systems, both through evolution and through development. They are constantly active, with dynamic coalitions of oscillatory synchrony and active anatomical connection that appear to create our mental phenomena, conscious and unconscious. Neurotransmitters have long been talismanic keys to this kingdom, there being relatively few of them, with somewhat distinct functions. GABA, dopamine, serotonin, glutamate, acetylcholine are perhaps the best known, but there are dozens of others. Each transmitter tends to have a theme associated with it, like GABA being characteristic of inhibitory neurons, and glutamate the most common excitatory neurotransmitter. Each tends to have drugs associated with it as well, often from natural sources. Psilocybin stimulates serotonin receptors, for instance. Dopamine is central to reward pathways, making us feel good. Cocaine raises dopamine levels, making us feel great without having done anything particularly noteworthy.

As is typical, scientists thought they had found the secret to the brain when they found neurotransmitters and the variety of drugs that affect them. New classes of drugs like serotonin uptake inhibitors (imipramine, prozac) and dopamine receptor antagonists (haloperidol) took the world by storm. But they didn't turn out to have the surgical effects that were touted. Neurotransmitters function all over the brain, and while some have major themes in one area or another, they might be doing very different things elsewhere, and not overlap very productively with a particular syndrome such as depression or schizophrenia. Which is to say that such major syndromes are not simply tuning problems of one neurotransmitter or other. Messing with transmitters turned out to be a rather blunt medical instrument, if a helpful one.

All this comes to mind with a recent report of the connection between dopamine and hallucinations. As noted above, dopamine antagonists are widely used as antipsychotics (following the dopamine hypothesis of schizophrenia), but the premier hallucinogens are serotonin activators, such as psilocybin and LSD, though their mode of action remains not fully worked out. (Indeed, ketamine, another hallucinogen, inhibits glutamate receptors.) There is nothing neat here, except that nature, and the occasional chemical accident, have uncovered amazing ways to affect our minds. Insofar as schizophrenia is characterized by over-active dopamine activity in some areas (though with a curious lack of joy, so the reward circuitry seems to have been left out), and involves hallucinations which are reduced by dopamine antagonists, a connection between dopamine and hallucinations makes sense.

"... there are multiple genes and neuronal pathways that can lead to psychosis and that all these multiple psychosis pathways converge via the high-affinity state of the D2 receptor, the common target for all antipsychotics, typical or atypical." - Wiki


So what do they propose? These researchers came up with a complex system to fool mice into pressing a lever based on uncertain (auditory) stimuli. If the mouse really thought the sound had happened, it would wait around longer for the reward, giving researchers a measure of its internal confidence in a signal that may never have actually been presented. The researchers presented a joint image and sound, but sometimes left out the sound, causing what they claim to be a hallucinated perception of the sound. Thus the mice, amid all this confusion, generated some hallucinations in the form of positive thinking that something good was coming their way. Ketamine increased this presumed hallucination rate, suggestively. The experiment was then to drive extra dopamine release in their brains (via new-fangled optogenetic methods, which can be highly controllable in time and space) at a key area known to be involved in schizophrenia, the striatum, which is a key interface between the cortex and lower/inner areas of the brain involved in motion, emotion, reward, and cognition.

Normal perception is composed of a balance of bottom up observation and top-down organization. Too much of either one is problematic, sometimes hallucinatory.

This exercise did indeed mimic the action of a general dose of ketamine, increasing false assumptions, aka hallucinations, and confirming that dopamine is involved there. The work relates to a very abstract body of work on Bayesian logic in cognition, recognizing that perception rests on modeling. We need to have some model of the world before we can fit new observations into it, and we continually update this model by "noticing" salient "news" which differs from our current model. In the parlance, we use observation to update our priors to more accurate posterior probability distributions. The idea is that, in the case of hallucination, the top-down model is outweighing (or making up for a lack of) bottom-up observation, running amok, and thus exposing errors in this otherwise carefully tuned Bayesian system. One aspect of the logic is that some evaluation needs to be made of the salience of a new bit of news. How much does it differ from what is current in the model? How reliable is the observation? How reliable is the model? The systems gone awry in schizophrenia appear to mess with all these key functions, awarding salience to unimportant things and great reliability to shaky models of reality.
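
The Bayesian framing can be made concrete with a two-hypothesis toy example: did a tone occur or not, given one noisy sensory sample? The `prior_weight` knob below is an invented stand-in for the proposed dopamine-linked distortion, inflating the top-down expectation relative to the evidence.

```python
import math

def posterior_tone_present(evidence, prior_p=0.7, prior_weight=1.0,
                           noise_sd=1.0):
    """Posterior probability that a tone occurred, given one noisy
    sensory sample drawn around 1.0 (tone) or 0.0 (no tone).
    prior_weight > 1 exaggerates the top-down expectation -- the
    hypothetical dopamine-linked distortion."""
    def gauss(x, mu):  # unnormalized Gaussian likelihood
        return math.exp(-((x - mu) ** 2) / (2 * noise_sd ** 2))
    log_odds = (prior_weight * math.log(prior_p / (1 - prior_p))
                + math.log(gauss(evidence, 1.0) / gauss(evidence, 0.0)))
    return 1 / (1 + math.exp(-log_odds))

# no tone was actually played: the sample is weak, close to 0
balanced = posterior_tone_present(evidence=0.2, prior_weight=1.0)
distorted = posterior_tone_present(evidence=0.2, prior_weight=4.0)
```

With evidence this weak, the balanced observer stays near its prior (about 0.63 here), while the prior-heavy observer becomes nearly certain (about 0.96) that the tone occurred: a false percept driven from the top down.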

Putting neurotransmitters together with much finer anatomical specification is surely a positive step towards figuring out what is going on, even if this mouse model of hallucination is rather sketchy. So this new work constitutes a tiny step in the direction of boring, anatomically and chemically, into one tiny aspect of this vast syndrome, and into the interesting area of mental construction of perceptions.


  • Another crisis of overpopulation.
  • And another one.
  • Getting China to decarbonize will take a stiff carbon price, about $500 to $1000/ton.

Various policy scenarios of decarbonization in China, put into a common cost framework of carbon pricing (y-axis). Some policies are a lot more efficient than others. 

Sunday, December 6, 2020

Computer Science Meets Neurobiology in the Hippocampus

Review of Whittington, et al. - a theoretical paper on the generalized mapping and learning capabilities of the entorhinal/hippocampal complex that separates memories from a graph-mapping grid.

These are exciting times in neurobiology, as a grand convergence is in the offing with computational artificial intelligence. AI has been gaining powers at a rapid clip, in large part due to the technological evolution of neural networks. But computational theory has also been advancing, on questions of how concepts and relations can be gleaned from data- very basic questions of interest to both data scientists and neuroscientists. On the other hand, neurobiology has benefited from technical advancements as well, if far more modestly, and from the relentless accumulation of experimental and clinical observations. Which is to say, normal science. 

One of the hottest areas of neuroscience has been the hippocampus and the closely connected entorhinal cortex, seat of at least recent memory and of navigation maps and other relational knowledge. A recent paper extends this role to a general theory of relational computation in the brain. The basic ingredients of thought are objects and relations. Computer scientists typically represent these as a graph, where the objects are nodes, and the relations are the connecting lines, or edges. Nodes can have a rich set of descriptors (or relations to property nodes that express these descriptions). A key element to get all this off the ground is the ability to chunk (or abstract, or generalize, or factorize) observations into discrete entities, which then serve as the objects of the relational graph. The ability to say that what you are seeing, in its whirling and colorful reality, is a dog, is a very significant opening step to conceptualization, and the manipulation of those concepts in useful ways, such as understanding past events and predicting future ones.
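
Such a relational graph is trivially easy to represent in code; the concepts and relation names below are invented for illustration.

```python
# Concepts as nodes, labeled relations as edges, in an adjacency dict.
knowledge = {
    "dog":    {"is_a": "animal", "chases": "ball"},
    "animal": {"is_a": "living_thing"},
    "ball":   {"is_a": "object"},
}

def follow(node, relation):
    """Traverse one labeled edge, or return None if absent."""
    return knowledge.get(node, {}).get(relation)

def transitive_is_a(node):
    """Chase 'is_a' edges to collect everything a concept is."""
    chain = []
    while (parent := follow(node, "is_a")) is not None:
        chain.append(parent)
        node = parent
    return chain

print(transitive_is_a("dog"))  # ['animal', 'living_thing']
```

The hard problem, of course, is not storing such a graph but building it from raw experience, which is exactly the chunking step described above.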

Gross anatomy of the hippocampus and associated entorhinal cortex, which function together in conceptual binding and memory.

A particular function of the entorhinal/hippocampal complex is spatial navigation. Researchers have found place cells, grid cells, and boundary cells (named for when these cells fire) as clear elements of spatial consciousness, which even replay in dreams as rats re-run their daytime activities. It is evident that these cells are part of an abstraction mechanism that dissociates particular aspects of conceptualized sensory processing from the total scene and puts them back together again in useful ways, i.e. as various maps.

This paper is conducted at a rather abstruse level, so there is little that I can say about it in detail. Yet it and the field it contributes to are so extremely interesting that some extra effort is warranted. By the time the hippocampus is reached, visual (and other sensory) data has already been processed to the conceptual stage. Dogs have been identified, landmarks noted, people recognized. Memories are composed of fully conceptualized, if also sensorily colored, chunks. The basic idea the authors present is that key areas of the entorhinal cortex provide general and modular mapping services that allow the entorhinal/hippocampal complex to deal with all kinds of relational information and memories, not just physical navigation. Social relations, for example, are mapped similarly.

It is important to note tangentially that conceptualization is an emergent process in the brain, not dictated by pre-existing lists of entities or god-given databases of what exists in the world and beyond. No, all this arises naturally from experience in the world, and it has been of intense interest to computer scientists to figure out how to do this efficiently and accurately, on a computer. Some recent work was cited here and is interesting for its broad implications as well. It is evident that we will in due time be faced with fully conceptualizing, learning, and thinking machines.

"Structural sparsity also brings a new perspective to an old debate in cognitive science between symbolic versus emergent approaches to knowledge representation. The symbolic tradition uses classic knowledge structures including graphs, grammars, and logic, viewing these representations as the most natural route towards the richness of thought. The competing emergent tradition views these structures as epiphenomena: they are approximate characterizations that do not play an active cognitive role. Instead, cognition emerges as the cooperant consequence of simpler processes, often operating over vector spaces and distributed representations. This debate has been particularly lively with regards to conceptual organization, the domain studied here. The structural forms model has been criticized by the emergent camp for lacking the necessary flexibility for many real domains, which often stray from pristine forms. The importance of flexibility has motivated emergent alternatives, such as a connectionist network that maps animals and relations on the input side to attributes on the output side. As this model learns, an implicit tree structure emerges in its distributed representations. But those favoring explicit structure have pointed to difficulties: it becomes hard to incorporate data with direct structural implications like 'A dolphin is not a fish although it looks like one', and latent objects in the structure support the acquisition of superordinate classes such as 'primate' or 'mammal'. Structural sparsity shows how these seemingly incompatible desiderata could be satisfied within a single approach, and how rich and flexible structure can emerge from a preference for sparsity." - from Lake et al., 2017


Getting back to the hippocampus paper, the authors develop a computer model, which they dub the Tolman-Eichenbaum machine (TEM) after key workers in the field. This model implements a three-part system modeled on the physiological situation, plus their theory of how relational processing works. Medial entorhinal cells carry generalized mapping functions (grids, borders, vectors), which can be re-used for any kind of object/concept, supplying relations as originally deduced from sensory processing or possibly other abstract thought. Lateral entorhinal cells carry specific concepts or objects as abstracted from sensory processing, such as landmarks, smells, personal identities, etc. It is then the crossing of these "what" and "where" streams that allows navigation, both in reality and in imagination. This binding is proposed to happen in the hippocampus, as firing that occurs when inputs from the two separate entorhinal regions happen to synchronize, signaling that part of the conceptual grid (or other map) and an identified object have been detected in the same place, generating a bound sensory experience, which can be made into a memory, arise from a memory, constitute an imaginative event, etc. This is characteristic of "place cells", hippocampal cells that fire when the organism is at a particular place, and not at other times.

"We propose TEM’s [the computational model they call a Tolman-Eichenbaum Machine] abstract location representations (g) as medial entorhinal cells, TEM’s grounded variables (p) as hippocampal cells, and TEM’s sensory input x as lateral entorhinal cells. In other words, TEM’s sensory data (the experience of a state) comes from the ‘what stream’ via lateral entorhinal cortex, and TEM’s abstract location representations are the ‘where stream’ coming from medial entorhinal cortex. TEM’s (hippocampal) conjunctive memory links ‘what’ to ‘where’, such that when we revisit ‘where’ we remember ‘what’."


Given the abstract mapping and a network of relations between each of the components, reasoning or imagining about possible events also becomes feasible, since the system can solve for any of the missing components. If a landmark is seen, a memory can be retrieved that binds the previously known location. If a location is surmised or imagined, then a landmark can be dredged up from memory to predict how that location looks. And if an unfamiliar combination of location and landmark is detected, then either a new memory can be made, or a queasy sense of unreality or hallucination would ensue if one of the two is well-known enough to make the disagreement disorienting.

As one can tell, this allows not only the experience of place, but the imagination of other places, as the generic mapping can be traversed imaginatively, even by paths that the organism has never directly experienced, to figure out what would happen if one, for instance, took a short-cut. 

The combination of conceptual abstraction / categorization with generic mapping onto relational graph forms that can model any conceptual scale provides some of the most basic apparatus for cognitive thought. While the system discussed in this paper is mostly demonstrated for spatial navigation, based on the proverbial rat maze, it is claimed, and quite plausible, that the segregation of the mapping from the object identification and binding allows crucial generalization of cognition- the tools we, and someday AI as well, rely on to make sense of the world.


  • A database of superspreader events. It suggests indoor, poorly ventilated spread of small aerosols. And being in a cold locker helps as well.
  • The curious implications of pardons.
  • It is going to be a long four years.
  • How tires kill salmon.
  • Ever think that there is more to life?

Saturday, November 14, 2020

Are Attention and Consciousness the Same?

Not really, though what consciousness is in physical terms remains obscure.

A little like fusion power, the quest for a physical explanation of consciousness has been frustratingly unrewarding. The definition of consciousness is fraught to start with, and since it is by all reasonable hypotheses a chemical process well-hidden in the murky, messy, and mysterious processes of the brain, it is also maddeningly fraught in every technical sense. A couple of recent papers provide some views of just how far away the prospect of a solution is, based on analyses of the visual system, one in humans, the other in monkeys.

Vision provides both the most vivid form of consciousness, and a particularly well-analyzed system of neural processing, from retinal input through lower level computation at the back of the brain and onwards through two visual "streams" of processing to conscious perception (the ventral stream in the inferior temporal lobe) and action-oriented processing (in the posterior parietal lobe). It is at the top of this hierarchy that things get a bit vague. Consciousness has not yet been isolated, and how it could be remains unclear. Is attention the same as consciousness, or different? How can related activities like unconscious high-level vision processing, conscious reporting, pressing buttons, etc. be separated from pure consciousness? They all happen in the brain, after all. Or do those activities compose consciousness?

A few landmarks in the streams of visual processing.  V1 is the first level of visual processing, after pre-processing by the retina and lateral geniculate nucleus. Processing then divides into the two streams ending up in the inferotemporal lobe, where consciousness and memory seem to be fed, while the dorsal stream to the inferior parietal lobule and nearby areas feed action guidance in the vicinity of the motor cortex

In the first paper, the authors jammed a matrix of electrodes into the brains of macaques, near the "face cells" of the inferotemporal cortex of the ventral stream. The macaques were presented with a classic binocular rivalry test, with a face shown to one eye and something else shown to the other eye. Nothing was changed on the screen, nor in the head orientation of the macaque, but their conscious perception alternated (as would ours) between one image and the other. It is thought to be a clever way to isolate perceptual distinctions from lower-level visual processing, which stays largely constant: each eye processes each scene fully, before higher levels make the choice of which one to focus on consciously. (But see here). It has been thought that by the time processing reaches the very high level of the face cells, they only activate when a face is being consciously perceived. But that was not the case here. The authors find that these cells, when tested more densely than has been possible before, show activity corresponding to both images. The face could be read using one filter on these neurons, but a large fraction (1/4 to 1/3) could be read by another filter to represent the non-face image. So by this work, this level of visual processing in the inferotemporal cortex is biased by conscious perception to concentrate on the conscious image, but not exclusively: the cells are not entirely representative of consciousness. This suggests that whatever consciousness is takes place somewhere else, or at a selective ensemble level of particular oscillations or other spike-coding schemes.

"We trained a linear decoder to distinguish between trial types (A,B) and (A,C). Remarkably, the decoding accuracy for distinguishing the two trial types was 74%. For comparison, the decoding accuracy for distinguishing (A, B) versus (A, C) from the same cell population was 88%. Thus, while the conscious percept can be decoded better than the suppressed stimulus, face cells do encode significant information about the latter. ... This finding challenges the widely-held notion that in IT cortex almost all neurons respond only to the consciously perceived stimulus."
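For intuition, the kind of linear population decoding quoted here can be sketched on simulated spike counts. Everything below- the cell counts, firing rates, and the nearest-centroid rule (one simple form of linear decoder)- is illustrative, not the authors' data or method:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_trials = 50, 200

# Hypothetical mean firing rates: each stimulus drives its own pattern
face_rates = rng.uniform(2, 8, n_cells)
house_rates = rng.uniform(2, 8, n_cells)

# Poisson spike counts for each trial of each stimulus
face_trials = rng.poisson(face_rates, (n_trials, n_cells))
house_trials = rng.poisson(house_rates, (n_trials, n_cells))

# Nearest-centroid decoding: classify a held-out trial by which
# training-set mean response pattern it lies closer to
train_f, test_f = face_trials[:100], face_trials[100:]
train_h, test_h = house_trials[:100], house_trials[100:]
cf, ch = train_f.mean(axis=0), train_h.mean(axis=0)

def decode(x):
    return "face" if np.linalg.norm(x - cf) < np.linalg.norm(x - ch) else "house"

hits = sum(decode(x) == "face" for x in test_f) + \
       sum(decode(x) == "house" for x in test_h)
accuracy = hits / (len(test_f) + len(test_h))
print(f"decoding accuracy: {accuracy:.2f}")
```

When the two response patterns overlap heavily, accuracy falls toward chance (0.5)- which is what makes the 74% figure for the suppressed stimulus meaningful.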


The second paper used EEG on human subjects to test their visual and perceptual response to disappearing images and filled-in zones. We have areas in our visual field where we are physically blind, (the blind spot, where the optic nerve exits the retina), and where higher levels of the visual system "fill in" parts of the visual scene to make our conscious perception seem smooth and continuous. The experimenters came up with a forbiddingly complex visual presentation system of calibrated dots and high-frequency snow whose purpose was to pit visual attention against conscious perception. When attention is directed to the blind spot, that is precisely when the absence of an image there becomes apparent. This allowed the experimenters to ask whether a typical neural signature of high-level visual processing (the steady-state visually evoked potential, or SSVEP) reflects conscious perception, as believed, or attention or other phenomena. They presented and removed image features all over the scene, including blind spot areas. What they found was that the SSVEP signal was heightened as attention was directed to the invisible areas, exactly the opposite of what they hypothesized would happen if the signal were tied to actual conscious visual perception. This suggests that this particular signal is not a neural correlate of consciousness, but one of attention, and perhaps surprise / contrast, instead.

So where are the elusive neural correlates of consciousness? Papers like these refine what and where it might not be. It seems increasingly unlikely that "where" is the right question to ask. Consciousness is graded, episodic, extinguishable in sleep, heightened and lowered by various experiences and drugs. So it seems more like a dynamic but persistent pattern of activity than a locus, let alone a homunculus. And what exactly that activity is ... a Nobel prize surely awaits someone on that quest.


  • Unions are not a good thing ... sometimes.
  • Just another debt con.
  • Incompetent hacks and bullies. An administration ends in character.
  • Covid and the superspreader event.
  • Outgoing Secretary of State is also a deluded and pathetic loser.
  • But others are getting on board.
  • Bill Mitchell on social capital, third-way-ism, "empowerment", dogs, bones, etc.
  • Chart of the week: just how divided can we be?

Sunday, June 14, 2020

The Music in my Head

Brain waves correlate with each other, and with cognitive performance.

This is a brief return to brain waves. Neural oscillations are low-frequency synchronized activity of many neurons, from the low single digits up to 50 or 100 Hz. They do not broadcast information themselves, as a radio station might. Rather, they seem to represent coalitions of neurons being ganged together into a temporary unified process. They seem to underlie attention, integration of different sensory and cognitive modes, and perhaps subjective "binding". A recent paper provides a rare well-written presentation of the field, along with a critical approach to correlations between waves coming from different places in the brain. The authors also find that the strength of oscillatory coupling correlates with cognitive performance.

The issue they were trying to approach was the validity of cross-frequency coupling, where a rhythm at one frequency is phase-coupled with one at a higher frequency, at some integer ratio of frequencies, like 1:2. (In other words, harmony was happening). Such entrainment would allow fundamentally different cognitive processes to relate to each other. They study two different types of correlation- the straight frequency coupling as above, and a phase-amplitude coupling where the amplitude of a higher-frequency oscillation is shaped to resemble a lower-frequency wave. This resembles AM radio, where the high-frequency radio signal "carries" a sound signal encoded in its rising and falling amplitudes, though its frequency is completely stable. This latter form of coupling was more difficult to find and analyze, and in the end failed to show significant functional consequences, at least in this initial work.

Cartoons of cross-frequency couplings (CFC, aka harmonies) that were investigated.
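Phase-amplitude coupling of this kind can be measured from a raw signal. Here is a minimal sketch on synthetic data- the 5 Hz / 40 Hz choice, the crude FFT-mask filter, and the mean-vector-length index are illustrative, not the paper's pipeline:

```python
import numpy as np

fs, T = 1000, 10                  # sample rate (Hz) and duration (s)
t = np.arange(0, T, 1 / fs)

# Synthetic signal: a 40 Hz "gamma" whose amplitude rides on 5 Hz "theta" phase
theta = np.sin(2 * np.pi * 5 * t)
coupled = theta + (1 + 0.8 * theta) * np.sin(2 * np.pi * 40 * t)
uncoupled = theta + np.sin(2 * np.pi * 40 * t)

def bandpass(x, lo, hi):
    """Crude bandpass by zeroing FFT bins outside [lo, hi] Hz."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    X[(f < lo) | (f > hi)] = 0
    return np.fft.irfft(X, len(x))

def analytic(x):
    """Analytic signal via FFT (same construction as scipy.signal.hilbert)."""
    X = np.fft.fft(x)
    h = np.zeros(len(x))
    h[0] = 1
    h[1:len(x) // 2] = 2
    if len(x) % 2 == 0:
        h[len(x) // 2] = 1
    return np.fft.ifft(X * h)

def modulation_index(x):
    """Mean-vector-length index: how strongly gamma amplitude tracks theta phase."""
    phase = np.angle(analytic(bandpass(x, 3, 7)))
    env = np.abs(analytic(bandpass(x, 35, 45)))
    return np.abs(np.mean(env * np.exp(1j * phase))) / np.mean(env)

print(modulation_index(coupled), modulation_index(uncoupled))
```

The coupled signal yields a large index and the uncoupled one an index near zero; real analyses add surrogate-data controls on top of this to rule out spurious coupling, which is the contamination problem the authors worry about below.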

The authors' first goal was to isolate gold-standard couplings, whose participating waves come from different locations in the brain, and do not (respectively) resemble contaminating similar waves inherent in the complementary location. After isolating plenty of such cases, they then asked where such phenomena tend to take place, and whether they correlate with function, like performance on tests. They used resting brains, instead of any particular task setting. This makes the study more reproducible and comparable to others, but obviously fails to offer much insight into waves as they are (if they are) used for critical task performance. Resting brains have an ongoing hum of activity, including a well-known network of waves and frequency couplings called the default mode network. Going past the authors' statistical analysis of maximally valid correlations, they found a variety of "hubs" of cross-frequency coupling, which had an interesting nature:

"In α:β and α:γ CFS, the α LF hubs were observed in PFC and medial regions that belong to the default mode network [29] or to control and salience networks in the functional parcellation based on fMRI BOLD signal fluctuations [94–96]. This is [in] line with many previous studies that have found α oscillations in these regions to be correlated with attentional and executive functions [14–19]. In contrast, the β and γ HF hubs were found in more posterior regions such as the SM region and the occipital and temporal cortices, where β and γ oscillations are often associated with sensory processing"
Alpha, beta, and gamma are different frequency bands of neural oscillation. CFS stands for cross-frequency synchronization, the phase-phase form of cross-frequency coupling (CFC) discussed above. LF stands for the low-frequency partner of the coupling, while HF stands for the high-frequency partner or source. PFC stands for the prefrontal cortex, which seems to be a locus of relatively low-frequency brain waves, while the sensorimotor (SM) regions are loci of higher-frequency activity. This is interesting, as our brains are generally filters that gather lots of information (high frequency) which is then winnowed down and characterized into more abstract, efficient representations, which can operate at lower frequency.

And does more correlation in the resting state mean better brain performance when doing tasks? These authors claim that yes, this is the case:

"CFS between θ–α with β–γ oscillations (θ–α: β–γ CFS) and CFS between β and γ oscillations (β:γ CFS) showed significant positive correlations with scores on Trail-Making Tests (TMTs), which measure visual attention, speed of processing, and central executive functions, as well as with Zoo Map Tests, which measure planning ability (p < 0.05, Spearman rank correlation test, Fig 8). Intriguingly, negative correlations with the test scores were observed for CFS of α and β oscillations with higher frequencies (α–β:γ) and for γ:Hγ CFS in the Digits Tests measuring WM performance.
...
These results suggest that in a trait-like manner, individual RS CFC brain dynamics are predictive of the variability in behavioral performance in separately measured tasks, which supports the notion that CFC plays a key functional role in the integration of spectrally distributed brain dynamics to support high-level cognitive functions."

RS refers to the resting state, of the subjects and their brains.
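For reference, the Spearman rank correlation used in these tests is just the Pearson correlation computed on ranks- robust to outliers and to any monotonic rescaling of the scores. A toy illustration with made-up per-subject numbers (not the study's data):

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    (Double argsort assigns ranks 0..n-1; assumes no tied values.)"""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical subjects: resting-state coupling strength vs. a test score
coupling  = np.array([0.11, 0.34, 0.22, 0.45, 0.29, 0.52])
tmt_score = np.array([88,   101,  95,   110,  102,  118])

print(round(spearman(coupling, tmt_score), 2))
```

A value near +1 means coupling strength and test score rise together in rank order, which is the trait-like relationship the authors report.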

It is exciting to see this kind of work being done, gaining insight into information processing in the brain. It is an opaque, alien organ. Though it houses our most intimate thoughts and feelings, how it does so remains one of the great tasks of humanity to figure out.

Saturday, February 15, 2020

Cells That Eat Memories

Microglia dispose of synapses that need to be forgotten.

Memories are potent, essential, abstract, sometimes maddeningly ungraspable. But they are also physical entities, stored like so many hard-drive magnetic domains in our brains. The last decades of research have found the "engram"- the physical trace of memory in our brains: neural synapses that connect based on the convergence of neuron activities, are painstakingly consolidated through re-enactments, and are judged for permanence based on their importance. Engram cells are located in many places in the brain, supporting many kinds of memory, including memories made up of all sorts of experiences, such as multiple modes of sensation. The coordinated action of neurons, combined with emotional valence, sets up an engram event, and similarly coordinated activity at some future time can prompt its readout / recall.

A recent paper described the role of microglia- the scavengers and maintainers of the brain- in cleaning out engram synapses, resulting in forgetting. The researchers set up a typical fear response training system (using electrical foot shocks) in mice, which was remembered better after 5 days than after 35 days. Treating the mice with various drugs and other methods that deplete microglia caused them to remember the bad experience much better after 35 days- indeed, virtually unchanged from the 5-day time point. The cells behind these memories are, at least in part, in the dentate gyrus, a part of the hippocampus.

Engram cells. Mice with specialized genetic elements were tracked during training, when cell activation was shown by red fluorescence (neuronal activity plus tamoxifen-activated genetic recombination leading to red fluorophore expression). Later, during memory recall, another genetic system (c-Fos expression, green) was used to track neuronal cell activation. The merging and superposition of both colors locates cells that fulfill the criteria of engram cells, i.e. memory cells. The scale bar is 20 microns.

How do the microglia do this? Well, the most important question is how stale and unimportant synapses are identified for destruction, but this paper doesn't go there quite yet. The researchers ask instead what the microglia are doing to gobble up these synapses. The microglia use complement, a tagging system commonly used in the immune system to mark cells and other detritus for destruction, which is also used during early brain development to prune vast numbers of excess neurons and synapses. So this is an obvious place to look for continued dynamic pruning during adulthood. Application of complement inhibitors in these mice caused the same enhanced remembering of their bad experiences as did the inhibition of microglia generally. Fluorescence studies showed that the initial complement cascade component, C1q, is present right at the synapses that were previously trained and presumably are elements of engrams.

But what of the selection process? The researchers devised a way to selectively inhibit the activity of the trained neurons, by adding an inhibitory protein to the genetic engineering cocktail, thus damping activity of those cells during the 35 days post-training. This treatment enhanced forgetting by one-third, while co-treatment at the same time with the drug that depleted microglia brought memory performance back to maximal levels. Thus the activity of neurons involved in engrams is important, as one would expect, to maintain memories, while the microglial disposal system is responsive to whatever system is marking inactive engram / memory synapses for removal.

This is interesting work in a very exciting and young field, starting to put meat on the bones of our knowledge of what memories are in physical terms, and how they grow and fade. Much like other tissues like muscles and bones, which are constantly regenerating and adjusting their mass and strength in response to loads, the brain dynamically responds to use as well- a lesson for those heading into older age.

Saturday, October 19, 2019

The Participation Mystique

How we relate to others, things, environments.

We are all wrapped up in the impeachment drama now, wondering what could be going on with a White House full of people who have lost their moral compasses, their minds. Such drama is an exquisite example of participation mystique, on our part as we look on in horror as the not very bright officials change their stories by the day, rats start to leave the sinking ship, and the president twists in the wind. We might not sympathize, but we recognize, and voyeuristically participate in, the emotions running and the gears turning.

Carl Jung took the term participation mystique from the anthropologist Lucien Lévy-Bruhl. The original conception was a rather derogatory one, about the animism common among primitive peoples- that they project anthropomorphic and social characters onto objects in the landscape, thus setting up mystical connections with rocks, mountains, streams, etc. Are such involvements characteristic of children and primitive people, but not of us moderns? Hardly. Modern people have distancing and deadening mechanisms to manage our mental involvement with projected symbologies, foremost among which is the scientific mindset. But our most important and moving experiences partake of identification with another- thing or person- joining our mental projection with their charisma, whatever that might be.

Participation mystique remains difficult to define and use as a concept, despite books being written about it. But I would take it as any empathetic or identification feelings we have toward things and people, by which the boundaries in between become blurred. We have a tremendous mental power to enter into others' feelings, and we naturally extend such participation (or anthropomorphism) far beyond its proper remit, to clouds, weather events, ritual objects, etc. This is as true today, with new age religions and the branding relationships that every company seeks to benefit from, as it is in the more natural setting of imputing healing powers to special pools of water, or standing in awe of a magnificent tree. Such feelings in relation to animals have had an interesting history, swinging from intense identification on the part of typical hunters and cave painters, to an absurd dismissal of any soul or feeling by scientistic philosophers like Descartes, and back to a rather enthusiastic nature worship, nature film-making, and a growing scientific and philosophical appreciation of the feelings and moral status of animals in the present day.

Participation mystique is most directly manipulated and experienced in the theater, where a drama is specifically constructed to draw our sympathetic feelings into its world, which may have nothing to do with our reality, or with any reality, but is drenched in the elements of social drama- tension, conflict, heroic motivations, obstacles. If you don't feel for and with Jane Eyre as she grows from abused child, to struggling adult, to lover, to lost soul, and finally to triumphant partner, your heart is made of stone. We lend our ears, but putting it psychologically, we lend a great deal more, with mirror neurons hard at work.

All this is involuntary and unconscious. Not that it does not affect our conscious experience, but the participation mystique arises as an automatic response from brain levels that we doubtless share with many other animals. Seeing squirrels chase each other around a tree gives an impression of mutual involvement and drama that is inescapable. Being a social animal requires this kind of participation in each other's feelings. So what of the psychopath? He seems to get these participatory insights, indeed quite sensitively, but seems unaffected- his own feelings don't mirror, but rather remain self-centered. He uses his capabilities not to sympathize with, but to manipulate, those around him. His version of participation mystique is a truncated half-experience, ultimately lonely and alienating.

And what of science, philosophy and other ways we systematically try to escape the psychology of subjective identification and participation? As mentioned above in the case of animal studies, a rigid attitude in this regard has significantly retarded scientific progress. Trying to re-establish objectively what is so obvious subjectively is terribly slow, painstaking work. Jane Goodall's work with chimpanzees stands as a landmark here, showing the productive balance of using both approaches at once. But then when it comes to physics and the wide variety of other exotic phenomena that can not be plausibly anthropomorphized or participated in via our subjective identification, the policy of rigorously discarding all projections and identifications pays off handsomely, and it is logic alone that can tell us what reality is.

  • The Democratic candidates on worker rights.
  • Was it trade or automation? Now that everything is made in China, the answer should be pretty clear.
  • On science.
  • Turns out that Google is evil, after all.
  • Back when some Republicans had some principles.
  • If all else fails, how about some nice culture war?
  • What is the IMF for?
  • #DeleteFacebook
  • Graphic: who is going to tax the rich? Who is pushing a fairer tax system overall? Compare Biden with Warren carefully.

Saturday, October 12, 2019

Thinking Ahead in Waves

A computational model of brain activity following simple and realistic Bayesian methods of internal model development yields alpha waves.

Figuring out how the brain works remains an enormous and complicated project. It does not seem susceptible to grand simplifying theories, but has instead been a mountain climbed by thousands, in millions of steps. A great deal of interest revolves around brain waves, which are tantalizingly accessible and reflective of brain activity, yet still not well understood. They are definitely not carrying information the way radio stations send information, whether AM or FM. But they do seem to represent synchronization between brain regions that are exchanging detailed information through their anatomical, i.e. neuronal, connections. A recent paper and review discuss a theory-based approach to modeling brain computation, one that has the pleasant side effect of generating alpha waves- one of the strongest and most common brain wave types, around 10 Hz, or 10 cycles per second- automatically, and in ways that explain some heretofore curious features.

The model follows the modern view of sensory computation, as a Bayesian modeling system. Higher levels make models of what reality is expected to look/hear/feel like, and the lower level sensory systems, after processing their inputs in massively parallel fashion, send only error signals about what differs from the model (or expectation). This is highly efficient, such that boring scenery is hardly processed or noticed at all, while surprises form the grist of higher level attention. The model is then updated, and rebroadcast back down to the lower level, in a constant looping flow of data. This expectation/error cycle can happen at many levels, creating a cascade or network of recurrent data flows. So when such a scheme is modeled with realistic neuronal communication speeds, what happens?

A super-simple model of what this paper implements. The input node sends data, in the form of error reports, (x(t)), to the next, higher level node. In return, it gets some kind of data (y(t)), indicating what that next level is expecting, as its model of how things are at time t.

The key parameters are the communication speeds in both directions, (set at 12 milliseconds), the processing time at each level, (set at 17 milliseconds), and a decay or damping factor accounting for how long neurons would take to return to their resting level in the absence of input, (set at 200 milliseconds). This last parameter seems most suspect, since the other parameters assume instantaneous activation and de-activation of the respective neurons involved. A damping outcome/behavior is surely realistic from general principles, but one wonders why a special term would be needed if one models the neurons in a realistic way, which is to say, mostly excitatory and responsive to inputs. Such a system would naturally fall to inactivity in the absence of input. On the other hand, a recurrent network is at risk of feedback amplification, which may necessitate a slight dampening bias.

The authors generate and run numerical models for a white noise visual field being processed by a network with such parameters, and generate two-dimensional fields of waves for two possibilities. First is the normal case of change in the visual field, generating forward inputs, from lower to higher levels. Second is a lack of new visual input, generating stronger backward waves of model information. Both waves happen at about 8 times the communication delay, or about 100 milliseconds, right in the alpha range. Not only did such waves happen in 2-layer models with just one pair of interacting units, but when multiple modules were modeled in a 2-dimensional field, traveling alpha waves appeared.

When modeled with multiple levels and two dimensions, the outcome, aside from data processing, is a traveling alpha wave that travels in one direction when inputs predominate (forward) and in the opposite direction when inputs are low and backward signals predominate.
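The flavor of this result can be reproduced with a toy two-unit loop- again a simplification for illustration, not the authors' model. Here the one-way lag is 29 ms (12 ms conduction plus 17 ms processing), and the damping is folded into a loop gain below 1; a delayed negative-feedback loop like this resonates at a period of four times the one-way lag, which lands squarely in the alpha band:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1000                       # 1 ms time steps
delay = 29                      # one-way lag: 12 ms conduction + 17 ms processing
g = 0.9                         # per-hop gain below 1 acts as damping

n = 30 * fs
x1 = np.zeros(n)                # lower (sensory) unit: sends errors up
x2 = np.zeros(n)                # higher unit: sends predictions back down
noise = rng.normal(size=n)      # white-noise sensory drive

for t in range(delay, n):
    x1[t] = noise[t] - g * x2[t - delay]   # error = input minus delayed prediction
    x2[t] = g * x1[t - delay]              # prediction built from delayed error

# Averaged power spectrum of the higher unit (Welch-style, 2 s segments)
seg = 2 * fs
spectra = [np.abs(np.fft.rfft(x2[i:i + seg])) ** 2
           for i in range(fs, n - seg, seg)]
power = np.mean(spectra, axis=0)
freqs = np.fft.rfftfreq(seg, 1 / fs)

band = (freqs > 2) & (freqs < 20)
peak = freqs[band][np.argmax(power[band])]
print(f"dominant frequency: {peak:.1f} Hz")  # resonance near fs / (4 * delay)
```

Note that, as in the paper, neither unit is an oscillator or pacemaker; the rhythm emerges entirely from bidirectional communication with temporal delays.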

Turning to actual humans, the researchers looked more closely at real alpha waves, and found the same thing happening. Alpha waves have been considered odd in that, while generally the strongest of all brain waves, and the first to be characterized, they are most prominent while resting (awake, not asleep), and become less powerful when the brain is actively attending, watching, etc. Now it turns out that what had been considered the "idle" brain state is not idle at all, but a sign of existing models / expectations being propagated in the absence of input- the so-called default mode network. Tracking the direction of alpha waves, they were found to travel up the visual hierarchy when subjects viewed screens of white noise. But when their eyes were closed, the alpha waves traveled in the opposite direction. The authors argue that normal measurements of alpha waves fail to properly account for the power of forward waves, which may be hidden in the more general abundance of backward, high-to-low-level alpha waves.

Example of EEG from human subjects, showing the directionality of alpha wave travel, in this case forward from input (visual cortex in the back of the brain) to higher brain levels.

"Therefore, we conjecture that it may simply not be possible for a biological brain, in which communication delays are non-negligible, to implement predictive coding without also producing alpha-band reverberations. Moreover, a major characteristic of alpha-band oscillations—i.e., their propagation through cortex as a traveling wave—could also be explained by a hierarchical multilevel version of our predictive coding model. The waves predominantly traveled forward during stimulus processing and backward in the absence of inputs. ...  Importantly though, in our model none of the units is an oscillator or pacemaker per se, but oscillations arise from bidirectional communication with temporal delays."

Thus brain waves are increasingly understood as a side effect of functional synchronization and, in this case, as intrinsically associated with the normal back-and-forth of data processing, which looks nothing like the stream of data from a video camera, but is something far more efficient, using a painstakingly-built internal database of reality to keep tabs on, and detect deviations from, newly arriving sensations. It remains to ask what exactly this model function is, which the authors term y(t)- the data sent from higher levels to lower levels. Are higher levels of the brain modeling their world in some explicit way and sending back a stream of imagery which the lower levels compare with their new data? No- the language must be far more localized. The rendition of full scenery would be reserved for conscious consideration. The interactions considered in this paper are small-scale and progressive, as data is built up over many small steps. Thus each step would contain a pattern of what is assumed to be the result of the prior step. Yet what exactly gets sent back, and what gets sent onwards, is not intuitively clear.

Saturday, August 31, 2019

Good Fences Make Good Neighbors

Ephrins are cell surface receptor-ligand pairs that create boundaries throughout the body.

Ever wonder how organs form and stay distinct? All our DNA is the same, yet the cells it gives rise to differentiate, migrate all over the place through each other's neighborhoods, and then form various distinct tissues, including roughly 700 unique muscles. Cells don't have eyes or brains, so the mechanism is more like ants following pheromone trails than an architect following a master plan of the body. It is a far more complicated story than anyone understands at the moment, but some of the actors are known.

There are a lot of cell adhesion proteins, such as integrins, cadherins, NCAMs, and selectins. But adhesion can't be the whole story, lest every cell adhere to every other. That is where Ephrins come in- a family of cell surface molecules which typically have repulsive effects when they find and bind their receptors (called Eph). They are widely used in development and in mature tissues to keep proper boundaries and to help guide the way for migrating cells and cell processes.

It has long been known that if you dissociate embryonic tissue and allow the cells to float about, they will re-associate in an organized (though far from perfect) way, like sticking to like, with boundaries forming between those from different tissues. This indicates the power of selective adhesion and repulsion to help cells position themselves, which operate in conjunction with other systems that keep their identity straight- what organ or tissue they are supposed to be. Each such cell type expresses its own complement of adhesion and repulsion surface molecules, forming part of the code that helps it to find and keep its place, as well as deciding whether to continue dividing and moving, or to stop when the local structure has reached its expected proportions.

On the left, one tissue expressing an Eph receptor keeps separate from another tissue expressing the Ephrin ligand which it recognizes, thanks to repulsive effects that counteract several other adhesive interactions. On the right are a few of the details of the mechanism. Each side of the Ephrin-Eph interaction can tell its cell that an encounter has happened.

These mechanisms take another quantum leap in the nervous system, which involves a particularly high level of cell migration during development, and pathfinding of dendrites and axons throughout life. Axons travel huge distances, both in the central nervous system and in the peripheral nervous system, using adhesion and repulsion cues all along the way. Ephrins are used dynamically to guide growth cones. For example, in the serotonin network, serotonin neurons traveling from the dorsal raphe (B7) to the forebrain olfactory bulb pass by the amygdala. Do they stop there to extend some input fibers? Normally they do. But not if the amygdala has been genetically altered (in these mice) to express the EphrinA ligand, which pairs with the EphA5 receptor that is normally expressed on these neurons.

EphrinA is typically expressed in the hypothalamus, keeping serotonergic projections from the dorsal raphe (on the left, B7) from innervating it. But if it is expressed (in mice) in the amygdala, it prevents that normal innervation as well, as these neurons travel during development into the forebrain, in this case the olfactory bulb (OB).

  • Review of muscle development.
  • Review of cell migration.
  • Rising Asia.
  • Sweden is more entrepreneurial than the US- because incumbents have less power and people have more power.

Saturday, August 24, 2019

Incarnation and Reincarnation

Souls don't reincarnate. Heck, they don't even exist. But DNA does.

What a waste it is to die. All that work and knowledge, down the drain forever. But life is nothing if not profligate with its gifts. Looking at the reproductive strategies of insects, fish, and pollen-spewing trees, among many others, gives a distinct impression of easy come, easy go. Life is not precious, but dime-a-dozen, or less. Humanity proves it all over again with our rampant overpopulation, cheapening what we claim to hold so dear, not to mention laying the rest of the biosphere to waste.

But we do cherish our lives subjectively. We have become so besotted with our minds and intelligence that it is hard to believe, (and to some it is unimaginable), that the machinery will just cease- full stop- at some point, with not so much as a whiff of smoke. Consciousness weaves such a clever web of continuous and confident experience, carefully blocking out gaps and errors, that we are lulled into thinking that thinking is not of this world- magical if not supernatural. Believing in souls has a long and nearly universal history.

Reincarnation in the popular imagination, complete with a mashup of evolution. At least there is a twisty ribbon involved!

Yet we also know it is physical- it has to be something going on in our heads, otherwise we would not be so loath to lose them. Well, lose them we do when the end comes. But it is not quite the end, since our heads and bodies are reincarnations- they come from somewhere, and that somewhere is the DNA that encodes us. DNA incarnates through biological development, into the bodies that are so sadly disposable. And then that DNA is transmitted to new carnate bodies, and re-incarnates all over again in novel combinations through the wonder of sex. It is a simple, perhaps trite, idea, but offers a solid foundation for the terms (and archetypes) that have been so abused through theological and new-age history.

Sunday, July 14, 2019

What Does the Cerebellum Do?

Pianists take note- fine motor and rhythm control happens thanks to this part of the brain. But it isn't just for motor control anymore, either.

The cerebellum is the mini-brain appendage that has finer crenellations than the cortex, as much surface area, (when unfolded), more neurons, and a more regular structure, and it has long been associated with fine motor control, judging from cases where it is defective. But in recent decades, its functions have ramified and are now understood to affect many core brain functions, like cognition, pain, and affect, in a supplementary way. Just as we have supplemented computers with special processing units like GPUs, evolution seems to have devised a separate processing unit for our brains.

Removing the cerebellum does not cause paralysis, but severe deficits in movement control (fineness, rhythm, timing, balance) as commanded by the higher levels of the cortex. That means the cortical motor commands are not routed entirely through the cerebellum, but are copied to it and supplemented by its outputs on the way to the spinal cord. In evolution, it started as a small module to improve balance (becoming what is now the most primitive part of our cerebellum, the flocculonodular lobe). This gradually extended, in mammals, to refining all sorts of motor control, in the central areas of the human cerebellum. And finally, its lateral lobes are now interconnected with many areas of the neocortex, including executive, memory, and other non-motor locations, evidently to refine, based on feedback, many aspects of our cognition. In the evolution of humans, the cerebellum changed the most of all brain regions between Neanderthals and ourselves, suggesting that even so late in evolution, better fine control- whether of motor, social or other functions- became dramatically more important, perhaps through such activities as the creation and use of our many tools of stone, wood, fibers, etc.

Outline of the typical circuitry of the cerebellum. Main inputs come from the left, via mossy fibers (MF), which also touch directly on the output DCN cells. Their major projections, however, go to granule cells (GC), whose axons form a vast parallel array innervating the dendrites of Purkinje cells (PC), which in turn inhibit the deep cerebellar nuclei neurons (DCN), which provide the outputs. Separately, by the major theory in the field, error inputs come into the inferior olive (IO), which has extremely strong inputs to the Purkinje cells and can change their long-term behavior, thus constituting training.

The remarkable thing about the cerebellum is its structure- a regimented, once-through architecture that cannot have the reverberating, recurrent connections of more complex parts of the brain, but instead is massively parallelized, featuring Purkinje cells with large but flat, pancake-like dendritic trees, shot through at right angles by the parallel fiber axons of the granule cells. The flow of information runs from the mossy fiber inputs to the granule cells, which activate the Purkinje cells, which inhibit the deep cerebellar nuclei cells, which are the source of all outputs. The deep cerebellar nuclei cells also get some inputs directly from the incoming mossy fibers. The overall logic seems to be that the granule cell - Purkinje cell circuit selectively dampens what would otherwise be a direct input-output relay from the mossy fibers through the deep cerebellar nuclei.
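This once-through circuit can be caricatured in a few lines of code. The sketch below is purely illustrative- the population sizes, random weights, and rectifying nonlinearity are all assumptions of mine, not anything from the literature- but it captures the described flow: mossy fibers fan out onto granule cells, whose parallel fibers drive Purkinje cells, which subtract from the direct mossy fiber drive onto the deep cerebellar nuclei (DCN).

```python
# Toy sketch of the feedforward cerebellar circuit described above.
# Sizes, weights, and the relu nonlinearity are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_MF, N_GC, N_PC, N_DCN = 4, 50, 5, 2        # toy population sizes

W_mf_gc  = rng.random((N_GC, N_MF))          # mossy fiber -> granule cell
W_gc_pc  = rng.random((N_PC, N_GC)) / N_GC   # parallel fiber -> Purkinje
W_pc_dcn = rng.random((N_DCN, N_PC))         # Purkinje -> DCN (inhibitory)
W_mf_dcn = rng.random((N_DCN, N_MF))         # direct mossy fiber -> DCN

def relu(x):
    return np.maximum(x, 0.0)

def cerebellum(mf_input):
    gc = relu(W_mf_gc @ mf_input)            # granule cell expansion
    pc = relu(W_gc_pc @ gc)                  # Purkinje activity
    # DCN output: direct mossy fiber excitation minus Purkinje inhibition
    return relu(W_mf_dcn @ mf_input - W_pc_dcn @ pc)

out = cerebellum(np.ones(N_MF))
print(out.shape)  # (2,)
```

The key design point the sketch makes concrete is that the granule-Purkinje pathway enters only as a subtraction: it can sculpt and dampen the direct mossy fiber to DCN relay, but never originates output on its own.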

One functional map of the cerebellum, from a very interesting general review of its functions. It is clear that while motor functions are strongly represented, the cerebellum engages many other cognitive issues.

Additionally, the central parts of the cerebellum most relevant for motion are topographically mapped to body regions, much as the sensory and motor cortices of the cerebrum are. This supports the idea that the core cerebellar function is a very regimented, if hugely adjustable and sensitive, transformation of information from input to output.

There is one more input, from the inferior olive, which receives inputs from the spinal cord and higher levels of the brain. These neurons send activating processes to the deep cerebellar nuclei, and particularly strong connections (climbing fibers) to the Purkinje cells, one climbing fiber per target cell. These connections are strong enough to overwhelm all the granule cell inputs, and are thought to be the key "training signal", which, in response to pain or error, adjusts the strength of the granule cell-Purkinje cell network. This seems to be what is happening under the hood, piano playing-wise, when one has the jarring experience of hitting wrong notes, and gradually finds that the fingers unconsciously and spontaneously learn to avoid them. What was perhaps a stop-gap tuning mechanism for the critical need of accurate motion turned out, however, to have wider applications.
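The climbing-fiber story is, at heart, error-driven learning: when the inferior olive signals an error, the parallel fiber synapses that were just active onto the Purkinje cell are weakened (long-term depression). A minimal sketch of such a rule, with made-up sizes and learning rate- this is the general Marr-Albus-Ito style idea, not any specific published model:

```python
# Toy climbing-fiber learning rule: an error signal depresses the
# parallel fiber -> Purkinje weights in proportion to their activity.
# Population size and learning rate are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_pf = 100
w = rng.random(n_pf)          # parallel fiber -> Purkinje cell weights
eta = 0.1                     # assumed learning rate

def climbing_fiber_update(w, pf_activity, error):
    """Depress active synapses in proportion to the error signal."""
    return np.clip(w - eta * error * pf_activity, 0.0, None)

pf = rng.random(n_pf)         # the pattern that produced the "wrong note"
before = w @ pf               # Purkinje drive before training
for _ in range(20):           # repeated error signals
    w = climbing_fiber_update(w, pf, error=1.0)
after = w @ pf

print(after < before)  # True: the erroneous response is damped
```

After repeated errors, the same input pattern drives the Purkinje cell less, shifting the balance at the deep nuclei- a crude analogue of the fingers quietly learning to skip the wrong note.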

Detailed tracing of the connections between hippocampus (injection site) and the cerebellum (imaged above). A virus was injected, which travels slowly in retrograde fashion up the axons of neurons projecting from other regions of the brain, in this case, neurons projecting from the cerebellum into the hippocampus. The images show staining (brown) of regions of the cerebellum, with cell bodies in blue.

A recent paper looked at the hippocampal connections of the cerebellum, which seem to mediate spatial orientation and navigation- another fine-tuning kind of process. Defects in these connections are seen in autism, for instance. Experiments in mice show that the cerebellum provides some inputs to, and alters activity in, the hippocampus, known for its roles in short-term memory and in navigation / orientation / mapping. These researchers undertook to trace the detailed connections between the two areas, and also established that some portions of each organ oscillate together, at the theta (6-12 Hz) frequency. This oscillation is very strong in the hippocampus, characteristic of being in motion or needing short-term memory, and known to function in spatial navigation. Indeed, they found individual Purkinje neurons in the cerebellum (of mice) that were phase-locked with this hippocampal rhythm. And for some of these areas, the coherence of the rhythms increased detectably as the mice learned a new navigation task. The cerebellum, like all brain areas, has various rhythms of its own, and to find that some of those may entrain, or at least functionally correlate with, those of other interesting regions of the brain is intriguing.
"... oscillations within the theta range are thought to support inter-region communication across a wide variety of brain regions. Our finding that cerebello-hippocampal coherence is limited to the 6–12 Hz bandwidth is in keeping with previous studies on cerebro-cerebellar communication in which neuronal synchronization has been observed between the cerebellum and prefrontal cortex."
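Phase-locking of this kind can be quantified with a phase-locking value (PLV): estimate the phase of each signal at a given frequency in many time windows, and measure how consistent the phase difference is across windows. A toy sketch with synthetic stand-ins for the two recordings- the sampling rate, window length, and signals are all assumptions, not the paper's data:

```python
# Toy phase-locking value (PLV) between two synthetic signals sharing
# an 8 Hz "theta" component. All parameters are illustrative assumptions.
import numpy as np

def plv(x, y, freq, fs, win=512):
    """Phase-locking value between x and y at one frequency."""
    n_win = len(x) // win
    tw = np.arange(win) / fs
    probe = np.exp(-2j * np.pi * freq * tw)   # complex probe at freq
    diffs = []
    for k in range(n_win):
        phi_x = np.angle(x[k * win:(k + 1) * win] @ probe)
        phi_y = np.angle(y[k * win:(k + 1) * win] @ probe)
        diffs.append(np.exp(1j * (phi_x - phi_y)))
    return np.abs(np.mean(diffs))             # 1 = locked, ~0 = unlocked

fs = 500.0                                    # assumed sampling rate, Hz
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(2)

theta = np.sin(2 * np.pi * 8 * t)             # shared 8 Hz rhythm
hpc = theta + rng.normal(0, 1, t.size)        # "hippocampal" signal
cbl = theta + rng.normal(0, 1, t.size)        # "cerebellar" signal

plv_theta = plv(hpc, cbl, 8, fs)              # high: shared theta
plv_gamma = plv(hpc, cbl, 40, fs)             # low: only independent noise
print(round(plv_theta, 2), round(plv_gamma, 2))
```

Because only the 8 Hz component is shared, the PLV is high in the theta band and low elsewhere- the same band-limited coherence pattern the quoted passage describes between cerebellum and hippocampus.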