
Sunday, June 2, 2019

Backward and Forward... Steps to Perception

Perception takes a database, learning, and attention.

We all know by now that perception is more than simply being a camera, getting visual input from the world. Cameras see everything, but they recognize nothing, conceptualize nothing. Perception implies categorization of that input into an ontology that makes hierarchical sense of the world, full of inter-relationships that establish context and meaning. In short, a database is needed- one that is dynamically adaptive to allow learning to slice its model of reality into ever finer and more accurate categories.

How does the brain do that? The work of Karl Friston has been revolutionary in this field, though probably not well-enough appreciated, and admittedly hard to understand for me and others not practiced in mathematical statistics. A landmark paper is "A theory of cortical responses", from 2005. It argues that the method of "empirical Bayes" is the key that unlocks the nature of our mental learning and processing. Bayesian statistics seems like mere common sense. The basic proposition is that the probability we assign to something combines our naive model (hypothesis) of its likelihood, arrived at prior to any evidence or experience, with evidence expressed in a way that can weight or alter that model. Iterate as needed, and the model should improve with time. What makes this a statistical procedure, rather than simple common sense? If one can express the hypothesis mathematically, and the evidence likewise in a way that relates to the hypothesis, then the evaluation and the updating from evidence can be done mechanically.
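As a concrete illustration (my own toy numbers, not anything from Friston's paper), the update rule is just Bayes' theorem applied repeatedly, with each round's posterior serving as the next round's prior:

```python
# Minimal illustration of iterative Bayesian updating (toy numbers, purely illustrative).
# Hypothesis H: "the thing I am looking at is a face"; evidence E arrives over time.

def bayes_update(prior_h, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) given the prior and the two likelihoods."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1.0 - prior_h)
    return p_e_given_h * prior_h / p_e

belief = 0.5  # naive prior, before any evidence
for observation in range(3):
    # each piece of evidence is 4x more likely if H is true than if it is false
    belief = bayes_update(belief, p_e_given_h=0.8, p_e_given_not_h=0.2)
    print(f"after observation {observation + 1}: P(H|evidence) = {belief:.3f}")
# prints 0.800, 0.941, 0.985 -- the model sharpens as evidence accumulates
```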

Friston postulates that the brain is such a machine, which studiously models the world, engaging in what statisticians call "expectation maximization", which is to say, progressive improvement in the detail and accuracy of its model, driven by sensory and other inputs. An interesting point is that sensory input really functions as feedback to the model, rather than the model functioning as an evaluator of the inputs. We live in the model, not in our senses. The overall mechanism works assiduously to reduce surprise, which is a measure of how far inputs diverge from the model's predictions. Surprise drives both attention and learning.
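A cartoon of that loop, offered only as a sketch of the general idea rather than Friston's actual equations: the prediction error ("surprise") drives both a fast inference update and a slower learning update of the model itself.

```python
# A caricature of the predictive-coding idea (toy code, not Friston's equations):
# the model predicts its sensory input; the mismatch ("surprise") drives a fast
# inference update (forward sweep) and a slow learning update (backward sweep).

sensory_input = 2.0        # a constant signal from the world, unknown to the model
estimate = 0.0             # fast-changing inference about the hidden cause
model_weight = 0.0         # slow-changing model parameter (what learning adjusts)
lr_inference, lr_learning = 0.5, 0.05

for step in range(200):
    prediction = model_weight * estimate
    error = sensory_input - prediction               # prediction error = surprise
    estimate += lr_inference * error                 # inference: quickly revise the current estimate
    model_weight += lr_learning * error * estimate   # learning: slowly revise the model itself

print(round(model_weight * estimate, 2))             # the prediction converges on the input (2.0)
```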

Another interesting point is the relationship between inference and learning. The model exists to perform inference- that is, the bottom-up process of judging the reality and likely causes of some event based on the internal model, activated by input-driven attention. We see a ball fall down, and are not surprised because our model is richly outfitted with calculations of gravitation, weight, etc. We infer that it has weight, and no learning is required. But suppose it is a balloon that floats up instead of falling- a novel experience? The cognitive dissonance represents surprise, which prompts higher-level processing and downward, top-down alterations to the model to allow for lighter-than-air weights. Our inferences about the causes may be incorrect. We may resort to superstition rather than physics for the higher-level inference or explanation. But in any case, the possibility of rising balls would be added to our model of reality, making us less surprised in the future.
The brain as a surprise-minimizing machine. Heading into old age, we are surprised by nothing, whether by great accumulated experience or by a closure against new experiences, and thus reach a stable / dead state. 

This brings up the physiology of what is going on in the brain, featuring specialization, integration, and recurrent networks with distinct mechanisms of bottom-up and top-down connection. Each sensory mode has its specialized processing system, with sub-modules, etc. But these only work by working together, both through parallel, cross-checking forms of integration, and by feeding into higher levels that integrate their mini-models (say, for visual motion or color assignment) into more general, global models.
"The cortical infrastructure supporting a single function may involve many specialized areas whose union is mediated by functional integration. Functional specialization and integration are not exclusive; they are complementary. Functional specialization is only meaningful in the context of functional integration and vice versa."

But the real magic happens thanks to the backward connections. Friston highlights a variety of distinctions between the forward and backward (recurrent) connections:

Forward connections serve inference, which is the primary job of the brain most of the time. They are regimented, sparsely connected, and topographically organized (as in the regular striations of the visual system). They are naturally fast, since speed counts most in making inferences. On the molecular level, forward connections use fast ionotropic receptors (AMPA and GABA-A).

Backward connections, in contrast, serve learning and top-down modulation/attention. They are slow, since learning does not have to keep up with the rapid processing of forward signals. They tend to occupy and extend to complementary layers of the cortex relative to the forward-connecting cells. They use NMDA receptors, which are roughly 1/10 as fast in response as the receptors used at forward synapses. They are diffuse and highly elaborated in their projections. And they extend widely, not as regimented as the forward connections. This allows lots of different later effects (e.g., error detection) to modulate the inference mechanism. And surprisingly, they far outnumber the forward connections:
"Furthermore, backward connections are more abundant. For example, the ratio of forward efferent connections to backward afferents in the lateral geniculate is about 1 : 10. Another distinction is that backward connections traverse a number of hierarchical levels whereas forward connections are more restricted."

Where does the backward signal come from, in principle? In the brain, error = surprise. Surprise expresses a violation of the expectation of the internal model, and is accommodated by a variety of responses. An emotional response may occur, such as motivation to investigate the problem more deeply. More simply, surprise would induce backward correction in the model that predicted wrongly, whether that is a high-level model of our social trust network, or something at a low level like reaching for a knob and missing it. Infants spend a great deal of time reaching, slowly optimizing their models of their own capabilities and the context of the surrounding world.
"Recognition is simply the process of solving an inverse problem by jointly minimizing prediction error at all levels of the cortical hierarchy. The main point of this article is that evoked cortical responses can be understood as transient expressions of prediction error, which index some recognition process. This perspective accommodates many physiological and behavioural phenomena, for example, extra classical RF [receptive field] effects and repetition suppression in unit recordings, the MMN [mismatch negativity] and P300 in ERPs [event-related potentials], priming and global precedence effects in psychophysics. Critically, many of these emerge from the same basic principles governing inference with hierarchical generative models."

This paper came up due to a citation from current work investigating this model specifically with non-invasive EEG methods. It is clear that the model cited and outlined above is very influential, if not the leading model of general cognition and brain organization. It also has clear applications to AI, as we develop more sophisticated neural network programs that can categorize and learn, or, more adventurously, neuromorphic chips that model neurons on a physical rather than abstract basis and show impressive computational and efficiency characteristics.

Monday, April 8, 2019

That's Cool: Adolescent Brain Development

Brain power and integration increase with development, particularly in the salience network and in the wakeful, attentive beta waves.

We see it happen, but it is still amazing- the mental powers that come on line during child development. Neurobiologists are starting to look inside and see what is happening mechanistically- in anatomical connectivity, activity networks, and brain wave patterns. Some recent papers used fMRI and magnetoencephalography to look at activity correlations and wave patterns over adolescent development. While the methods and analyses remain rather abstruse and tentative, it is clear that such tendencies as impulsivity and cognitive control can be associated with observations about stronger brain wave activity at higher frequencies, lower activity at lower frequencies, and inter-network integration.

An interesting theme in the field is the recognition that the brain is organized not only physically, in various crinkles, folds, nodules, etc., and by functional areas like the motor and sensory cortexes or Broca's area, involved in speech production, but also in connectivity "networks" that can cross anatomical boundaries yet show coherence, being coordinately activated much more densely inside than outside the network. An example is the default mode network (DMN, or task-negative network), which is active when adults are just resting, not attending to anything in particular, but also not asleep. This is an example of the brain being "on" despite little conscious mental work being done. It may be our unconscious at work or play, much as it is during sleep, though on a much longer leash. As one might imagine for this kind of daydreaming activity, it is strongly self-focused, full of memories, feelings, social observations, and future plans. Anatomically, the DMN extends over much of the brain, from the frontal lobes to the temporal and parietal lobes, touching on regions associated with the functions mentioned, like the hippocampus, involved in memory, and temporoparietal areas involved in sociality / theories of mind. There are roughly twenty such networks currently recognized, which activate during different mental functions, and they provide some answers to the question of how different brain areas are harnessed together for key functions typical of mental activity.
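In practice, such networks are usually defined from correlations between regional fMRI time series. A minimal sketch of that kind of functional-connectivity calculation, with simulated signals rather than anything from the papers discussed:

```python
import numpy as np

# Toy functional-connectivity calculation: correlate fMRI time series from a few
# regions and call strongly correlated regions part of the same "network".
# (Simulated data; real pipelines add preprocessing, filtering, motion correction, etc.)
rng = np.random.default_rng(0)
n_timepoints = 200

shared_dmn_signal = rng.standard_normal(n_timepoints)       # common slow fluctuation
regions = {
    "medial_prefrontal": shared_dmn_signal + 0.5 * rng.standard_normal(n_timepoints),
    "posterior_cingulate": shared_dmn_signal + 0.5 * rng.standard_normal(n_timepoints),
    "motor_cortex": rng.standard_normal(n_timepoints),       # unrelated region
}

timeseries = np.vstack(list(regions.values()))
connectivity = np.corrcoef(timeseries)   # pairwise Pearson correlations between regions
print(np.round(connectivity, 2))
# the two DMN nodes correlate strongly with each other, weakly with motor cortex
```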

Two networks relevant to this current work are the salience network (SN) and the cingulo-opercular network (CN or CO). The latter is active during chronic attention- our state of being awake and engaged for hours at a time, termed tonic alertness. (This contrasts with phasic alertness, which is much shorter-term / sporadic and reactive.) It is one of several task-positive networks that function in attention and focus. The salience network spans cortical (anterior insula and dorsal anterior cingulate) and subcortical areas (amygdala and central striatum), binding together locations that play roles in salience- assigning value to new events, reacting to unusual events. It can then entrain other brain networks to take control over attention, behavior, thoughts, etc.

fMRI studies of the activity correlations between brain networks. The cingulo-opercular and salience network connections (gray) take a large jump in connectivity to other regions in early adolescence. At the same time, fronto-parietal network connections (yellow), characteristic of frontal control and inhibition of other networks, take a dive, before attaining higher levels going into adulthood.

Here we get to brain waves, or oscillations. Superimposed on the constant activity of the brain are several frequencies of electrical activity, from the super-slow delta waves (~1 Hz) of sleep to the super-fast gamma waves (~50 Hz), which may or may not correlate with attention and perception. The slower waves seem to correlate with development, growth, and maintenance, while the faster waves correlate with functions such as attention and behavior. Delta waves are thought to function during the deepest sleep in resetting memories and other brain functions, and decline sharply with age, being pervasive in infants and disappearing by old age. Faster waves such as theta (5-9 Hz), alpha (8-12 Hz), and beta (14-26 Hz) correlate with behavior and attention, and are generally thought to help bind brain activities together, rather than transmitting information as radio waves might. Attention is a clear example, where large brain regions are bound by coordinated waves, depending on what is being attended to. Thus the "spotlight of attention" is characterized both by the activation of selected relevant brain areas, and also by their binding via phase-locked neural oscillations. These are naturally highly variable and jumbled as time goes on, reflecting the saccadic nature of our mental lives.

One of the papers above focused on theta and beta waves, finding that adolescents showed a systematic move from lower to higher frequencies. While fMRI scans of non-oscillatory network activity showed greater integration with age, studies of oscillations showed that the main story was de-coupling, mainly at the lower frequencies. What this all seems to add up to is a reduction of impulsivity, via reduced wave/phase coupling, especially between the salience network and other networks, at the same time as control over other networks becomes more integrated and improved, via increased connectivity. So control by choice goes up, while involuntary reactivity goes down. It is suggested that myelination of axons, as part of brain development along with pruning of extra cells and connections, makes long-range connections faster, enabling greater power in these higher-frequency binding/coordination bands.
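For readers wondering what "phase coupling" means operationally: one common measure is the phase-locking value, computed from the instantaneous phases of two band-limited signals. A toy sketch of that measure (not the pipeline of the cited studies):

```python
import numpy as np
from scipy.signal import hilbert

# Phase-locking value (PLV): a standard way to quantify phase coupling between
# two oscillating signals. Toy signals; the cited studies use their own pipelines.
fs, seconds, freq = 250, 4, 20              # 20 Hz, i.e. a beta-band example
t = np.arange(fs * seconds) / fs

region_a = np.sin(2 * np.pi * freq * t) + 0.3 * np.random.randn(t.size)
region_b = np.sin(2 * np.pi * freq * t + 0.4) + 0.3 * np.random.randn(t.size)  # roughly fixed phase lag

phase_a = np.angle(hilbert(region_a))       # instantaneous phase via the analytic signal
phase_b = np.angle(hilbert(region_b))
plv = np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))
print(f"PLV = {plv:.2f}")                   # near 1 = tightly phase-locked, near 0 = unlocked
```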

Brain wave phase coordination between all areas of the brain, measured by frequency and age. Low frequencies associated with basal arousal, motor activity, and daydreaming are notably less correlated in adults, while beta-range frequencies around 25 Hz, associated with focused attention, are slightly more correlated.

Is this all a little hand-wavy at this point? Yes indeed- that is the nature of a field just getting to grips with perhaps the most complicated topic of all. But the general themes of oscillations as signs/forms of coordination and binding, and active sub-networks as integrating units of brain/mental activity on top of the anatomical and regional units are interesting developments that will grow in significance as more details are filled in.

Sunday, March 10, 2019

Deranged Transcription

.. in autism, schizophrenia, and bipolar syndromes.

It is clear that human evolution over the last few millennia has been particularly rapid for mental/cognitive traits. This seems to have created the hazard of unintended consequences in the form of mental illness, when this high-strung and finely tuned system goes haywire. There has been a steady stream of genome variants discovered to be associated with prominent mental illnesses like schizophrenia, autism, and bipolar disorder. These variants are inherited (not sporadic, like the mutations responsible for most cancers) and tend to be either rare or have very slight effects, for obvious reasons.
"The majority of disease-associated genetic variation lies in noncoding regions enriched for noncoding RNAs (ncRNAs) and cis-regulatory elements that regulate gene expression and splicing of their cognate coding gene targets."

One feature that stands out among the genetic variants found to date is that they are rarely in coding regions, and thus do not affect the sequence of the protein encoded by the affected gene. Instead, they affect regulation- the delicate mechanisms by which genes are turned on and off in particular places and times. It is now common knowledge that humans have hardly more genes than fish or potatoes. It is gene regulation that makes us what we are, and recent decades have revealed whole new mechanisms and participants in this regulation, such as long non-coding RNAs.

A recent paper conducted a fishing expedition to find how gene regulation varies among people with mental illness, in a quest to find causal changes and molecular patterns characteristic of these syndromes. They had access to thousands of brain samples, and put them through a modern analysis of transcripts, including from 16,541 protein-coding genes and 9,233 genes expressing short or long regulatory RNAs. One theme that came up is that they found many differences among gene splice forms. Most eukaryotic genes are spliced after they are transcribed, being composed on the genome of a series of separate coding exons. This splicing (removing the intervening intronic RNA and joining the coding exons) is highly regulated and for many genes, fundamentally influential on what protein they ultimately produce. Some genes have dozens of different splice forms, many with significantly different roles. For example, a key functional domain like an inhibitor or a DNA-binding region may be left out of one form, converting its encoded protein into one that performs functions precisely opposite to those of the full-length form.

Screen shot from the NIH gene resource, showing the human gene titin, which encodes the largest known protein, which structurally spans the muscle sarcomere. Each green segment is an exon, totalling 363. Each horizontal line is a distinct splice form, varying in which exons are included. Skeletal and cardiac muscle express different splice forms of this protein, and sequence variations in this gene are responsible for some syndromes such as dilated cardiomyopathy.

After putting their brain tissue samples through purification and sequencing of all the RNAs, and alignment with known sequences, the authors came up with a set of differences in gene expression between affected and control subjects. They claim that a quarter of all the genes examined were noticeably affected in at least one of the three disorders- autism, schizophrenia, and bipolar disorder. Naturally, this finding may be a consequence of widespread structural and functional changes in affected brains that lie well downstream of any causative factor. There was relatively little overlap between the three. Schizophrenia showed the most differentially expressed genes/splice forms by far (several-fold more than the others), and shared about half of those seen differentially expressed in bipolar disorder, and one-fifth of those in autism. One might speculate that, on the whole, schizophrenia is a more severe disorder, which would lead to more dramatic differences in gene expression in an absolute sense.
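The statistical core of such a screen is simple, even if the real pipeline (covariates, linear models, splice-aware quantification) is not. A bare-bones sketch of a case-vs-control expression test with an FDR correction, on simulated data:

```python
import numpy as np
from scipy import stats

# Sketch of a case-vs-control differential-expression test with FDR correction.
# Simulated values; the actual study used more sophisticated models and covariates.
rng = np.random.default_rng(1)
n_genes, n_cases, n_controls = 1000, 50, 50

expr_controls = rng.normal(loc=10.0, scale=1.0, size=(n_genes, n_controls))
expr_cases = rng.normal(loc=10.0, scale=1.0, size=(n_genes, n_cases))
expr_cases[:50] += 1.0   # pretend the first 50 genes are truly up-regulated in cases

t_stat, p_values = stats.ttest_ind(expr_cases, expr_controls, axis=1)

# Benjamini-Hochberg false-discovery-rate correction
order = np.argsort(p_values)
ranked = p_values[order] * n_genes / (np.arange(n_genes) + 1)
q_values = np.minimum.accumulate(ranked[::-1])[::-1]
significant = np.zeros(n_genes, dtype=bool)
significant[order] = q_values < 0.05
print(f"{significant.sum()} genes called differentially expressed at FDR < 0.05")
```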
"Notably, 48 DS [differentially spliced] genes (10%; FDR = 8.8 × 10−4) encode RNA binding proteins or splicing factors, with at least six splicing factors also showing DTE [differential overall transcript expression] in ASD [autism spectrum disorders] (MATR3), SCZ [schizophrenia] (QKI, RBM3, SRRM2, U2AF1), or both (SRSF11)."

Interestingly, the authors also tested (in animals) whether drugs used to treat these major disorders could generate the differences seen above. The answer was no- differential expression was somewhat affected by lithium, but not significantly by the others tested. Secondly, the authors wheeled in a separate method, sifting through genomic variations to find other genes with variations known to be causally associated with these diseases, and guess whether these variations (aka mutations) might affect transcriptional expression. The results did not overlap very well. For bipolar disorder, none of the 17 genes identified by this latter method overlapped with the differentially expressed genes identified by the first method.

Part of the general scheme of the experiment, and schematic results. Isoforms of some genes show differences in disease vs control tissues, and patterns from many such genes and differences can be assembled to diagnose particular cells or processes that are being notably affected. But note in the middle panel that the quantitative difference in the alternative splicing pattern between disease and control for this particular example is minuscule- in the 1 to 4% range. It may be statistically significant, but could be minor in effect. The last panel illustrates fold-change ranges for non-coding long RNAs among the different syndromes and known patterns of cell-type expression, in a schematic sense. Genes known to be expressed in microglia showed particularly significant changes.

So what did they get? Mainly a lot of little fish, and quite a few that had already been suggested to be important for brain function by other methods. One prominent theme was the involvement of immune-related genes. Genes characteristic of astrocyte and microglia expression, and of interferon responses, among other signatures, were significantly up-regulated. This agrees with other work indicating that inflammatory mechanisms are used to prune synapses and cells generally in the brain, and that this process is over-active in schizophrenia. An example is complement C4A, which encodes part of a key immune system that identifies and clears foreign material with the help of phagocytes. It used to be thought that the central nervous system was immune-privileged, i.e. not surveilled or affected by the immune system. But that turns out to be very far from the truth, and this gene's overexpression has been genetically identified as a causal factor in schizophrenia.

Another big fish they caught was a gene called RBFOX1. Certain spliced forms were significantly less abundant in the affected tissues, supporting a long line of work identifying variations in this gene as candidate causal factors for autism, and its function as an RNA binding protein that regulates the alternative splicing of other genes as well as their later transcript stability and lifetime. Reduced function of this protein is known to increase neuronal excitability and increase seizures. It seems thus to be a key node in neuronal development and functional regulation, through its control of the expression of other genes.

Did these authors find anything new? That is naturally hard to say at this point, since such conclusions would require quite a bit of followup work to analyze the function of novel genes that were found. The expedition showed that it was technically capable of hauling in not only a lot of fish, but many known already to be significant in these syndromes, either causally or as markers and downstream effects. Choosing which to track down to their actual biology is a difficult question- one that must come next. It is a catch that may provide sustenance for many students and post-docs to come.


  • Reflections on John Dean.
  • Trump's high school, bailed out by the Chinese.
  • No, we do not have to pay interest on created money, or issue bonds, if we don't want to.
  • Why cry for the 1%?
  • California's housing crisis is rooted in NIMBY permitting, not economics.
  • Treason and felony? Not so bad after all.

Saturday, June 30, 2018

Work-a-Day Addiction

We are all addicted to the normal hormones of motivation.

I am watching the World Cup from time to time, and wonder- why? Why do we get so involved in competition, why are men particularly motivated to participate and watch, why do whole nations believe themselves to be "represented" by a sporting team, and feel emotional loss or gain at their fate? It is all very odd from a logical perspective- even a grand waste of effort, money, and time, second only to that wasted on religion!

Naturally, one has to look at our biology and evolutionary history. A recent article in Salon outlined an interesting contrast between hormones that drive men to their characteristic activities- testosterone and oxytocin. Success gives us a testosterone boost, while social bonding gives us an oxytocin boost. Both are powerful drugs that give us highly conditional, precise motivation. If one's success serves the group, such as a successful hunt (or war), both systems reinforce, and we are maximally happy. If the two systems are in conflict, as in a bar brawl, civil war, or domestic intrigue, we take our bonding resources where we can- from whichever group will have us- or do without, becoming loners or outcasts living on testosterone alone, if that. That really isn't much fun, and one gets the distinct impression that oxytocin is ultimately the more significant motivator. Winning only works if you have someone to share it with.

Another recent experience was going to a concert. It was transporting, highly socially bonding, and its happy effects lasted long, long after, surely involving something like a surge of oxytocin. One might even call it "religious"- a related activity carefully engineered to bring people together to get positive hits of social bonding, though with a veneer of pompous folderol. These are in some ways the poles of our culture- the incessant competition in sports, business, and politics, which divides us, versus bonding experiences that bring us back together. The latter are getting, perhaps, pushed to the margins as live music becomes rarer, religion dies a slow death, and our arts turn into violent superhero extravaganzas. Granted, in competitive situations we bond with our team, our party, our religion versus their religion. But the mixture of testosterone-fueled competitiveness makes such group-i-ness a mixed blessing, easily turning to mean, if not violent, ends. Just look at our political system, where civility has turned into blood sport. We need to find a more consistently positive and unifying way to be.


The drugs we abuse follow a similar pattern. Cocaine is probably a fair testosterone analog, making one feel invincible- ready to take wing. On the other hand, there are downers like alcohol and opioids- what's up with them? One can speculate that they give an oxytocin kind of buzz- the comfortable, disinhibited state of reduced social anxiety, which melts barriers to some degree, though quickly becomes destructive in excess. The opioid epidemic is, as many have observed, a direct index of social malaise and atomization. One might add that 12-step programs to treat alcoholism attempt (mostly unsuccessfully) to supply the missing social bonding with a sort of regimented friendship. Most people, with robust lives and brain chemistry, don't fall for the fake pleasures of either drug, but get their fix from actual accomplishments and actual social connections, which generate the internal rewards which are just as chemical, but much more subtly, finely, and productively regulated.

(All this is terribly reductive, of course, and misses all kinds of details of how our motivation systems work. Yet it is clear that in a broad brush way, positive and negative feelings are strongly influenced by these and other chemicals that form the internal motivation and reward systems, for which drugs of abuse and recreation are blunderbuss versions. Whether we add in serotonin, dopamine, and others, to make more accurate models of the internal workings does not alter this picture. There is also no law that cognition can not generate emotions. But it seems to make sense that our general emotional tenor is shifted in rather gross ways, over many cognitive (and bodily) systems at once, which dictates the use of hormones and hormone-like neurotransmitters to create such wide-spread regulatory effects.)

But many forces are set against our happiness and bonding, sending us toward competition instead. Traditionally, the (testosterone-addled) patriarchy has been a major culprit, as we have experienced so clearly in the traditional setting of the society of Afghanistan. Scarcity in general, of course, generates warfare and competition, to which primitive cultures are far more prone than our own, despite their otherwise idyllic nature.

But our current version of modernity may be even worse. Society-wide bonding experiences, like patriotism, universal religion, and traditions in the very general sense, are all being corroded. Some of the corrosion comes from competitive market forces, which are invading every aspect of our lives with the retreat of public services, civic responsibility, and family structure. Some comes from our very prosperity, which allows us each more independence and freedom- which is to say, an escape from close, maybe suffocating, social ties. Some of it comes from sheer population growth, which makes everything more competitive, particularly space- the space to live, to not be homeless, to find a country in which one can make a future, instead of being crushed under poverty and corruption. And some of it comes from environmental degradation, another consequence of population growth and rampant capitalism, which makes agriculture and subsistence more difficult in already-poor areas and degrades the spiritual balms of nature. The internet, which was supposed to bring us all together, has instead balkanized us into ever-smaller tribes, enabled anonymous flame-throwing and rampant bullying, including from the highest offices. And into the bargain, it has grievously wounded the music industry- that keystone of positive social bonding.

What is going to put all this together again? How will we re-establish healthy lives and communities against all these forces? Clearly, Scandinavian countries, which have followed a less strictly capitalistic model, with less testosterone and more social awareness and conscience, have succeeded in building happier societies. It takes a change in emphasis, and a renunciation of the imbalanced invasion of competition in every nook and cranny of our lives, where few win and most lose. Otherwise our only recourse is the artificial versions of social bonding reward.

  • A depressing withdrawal from opioids.
  • Trump is a bad drug.
  • Remember that 81% of evangelicals support Trump. Because they are white and pure.
  • The supreme court is the next domino to fall for our creeping Nazi-ism.
  • Death of the middle class, cont.
  • And the decline is gathering speed.
  • Science takes a theory, and sometimes a little PR.
  • Love ...

Sunday, June 24, 2018

Now You See it

Rapid-fire visual processing analysis shows how things work in the brain.

We now know what Kant only deduced- that the human mind is full of patterns and templates that shape our perceptions, not to mention our conceptions, unconsciously. There is no blank slate, and questions always come before answers. Yet at the same time, there must be a method to the madness- a way to disentangle reality from our preconceptions of it, so as to distinguish the insane from the merely mad.

The most advanced model we have for perception in the brain is the visual system, which commandeers so much of it, and whose functions reside right next to the consciousness hot-spot. From this system we have learned about the step-wise process of visual scene interpretation, which starts with the simplest orientation, color, movement, and edge finding, and goes up, in ramifying and parallel processes, to identifying faces and cognition. But how and when do these systems work with retrograde patterns from other areas of the brain- focusing attention, or associating stray bits of knowledge with the visual scene?

A recent paper takes a step in this direction by developing a method to stroboscopically time-step through the visual perception process. They find that it involves a process of upward processing prior to downward inference / regulation. Present an image for only 17 milliseconds, and the visual system does not have a lot of conflicting information. A single wave of processing makes its way up the chain, though it has quite scant information. This gives an opportunity to look for downward chains of activity (called "recurrent"), also with reduced interference from simultaneous activities. Surprisingly enough, the subjects were able to categorize the type of scene (barely over random chance) when presented with images for this very brief time.

The researchers used a combination of magnetoencephalography (MEG), which responds directly to electrical activity in the brain and has very high time resolution, but not great depth or spatial resolution, with functional magnetic resonance imaging (fMRI), which responds to blood flow changes in the brain and has complementary characteristics. Their main experiments measured how accurately people categorized pictures, between faces and other objects, as the time of image presentation was reduced from half a second to 34, then 17 milliseconds, and as the images were presented faster, with fewer breaks in between. The effect on task performance was, naturally, that accuracy degraded, and that what recognition was achieved was slowed down, even while the initial sweep of visual processing came up slightly faster. The theory was that this kind of categorization needs downward information flow from the highest cognitive levels, so this experimental setup might lower the noise from other images and percepts and thus provide a way to time the perception process and dissect its parts, whether feed-forward or feed-back.

Total time to categorization of images presented at different speeds. The speediest presentation (17 milliseconds) gives the least accurate discernment (Y-axis), but also the fastest processing, presumably dispensing with at least some of the complexities that come up later in the system with feedbacks, etc., which may play an important role in accuracy.

They did not just use the subject reports, though. They also used the fMRI and MEG signals to categorize what the brains were doing, separately from what the subjects were saying. The point was to see things happening much faster- in real time- rather than waiting for the subjects to process the images, form a categorical decision, generate speech, etc. They fed the MEG data through a classifier (a support vector machine, SVM), getting reliable categorizations of what the subjects would later report- whether they saw a face or another type of object. They also focused on two key areas of the image processing stream- the early visual cortex, and the inferior temporal cortex, which is a much later part of the process, in the ventral stream, and does object classification.
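The decoding step is worth a sketch. In spirit it is time-resolved classification: at each time point, a linear classifier is trained on the pattern across MEG sensors and asked whether it can separate face from non-face trials. The toy version below uses simulated data and scikit-learn; the paper's actual sensor selection and cross-validation scheme will differ.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Time-resolved MEG decoding sketch: at each time point, train a classifier on the
# pattern across sensors and ask how well it separates face vs. non-face trials.
rng = np.random.default_rng(2)
n_trials, n_sensors, n_timepoints = 200, 50, 30
labels = np.repeat([0, 1], n_trials // 2)              # 0 = object trial, 1 = face trial

meg = rng.standard_normal((n_trials, n_sensors, n_timepoints))
meg[labels == 1, :10, 10:20] += 0.5    # faces evoke extra signal in some sensors, mid-epoch

accuracy_over_time = [
    cross_val_score(LinearSVC(dual=False), meg[:, :, t], labels, cv=5).mean()
    for t in range(n_timepoints)
]
print(np.round(accuracy_over_time, 2))  # rises above chance only where the signal exists
```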

The key finding was that, once all the imaging and correlation was done, the early visual cortex recorded a "rebound" bit of electrical activity as a distinct peak from the initial processing activity. The brevity of picture presentation (34 ms and 17 ms) allowed them to see (see figure below) what longer presentation of images hides in a clutter of other activity. The second peak in the early visual cortex corresponds quite tightly to one just slightly prior in the inferior temporal cortex. While this is all very low signal data and the product of a great deal of processing and massaging, it suggests that decisions about what something is not only happen in the inferior temporal cortex, but impinge right back on the earlier parts of the processing stream to change perceptions at an earlier stage. The point of this is not entirely clear, really, but perhaps the most basic aspects of object recognition can be improved and tuned by altering early aspects of scene perception. This is another step in our increasingly networked models of how the brain works- the difference between a reality modeling engine and a video camera.

Echoes of categorization. The red trace is from the early visual processing system, and the green is from the much later inferior temporal cortex (IT), known to be involved in object categorization / identification. The method of very brief image presentation (center) allows an echo signal that is evidently recurrent feedback from the IT to come back into the early visual cortex- to what purpose is not really known. Note that overall, it takes about 100 milliseconds for forward processing to happen in the visual system, before anything else can be seen.

  • Yes, someone got shafted by the FBI.
  • We are becoming an economy of feudal monoliths.
  • Annals of feudalism, cont. Workers are now serfs.
  • Promoting truth, not lies.
  • At the court: better cowardly and right wing, than bold and right wing.
  • Coal needs to be banned.
  • Trickle down not working... the poor are getting poorer.

Saturday, May 19, 2018

mm-Hmmm ...

The critical elements of conversation that somehow didn't make it into the "language". A review of How We Talk, by N. J. Enfield.

Written language is a record of elision. The first written languages were hardly more than accounting symbols, and many early forms of writing lacked basic things like vowels and punctuation. The written forms are a shorthand, for those practiced in the art of spoken language to fill in the blanks, and they still hide a great deal today. For example, the same letter, such as "a", can stand for several vowel sounds, as in ate, art, ahh, am, awe. Another rich part of the language left on the editing room floor is the set of completely unrecorded sounds (except by authors of dialog looking for unusual verisimilitude), like um, ah, huh, mm-hmmm, and the like. N. J. Enfield makes the case that, far from being uncouth interjections, these are critical parts of the language- indeed, part of an elaborate "conversation machine" that is one behavior distinguishing humans from other animals.

Arabic, a language commonly written without vowels.

When we are in conversation, time is of the essence. We expect attentiveness and quick responses. It is a relationship with moral aspects- with obligations on each side. The speaker should repeat things when asked, not take up too much of the floor, and provide clear endings to turns. Enfield describes a very disciplined timing system in which, at least in Japan, responses begin before the first speaker has stopped. Other cultures vary, but everyone responds within half a second. Otherwise, something is discernibly wrong. One thing this schedule indicates is that there is a sing-song pattern within the speaker's production that signals the ending of a speaking turn well before it happens. The other is that there is a serious obligation to respond. Not doing so will draw a followup or even a rebuke from the speaker. Waiting longer to respond is itself a signal that the response is not what is desired... perhaps a "no". It can also be softened by an "uh" or "er" kind of filler that signals that the responder is 1. having some difficulty processing, and 2. paving the way for a negative response.

Likewise, "mm-Hmm" is a fully functional and honorable part of the language- the real one used in conversation. It is the encouraging sign that the listener is holding up her part of the bargain, paying attention to the speaker continuously. Failing to provide such signs leads the speaker to miss a necessary interaction, and interject.. "Are you listening?".

Finally, Enfield deals with "Huh?", a mechanism listeners use to seek repair of speech that was unclear or unexpected. When a response runs late, it may switch to "Huh?", in a bid to say that processing is incapable of making sense of what the speaker said, please repeat or clarify. But at the same time, if something of the original speech can be salvaged, listeners are much more likely to ask for specific missing information, like "Who?", or "where was that?" or the like. This again shows the moral engine at work, with each participant working as hard as they can to minimize the load on the other, and move the conversation forward in timely fashion.

Huh is also a human universal, one that Enfield supposes came about by functional, convergent evolution, due to its great ease of expression. When we are in a relaxed, listening state, this is the sound we can most easily throw out with a simple breath ... to tell the speaker that something went off track, and needs to be repeated. It is, aside from clearly onomatopoeic expressions, the only truly universal word among humans.

A conversation without words.

It is a more slender book than it seems, devoted to little more than the expressions "uh", "mmm-Hmmm", and "Huh?". Yet it is very interesting to regard conversation from this perspective as a cooperation machine, much more complex than those of other species, even those who are quite vocal, like birds and other apes. But it still leaves huge amounts of our face-to-face conversational engine in the unconscious shadows. For we talk with our hands, faces, and whole bodies as well. Even with clothes. And even more interesting is the nature of music in relation to all this. It is in speech and in our related vocal intimacies and performances that music first happens. Think of a story narration- it involves not only poetry of language, but richly modulated vocal performance that draws listeners along and, among much else, signals beginnings, climaxes, endings, sadness and happiness. This seems to be the language of tone that humans have lately transposed into the freer realms of instrumental music and other music genres. Analyzing that language remains something of an uncharted frontier.


  • Machines can do it too.
  • Varieties of technoreligion.
  • Monopoly is a thing.
  • Appalling display of religious fundamentalism: The US ambassador to Israel refers to old testament and 3,000 year old rights of Israel. If other 3,000 year old land claims were to be honored, the US would be in substantial peril!
  • In praise of Jimmy Carter.
  • Collapse or innovation.. can we outrun the Malthusian treadmill?
  • Truth and Rex Tillerson.
  • Sunlight makes us feel better.
  • We still have a public sector pension crisis.
  • Economic graph of the week. Worker quit rates are slowly rising. Will that affect pay?

Sunday, April 29, 2018

Concept as a Shadow of Percept

Binding and grounding- How does our brain organize concepts, properties, and relations?

Evolutionarily, brains developed to interact with the outside world, developing out of senses to orient the organism for food and defense. The basic senses came first, and then the ever more complex networks / computations to deduce what their signals mean and ultimately to build an entire world model, against which to evaluate changes in perception. While perception is now heavily influenced by such mental models, it did not begin there.

This history informs the question of where concepts reside in the brain. The wetness of water is something we feel, and our conception of it is naturally tied to our perception / sensation of it. Our language makes unending use of metaphors that extend this concrete perception-based ontology to the highest abstractions. If someone is a dirty scoundrel, they are only metaphorically dirty, and the visual and tactile aspects of dirtiness are evoked to enrich and clarify the abstraction. While one might imagine that, like computers, our brains store this conceptual data of definitions, categories, properties, etc. just about anywhere, whether in a distributed holographic engram or in dedicated storage areas, the fact is that data storage is localized, such that lesions in various parts of the brain can disrupt recall of specific classes of skills, properties, facts, faces, experiences, etc.

It is also apparent that much of this storage is coincident with our perceptual apparatus (and motor and emotional apparatus). Thus, a brain scan of someone asked to think about an apple, and how it differs from a grape, will light up areas in the later visual system where incoming perceptions of apples and grapes are decoded into the shape, color, size, etc., which we understand as being their respective properties. Perhaps olfactory and tactile areas come into play as well. Likewise, thinking about a guitar might generate activity in the motor planning areas, providing concepts about playing one. So, analogous to a spotlight of attention, we also have back-casting mini-spotlights of classification and conceptualization that connect "higher" areas in the frontal cortex (which are perhaps asking the questions) to motor, emotional, and perceptual areas, allowing the system to encode complex conceptual schemes only once, where they are first established, in grounded areas such as perceptual processing. Once there, they can serve both the immediate perceptual needs of classification, and the evidently related needs of rendering the same classes as concepts.
"Not only is object property information distributed across different locations, but also, these locations are highly predictable on the basis of our knowledge of the spatial organization of the perceptual, action, and affective processing systems. Conceptual information is not spread across the cortex in a seemingly random, arbitrary fashion, but rather follows a systematic plan."

This is the subject of a review, and of a recent paper which attempts to show that it is white matter tracts that constitute the property-recognizing units of the brain, not the grey matter targets of their communication. They call this model "representation by connection", and their case is not well made. Yet the overall topic is extremely interesting, so here goes. The fact that this characteristic distribution of property storage in the peripheral systems exists is not in dispute. What is still uncertain is how it is all knit together and what, exactly, is being communicated back and forth. And, of course, what out of these ingredients constitutes the resulting gestalt of consciousness.



The new paper tries to make a big distinction between grey matter (the dense cellular areas of the brain) and white matter (the tracts of myelinated connecting axons, which have recently been so beautifully traced by DTI MRI). The authors collected 80 people who had suffered lesions in various localized areas of the brain, due to strokes, hemorrhage, or trauma, and correlated their deficits in object property recall / naming, which are specific and diverse, with separately derived maps of white matter tracts, and with the semantic / notional distance between the same concepts, as provided by a sampling of normal college students. The latter was taken in several dimensions, including color, manipulability, motion, phonology, shape, usage, and others, but only set up a conceptual space- it was not explicitly correlated with brain anatomy.
"Distributed GM (grey matter) regions that represent different attribute dimensions (e.g., shape, color, manner of interaction) of the same object are connected by WM (white matter). The WM linking pattern itself would then contain multiple dimensions of information in these GM regions and, importantly, additional information about the manner of mapping among various attributes. The incorporation of these elements has been argued to be necessary for the “higher-order” semantic similarity relationships, which are not explained by attribute-specific spaces, to emerge."

The authors developed anatomical maps of normal white matter tracts, and then developed a map of semantic correlations given the defects in the brain damaged patients, over-laying their cognitive defects, on the same object classification tasks as above, over their individual brain defect regions. The question was then, once the semantic defects were anatomically mapped, are their relative positions correlated with the mapping in conceptual space done by normal volunteers? One would naturally expect that yes, cognition of similar objects (scissors, knives) would happen in spatially close areas of the brain. This was indeed found. What it means, however, especially for the author's theory, is extremely hard to say. It is especially hard as the paper is not well written, (the group is from China), leaving their core ideas rather murky.
"In conclusion, using a structural-property-pattern-based RSA (representational similarity analysis) approach, we found that the WM (white matter) structures mainly connecting occipital/middle temporal regions and anterior temporal regions represent fine-grained higher-order semantic information. Such semantic relatedness effects were not attributable to modality-specific attributes (shape, manipula- tion, color, and motion) or to the representation contents of the cortical regions that they connected and were above and beyond the broad categorical distinctions. By connecting multiple modality-specific attributes, higher-order semantic space can be formed through patterns of these connections."
Example of one finding. Two brain areas of gray matter in the left hemisphere (red, superior temporal gyrus; and green, the calcarine sulcus) are linked by a mapped white matter tract. The graph shows a significant r-value for the correlation between the neural similarity (map of white matter lesions by properties reported to be defective) and the author's custom mapping of semantic similarity.
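For orientation, the RSA logic behind results like the one above is simple to state: build one item-by-item similarity matrix from the neural data (here, from lesion/deficit patterns) and another from behavioral semantic judgments, then correlate their unique off-diagonal entries. A bare-bones sketch with random toy matrices, not the paper's data:

```python
import numpy as np
from scipy.stats import spearmanr

# Bare-bones representational similarity analysis (RSA): correlate a "neural"
# similarity matrix with a "semantic" similarity matrix over the same set of items.
rng = np.random.default_rng(3)
n_items = 20                                   # e.g., 20 object concepts

semantic_sim = rng.uniform(size=(n_items, n_items))
semantic_sim = (semantic_sim + semantic_sim.T) / 2           # make it symmetric
neural_sim = 0.6 * semantic_sim + 0.4 * rng.uniform(size=(n_items, n_items))  # partly related

upper = np.triu_indices(n_items, k=1)          # use each item pair once, skip the diagonal
rho, p = spearmanr(semantic_sim[upper], neural_sim[upper])
print(f"RSA correlation: rho = {rho:.2f}, p = {p:.3g}")
```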

They do claim explicitly that the corresponding gray matter maps from the brain-damaged patients did not correlate as well with the semantic space patterns as did the white matter maps. But this is not so surprising, as gray matter regions (which could be imagined as the CPUs of the brain) are all connected via the white matter (the networking cables). So for a process that depends on the collation of numerous pieces of information, (the concept property units harvested from each percept / action planning/ emotion region), it would be natural to expect that cutting the cables might be a cleaner way to cause particular property type deficits. That doesn't mean, however, that it is the cables that are conscious, or that are doing the most important processing.

So I think, aside from the operational issues, which look rather daunting in this paper and whose resulting correlation maps are only marginally compelling, there are serious theoretical issues that indicate that while this field is ripe for advancement, this does not seem to be the paper to get us there.

A much more thorough and positive review than this one.
  • Bernie does what no one else will- the employment guarantee.
  • Techies are building a dystopia built on surveillance, invasion of privacy.
  • Egypt remains a mess, without economic prospects and with overpopulation.
  • What is going so wrong in Mexico? We don't even know who is corrupt any more.
  • Lack of competition is pervasive.
  • Unfit to serve on a sewer board, II.
  • Nature -and- nurture.

Sunday, October 15, 2017

What Adulthood Looks Like in the Brain

A gene expression turning point in the mid-20's correlates with onset of schizophrenia, among other things.

Are brains magic? No. They may be magic-al in how they bring us the world in living color, but it is mechanism all the way down, as can be seen from the countless ailments that attack the brain and its functions, such as the dramatic degradation that happens in boxers and football players who have been through excessive trauma. All that amazing function relies on highly complex mechanisms, but we are only beginning to understand them.

One way to look at the brain is through its components, the proteins expressed from our genomes. Those proteins run everything, from the structural building program to the brain's metabolism and exquisitely tuned ion balances. Current technology in molecular biology allows researchers to tabulate the expression of all genes in any tissue, such as the brain or parts thereof. This is sort of like getting a quantitative parts list for all the vehicles active in one city at a particular time. It might tell you a little about the balance between mass transit and personal cars, and perhaps about brand preferences and wealth in the city, but it would be hard to conclude much about detailed activity patterns or traffic issues, much less the trajectories or motivations of the individual cars- why they are going where they are going.

So, while very high-tech and big-data, this kind of view is still crude. A recent paper deployed these methods to look at gene expression in the brain over time, in humans and mice, to find global patterns of change with age, and attempt to correlate them with diseases that are known to have striking age-related characteristics, like schizophrenia.

They duly found changes of gene expression over time, and tried to concentrate on genes that exhibit dramatic changes, reversing course over time, or plateauing at times that they call turning points. Below are a few genes that show a dramatic decline in mice, from youth to adolescence.

Expression of selected proteins in frontal cortex of mice, with age, showing turning points in early adulthood, which is about five months here in mice. These genes each have variants or other known associations with schizophrenia.

Perhaps the most interesting result was that, if they tabulate all the cases of turning points, most occur in early adulthood. The graphs below show the numbers of genes with turning points at various ages. One can see that by about age 30, and certainly by 40 (about age 7 months in mice), the brain is set in stone, gene-wise. There is little further change in gene expression.

Number of genes by age when their expression shifts direction, either up, down, or plateauing.

In contrast, age 15 to 30 is a very active time, with the most genes changing direction, and the largest changes of expression peaking about age 28. Given the many genes and complex patterns available, the authors reasoned that they could do some reverse engineering, predicting a sample's age from its expression pattern. And using only 100 genes, they show that they can reliably estimate the age of the source, within about 5 years. Secondly, they broke the gene sets down by the cell type where they are expressed, and found distinct patterns, like genes typical of endothelial cells attaining peaks of expression very early, before 10 years of age, as physical growth of the brain is maximal, while genes typical of pyramidal neurons, one of the most important and complex types of neurons, reach their inflection points much later, in the mid-twenties.
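The age-prediction claim corresponds to a standard regression exercise: treat the expression of the chosen genes as features and the donor's age as the target. A minimal sketch with simulated data and a ridge regression (the authors' actual gene selection and model are not specified here):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

# Sketch of "predict donor age from the expression of ~100 genes" via regression.
# Simulated data; the study's real feature selection and model will differ.
rng = np.random.default_rng(4)
n_samples, n_genes = 120, 100

ages = rng.uniform(5, 80, size=n_samples)
slopes = rng.normal(scale=0.05, size=n_genes)                # each gene drifts slightly with age
expression = 10 + np.outer(ages, slopes) + rng.normal(scale=0.5, size=(n_samples, n_genes))

predicted = cross_val_predict(Ridge(alpha=1.0), expression, ages, cv=5)
mean_error = np.mean(np.abs(predicted - ages))
print(f"mean absolute error: {mean_error:.1f} years")        # the post cites roughly 5-year accuracy
```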

Types of genes and their rough times of expression shift, by age. Most key neuronal genes seem to settle into unchanging patterns in early adulthood. PSD is post-synaptic density, typical of synapses where learning takes place.

Particularly interesting to the authors are genes associated with the post-synaptic density, part of the learning and plasticity system where connections between neurons are managed. These genes were especially enriched for turning points in early adulthood, and also feature a disproportionate number of genes implicated in schizophrenia, whose onset occurs around this time. These genes tended to be turned down at this time, in neurons. While these observations correlate with the age-specific nature of schizophrenia, they do not go much farther, unless it could be shown that the genetic variants conferring susceptibility are mal-regulated- perhaps more active than they are supposed to be- or some other mechanism connects the defects (all of which found so far have very weak effects) to the outcome. For example, one such variation was found to increase expression of its gene, which is present and functional in post-synaptic densities.

It is interesting to see the developmental program of our brains portrayed in this new way, but it is only a glimpse into issues that require far more detailed investigation.


Saturday, July 1, 2017

A Cleanser For Tau Clumps

Alzheimer's disease is caused in part by protein clumps including amyloids and tau fibrils. We have an enzyme for that!

The iPhone appeared a decade ago, and gained ground so definitively because it provided a general platform (and large screen) for which others could develop rich applications. The ramifying copying, diversification, and specialization of those apps is reminiscent of evolution in general, and particularly of the "apps" present in every cell. Proteins are based on a generic/genetic platform of DNA coding and RNA translation, and self- assemble into countless shapes and sizes, complexes and pathways, to do all sorts of tasks to make life better for the organism. Where once there must have been very few or even none (in the RNA world), proteins have diversified over evolutionary time by endless rounds of copying, stealing, mutating, and specializing to constitute our current biospherical profusion.

Our bodies are filled with protein apps that generally keep their heads down and do their work without complaint. But a few can make trouble. Most notorious are the prions, which, as bizarrely misfolded versions of natural, functional proteins, can encourage other proteins to join them on the dark side, and even infect other organisms, causing unusual brain diseases and panics over epidemic transmission. Less notorious, but far more devastating, are various dementias such as Alzheimer's. These appear to be caused by the accumulation of junky proteins which clog cellular processes, and eventually kill brain cells, destroying the organ from within. The exact cause of this accumulation and how, or even whether, it kills cells are both still under study, but deposits of this junk (amyloid plaques and tau tangles/fibrils) are universally diagnostic of these dementias and prime candidates for research and treatment.

Proteins are constantly growing old and getting sent to the garbage, in all cells. The problem with the Alzheimer's-related proteins seems to be that they escape this disposal process, either because they accumulate too rapidly, or because they condense into crystalline entities that can no longer be pried apart by the various chaperones and other proteins that constitute our sanitary services. A recent paper discusses one human protein that seems able to clean up junk composed of the tau protein, even at its worst, and might be the kind of thing that could be injected or increased by genetic therapy to treat such syndromes.

Proline, the only amino acid that cyclizes back to the peptide chain, creating kinks in protein structures. The bond that Cyclophilins wrench back and forth is resonance-stabilized, making this trick a bit harder than it looks.

Cyclophilins are a class of protein chaperone (proteins that assist the folding of other proteins) that catalyze the switching of proline peptide bonds from cis to trans (see above). Proline is the stiffest amino acid, and plays a major role in generating kinks, turns, and other rigid aspects of protein structure. So cyclophilins help loosen up such structures and get proteins back on track if they have gone down a tangled path. That seems to be the case with tau fibrils, which are an aggregated, crystallized conformation of fragments of a protein that in its normal state binds and stabilizes microtubules in neurons. Naturally, the accident of causing dementia in old age is, in evolutionary terms, a minor issue, as it takes place well after reproduction has occurred, allowing such evident genetic defects to persist in our population.

The authors diagram how the tau protein (red and gray) might get its turns loosened up by CyP40 (black), allowing the tight beta-sheet structure to dissolve.

The authors run through a series of experiments to show that one cyclophilin, CyP40, is particularly effective in dissolving tau fibrils, one form of protein aggregation seen in Alzheimer's. It dissolves them in the test tube, it dissolves them when delivered by a virus into the brains of mice, and it reduces the neuron damage and cognitive decline that occur in these mice, which are engineered to express extra human tau protein and exhibit a rapid dementia-like progression.

When CyP40 is added (red), tau fibers can be dissolved in the test tube. FKBP51 and -52 are related prolyl isomerase chaperones that lack this particular activity, serving as controls.

Engineered tau-expressing mice (red) showed improved cognitive performance after a virus carrying the CyP40 gene was injected into their brains. The Y-axis is the error rate in a water maze test performed at 3 months, which is relatively late in life for these mice.

It is an impressive and promising piece of work, to go from molecular structure to medically significant function, though it took 19 authors to get there. One notable aspect of this protein is that it does not require ATP- its mechanism is very efficient and the protein is relatively small- properties that are helpful when the targeted protein clump is outside of cells or in dying or dead cells. It is also part of a family of 41 such proline isomerase enzymes in humans, so others may be found that operate on the other major culprit in Alzheimer's, the amyloid beta protein. On the other hand, we already encode and make CyP40 and its relatives. Why are they not being turned on and expressed in our brains where and when they are so desperately needed? The authors are silent on that score, and have taken out a patent on the therapeutic possibilities of the exogenously supplied version.

  • Ignorance is strength.
  • And isn't going to prevent some illogical hypocrisy.
  • Or cruel policy
  • Where is it all headed? To a new and permanent feudalism / authoritarianism.
  • Much ignorance is due to corporate media, whose interests are not ... the public interest.
  • Fake logic finds a home at the Supreme Court.
  • What is to become of the next Afghanistan surge?
  • The Russia story goes deeper.

Saturday, April 22, 2017

How Speech Gets Computed, Maybe

Speech is just sounds, so why does it not sound like noise?

Ever wonder how we learn a language? How we make the transition from hearing how a totally foreign language sounds- like gibberish- to fluent understanding of the ideas flowing in, without even noticing their sound characteristics, or at best appreciating them as poetry or song- a curious mixture of meaning and sound-play? Something happens in our brains, but what? A recent paper discusses the computational activities that are going on under the hood.

Speech arrives as a continuous stream of sounds, which we split into discrete units (phonemes), which we then have to reconstruct as meaning-units while detecting their long-range interactions with larger units such as the word, sentence, and paragraph- that is, their grammar and higher-level meanings. Obviously, in light of the difficulty of getting machines to do this well, it is a challenging task. And while we have lots of neurons to throw at the problem, each one is very slow- a mere 20 milliseconds per step at best, compared to well under a nanosecond per operation for current computers, which are thus roughly 100 million times faster. It is not a very promising situation.
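Just to check the arithmetic (the per-operation time for a computer is my assumption, not a figure from the paper):

```python
# Back-of-the-envelope speed comparison: neuron vs. processor step times.
neuron_step = 20e-3    # seconds, roughly the fastest neural "update"
cpu_step = 0.2e-9      # seconds, assumed per-operation time for a modern processor
print(f"ratio: {neuron_step / cpu_step:.0e}")   # ~1e+08, i.e. about 100 million
```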

An elementary neural network, used by the authors. Higher levels operate more slowly, capturing broader meanings.

The authors focus on how pieces of the problem are separated, recombined, and solved as a stream of stimuli confronts a neural network like the brain. They propose that analogical thinking- in which we routinely schematize concepts and remember them relationally, linking sub-concepts, super-concepts, similar concepts, coordinated experiences, and so forth- may be a deep precursor, arising in non-language thought, that enabled the rapid evolution of language decoding and perception.

One aspect of their solution is that the information in the incoming stream is used but not rapidly discarded, as it would be in some computational approaches. Recognizing a phrase within the stream of speech is a big accomplishment, but the next word may alter its interpretation fundamentally, and require access to its component parts for that reinterpretation. So bits of the meaning hierarchy (words, phrases) are identified as they come in, but must also be kept, piece-wise, in memory for further Bayesian consideration and reconstruction. This is easy enough to say, but how would it be implemented?

From the paper, it is hard to tell, actually. The details are hidden in the program they use and in prior work. They use a neural network of several layers, originally devised to detect relational logic from unstructured inputs. The idea was to use rapid classifications and hierarchies from incoming data (specific propositional statements or facts) to set up analogies, which enable drawing very general relations among concepts and ideas. They argue that this is a good model for how our brains work. The kicker is that the same network is quite effective in understanding speech, relating (binding) meaning units that are nearby in time even while it also holds them distinct and generates higher-level logic from them. It even shows oscillations that match quite closely those seen in the active auditory cortex, which is known to entrain its oscillations to speech patterns. High activity in the 2 and 4 Hz bands seems to relate to the pace of speech.

"The basic idea is to encode the elements that are bound in lower layers of a hierarchy directly from the sequential input and then use slower dynamics to accumulate evidence for relations at higher levels of the hierarchy. This necessarily entails a memory of the ordinal relationships that, computationally, requires higher-level representations to integrate or bind lower-level representations over time—with more protracted activity. This temporal binding mandates an asynchrony of representation between hierarchical levels of representation in order to maintain distinct, separable representations despite binding."

This result and the surrounding work, cloudy though they are, also form an evolutionary argument: that speech recognition, being computationally very similar to other forms of analogical, relational, and hierarchical thinking, may have arisen rather easily from pre-existing capabilities. Neural networks are all the rage now, with Google among others drawing on them for phenomenal advances in speech and image recognition. So there seems to be a convergence from the technology and research sides suggesting that this principle of computation, so different from the silicon-based sequential and procedural processing paradigm, holds tremendous promise for understanding our brains as well as exceeding them.