Showing posts with label neuroscience. Show all posts

Sunday, July 7, 2024

Living on the Edge of Chaos

 Consciousness as a physical process, driven by thalamic-cortical communication.

Who is conscious? This is not only a medical and practical question, but a deep philosophical and scientific one. Over the last century, we have become increasingly comfortable assigning consciousness to other animals, in ever-wider circles of understanding and empathy. The complex lives of chimpanzees, as brought into our consciousness by Jane Goodall, are one example. Birds are increasingly appreciated for their intelligence and strategic maneuvering. Insects are another frontier, as we come to appreciate the complex communication and navigation strategies that honeybees, for one example, employ. Where does it end? Are bacteria conscious?

I would define consciousness as responsiveness crosschecked with a rapidly accessible model of the world. Responsiveness alone, such as in a thermostat, is not consciousness. But a Nest thermostat that checks the weather, knows its occupants' habits and schedules, and tracks prices for gas and electricity ... that might be a little conscious(!) But back to the chain of being- are jellyfish conscious? They are quite responsive, and have a few thousand networked neurons that might well be computing the expected conditions outside, so I would count them as borderline conscious. That is generally where I would put the dividing line, with plants, bacteria, and sponges as not conscious, and organisms with brains, even quite decentralized ones like octopuses and snails, as conscious, with jellyfish as slightly conscious. Consciousness is an infinitely graded condition, which in us reaches great heights of richness, but presumably starts at a very simple level.

These classifications imply that consciousness is a property of very primitive brains, and thus in our brains is likely to be driven by very primitive levels of its anatomy. And that brings us to the two articles for this week, about current theories and work centered on the thalamus as a driver of human consciousness. One paper relates some detailed experimental and modeled tests of information transfer that characterize a causal back-and-forth between the thalamus and the cortex, in which there is a frequency division between thalamic 10-20 Hz oscillations, whose information is then re-encoded and reflected in cortical oscillations of much higher frequency, at about 50-150 Hz. It also continues a long-running theme in the field, characterizing the edge-of-chaos nature of electrical activity in these thalamus-cortex communications as just the kind of signaling suited to consciousness, and tracks the variation of chaoticity during anesthesia, waking, psychedelic drug usage, and seizure. A background paper provides a general review of this field, showing that the thalamus seems to be a central orchestrator of both the activation and maintenance of consciousness, as well as its contents and form.

The thalamus is at the very center of the brain, and, as is typical for primitive parts of the brain, it packs a lot of molecular diversity, cell types, and anatomy into a very small region. More recently evolved areas of the brain tend to be more anatomically and molecularly uniform, while supporting more complexity at the computational level. The thalamus has about thirty "nuclei", or anatomical areas that have distinct patterns of connections and cell types. It is known to relay sensory signals to the cortex, and to be central to sleep control and alertness. It sits right over the brain stem, and has radiating connections out to, and back from, the cerebral cortex, suggestive of a hub-like role.

The thalamus is networked with recurrent connections all over the cortex.


The first paper claims, first, that electrical oscillations in the thalamus and the cortex are interestingly related. Mice, rats, and humans were all used as subjects and gave consistent results across the testing, supporting the idea that, at the very least, we think alike, even if what we think about may differ. What is encoded in the awake brain at 1-13 Hz in the thalamus appears in correlated form (that is, in a transformed way) at 50-100+ Hz in the cortex. The authors study the statistics of recordings from both areas to argue that there is directional information flow, marked not just by the anatomical connections, but by active, harmonic entrainment and recoding. But this relationship fails to occur in unconscious states, even though the thalamus at such times (in sleep, or under anesthesia) helps drive slow wave patterns directly in the cortex. This supports the idea that there is a summary function going on, where richer information processed in the cortex is reduced in dimension into the vaguer hunches and impressions that make up our conscious experience. Even when our feelings and impressions are very vivid, they probably do not do justice to the vast processing power operating in the cortex, which is mostly unconscious.
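This kind of relationship, where the phase of a slow rhythm organizes the amplitude of a faster one, is commonly quantified in the field as phase-amplitude coupling. Here is a minimal sketch of that idea, with made-up frequencies in the reported ranges- this is illustrative, not the paper's actual analysis pipeline:

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                              # sampling rate, Hz
t = np.arange(0, 5, 1 / fs)              # 5 seconds of simulated signal
slow = np.sin(2 * np.pi * 10 * t)        # 10 Hz "thalamic" rhythm
amp = 1 + 0.8 * slow                     # fast amplitude follows slow phase
fast = amp * np.sin(2 * np.pi * 80 * t)  # 80 Hz "cortical" rhythm

# Recover the coupling: bin the fast signal's envelope by the slow phase.
phase = np.angle(hilbert(slow))
envelope = np.abs(hilbert(fast))
bins = np.linspace(-np.pi, np.pi, 19)
mean_env = np.array([envelope[(phase >= lo) & (phase < hi)].mean()
                     for lo, hi in zip(bins[:-1], bins[1:])])

# A flat profile would mean no coupling; here the envelope peaks sharply
# around the slow rhythm's peak phase (phase 0).
```

Real analyses filter noisy recordings into frequency bands first and test the resulting profile against surrogate data, but the core logic- does high-frequency power vary systematically with low-frequency phase- is the same.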

Administering psychedelic drugs to their experimental animals caused greater information transfer backwards from the cortex to the thalamus, suggesting that the animals' conscious experience was being flooded. The authors also observe that these loops from the thalamus to cortex and back have an edge-of-chaos form. They are complex, ever-shifting, and information-rich. Chaos is an interesting concept in information science, quite distinct from noise. Chaos is deterministic, in that the same starting conditions always produce the same results. But chaos is non-linear: small changes to initial conditions can generate large swings in the output. Limited chaos is characteristic of living systems, which have feedback controls to limit the range of activity, but also high sensitivity to small inputs, new information, etc., and thus are highly responsive. Noise, in contrast, consists of random changes to a signal that are not reproducible and are not part of a control mechanism.
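The textbook example of this determinism-plus-sensitivity combination is the logistic map. A tiny sketch (not from the paper) makes both properties concrete:

```python
# Logistic map: a simple deterministic rule that is chaotic at r = 3.9.
def logistic_trajectory(x0, r=3.9, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)  # initial condition shifted by one millionth

# Deterministic: the same start always gives the exact same trajectory.
assert a == logistic_trajectory(0.200000)

# Sensitive: the two trajectories begin indistinguishable...
print(abs(a[1] - b[1]))   # still on the order of one millionth
# ...but after a few dozen steps they no longer track each other at all.
print(max(abs(x - y) for x, y in zip(a[30:], b[30:])))
```

Noise, by contrast, would make even the first assertion fail: rerunning a noisy process gives a different result each time, and carries no lawful dependence on the starting point at all.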

Unfortunately, I don't think the figures from this paper support its claims very well, or at least not clearly, so I won't show them. It is exploratory work, on the whole. At any rate, the authors are working from, and contributing to, a by now quite well-supported paradigm that puts the thalamus at the center of conscious experience. For example, direct electrical stimulation of the center of the thalamus can bring animals immediately up from unconsciousness induced by anesthesia. Conversely, stimulation at the same place with a different electrical frequency (10 Hz, rather than 50 Hz) causes immediate freezing and vacancy of expression in the animal, suggesting interference with consciousness. Further, the thalamus is known to be the place that gates what sensory data enters consciousness, based on a long history of attention, binocular rivalry, and blindsight experiments.

A schematic of how stimulation of the thalamus (in the middle) interacts with the overlying cortex. CL stands for the central lateral nucleus of the thalamus, where these stimulation experiments were targeted. The Greek letters alpha and gamma stand for different frequency bands of neural oscillation.

So, from both anatomical perspectives and functional ones, the thalamus appears at the center of conscious experience. This is a field that is not going to be revolutionized by a lightning insight or a new equation. It is looking very much like a matter of normal science slowly accumulating ever-more refined observations, simulations, better technologies, and theories that gain, piece by piece, on this most curious of mysteries.


Sunday, July 30, 2023

To Sleep- Perchance to Inactivate OX2R

The perils of developing sleeping, or anti-sleeping, drugs.

Sleep- the elixir of rest and repose. While we know of many good things that happen during sleep- the consolidation of memories, the cardiovascular rest, the hormonal and immune resetting, the slow waves and glymphatic cleansing of the brain- we don't know yet why it is absolutely essential, and lethal if repeatedly denied. Civilized life tends to damage our sleep habits, given artificial light and the endless distractions we have devised, leading to chronic sleeplessness and a spiral of narcotic drug consumption. Some conditions and mutations, like narcolepsy, have offered clues about how sleep is regulated, which has led to new treatments, though to be honest, good sleep hygiene is by far the best remedy.

Genetic narcolepsy was found to be due to mutations in the second receptor of the hormone orexin (OX2R), or to auto-immune conditions that kill off a specialized set of neurons in the hypothalamus- a basal part of the brain that sits just over the brain stem. This region normally has ~50,000 neurons that secrete orexin (which itself comes in two kinds, 1 and 2) and project to areas all over the brain, especially basal areas like the basal forebrain and amygdala, to regulate not just sleep but feeding, mood, reward, memory, and learning. Like any hormone receptor, the orexin receptors can be approached in two ways- by turning them on (agonist) or by turning them off (antagonist). Antagonist drugs were developed that turn off both orexin receptors, and thus promote sleep. The first was named suvorexant, using the "orex" and "ant" lexical elements to mark its function, as is now standard for generic drug names.

This drug is moderately effective, and is a true sleep enhancer, promoting falling asleep, restful sleep, and longer sleep, unlike some other sleep aids. Suvorexant antagonizes both receptors, but the researchers knew that only the deletion of OX2R, not OX1R (in dogs, mice, and other animals), generates narcolepsy, so they developed a drug specific to OX2R alone. But the result was that it was less effective. It turned out that binding and turning off OX1R was helpful to sleep promotion, and there were no particularly bad side effects from binding both receptors, despite the wide-ranging activities they appear to have. So while the trial of Merck's MK-1064 was successful, it was not better than their existing two-receptor drug, and its development was shelved. Still, we learned something intriguing about this system. While all animals have some kind of orexin, only mammals have the second orexin family member and receptor, suggesting that some interesting, but not complete, bifurcation happened in the functions of this system over evolution.

What got me interested in this topic was a brief article from yet another drug company, Takeda, which was testing an agonist of the orexin receptors in an effort to treat narcolepsy. The company created TAK-994, which binds OX2R specifically and showed a lot of promise in animal trials. It is an orally taken pill, in contrast to the existing treatment, danavorexton, which must be injected. In the human trial, it was remarkably effective, virtually eliminating cataplectic/narcoleptic episodes. But there was a problem- it caused enough liver toxicity that the trial was stopped and the drug shelved. Presumably, the company will try again, making variants of this compound that retain the affinity and activity but not the toxicity.

This brings up an underappreciated peril in drug design- where drugs end up. Drugs don't just go into our systems, hopefully slipping through the incredibly difficult gauntlet of the digestive system; they all need to go somewhere after they have done their jobs as well. Some drugs are hydrophilic enough, and generally inert enough, that they partition into the urine by dilution and don't undergo any further metabolic events. Most, however, are recognized by our internal detoxification systems as foreign (that is, hydrophobic, but not recognizable as the fats/lipids that are usual nutrients), and are derivatized by liver enzymes and sent out in the bile.

Structure of TAK-994, which treats narcolepsy, but at the cost of liver dysfunction.

As you can see from the chemical structure above, TAK-994 is not a normal compound that might be encountered in the body, or as food. The amino sulfate is quite unusual, and the fluorines sprinkled about are totally unnatural. This would be a red-flag substance, like the various PFAS materials we hear about in the news. The rings and fluorines create a relatively hydrophobic substance, which would need to be modified so that it can be routed out of the body. That is what a key enzyme of the liver, CYP3A4, does. It (and the many family members that have arisen over evolutionary time) oxidizes all manner of foreign hydrophobic compounds, using a heme cofactor to handle the oxygen. It can add -OH groups (hydroxylation), break open double bonds (epoxidation), and break open phenol ring structures (aromatic oxidation).

But then what? Evolution has matched most of the toxic substances we encounter in nature with appropriate enzymes and routes out of the body. But the novel compounds we are making with modern chemistry are something else altogether. Some drugs are turned on by this process, waiting till they get to the liver to attain their active form. Others, apparently such as this one, are made into toxic compounds (as yet unidentified) by this process, such that the liver is damaged. That is why animal studies and safety trials are so important. This drug binds its target receptor and does what it is supposed to do, but that isn't enough to make a good drug.

 

Saturday, May 6, 2023

The Development of Metamorphosis

Adulting as a fly involves a lot of re-organization.

Humans undergo a slight metamorphosis during adolescence. Imagine undergoing pupation like insects do and coming out with a totally new body, with wings! Well, Kafka did, and it wasn't very pleasant. But insects do it all the time, and have been doing it for hundreds of millions of years, taking to the air and dominating the biosphere. What goes on during metamorphosis, how complete is its refashioning of the body, and how did it evolve? A recent paper (review) considered in detail how the brains of insects change during metamorphosis, finding a curious blend of birth, destruction, and reprogramming among their neurons.

Time is on the Y axis, and the emergence of later, more advanced types of insects is on the X axis. This shows the progressive elaboration of non-metamorphosis (ametabolous), partially metamorphosing (hemimetabolous), and fully metamorphosing (holometabolous) forms. Dragonflies are only partially metamorphosing in this scheme, though their adult forms are often highly different from their larval (nymph) form.


Insects evolved from crustaceans, and took to land as small silverfish-like creatures with exoskeletons, roughly 450 million years ago. Over 100 million years, they developed the process of metamorphosis as a way to preserve the benefits of their original lifestyle for early development, in moist locations, while conquering the air and distance as adults. Early insect types are termed ametabolous, meaning that they have no metamorphosis at all, developing straight from eggs to an adult-style form. These go through several molts to accommodate growth, but don't redesign their bodies. Next came hemimetabolous development, exemplified by grasshoppers and cockroaches, as well as dragonflies, which significantly refashion themselves during the last molt, gaining wings. In the nymph stage, those wings are carried around as small patches of flat embryonic tissue, which then suddenly grow out at the last molt. Dragonflies are extreme, and most hemimetabolous insects don't undergo such dramatic change. Last came holometabolous development, which involves pupation and a total redesign of the body that can go from a caterpillar to a butterfly.

The benefit of having wings is pretty clear- it allows huge increases in range for feeding and mating. Dragonflies are premier flying predators. But as a larva, wallowing in fruit juice or leaf sap or underwater, as dragonflies are, wings and long legs would be a hindrance. This conundrum led to the innovation of metamorphosis, based on the already somewhat dramatic practice of molting off the exoskeleton periodically. If one can grow a whole new skeleton, why not put wings on it, or legs? And metamorphosis has been tremendously successful, used by over 98% of insect species.

The adult insect tissues do not come from nowhere- they are set up as arrested embryonic tissues called imaginal discs. These are small patches that exist in the larva at specific positions. During pupation, while much of the rest of the body refashions itself, imaginal discs rapidly develop into future tissues like wings, legs, genitalia, antennas, and new mouth parts. These discs have a fascinating internal structure that prefigures the future organ. The leg disc is concentrically arranged with the more distant future parts (toes) at its center. Transplanting a disc from one insect to another or one place to another doesn't change its trajectory- it will still become a leg wherever it is put. So it is apparent that the larval stage is an intermediate stage of organismal development, where a bunch of adult features are primed but put on hold, while a simpler and much more primitive larval body plan is executed to accommodate its role in early growth and its niche in tight, moist, hidden places.

The new paper focuses on the brain, which larvae need as well as adults. So the question is- how does the one brain develop from the other? Is the larval brain thrown away? The answer is no, the brain is not thrown away at all, but undergoes its own quite dramatic metamorphosis. The adult brain is substantially bigger, so many neurons are added. A few neurons are also killed off. But most of the larval neurons are reprogrammed, trimmed back and regrown out to new regions to take on new functions.

In this figure, the neurons are named as mushroom body outgoing neuron (MBON) or dopaminergic neuron (DAN, also MBIN for incoming mushroom body neuron), mushroom body extrinsic neuron to calyx (MBE-CA), and mushroom body protocerebral posterior lateral 1 (PPL1). Note how MBON-c1 is totally reprogrammed to serve new locations in the adult, MBON-d1 changes its projections substantially, as do the (teal) incoming neurons, and MBON-12 was not operational in the larval stage at all.

The mushroom body, the brain area these authors focus on, is situated below the antennas and mediates smell reception, learning, and memory. Fly biologists regard it as analogous to our cortex- the most flexible area of the brain. Larvae don't have antennas, so their smell/taste reception is a lot more primitive. The mushroom body in Drosophila has about a hundred neurons at first, and continuously adds neurons over larval life, with a big push during pupation, ending up with ~2200 neurons in adults. This growing network has to wire into the antennas as they develop, for instance.

The authors find that, for instance, no direct connections between input and output neurons of the mushroom body (MBIN and MBON, respectively) survive from larval to adult stages. Thus there can be no simple memories of this kind preserved between these life stages. While there are some signs of memory retention for a few things in flies, for the most part the slate is wiped clean. 

"These MBONs [making feedback connections] are more highly interconnected in their adult configuration compared to their larval one: their adult configuration shows 13 connections (31% of possible connections), while their larval configuration has only 7 (17%). Importantly, only three of these connections (7%) are present in both larva and adult. This percentage is similar to the 5% predicted if the two stages were wired up independently at their respective frequencies."
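As a sanity check on the quoted numbers: if (my assumption) the denominator is 42 possible directed connections- say, 7 feedback MBONs each able to contact the 6 others- then the percentages, and the independence prediction, all line up:

```python
# Hypothetical denominator: 7 neurons x 6 possible targets each = 42
# directed connections (self-connections excluded). The raw counts are
# from the quoted passage.
possible = 7 * 6
adult, larval, shared = 13, 7, 3

print(round(100 * adult / possible))   # 31% of possible connections (adult)
print(round(100 * larval / possible))  # 17% (larval)
print(round(100 * shared / possible))  # 7% present at both stages

# If the two stages wired up independently, the expected overlap is the
# product of the two connection frequencies:
print(round(100 * (adult / possible) * (larval / possible)))  # ~5%
```

The observed 7% overlap sitting so close to the 5% chance level is the point: larval wiring carries essentially no information about adult wiring.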


Interestingly, no neuron changed its type- that is, which neurotransmitter it uses to communicate. So, while pruning and rewiring were pervasive, the cells did not fundamentally change their stripes. All this is driven by the hormonal system- juvenile hormone, which blocks adult development, and ecdysone, which drives molting and, in the absence of juvenile hormone, pupation- which in turn drives a program of transcription factors that direct the genes needed for development. While a great deal is known about neuronal pathfinding and development, this paper doesn't comment on those downstream events- how selected neurons are pruned, turned around, and induced to branch out in totally new directions, for instance. That will be the topic of future work.


  • Corrupt business practices. Why is this lawful?
  • Why such easy bankruptcy for corporations, but not for poor countries?
  • Watch the world's mesmerizing shipping.
  • Oh, you want that? Let me jack up the price for you.
  • What transgender is like.
  • "China has arguably been the biggest beneficiary of the U.S. security system in Asia, which ensured the regional stability that made possible the income-boosting flows of trade and investment that propelled the country’s economic miracle. Today, however, General Secretary of the Chinese Communist Party Xi Jinping claims that China’s model of modernization is an alternative to “Westernization,” not a prime example of its benefits."

Saturday, March 18, 2023

The Eye is the Window to the Brain

Alpha oscillations of the brain prefigure the saccades by which our eyes move as we read.

Reading is a complicated activity. We scan symbols on a page, focusing on some in turn, while scanning along for the next one. Data goes to the brain not in full images, but in the complex coding of differences from moment to moment. Simultaneously, various levels of processing in the brain decode the dark and light spots, the letter forms, the word chunks, the phrases, and on up to the ideas being conveyed.

While our brain is not rigidly clocked like a computer, (where each step of computation happens in sync with the master clock), it does have dynamic oscillations at several different frequencies and ranging over variable regions and coalitions of neurons that organize its processing. And the eye is really a part of that same central nervous system- an outpost that conveys so much sensitive information, both in and out.

We take in visual scenes by jerks, or saccades, using our peripheral vision to orient generally and detect noteworthy portions, then bringing our high-acuity fovea to focus on them. The eye moves about four times per second, an interval used to process the current scene and to plan where to shift next. Alpha oscillations in the brain (about 10 per second), which are inhibitory, are known to (anti-)correlate with motor control of the saccade period. The visual sensory system resets its oscillations with each shift in scene, so it is keyed to saccades in a receiving sense. Since vision only happens during the rest/focal periods between saccades, it is helpful, conceptually, to coordinate the two processes so that the visual processing system is maximally receptive (via its oscillatory phase) at the same moment that the eye comes to rest after a saccade and sends it a new scene. Conversely, the visual sensory system would presumably tell the motor system when it was done processing the last unit, to gate a shift to the next scene.

A recent paper extended this work to ask how brain oscillations relate to the specific visual task of reading, including texts that are more or less difficult to comprehend. The researchers used the non-invasive method of magnetoencephalography to visualize electrical activity within the brains of people reading. The duration of saccades was very uniform (and short), while the time spent paused on each focal point (word) varied slightly with how difficult the word was to parse. It is worth noting that no evidence supports lexical processing of words in peripheral vision- this only happens with foveal/focused images.

Subjects spent more time focused on rare/difficult words than on easy words, during a free reading exercise (C). On the other hand, the duration of saccades to such words was unchanged (D).

In the authors' main finding, alpha oscillations phase-locked with the saccades as the person shifted from word to word, pausing to view each one. These oscillations tracked the pausing more closely when shifting towards more difficult words than towards simple words. And these peaks of phase locking happened anatomically in Brodmann area 7, a motor-associated area that mediates between the visual system and motor control of the eye. Presumably this results from communication from the visual processing area to the visual motor area, just next door. They also found that the phase locking was strongest for the start of saccades, not their end, when the scene comes back into focus. This may simply be a timing issue: since there are lags at all points in the visual processing system, and since the saccade duration is relatively fixed, this interval may be appropriate to keep the motor and sensory areas in effective synchronization.

Alpha oscillation locks to some degree with initiation of saccades, and does so more strongly when heading to difficult words, rather than to easy words. Figure B shows the difference in alpha power between the easy and difficult word target. How can this be? 

So while higher frequency (gamma) oscillations participate in sensory processing of vision, this lower alpha frequency is dominant in the area that controls eye movement, in keeping with muscle control mechanisms more generally. But it does raise the question of why they found a signal (phase locking for the initiation of a saccade) for the difficulty of the upcoming word, before it was actually lexically processed. The peripheral visual system is evidently making some rough guess, perhaps by size or some other property, of the difficulty of words, prior to fully decoding them, and it will be interesting to learn where this analysis is done.


  • New uses for AI in medicare advantage.
  • Some problems with environmental review.
  • No-compete "agreements" are no such thing, and worthless anyhow.
  • We wanna be free.

Saturday, February 18, 2023

Everything is Alive, but the Gods are all Dead

Barbara Ehrenreich's memoir and theological ruminations in "Living with a Wild God".

It turns out that everyone is a seeker. Somewhere there must be something or someone to tell us the meaning of life- something we don't have to manufacture with our own hands, but rather can go into a store and buy. Atheists are just as much seekers as anyone else, only they never find anything worth buying. The late writer Barbara Ehrenreich was such an atheist, as well as a remarkable writer and intellectual who wrote a memoir of her formation. Unusually and fruitfully, it focuses on those intense early and teen years when we are reaching out with both hands to seize the world- a world that is maddeningly just beyond our grasp, full of secrets and codes it takes a lifetime and more to understand. Religion is the ultimate hidden secret, the greatest mystery which has been solved in countless ways, each of them conflicting and confounding.

Ehrenreich's tale is more memoir than theology, taking us on a tour through a dysfunctional childhood with alcoholic parents and tough love. A story of growth, striking out into the world, and sad coming-to-terms with the parents who each die tragically. But it also turns on a pattern of mystical experiences that she keeps having, throughout her adult life, which she ultimately diagnoses as dissociative states where she zones out and has a sort of psychedelic communion with the world.

"Something peeled off the visible world, taking with it all meaning, inference, association, labels, and words. I was looking at a tree, and if anyone had asked, that's what I would have said I was doing, but the word "tree" was gone, along with all the notions of tree-ness that had accumulated in the last dozen years or so since I had acquired language. Was it a place that was suddenly revealed to me? Or was it a substance- the indivisible, elemental material out of which the entire known and agreed-upon world arises as a fantastic elaboration? I don't know, because this substance, this residue, was stolidly, imperturbably mute. The interesting thing, some might say alarming, was that when you take away all the human attributions- the words, the names of species, the wisps of remembered tree-related poetry, the fables of photosynthesis and capillary action- that when you take all this away, there is still something left."

This is not very hard to understand as a neurological phenomenon of some kind of transient disconnection of just the kind of brain areas she mentions- those that do all the labeling, name-calling, and boxing-in. In schizophrenia, it runs to the pathological, but in Ehrenreich's case, she does not regard it as pathological at all, as it is always quite brief. But obviously, the emotional impact and weirdness of the experience- that is something else altogether, and something that humans have been inducing with drugs, and puzzling over, forever. 


As a memoir, the book is very engaging. As a theological quest, however, it doesn't work as well, because the mystical experience is, as noted above, resolutely meaningless. It neither compels Ehrenreich to take up Christianity, as after a Pauline conversion, nor any other faith or belief system. It offers a peek behind the curtain, but, stripped of meaning as this view is, Ehrenreich is perhaps too skeptical or bereft of imagination to give it another, whether of her own or one available from the conventional array of sects and religions. So while the experiences are doubtless mystical, one cannot call them religious, let alone god-given, because Ehrenreich hasn't interpreted them that way. This hearkens back to the writings of William James, who declined to assign general significance to mystical experiences, while freely admitting their momentous and convincing nature to those who experienced them.

Only in one brief section (which had clearly been originally destined for an entirely different book) does she offer a more interesting and insightful analysis. There, Ehrenreich notes that the history of religion can be understood as a progressive bloodbath of deicide. At first, everything is alive and sacred, to an animist mind. Every leaf and grain of sand holds wonders. Every stream and cloud is divine. This is probably our natural state, which a great deal of culture has been required to stamp out of us. Next is a hunting kind of religion, where deities are concentrated in the economic objects (and social patterns) of the tribe- the prey animals, the great plants that are eaten, and perhaps the more striking natural phenomena and powerful beasts. But by the time of paganism, the pantheon is cut down still more and tamed into a domestic household, with its soap-opera dramas and an increasingly tight focus on the major gods- the head of the family, as it were. 

Monotheism comes next, doing away with all the dedicated gods of the ocean, of medicine, of amor and war, etc., cutting the cast down to one. One, which is inflated to absurd proportions with all-goodness, all-power, all-knowledge, etc. A final and terrifying authoritarianism, probably patterned on the primitive royal state. This is the phase when the natural world is left in the lurch, as an undeified and unprotected zone where human economic greed can run rampant, safe in the belief that the one god is focused entirely on man's doings, whether for good or for ill, not on that of any other creature or feature of the natural world. A phase when even animals, who are so patently conscious, can, through the narcissism of primitive science and egoistic religion, be deemed mere mechanisms without feeling. This process doesn't even touch on the intercultural deicide committed by colonialism and conquest.

This in turn invites the last deicide- that by rational people who toss aside this now-cartoonish super-god, and return to a simpler reverence for the world as we naturally respond to it, without carting in a lot of social power-and-drama baggage. It is the cultural phase we are in right now, but the transition is painfully slow, uneven, and drawn-out. For Ehrenreich, there are plenty of signs- in the non-linear chemical phenomena of her undergraduate research, in the liveliness of quantum physics even into the non-empty vacuum, in the animals who populate our world and are perhaps the alien consciousnesses that we should be seeking in place of the hunt through outer space, and in our natural delight in, and dreams about, nature at large. So she ends the book as atheist as ever, but hinting that perhaps the liveliness of the universe around us holds some message that we are not the only thinking and sentient beings.

"Ah, you say, this is all in your mind. And you are right to be skeptical; I expect no less. It is in my mind, which I have acknowledged from the beginning is a less than perfect instrument. but this is what appears to be the purpose of my mind, and no doubt yours as well, its designed function beyond all the mundane calculations: to condense all the chaos and mystery of the world into a palpable Other or Others, not necessarily because we love it, and certainly not out of any intention to "worship" it. But because ultimately we may have no choice in the matter. I have the impression, growing out of the experiences chronicled here, that it may be seeking us out." 

Thus the book ends, and I find it a rather poor ending. It feels ripped from an X-Files episode, highly suggestive and playing into all the Deepak Chopra-style mystical tropes of cosmic consciousness. That is, if this passage really means much at all. Anyhow, the rest of the trip is well worth it, and it is appropriate to return to the issue of the mystical experience, which is here handled with such judicious care and restraint. Where imagination could have run rampant, the coolly scientific view (Ehrenreich had a doctorate in biology) is that the experiences she had, while fascinating and possibly book-proposal-worthy, did not force a religious interpretation. This is radically unlike the treatment of such matters in countless other hands, needless to say. Perhaps our normal consciousness should not be automatically valued less than rarer and more esoteric states, just because it is common, or because it is even-tempered.


  • God would like us to use "they".
  • If you are interested in early Christianity, Gnosticism is a good place to start.
  • Green is still an uphill battle.

Saturday, February 11, 2023

A Gene is Born

Yes, genes do develop out of nothing.

The "intelligent" design movement has long made a fetish of information. As science has found, life relies on encoded information for its genetic inheritance and the reliable expression of its physical manifestations. The ID proposition is, quite simply, that all this information could not have developed out of a mindless process, but only through "design" by a conscious being. Evidently, Darwinian natural selection still sticks on some people's craw. Michael Behe even developed a pseudo-mathematical theory about how, yes, genes could be copied mindlessly, but new genes could never be conjured out of nothing, due to ... information.

My understanding of information science equates information to loss of entropy, and expresses a minimal cost of the energy needed to create, compute, or transmit information- that is, the Shannon limits. A quite different concept comes from physics, in the form of information conservation in places like black holes. This form of information is really the implicit information of the wave functions and states of physical matter, not anything encoded or transmitted in the sense of biology or communication. Physical state information may be indestructible (and un-create-able) on this principle, but coded information is an entirely different matter.
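The Shannon side of this is concrete enough to compute. Below is a minimal sketch (my own illustration, not drawn from any paper discussed here) of the entropy of a coded source, along with the Landauer bound on the energy needed to erase one bit:

```python
import math

def shannon_entropy(probs):
    """Entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries one bit per flip; a biased coin carries less.
fair = shannon_entropy([0.5, 0.5])    # 1.0 bit
biased = shannon_entropy([0.9, 0.1])  # ~0.47 bits

# Landauer limit: minimum energy to erase one bit at temperature T.
k_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_energy(T=300.0):
    """Energy cost (joules) of erasing one bit at temperature T."""
    return k_B * T * math.log(2)  # ~2.9e-21 J at room temperature
```

Note that these limits bound the thermodynamic cost of handling coded information; they say nothing about whether a mind is needed to create it.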

In a parody of scientific discussion, intelligent design proponents are hosted by the once-respectable Hoover Institution for a discussion about, well, god.

So the fecundity that life shows in creating new genes out of existing genes, (duplications), and even making whole-chromosome or whole-genome duplications, has long been a problem for creationists. Energetically, it is easy to explain as a mere side-effect of having plenty of energy to work with, combined with error-prone methods of replication. But creationistically, god must come into play somewhere, right? Perhaps it comes into play in the creation of really new genes, like those that arise from nothing, such as at the origin of life?

A recent paper discussed genes in humans that have over our recent evolutionary history arisen from essentially nothing. It drew on prior work in yeast that elegantly laid out a spectrum or life cycle of genes, from birth to death. It turns out that there is an active literature on the birth of genes, which shows that, just like duplication processes, it is entirely natural for genes to develop out of humble, junky precursors. And no information theory needs to be wheeled in to show that this is possible.

Yeast provides the tools to study novel genes in some detail, with rich genetics and lots of sequenced relatives, near and far. Here is portrayed a general life cycle of a gene, from birth out of non-gene DNA sequences (left), through the key step of translation, and on to becoming a subject of normal natural selection ("Exposed") for some function. But if that function decays or is replaced, the gene may also die by mutation, becoming a pseudogene, and eventually just some more genomic junk.

The death of genes is quite well understood. The databases are full of "pseudogenes" that are very similar to active genes, but are disabled for some reason, such as a truncation somewhere or loss of reading frame due to a point mutation or splicing mutation. Their annotation status is dynamic, as they are sometimes later found to be active after all, under obscure conditions or to some low level. Our genomes are also full of transposons and retroviruses that have died in this fashion, by mutation.

Duplications are also well-understood, some of which have over evolutionary time given rise to huge families of related proteins, such as kinases, odorant receptors, or zinc-finger transcription factors. But the hunt for genes that have developed out of non-gene materials is a relatively new area, due to its technical difficulty. Genome annotators were originally content to pay attention to genes that coded for a hundred amino acids or more, and ignore everything else. That became untenable when a huge variety of non-coding RNAs came on the scene. Also, occasional cases of very small genes that encoded proteins came up from work that found them by their functional effects.

As genome annotation progressed, it became apparent that, while a huge proportion of genes are conserved between species (or are members of families of related proteins), other genes had no relatives at all, and would never yield information by this highly convenient route of computer analysis. They are orphans, and must either have been so heavily mutated since divergence that their relationships have become unrecognizable, or have arisen recently (that is, since their evolutionary divergence from the related species used for sequence comparison) from novel sources that provide no clue about their function. Finer analysis of ever more closely related species is often informative in these cases.

The recent paper on human novel genes makes the finer point that splicing and export from the nucleus constitute the major threshold between junk genes and "real" genes. Once an RNA gets out of the nucleus, any reading frame it may have will be translated and exposed to selection. So the acquisition of splicing signals is a key step, in their argument, to get a randomly expressed bit of RNA over the threshold.

The paper provided a remarkable example of novel gene origination. It uncovered a series of 74 human genes that are not shared with macaque, (which they took as their reference), have a clear path of origin from non-coding precursors, and some of which have significant biological effects on human development. They point to a gradual process whereby promiscuous transcription from the genome gave rise by chance to RNAs that acquired splice sites, which piped them into the nuclear export machinery and out to the cytoplasm. Once there, they could be translated, over whatever small coding region they might possess, after which selection could operate on their small protein products. A few appear to have gained enough function to encourage expansion of the coding region, resulting in growth of the gene and entrenchment as part of the developmental program.
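The logic that any exported RNA exposes whatever reading frame it happens to carry can be sketched with a toy ORF scanner. This is purely illustrative (real translation involves start-codon context, splicing, and much else), but it shows how little is needed for a random transcript to offer a candidate protein:

```python
def find_orfs(rna, min_codons=10):
    """Toy open-reading-frame scan of an RNA string in all three frames.
    Returns (frame, start_index, codon_count) for each AUG..stop run of
    at least min_codons. Real gene finding is far more involved."""
    stops = {"UAA", "UAG", "UGA"}
    found = []
    for frame in range(3):
        i = frame
        while i + 3 <= len(rna):
            if rna[i:i+3] == "AUG":
                j = i + 3
                while j + 3 <= len(rna) and rna[j:j+3] not in stops:
                    j += 3
                if j + 3 <= len(rna):  # a stop codon terminates the frame
                    n = (j - i) // 3   # codons from AUG up to the stop
                    if n >= min_codons:
                        found.append((frame, i, n))
                i = j + 3
            else:
                i += 3
    return found

# A 13-codon ORF buried in otherwise junk-like sequence is still found:
junk_with_orf = "GG" + "AUG" + "GCU" * 12 + "UAA"
find_orfs(junk_with_orf)  # [(2, 2, 13)]
```

Once such a transcript reaches the cytoplasm, its small product is exposed to selection, which is the threshold step the paper emphasizes.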

Brain "organoids" grown from genetically manipulated human stem cells. On left is the control, in middle is where ENSG00000205704 was deleted, and on the right is where ENSG00000205704 is over-expressed. The result is very striking, as an evolutionarily momentous effect of a tiny and novel gene.

One gene, "ENSG00000205704", is shown as an example. Where in macaque the genomic region corresponding to this gene encodes at best a non-coding RNA that is not exported from the nucleus, in humans it encodes a spliced and exported mRNA coding for a protein of 107 amino acids. In humans it is also highly expressed in the brain, and when the researchers deleted it in embryonic stem cells and used those cells to grow "organoids", or clumps of brain-like tissue, growth was significantly reduced by the knockout, and increased by over-expression of this gene. What this gene does is completely unknown. Its sequence, not being related to anything else in human or other species, gives no clue. But it is a classic example of a gene that arose from nothing to have what looks like a significant effect on human evolution. Does that somehow violate physics or math? Nothing could be farther from the truth.

  • Will nuclear power get there?
  • What the heck happened to Amazon shopping?

Saturday, December 24, 2022

Brain Waves: Gaining Coherence

Current thinking about communication in the brain: the Communication Through Coherence framework.

Eyes are windows to the soul. They are visible outposts of the brain that convey outwards what we are thinking, as they gather in the riches of our visible surroundings. One of their less appreciated characteristics is that they flit from place to place as we observe a scene, never resting in one spot. These movements are called saccades, and they represent an involuntary redirection of attention all over a visual scene that we are studying, in order to gather high resolution impressions from places of interest. Saccades happen at a variety of rates, with intervals centered around 0.1 second. And just as the raster scanning of a TV or monitor can tell us something about how it or its signal works, the eye saccade is thought, by the theory presented below, to reflect a theta rhythm in the brain that is responsible for resetting attention- here, in the visual system.

That theory is Communication Through Coherence (CTC), which appears to be the dominant theory of how neural oscillations (aka brain waves) function. (This post is part of what seems like a yearly series of updates on the progress in neuroscience in deciphering what brain waves do, and how the brain works generally.) This paper appeared in 2014, but it expressed ideas that were floating around for a long time, and has since been taken up by numerous other groups that provide empirical and modeling support. A recent paper (titled "Phase-locking patterns underlying effective communication in exact firing rate models of neural networks") offers full-throated support from a computer modeling perspective, for instance. But I would like to go back and explore the details of the theory itself.

The communication part of the theory is how thoughts get communicated within the brain. Communication and processing are simultaneous in the brain, since it is physically arranged to connect processing chains (such as visual processing) together as cells that communicate consecutively, for example creating increasingly abstract representations during sensory processing. While the anatomy of the brain is pretty well set in a static way, it is the dynamic communication among cells and regions of the brain that generates our unconscious and conscious mental lives. Not all parts can be talking at the same time- that would be chaos. So there must be some way to control mental activity to manageable levels of communication. That is where coherence comes in. The theory (and a great deal of observation) posits that gamma waves in the brain, which run from about 30 Hz upwards all the way to 200 Hz, link together neurons and larger assemblages / regions into transient co-firing coalitions that send thoughts from one place to another, precisely and rapidly, insulated from the noise of other inputs. This is best studied in the visual system which has a reasonably well-understood and regimented processing system that progresses from V1 through V4 levels of increasing visual field size and abstraction, and out to cortical areas of cognition.

The basis of brain waves is that neural firing is rapid, and is followed by a refractory period of a few milliseconds during which the neuron is resistant to further input. Then it can fire again, and will do so if there are enough inputs to its dendrites. There are also inhibitory cells all over the neural system, damping the system down so that it does not run to epileptic extremes of universal activation. So if one set of cells entrains the next set in a rhythmic firing pattern, those cells tend to stay entrained for a while, and then get reset by way of slower oscillations, such as the theta rhythm, which runs at about 4-8 Hz. Those entrained cells are, during their refractory periods, also resistant to inputs that are not synchronized, essentially blocking out noise. In this way trains of signals can selectively travel up from lower processing levels to higher ones, over large distances and over multiple cell connections in the brain.

An interesting part of the theory is that frequency is very important. There is a big difference between slower and faster entraining gamma rhythms. Ones that run slower than the going rate do not get traction and die out, while those that run faster hit the optimal post-refractory excitable state of the receiving cells, and tend to gain traction in entraining them downstream. This sets up a hierarchy where increasing salience, whether established through intrinsic inputs, or through top-down attention, can be encoded in higher, stronger gamma frequencies, winning this race to entrain downstream cells. This explains to some degree why EEG patterns of the brain are so busy and chaotic at the gamma wave level. There are always competing processes going on, with coalitions forming and reforming in various frequencies of this wave, chasing their tails as they compete for salience.
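How tightly two rhythms are "entrained" is commonly quantified in this literature with coherence measures such as the phase-locking value (PLV). Here is a minimal sketch of that measure (generic, not the paper's own analysis): two signals holding a fixed phase relationship score near 1, while rhythms at unrelated frequencies score near 0.

```python
import cmath
import math

def plv(phases_a, phases_b):
    """Phase-locking value between two phase time series:
    1.0 for a constant phase offset, near 0 for unrelated rhythms."""
    n = len(phases_a)
    return abs(sum(cmath.exp(1j * (a - b))
                   for a, b in zip(phases_a, phases_b)) / n)

dt = 0.001                                      # 1 ms samples, 2 s total
t = [i * dt for i in range(2000)]
gamma_40 = [2 * math.pi * 40 * x for x in t]    # 40 Hz "sender" rhythm
entrained = [p + 0.8 for p in gamma_40]         # lagged but locked receiver
unrelated = [2 * math.pi * 53 * x for x in t]   # independent 53 Hz rhythm

plv(gamma_40, entrained)  # -> 1.0 (coherent pair)
plv(gamma_40, unrelated)  # -> ~0  (incoherent pair)
```

Note that the locked receiver scores 1.0 despite its phase lag: coherence requires a consistent phase relationship, not zero delay, which fits the one-cycle-offset scheme described below for bidirectional streams.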

There are often bidirectional processes in the brain, where downstream units talk back to upstream ones. While originally imagined to be bidirectionally entrained in the same gamma rhythm, the CTC theory now recognizes that the distance / lag in signaling would make this impossible, and separates them as distinct streams, observing that the cellular targets of backwards streams are typically not identical to those generating the forward streams. So a one-cycle offset, with a few intermediate cells, would account for this type of interaction, still in gamma rhythm.

Lastly, attention remains an important focus of this theory, so to speak. How are inputs chosen, if not by their intrinsic salience, such as flashes in a visual scene? How does a top-down, intentional search of a visual scene, or a desire to remember an event, work? CTC posits that two other wave patterns are operative. First is the theta rhythm of about 4-8 Hz, which is slow enough to encompass many gamma cycles and offer a reset to the system, overpowering other waves with its inhibitory phase. The idea is that salience needs to be re-established freshly each theta cycle, (such as in eye saccades), with maybe a dozen gamma cycles within each theta that can grow and entrain the necessary higher level processing. Note how this agrees with our internal sense of thoughts flowing and flitting about, with our attention rapidly darting from one thing to the next.

"The experimental evidence presented and the considerations discussed so far suggest that top-down attentional influences are mediated by beta-band synchronization, that the selective communication of the attended stimulus is implemented by gamma-band synchronization, and that gamma is rhythmically reset by a 4 Hz theta rhythm."

Attention itself, as a large-scale backward flowing process, is hypothesized to operate in the alpha/beta bands of oscillations, about 8 - 30 Hz. It reaches backward over distinct connections (indeed, distinct anatomical layers of the cortex) from the forward connections, into lower areas of processing, such as locations in the visual scene, or colors sought after, or a position on a page of text. This slower rhythm could entrain selected lower level regions, setting some to have in-phase and stronger gamma rhythms vs other areas not activated in this way. Why the theta and the alpha/beta rhythms have dramatically different properties is not dwelt on by this paper. One can speculate that each can entrain other areas of the brain, but the theta rhythm is long and strong enough to squelch ongoing gamma rhythms and start many off at the same time in a new competitive race, while the alpha/beta rhythms are brief enough, and perhaps weak enough and focused enough, to start off new gamma rhythms in selected regions that quickly form winning coalitions heading upstream.

Experiments on the nature of attention. The stimulus shown to a subject (probably a monkey) is in A. In E, the monkey was trained to attend to the same spots as in A, even though both were visible. V1 refers to the lowest level of the visual processing area of the brain, which shows activity when stimulated (B, F) whether or not attention is paid to the stimulus. On the other hand, V4 is a much higher level in the visual processing system, subject to control by attention. There, (C, G), the gamma rhythm shows clearly that only one stimulus is being fielded.

The paper discussing this hypothesis cites a great deal of supporting empirical work, and much more has accumulated in the ensuing eight years. While plenty of loose ends remain and we cannot yet visualize this mechanism in real time, (though faster MRI is on the horizon), this seems the leading hypothesis that both explains the significance and prevalence of neural oscillations, and goes some distance toward explaining mental processing in general, including abstraction, binding, and attention. Progress has not been made by great theoretical leaps from any one person or institution, but rather by the slow accumulation of research that is extremely difficult to do, but of such great interest that there are people dedicated enough to do it (with or without the willing cooperation of countless poor animals) and agencies willing to fund it.


  • Local media is a different world now.
  • Florida may not be a viable place to live.
  • Google is god.

Saturday, September 10, 2022

Sex in the Brain

The cognitive effects of gonadotropin-releasing hormone.

If you watch the lesser broadcast TV channels, there are many ads for testosterone- elixir of youth, drive, manliness, blaring sales pitches, etc. Is it any good? Curiously, taking testosterone can cause a lot of sexual dysfunction, due to feedback loops that carefully tune its concentration. So generally no, it isn't much good. But that is not to say that it isn't a powerful hormone. A cascade of other events and hormones leads to the production of testosterone, and a recent paper (review) discussed the cognitive effects of one of its upstream inducers, gonadotropin-releasing hormone, or GnRH.

The story starts on the male Y chromosome, which carries the gene SRY. This is a transcription activator that (working with and through a blizzard of other regulators and developmental processes) is ultimately responsible for switching the primitive gonad to the testicular fate, from its default, which is female / ovarian. This newly hatched testis contains Sertoli cells, which secrete anti-Mullerian hormone (AMH, a gene that is activated by SRY directly), which in the embryo drives the regression of female characteristics. At the same time, testosterone from testicular Leydig cells drives development of male physiology. The initial Y-driven setup of testosterone is quickly superseded by hormones of the gonadotropin family, one form of which is provided by the placenta. Gonadotropins continue to be essential through development and life to maintain sexual differentiation. The placental source declines by the third trimester, by which time the pituitary has formed and takes over gonadotropin secretion. It secretes two gonadotropin family members, follicle-stimulating hormone (FSH) and luteinizing hormone (LH), each of which, despite its name, has key roles in male as well as female reproductive development and function. After birth, testosterone levels decline and everything is quiescent until puberty, when the hormonal axis driven by the pituitary reactivates.

Some of the molecular/genetic circuitry leading to very early sex differentiation. Note the leading role of SRY in driving male development. Later, ongoing maintenance of this differentiation depends on the gonadotropin hormones.

This pituitary secretion is in turn stimulated by gonadotropin releasing hormone (GnRH), which is the subject of the current story. GnRH is produced by neurons that, in embryogenesis, originate in the nasal / olfactory epithelium and migrate to the hypothalamus, close enough to the pituitary to secrete directly into its blood supply. This circuit is what revs up in puberty and continues in fine-tuned fashion throughout life to maintain normal (or declining) sex functions, getting feedback from the final sex hormones like estrogen and testosterone in general circulation. The interesting point that the current paper brings up is that GnRH is not just generated by neurons pointing at the pituitary. There is a whole other set of neurons in the hypothalamus that also secrete GnRH, but which project (and secrete GnRH) into the cortex and hippocampus- higher regions of the brain. What are these neurons, and this hormone, doing there?

The researchers note that people with Down Syndrome characteristically have both cognitive and sexual defects resembling incomplete development, (among many other issues), the latter of which resemble or reflect a lack of GnRH, suggesting a possible connection. Puberty is a time of heightened cognitive development, and they guessed that this is perhaps what is missing in Down Syndrome. Down Syndrome typically leads to early-onset Alzheimer's disease, which is also characterized by lack of GnRH, as is menopause, and perhaps other conditions. After a series of mouse studies, the researchers supplemented seven men affected by Down Syndrome with extra GnRH via miniature pumps to their brains, aimed at target areas of this hormone in the cortex. It is noteworthy that GnRH secretion is highly pulsatile, with a roughly 2 hour period, which they found to be essential for a positive effect.

Results from the small-scale intervention with GnRH injection. Subjects with Down Syndrome had higher cortical connectivity (left) and could draw from a 3-D model marginally more accurately.

The result (also seen in mouse models of Down Syndrome and of Alzheimer's Disease) was that the infusion significantly raised cognitive function over the ensuing months. It is an amazing and intriguing result, indicating that GnRH drives significant development and supports ongoing higher function in the brain, which is quite surprising for a hormone thought to be confined to sexual functions. Whether it can improve cognitive function in fully developed adults without such developmental syndromes remains to be seen. Such a finding would be quite unlikely, though, since the GnRH circuit is presumably part of the normal program that establishes the full adult potential of each person, which evolution has strained to refine to the highest possible level. It is not likely to be a magic controller that can be dialed beyond "max" to create super-cognition.

Why does this occur in Down Syndrome? The authors devote a good bit of the paper to an interesting further series of experiments, focusing on regulatory micro-RNAs, several of which are encoded in the genomic regions duplicated in Down Syndrome. MicroRNAs are typically regulators that repress gene expression post-transcriptionally, so the extra copies explain how this whole circuitry of normal development, now including key brain functions, is under-activated in those with Down Syndrome.
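The dosage logic is simple enough to sketch: a third copy of a chromosome means roughly 1.5 times the normal level of any microRNA it encodes, which pushes that microRNA's targets down. The repression curve below is a hypothetical hyperbolic form, chosen only to illustrate the direction of the effect, not measured kinetics:

```python
def target_expression(mirna_level, k=1.0):
    """Toy repression curve: relative target output falls as the
    microRNA level rises (hypothetical hyperbolic kinetics, with k
    the microRNA level giving half-maximal repression)."""
    return 1.0 / (1.0 + mirna_level / k)

disomic = target_expression(1.0)   # normal two copies -> 1x microRNA
trisomic = target_expression(1.5)  # trisomy -> ~1.5x microRNA

# trisomic < disomic: the extra repressor dose under-activates targets
# such as the GnRH program, without any target gene being mutated.
```

The point of the sketch is that no target gene need be damaged; a mere shift in repressor dosage is enough to throttle a downstream program.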

The authors offer a subset of regulatory circuitry focusing on micro-RNA repressors of which several are encoded on the trisomic chromosome regions.

"HPG [hypothalamus / pituitary / gonadal hormone] axis activation through GnRH expression at minipuberty (P12; [the phase of testoserone expression in late mouse gestation critical for sexual development]) is regulated by a complex switch consisting of several microRNAs, in particular miR-155 and the miR-200 family, as well as their target transcriptional repressor-activator genes, in particular Zeb1 and Cebpb. Human chromosome 21 and murine chromosome 16 code for at least five of these microRNAs (miR-99a, let-7c, miR-125b-2, miR-802, and miR-155), of which all except miR-802 are selectively enriched in GnRH neurons in WT mice around minipuberty" - main paper

So, testosterone (or estrogen, for that matter) isn't likely to unlock better cognition, but a hormone a couple of steps upstream just might- GnRH. And it does so not through the bloodstream, but through direct injection into key areas of the brain both during development, and also on an ongoing basis through adulthood. Biology as a product of evolution comprises systems that are highly integrated, not to say jury-rigged, which makes biology as a science difficult, being the quest to separate all the variables and delineate what each component and process is doing.


Saturday, August 13, 2022

Titrations of Consciousness

In genetics, we have mutation. In biochemistry, we have titration. In neuroscience, we have brain damage.

My thesis advisor had a favorite saying: "When in doubt, titrate!". That is to say, if you think you have your hands on a key biochemical component, its amount should be clearly influential on the reaction you are looking at. Adding more might make its role clearer, or bring out other dynamics, or, at the very least, titration might allow you to not waste it by using just the right amount.

Neuroscience has reached that stage in studies of consciousness. While philosophers wring their hands about the "hardness" of the problem, scientists are realizing that it can be broken down like any other, and studied by its various broken states and disorders, and in its myriad levels / types as induced by drugs, damage, and by evolution in other organisms. A decade ago, a paper showed that the thalamus, a region of the brain right on top of the brain stem and the conduit of much of its traffic with the cortex, has a graded (i.e. titratable) relationship between severity of damage and severity of effects on consciousness. This led to an influential hypothesis- the mesocircuit hypothesis, which portrays wide-ranging circuitry from the thalamus that activates cortical regions, and is somewhat inhibited in return by circuits coming back. 


Degree of damage to a central part of the brain, the thalamus, correlates closely with degree of consciousness disability.

A classification of consciousness / cognition / communication deficits, ranging from coma to normal state. LIS = locked-in state, MCS = minimally conscious state, VS = vegetative state (now unresponsive wakefulness syndrome), which may be persistent (PVS).

The anatomy is pretty clear, and subsequent work has focused on the dynamics, which naturally are the heart of consciousness. A recent paper, while rather turgid, supports the mesocircuit hypothesis by analyzing the activation dynamics of damaged brains (vegetative state, now called unresponsive wakefulness syndrome (UWS)), and less severe minimally conscious states (MCS). They applied unbiased mathematical analyses to find the relevant networks and reverberating communication modes. For example, in healthy brains there are several networks of activity that hum along while at rest, such as the default mode network and the visual network. These networks are then replaced or supplemented by other dynamics when activity takes place, like viewing something, doing something, etc. The researchers measured the metastability or "jumpiness" of these networks, and their frequencies (eigenmodes).
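Metastability in this literature is commonly quantified as the moment-to-moment variability of network synchrony, for instance the standard deviation over time of the Kuramoto order parameter. A minimal sketch of that measure (a generic illustration, not this paper's pipeline):

```python
import cmath
import math
import statistics

def order_parameter(phases):
    """Kuramoto order parameter R: 1 = fully synchronized phases,
    0 = phases evenly spread around the circle."""
    return abs(sum(cmath.exp(1j * p) for p in phases) / len(phases))

def metastability(phase_series):
    """Std. deviation of R across time points: high when the network
    keeps forming and dissolving synchronized coalitions."""
    return statistics.pstdev(order_parameter(p) for p in phase_series)

n = 8
synced = [0.0] * n                                # all in phase: R = 1
spread = [2 * math.pi * k / n for k in range(n)]  # evenly spread: R = 0

flat_brain = [synced for _ in range(100)]               # R never varies
jumpy_brain = [synced if t % 2 else spread for t in range(100)]

metastability(flat_brain)   # -> 0.0 (static, like severe injury)
metastability(jumpy_brain)  # -> 0.5 (R alternates between 1 and 0)
```

On this measure, the injured brains in the paper look like the "flat" case: less variable dynamics, fewer distinct modes.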

Naturally, there is a clear relationship between dynamics and consciousness. The worse off the patient, the less variable the dynamics, and the fewer distinct frequencies are observed. But the data is hardly crystal clear, so it got published in a minor journal. It is clear that these researchers have some more hunting to do to find better correlates of consciousness. This might come from finer anatomical localization, (hard to do with fMRI), or perhaps from more appropriate math that isolates the truly salient aspects of the phenomenon. In studies of other phenomena such as vision, memory, and place-sensing, the analysis of correlates between measurable brain activity and the subjective or mental aspects of that activity have become immensely more sophisticated and sensitive over time, and one can assume that will be true in this field as well.

Severity of injury correlates with metastability (i.e. jumpiness) of central brain networks, and with consciousness. (HC = healthy control)


  • Senator Grassley can't even remember his own votes anymore.
  • How are the Balkans doing?
  • What's in the latest Covid strain?
  • Economic graph of the week. Real income has not really budged in a very long time.

Sunday, January 16, 2022

Choices, Choices

Hippocampal maps happen in many modes and dimensions. How do they relate to conscious navigation?

How do our brains work? A question that was once mystical is now tantalizingly concrete. Neurobiology is, thanks to the sacrifices of countless rats, mice, and undergraduate research subjects, slowly bringing to light mechanisms by which thoughts flit about the brain. The parallel processing of vision, progressively through the layers of the visual cortex, was one milestone. Another has been work in the hippocampus, which is essential for memory formation as well as mapping and navigation. Several kinds of cells have been found there (or in associated brain areas) which fire when the animal is in a certain place, or crosses a subjective navigational grid boundary, or points its head in a certain direction. 

A recent paper reviewed findings about how such navigation signals are bound together and interact with the prefrontal cortex during decision making. One finding is that locations are encoded in a peculiar way, within the brain wave known as the theta oscillation. These waves run at about 4 to 12 cycles per second, and as an animal moves or thinks, place cells corresponding to locations behind it fire at the trough of the cycle, while locations progressively closer, and then in front of the animal, fire correspondingly higher on the wave. So the path that the animal is contemplating is replayed on a sort of continuous loop in highly time-compressed fashion. And this happens not only while the animal is on the path, but at other times as well, if it is dreaming about its day, or is resting and thinking about its future options.
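The phase code can be sketched with a toy linear mapping (an illustration of the idea, not a measured tuning curve): locations behind the animal map to the early phase of the theta cycle, locations ahead to later phases, so a whole local stretch of path replays within each theta cycle.

```python
import math

def theta_phase(place_field, animal_pos, window=1.0):
    """Toy phase code: map a place field within +/- window (meters) of
    the animal onto one theta cycle, from phase 0 (just behind) to
    2*pi (ahead). The linear mapping is purely illustrative."""
    rel = max(-window, min(window, place_field - animal_pos))
    return math.pi * (rel / window + 1.0)

# Within a single theta cycle, the local path is replayed in order,
# compressed from seconds of running into a fraction of a second:
track = [1.6, 1.8, 2.0, 2.2, 2.4]  # place-field centers along a track (m)
phases = [theta_phase(p, animal_pos=2.0) for p in track]
# phases rise monotonically from behind-the-animal to ahead-of-it
```

The ordered sweep of phases is what lets downstream readers recover a trajectory, past and future, from a single cycle of firing.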

"For hippocampal place cells to code for both past and future trajectories while the animal navigates through an environment, the hippocampus needs to integrate multiple sensory inputs and self-generated cues by the animal’s movement for both retrospective and prospective coding."


These researchers describe a new piece of the story: alternate theta cycles can encode different paths. That is, as the wave repeats, the first cycle may encode one future path out of a T-maze, while the next may encode another path out of the same maze, and so on, alternating back and forth. It is evident that the animal is trying to decide what to do, and its hippocampus (with associated regions) is helpfully providing mappings of the options. Not only that, but the connecting brain areas heading towards the prefrontal cortex (the nucleus reuniens, entorhinal cortex, and parahippocampal gyrus) separate these path representations into different cell streams (still on the theta oscillation), and progressively filter one out. Ultimately, the prefrontal cortex represents only one path ... the one that the rat actually chooses to go down. The regions are connected in both directions, so there is clearly top-down as well as bottom-up processing going on. The conclusion is that, in general, the hippocampus and allied areas provide relatively unbiased mapping services, while the cortex, informed by what it receives, does the decision making about where to go.
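The alternation-and-filtering scheme can be caricatured in code. This is a hedged sketch, not the circuit's actual mechanism: the hippocampal sweep simply alternates the two maze arms on successive theta cycles, and a hypothetical downstream "selector" (standing in for the prefrontal cortex) accumulates weighted votes until one arm wins; the bias weight and threshold are invented for illustration.

```python
import random

def hippocampal_sweep(cycle_index):
    """Alternate candidate paths on successive theta cycles: even
    cycles replay the left arm of the T-maze, odd cycles the right."""
    return "left" if cycle_index % 2 == 0 else "right"

def choose_path(bias_left=0.6, threshold=3, seed=1):
    """Toy accumulator: each cycle's swept path earns a vote with
    probability set by a (hypothetical) top-down bias; the first path
    to reach `threshold` votes is the one the rat commits to."""
    rng = random.Random(seed)
    votes = {"left": 0, "right": 0}
    cycle = 0
    while max(votes.values()) < threshold:
        path = hippocampal_sweep(cycle)
        weight = bias_left if path == "left" else 1.0 - bias_left
        if rng.random() < weight:
            votes[path] += 1
        cycle += 1
    winner = max(votes, key=votes.get)
    return winner, cycle

chosen, n_cycles = choose_path()
print(f"chose '{chosen}' after {n_cycles} theta cycles")
```

The design choice to keep the sweep and the selector as separate functions mirrors the paper's division of labor: unbiased mapping upstream, selection downstream.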

    "This alternation between left and right begins as early as 25 cm prior to the choice point and will continue until the animal makes its turn"


A rat considers its options. Theta waves are portrayed, as they appear in different anatomical locations in the brain. Hippocampal place cells, on the bottom right, give a mapping of the relevant path repeatedly encoded across single theta wave cycles. One path is encoded in one cycle, the other in the next. Further anatomical locations (heading left) separate the maps into different channels / cells, from which the prefrontal cortex finally selects only the one it intends to actually use.

The hippocampus is not just for visual navigation, however. It is now known to map many other senses in spatial terms, such as sounds and smells. It also maps the flow of time in cognitive space, such as in memories, quite apart from spatial mapping. It seems to be a general facility for creating cognitive maps of the world, given whatever the animal has experienced and is interested in, at any scale and in many modalities. The theta wave embedding gives these maps a structure that is highly compressed and repeated, so that they are available to higher processing levels for review, re-enactment, dreaming, and modification for future planning.

Thus using the trusty maze test on rats and mice, neuroscientists are slowly, and very painfully, getting to the point of deciphering how certain kinds of thoughts happen in the brain- where they are assembled, how their components combine, and how they relate to behavior. How they divide between conscious and unconscious processes naturally awaits more insight into what this dividing line really consists of.


  • Biochar!
  • More about the coup attempt.
  • Yes, there was electoral fraud.
  • The more you know about fossil fuels, the worse it gets.
  • Graph of the week. Our local sanitation district finds over a thousand omicron genomes per milliliter of intake, which seems astonishing.