
Sunday, October 5, 2025

Cycles of Attention

 A little more research about how attention affects visual computation.

Brain waves are of enormous interest, and their significance has gradually come into focus over recent decades. They appear to represent synchronous firing of relatively large populations of neurons, and thus the transfer of information from place to place in the brain. They also induce other neurons to entrain with them. The brain is an unstable apparatus, never entraining fully with any one particular signal (that way lies epilepsy). Rather, the brain's default mode is to hum along with a variety of transient signals, and thus brain waves, as our thoughts, both conscious and unconscious, wander over space and time.

A recent paper developed this growing insight a bit further, by analyzing forward and backward brainwave relations in visual perception. Perception takes place in a progressive way at the back of the brain in the visual cortex, which develops the raw elements of a visual scene (already extensively pre-processed by the retina) into more abstract, useful representations, until we ... see a car, or recognize a face. At the same time, we perceive very selectively, only attending to very small parts of the visual scene, always on the go to other parts and things of interest. There is a feedback process, once things in a scene are recognized, to either attend to them more, or go on to other things. The "spotlight of attention" can direct visual processing, not just by filtering what comes out of the sausage grinder, but actually reaching into the visual cortex to direct processing to specific things. And this goes for all aspects of our cognition, which are likewise a cycle of search, perceive, evaluate, and search some more.

Visual processing generates gamma waves, visible by EEG, directed to, among other areas, the frontal cortex, which does more general evaluation of visual information. Gamma waves are the highest frequency brain oscillations (about 50-100 Hz), and thus are the most information rich, per unit time. This paper also confirmed that top-down oscillations, in contrast, are in the alpha / beta frequencies (about 5-20 Hz). What the authors attempted was to link these, showing that the top-down alpha / beta oscillations entrain and control the bottom-up gamma oscillations. The idea was to literally close the loop on attentional control over visual processing. This was all done in humans, using EEG to measure oscillations all over the brain, and TMS (transcranial magnetic stimulation) to experimentally induce top-down currents from the frontal cortex as the subjects looked at visual fields.

Correlation of frontal beta frequencies with gamma frequencies from the visual cortex, while visual stimulus and TMS stimulation are both present. At top left is the overall data, showing how gamma cycles from the back of the brain fall into particular portions of a single beta wave (bottom) after TMS induction over the forebrain. There is strong entrainment, a bit like AM radio amplitude modulation, where the higher-frequency signal (one example at top right) sits within the lower-frequency beta signal (bottom right).
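To make the amplitude-modulation analogy concrete, here is a minimal sketch of detecting phase-amplitude coupling between a slow "top-down" rhythm and a fast "bottom-up" rhythm in a noisy signal. This is not the paper's analysis pipeline; the frequencies, filter bands, and noise level are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 1000                                   # sampling rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)
beta = np.sin(2 * np.pi * 15 * t)           # slow "top-down" rhythm (15 Hz)
gamma = 0.5 * (1 + beta) * np.sin(2 * np.pi * 70 * t)  # fast rhythm whose amplitude rides the beta phase
eeg = beta + gamma + 0.5 * np.random.default_rng(0).normal(size=t.size)

def bandpass(x, lo, hi):
    sos = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band", output="sos")
    return sosfiltfilt(sos, x)

# The Hilbert transform gives instantaneous beta phase and gamma amplitude;
# a Canolty-style modulation index is near 0 when the two are unrelated.
phase = np.angle(hilbert(bandpass(eeg, 12, 18)))
amp = np.abs(hilbert(bandpass(eeg, 60, 80)))
mi = np.abs(np.mean(amp * np.exp(1j * phase))) / amp.mean()
print(f"modulation index: {mi:.2f}")        # substantially above the ~0 of shuffled data
```

The same statistic computed on real recordings is, roughly speaking, what lets such studies claim that gamma amplitude is locked to the phase of the slower wave.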

I cannot really speak to the technical details and quality of this data, but it is clear that the field is settling into this model of what brain waves are and how they reflect what is going on under the hood. Since we are doing all sorts of thinking all the time, it takes a great deal of sifting and analysis to come up with the kind of data shown here, out of raw EEG from electrodes merely placed all over the surface of the skull. But it also makes a great deal of sense, first that the far richer information of bottom-up visual data comes in higher frequencies, while the controlling information takes lower frequencies. And second, that brain waves are not just a passive reflection of passing thoughts, but are used actively in the brain to entrain some of those thoughts, accentuating them and bringing them to attention, while de-emphasizing others, shunting them to unconsciousness, or to oblivion.


Saturday, September 27, 2025

Dopamine: Get up and Go, or Lie Down and Die

The chemistry of motivation.

A recent paper got me interested in the dopamine neurotransmitter system. There are a limited number of neurotransmitters (roughly a hundred), which are used for all communication at synapses between neurons. The more common transmitters are used by many cells and anatomical regions, making it hazardous in the extreme to say that a particular transmitter is "for" something or other. But there are themes, and some transmitters are more "niche" than others. Serotonin and dopamine are especially known for their motivational valence and involvement in depression, schizophrenia, addiction, and bipolar disorder, among many other maladies.

This paper described the reason why cancer patients waste away- a syndrome called cachexia. This can happen in other settings, like extreme old age, and in other illnesses. The authors ascribe cachexia (using mice implanted with tumors) to the immune system's production of IL6, one of the scores of cytokines- signaling proteins that manage the vast distributed organ that is our immune system. IL6 is pro-inflammatory, promoting fever, tissue inflammation, and the production of antibody-producing B cells, among many other things. These authors find that it binds to the area postrema in the brain stem, where many other blood-borne signals are sensed by the brain- signals that are otherwise generally blocked by the blood-brain barrier system.

The binding of IL6 at this location then activates a series of neuronal connections that these authors document, ending up inhibiting dopamine output from the ventral tegmental area (VTA) in the lower midbrain, ultimately reducing dopamine action in the nucleus accumbens, where it is traditionally associated with reward, addiction, and schizophrenia. The authors use optogenetics (light-activated, engineered neurons) at an intermediate location, the parabrachial nucleus (PBN), to reproduce how neuronal activation there drives the downstream inhibition, just as the natural IL6 signal does.

Schematic of the experimental setup and anatomical locations. The graph shows how dopamine is strongly reduced under cachexia, consequent to the IL6 circuitry the authors reveal.

What is the rationale of all this? When we are sick, our body enters a quite different state- lethargic, barely motivated, apathetic, and resting. All this is fine if our immune system has things under control, uses our energy for its own needs, and returns us to health forthwith, but it is highly problematic if the illness goes on longer. This work shows in a striking and extreme way what had already been known- that prominent dopamine-driven circuits are core micro-motivational regulators in our brains. For an effective review of this area, one can watch a video by Robert Lustig, outlining at a very high level the relationship of the dopamine and serotonin systems.

Treatment of tumor-laden mice with an antibody to IL6 that reduces its activity relieves them of cachexia symptoms and significantly extends their lifespans.

It is something that the Buddhists understood thousands of years ago, and which the Rolling Stones and the advertising industry have taken up more recently. While meditation may not grant access to the molecular and neurological details, it seems to have convinced the Buddha that we are on a treadmill of desire, always unsatisfied, always reaching out for the next thing that might bring us pleasure, but which ultimately just feeds the cycle. Controlling that desire is the surest way to avoid suffering. Nowhere is that clearer than in addiction- real, clinical addictions that are all driven by the dopamine system. No matter what your drug of choice- gambling, sugar, alcohol, cocaine, heroin- the pleasure it gives is fleeting and primes the dopamine system to motivate the user to seek more of the same. There are a variety of dopamine pathways, including those affected in Parkinson's disease and those regulating reproductive functions, but the ones at issue here are the mesolimbic and mesocortical circuits, which originate in the midbrain VTA and extend respectively to the nucleus accumbens in the lower forebrain, and to the cerebral cortex. These are integrated with the rest of our cognition, enabling us to identify the root causes of a pleasurable experience and to raise the priority of actions that repeat them.

So, if you gain pleasure from playing a musical instrument, then the dopamine system will motivate you to practice more. But if you gain pleasure from cocaine, the dopamine system will motivate you to seek out a dealer, and spend your last dollar for the next fix. And then steal some more dollars. This system specifically shows the dampening behavior that is so tragic in addictions. Excess activation of dopamine-driven neurons can be lethal to those cells. So they adjust to keep activation in an acceptable range. That is, they keep you unsatisfied, in order to allow new stimuli to motivate you to adjust to new realities. No matter how much pleasure you give yourself, and especially the more intense that pleasure, it is never enough, because this system always adjusts the baseline to match. One might think of dopamine as the micro-manager, always pushing for the next increment of action, no matter how much you have accomplished before, no matter how rosy or bleak the outlook. It gets us out of bed and moving through our day, from one task to the next.
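The baseline-resetting logic can be put in toy form. This is only a cartoon of hedonic adaptation- the update rule and numbers below are my own assumptions, not anything from the papers discussed- but it captures why a constant "high" yields diminishing felt returns.

```python
# Toy model of dopamine baseline resetting (hedonic adaptation).
# Assumption: the baseline drifts exponentially toward recent reward levels.
def felt_rewards(rewards, adaptation_rate=0.5):
    baseline, felt = 0.0, []
    for r in rewards:
        felt.append(r - baseline)                     # response = reward minus expectation
        baseline += adaptation_rate * (r - baseline)  # baseline resets toward the new normal
    return felt

# A steady reward fades toward zero felt pleasure; only escalation keeps the signal up.
print([round(f, 2) for f in felt_rewards([1, 1, 1, 1, 5, 5, 5, 5])])
```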

In contrast, the serotonin system is the macro-manager, conveying feelings of general contentment, after a life well-lived and a series of true accomplishments. Short-circuiting this system with SSRIs like Prozac carries its own set of hazards, like lack of general motivation and emotional blunting, but it does not have the risk of addiction, because serotonin, as Lustig portrays it, is an inhibitory neurotransmitter, with no risk of over-excitement. The brain does not reset the baseline of serotonin the same way that it continually resets the baseline of dopamine.

How does all this play out in other syndromes? Depression is, like cachexia, at least in part a syndrome of insufficient dopamine. Conversely, bipolar disorder in its manic phase appears to involve excess dopamine, causing hyperactivity and wildly excessive motivation, flitting from one task to the next. But what have dopamine antagonists like haloperidol and clozapine been used for most traditionally? As anti-psychotics in the treatment of schizophrenia. And that is a somewhat weird story.

Everyone knows that medicating schizophrenia is a haphazard affair, with serious side effects and limited efficacy- a tradeoff between therapeutic effects and others that make the recipient worse off. A paper from a decade ago outlined why this may be the case- the causal issues of schizophrenia do not lie in the dopamine system at all, but in circuits far upstream. These authors suggest that ultimately schizophrenia may derive from chronic stress in early life, as do so many other mental health maladies. It is a trail of events that raises the stress hormone cortisol, which diminishes cortical inhibition of hippocampal stress responses, and specifically diminishes the inhibitory interneurons of the hippocampus that use GABA (another neurotransmitter).

It is the ventral hippocampus that has a controlling influence over the VTA, which in turn originates the relevant dopamine circuitry. The theory is that the ventral hippocampus sets the contextual (emotional) tone for the dopamine system, on top of which episodic stimulation takes place from other, more cognitive and perception-based sources. Over-activity of this hippocampal regulation raises the gain of the other signals, raising dopamine far more than appropriate, and also lowering it at other times. Thus treating schizophrenia with dopamine antagonists counteracts the extreme highs of the dopamine system, which in the nucleus accumbens can lead to hallucinations, delusions, paranoia, and manic activity. But it is a blunt instrument, also impairing general motivation, and worsening the cognitive deficits, blunted affect, parkinsonism, and other problems caused by the low dopamine that schizophrenia produces in other systems, such as the mesocortical and nigrostriatal dopamine pathways.

Manipulation of neurotransmitters is always going to be a rough job, since they serve diverse cells and pathways in our brains. Wikipedia routinely shows tables of binding constants for drugs (clozapine, for instance) to dozens of different neurotransmitter receptors. Each drug has its own profile, hitting some receptors more and others less, sometimes in curious, idiosyncratic patterns, and (surprisingly) across different neurotransmitter types. While some of these may occasionally hit a sweet spot, the biology and its evolutionary background have little relation to our current needs for clinical therapies, particularly when we have not yet truly plumbed the root causes of the syndromes we are trying to treat. Nor is precision medicine in the form of gene therapies or single-molecule tailored drugs necessarily the answer, since the transmitter receptors noted above are not conveniently confined to single clinical syndromes either. We may in the end need specific, implantable and computer-driven solutions or surgeries that respect the anatomical complexity of the brain.


Saturday, April 5, 2025

Psilocybin and the Normie Network

Psychedelic mushrooms desynchronize parts of the brain, especially the default mode network, disrupting our normal sense of reality.

Default mode network- sounds rather dull, doesn't it? But somewhere within that pattern of brain activity lies at least some of our consciousness. It is the hum of the brain while we are resting without focus. When we are intensely focused on something outside, in contrast, it is turned down, as we "lose ourselves" in the flow of other activities. This network remains active during light sleep and has altered patterns during REM sleep and dreaming, as one might expect. A recent paper tracked its behavior during exposure to the psychedelic drug psilocybin.

These researchers measured the level of active connectivity between brain regions (by functional MRI) in human subjects given a high dose of psilocybin, Ritalin- which stands in as a psychoactive but non-psychedelic control- or nothing. Compared with the control of no treatment, Ritalin caused a slight loss of connectivity (or desynchronization) (below, b), while psilocybin (a) caused a huge loss of connectivity, clearly correlated with the subjective intensity of the trip. They also found that if, while on psilocybin, they gave their subjects some task to focus on, connectivity increased again, tracking directly with the subjects' reported experiences.

The researchers show a metric of connectivity between distinct brain regions, under three conditions. FC stands for functional connectivity, where high numbers (and brighter colors) stand for more distance, i.e. less connectivity/synchrony. Methylphenidate is Ritalin. Synchrony is heavily degraded under psilocybin.
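As a toy illustration of what "functional connectivity" means here- a sketch under my own simplified assumptions, not the study's pipeline- regions that share a common rhythm show high pairwise correlation, and weakening that shared drive reads out as desynchronization:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(2000)
shared = np.sin(0.05 * t)                   # a common rhythm driving all regions

def mean_connectivity(sync, n_regions=6):
    # each "region" mixes the shared rhythm with its own private noise
    data = np.stack([sync * shared + (1 - sync) * rng.normal(0, 1, t.size)
                     for _ in range(n_regions)])
    fc = np.corrcoef(data)                  # functional connectivity matrix
    return fc[~np.eye(n_regions, dtype=bool)].mean()

print(f"control-like (strong shared drive):   {mean_connectivity(0.8):.2f}")
print(f"psilocybin-like (weak shared drive):  {mean_connectivity(0.2):.2f}")
```

Real analyses work on fMRI time series and far more regions, but the readout is the same kind of inter-region correlation matrix shown in the figure.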

Of all the networks they analyzed, the default mode network (DMN) was most affected. This network runs between the prefrontal cortex, the posterior cingulate cortex, the hippocampus, the angular gyrus, and the temporoparietal junction, among others. These are key areas for awareness, memory, time, social relations, and much else that is highly relevant to conscious awareness and personhood. There is a long thread of work in the same vein showing that psychedelic drugs have these highly correlated effects on subjective experience and brain patterns. A recent review suggested that while subnetworks like the DMN are weakened, the brain as a whole experiences higher synchrony as well as higher receptivity to outside influence, in an edgy process that increases its level of (to put it in terms of complexity theory) chaos or criticality.

So that is what is happening! But that is not all. The effects of psychedelics, even from one dose, can be long-lasting, even life-changing. The current researchers note that the DMN desynchronization they see persists, at weaker levels, for weeks. This correlates with the subjective experience of changed senses that can result from a significant drug trip. And that trip, as noted above regarding receptivity and chaos, is a delicate thing, highly subject to the environment and mood the subject is experiencing at the time.

But when a task is being done, the subjects come back down towards normalcy.

These researchers note that brain plasticity comes into play, evidently as a homeostatic response to the wildly novel patterns of brain activation that took the subject out of their rut of DMN self-hood. Synapses change, genes are activated, and the brain reshapes itself, usually in a way that increases receptivity to novel experiences and lifts mood. For example, depression correlates with strong DMN coherence, while successful treatment correlates with less connectivity.

"We propose that psychedelics induce a mode of brain function that is more dynamically flexible, diverse, integrated, and tuned for information sharing, consistent with greater criticality."

So, consider the mind blown. Psychedelics appear to disrupt the usual day-to-day, which is apparently a strongly positive thing to do, both subjectively and clinically. That raises the question of why. Why do we settle into such durable and often negative images of the self and ways of thinking? And why does shaking things up have such positive effects? Jung had a deep conviction that our brains are healing machines, with deep wisdom and powers that are exposed during unusual events, like intense dreams. While there are bad trips, and people who go over the edge from excessive psychedelic use, it appears that, with a modicum of care, positive trips, taken in a mood of physical and social safety, let the mind reset in important ways, back to its natural openness.


Saturday, October 26, 2024

A Hunt for Causes of Atherosclerosis

Using the most advanced tools of molecular biology to sift through the sands of the genome for a little gold.

Blood vessels have a hard life. Every time you put on shoes, the vessels in your feet get smashed and smooshed, for hours on end. And do they complain? Generally, not much. They bounce back and make do with the room you give them. All through the body, vessels are subject to the pumping of the heart, and to variations in blood volume brought on by our salt balance. They have to move when we do, and deal with it whenever we sit or lie on them. Curiously, it is the veins in our legs and calves- among those least likely to be crushed in daily life- that accumulate valve problems and go varicose. Atherosclerosis is another, much more serious problem of larger vessels, also brought on by age, where injury and inflammation of the lining endothelial cells can lead to thickening, lipid/cholesterol accumulation, necrosis, calcification, and then flow restriction and fragmentation risk.

Cross-section of a sclerotic blood vessel. LP stands for lipid pool, while the box shows necrotic and calcified bits of tissue.

The best-known risk factors for atherosclerosis are lipid-related, such as lack of liver re-capture of blood lipids, or lack of uptake around the body, keeping cholesterol and other lipid levels high in the blood. But genetic studies have found hundreds of areas of the genome with risk-conferring (or risk-reducing) variants, most of which are not related to lipid management. These genome-wide association studies (or GWAS) look for correlations between genetic markers and disease in large populations. So they pick up a lot of common variations that are difficult to study, due to their sheer number and low impact, which often implies peripheral or indirect function. High-impact variations (mutations) tend not to survive in the population very long, but when found tend to be far more directly involved and informative.
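For readers unfamiliar with how a GWAS hit is called, here is a toy version on entirely synthetic data- real studies control for ancestry, relatedness, and multiple testing far more carefully- where each variant is simply tested for a case/control frequency difference:

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)
n_people, n_snps = 20_000, 5
genotypes = rng.integers(0, 3, size=(n_people, n_snps))  # 0/1/2 copies of the minor allele
risk = 0.10 + 0.02 * genotypes[:, 2]                     # only SNP 2 nudges disease risk
disease = rng.random(n_people) < risk

for snp in range(n_snps):
    # 2x3 contingency table: disease status x genotype class
    table = [[np.sum((genotypes[:, snp] == g) & (disease == d)) for g in range(3)]
             for d in (False, True)]
    chi2, p, _, _ = chi2_contingency(table)
    print(f"SNP {snp}: p = {p:.1e}")       # only SNP 2 should stand out
```

Note how weak the planted effect is (a few percent of absolute risk), yet a large cohort detects it easily; that is exactly why GWAS yields so many low-impact hits.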

A recent paper harnessed a variety of modern tools and methods to extract more from the poor information provided by GWAS. The authors come up with a fascinating tradeoff / link between atherosclerosis and cerebral cavernous malformation (CCM), which is a distinct blood vessel syndrome that can also lead to rupture and death. They set up a program of analysis that was prodigious, and only possible with the latest tools.

The first step was to select a cell line that could model the endothelial cells at issue. Then they loaded these cells with custom expression-reducing RNA regulators against each one of the ~1600 genes found in the neighborhood of the mutations uncovered by the GWAS analyses above, plus 600 control genes. Then they sequenced all the RNA messages from these single cells, each of which had received one of these "knock-down" RNA regulators. This involved a couple hundred thousand cells and billions of sequencing reads- no simple task! The point was to gather comprehensive data on which other genes were being affected by each genetic lesion found in the GWAS population, and then to (algorithmically) assemble them into coherent functional groups and pathways, which could both identify the genes actually being affected by the original mutations and connect them to the problems resulting in atherosclerosis.
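The "assemble into programs" step can be sketched abstractly. The authors' actual pipeline is more sophisticated, but as a stand-in, matrix factorization (here sklearn's NMF on synthetic data) shows how co-regulated gene programs can be pulled out of a knockdown-by-gene effect matrix:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# toy effect matrix: 200 knockdowns x 1000 genes (non-negative expression-change scores)
X = rng.random((200, 1000))
X[:40, 100:150] += 2.0        # knockdowns 0-39 all hit one hidden program...
X[160:, 700:720] += 2.0       # ...knockdowns 160+ hit another

model = NMF(n_components=8, init="nndsvd", max_iter=500, random_state=0)
W = model.fit_transform(X)    # knockdown-by-program loadings
H = model.components_         # program-by-gene weights

# Some recovered components should list genes from the planted blocks (100-149, 700-719).
for k in range(8):
    top = np.argsort(H[k])[::-1][:5]
    print(f"program {k}: strongest genes {sorted(top)}")
```

The appeal of this kind of unsupervised factorization is the same one the post describes: the programs come from the data itself, not from curated pathway databases.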

Not to be outdone, they went on to harness the AlphaFold program to hunt for interactions among the proteins participating in some of the pathways they resolved through this vast pipeline, to confirm that the connections they found make sense.

They came up with about fifty different regulated molecular programs (or pathways), of which thirteen were endothelial cell specific. Things like angiogenesis, wound healing, flow response, cell migration, and osmoregulation came up, and are naturally of great relevance. Five of these latter programs were particularly strongly connected to coronary artery disease risk, and mostly concerned endothelial-specific programs of cell adhesion. Which makes sense, as the lack of strong adhesion contributes to injury and invasion by macrophages and other detritus from the blood, and adhesion among the endothelial cells plays a central role in their ability / desire to recover from injury, adjust to outside circumstances, reshape the vessel they are in, etc.

Genes near GWAS variations and found as regulators of other endothelial-related genes are mapped into a known pathway (a) of molecular signaling. The color code of changed expression refers to the effect that the marked gene had on other genes within the five most heavily disease-linked programs/pathways. The numbers refer to those programs, (8=angiogenesis and osmoregulation, 48=cell adhesion, 35=focal adhesion, related to cell adhesion, 39=basement membrane, related to cell polarity and adhesion, 47=angiogenesis, or growth of blood vessels). At bottom (c) is a layout of 41 regulated genes within the five disease-related programs, and how they are regulated by knockdown of the indicated genes on the X axis. Lastly, in d, some of these target genes have known effects on atherosclerosis or vascular barrier syndromes when mutated. And this appears to generally correlate with the regulatory effects of the highlighted pathway genes.

"Two regulators of this (CCM) pathway, CCM2 and TLNRD1, are each linked to a CAD (coronary artery disease) risk variant, regulate other CAD risk genes and affect atheroprotective processes in endothelial cells. ... Specifically, we show that knockdown of TLNRD1 or CCM2 mimics the effects of atheroprotective laminar blood flow, and that the poorly characterized gene TLNRD1 is a newly identified regulator in the CCM pathway."

On the other hand, excessive adhesiveness and angiogenesis can be a problem as well, as revealed by the reverse correlation they found with CCM syndrome. The interesting thing was that the gene CCM2 came up as one of the strongest regulators of the five core programs associated with atherosclerosis risk mutations. As can be guessed from its name, it can harbor mutations that lead to CCM. CCM is a relatively rare syndrome (at least compared with coronary artery disease) of localized patches of malformed vessels in the brain, which are prone to rupture, sometimes lethally. CCM2 is part of a protein complex, with KRIT1 and PDCD10, and part of a known pathway leading from fluid flow-sensing receptors to transcription regulators (TFs) that turn on genes relevant to the endothelial cells. As shown in the diagram above, this pathway is full of genes that came up in the pathway analysis of the atherosclerosis GWAS mutations. Note that there is a repression step in the diagram above (a), between the CCM complex and the MAP kinase cascade that sends signals downstream, accounting for the color reversal at this stage of the diagram.

Not only did they find that this known set of three CCM genes is implicated in the atherosclerosis mutation results, but one of the genes they dug up through their pipeline, TLNRD1, turned out to be a fourth, hitherto unknown, member of the CCM complex, shown via the AlphaFold program to dock very neatly with the others. The complex normally inhibits the expression of pro-cell adhesion and pro-angiogenesis gene sets in endothelial cells, and it is loss-of-function mutations in the genes encoding this complex that cause CCM, unleashing those angiogenesis genes to do too much.

The logic of this pathway overall is that proper fluid flow at the cell surface, as expected in well-formed blood vessels, activates the pathway to the CCM complex, which then represses programs of new or corrective angiogenesis and cell adhesion- the tissue is OK as it is. Conversely, when turbulent flow is sensed, the CCM complex is turned down, and its target genes are turned up, activating repair, revision, and angiogenesis pathways that can presumably adjust the vessel shape to reduce turbulence, or simply strengthen it.
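That regulatory logic reduces to a small truth table. The following is my own schematic of the flow-sensing circuit described above, not code from the paper:

```python
# Schematic of the flow -> CCM complex -> gene program logic described above.
def endothelial_state(flow):
    ccm_complex_active = (flow == "laminar")       # smooth flow keeps the CCM complex on
    remodeling_programs_on = not ccm_complex_active  # CCM represses angiogenesis/repair programs
    return {"flow": flow,
            "CCM complex": "active" if ccm_complex_active else "inactive",
            "angiogenesis/repair programs": "on" if remodeling_programs_on else "off"}

for flow in ("laminar", "turbulent"):
    print(endothelial_state(flow))
```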

Under this model, malformations may occur during brain development when/where turbulent flow occurs, reducing CCM activation, which is abetted by mutations that help the CCM complex to fall apart, resulting (rarely) in run-away angiogenesis. The common variants dealt with in this paper, that decrease risk of cardiovascular disease / atherosclerosis, appear to have similar, but much weaker effects, promoting angiogenesis, including recovery from injury and adhesion between endothelial cells. In this way, they keep the endothelium tighter and more resistant to injury, invasion by macrophages, and all the downstream sequelae that result in atherosclerosis. Thus strong reduction of CCM gene function is dangerous in CCM syndrome, but more modest reductions are protective in atherosclerosis, setting up a sensitive evolutionary tradeoff that we are clearly still on the knife's edge of. I won't get into the nature of the causal mutations themselves, but they are likely to be diffuse and regulatory in the latter case.

Image of the CCM complex, which regulates response to blood flow, and whose mutations are relevant both to CCM and to atherosclerosis. The structures of TLNRD1 and the docking complex are provided by AlphaFold. 


This method is particularly powerful in being unbiased in its downstream gene and pattern finding, because it samples every expressed gene in the cell and automatically creates related pathways from this expression data, given the perturbations (knockdown of expression) of single target genes. It does not depend on existing curated pathways and literature, which would make it difficult to find new components of pathways. (Though in this case the "programs" it found align pretty closely with known pathways.) On the other hand, while these authors claim that this method is widely applicable, it is extremely arduous and costly, as evidenced by the contribution of 27 authors at top-flight institutions, an unusually large number in this field. So, for diseases and GWAS data sets that are highly significant, with plenty of funding, this may be a viable method of deeper analysis. Otherwise, it is beyond the means of a regular lab.

  • A backgrounder on sedition, treason, and insurrection.
  • And why it matters.
  • Jan 6 was an attempted putsch.
  • Trumpies for Putin.
  • Solar is a no-brainer.
  • NDAs are blatantly illegal and immoral. One would think we would value truth over lies.

Saturday, September 28, 2024

Dangerous Memories

Some memory formation involves extracellular structures, DNA damage, and immune component activation / inflammation.

The physical nature of memories in the brain is under intensive scrutiny. The leading general theory is that of positive reinforcement, where neurons that are co-activated strengthen their connections, enhancing their ability to co-fire and thus to express the same pattern again in the future. The nature of these connections has been somewhat nebulous, assumed to just be the size and stability of their synaptic touch-points. But it turns out that there is a great deal more going on.
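The co-firing idea is easy to state in code. Here is a minimal Hebbian sketch- illustrative only, since real synaptic plasticity involves spike timing, neuromodulation, and much else:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = np.zeros((4, 4))                     # synaptic strengths among 4 neurons
patterns = [np.array([1, 1, 0, 0]),            # neurons 0 and 1 tend to fire together
            np.array([0, 0, 1, 1])]            # as do neurons 2 and 3

for _ in range(100):
    x = patterns[rng.integers(len(patterns))]  # one pattern is activated at a time
    weights += 0.01 * np.outer(x, x)           # "fire together, wire together"

np.fill_diagonal(weights, 0)
print(np.round(weights, 2))                    # strong links within groups, none between
```

The strengthened weights make each pattern easier to re-evoke from a partial cue, which is the positive-reinforcement theory in miniature.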

A recent paper started with a fishing expedition, looking at changes in gene expression in mouse neurons at various time points after the animals were subjected to a fear learning regimen. They took this out to much longer time points (up to a month) than had been contemplated previously. At short times, a bunch of well-known signaling and growth-oriented genes were induced. At the longest time points, organization of a structure called the perineuronal net (PNN) was read out of the gene expression signals. This is an extracellular matrix sheath that appears to stabilize neuronal connections and play a role in long-term memory and learning.

But the real shocker came at the intermediate time point of about four days. Here, there was overexpression of TLR9, which is an immune system detector of broken / bacterial DNA, and in turn an inducer of inflammatory responses. This led the authors down a long rabbit hole of investigating what kind of DNA fragmentation is activating this signal, how common this is, how influential it is for learning, and what the downstream pathways are. Apparently, neuronal excitation, particularly the over-excitation that might be experienced under intense fear conditions, isn't just stressful in a semiotic sense, but is highly stressful to the participating neurons. There are signs of mitochondrial over-activity and oxidative stress, which lead to DNA breakage in the nucleus, and even nuclear perforation. It is a shocking situation for cells that need to survive for the lifetime of the animal. Granted, these are not germ cells that prioritize genomic stability above all else, but getting your DNA broken just for the purpose of signaling a stress response that feeds into memory formation? That is weird.

Some neuronal cell bodies after fear learning. The red dye is against a marker of DNA repair proteins, which form tight dots around broken DNA. The blue is a general DNA stain, and the green is against a component of the nuclear envelope, showing here that nuclear envelopes have broken in many of these cells.

The researchers found classic signs of DNA breakage, such as concentrated double-strand DNA repair complexes, and these are what turn on the TLR9 protein. All this stress also turned on proteases called caspases, though not the cell suicide program that these caspases typically initiate. Many of the DNA break and repair complexes were, thanks to nuclear perforation, located at the centrosome, not in the nucleus. TLR9 turns on an inflammatory response via NFKB / RELA. This is clearly a huge event for these cells- not sending them into suicide, but setting off all the alarms short of that.

The interesting part was when the researchers asked whether, by deleting TLR9 or related genes in the pathway, they could affect learning. Yes, indeed- the fear memory was dependent on the expression of this gene in neurons, and on this cell stress pathway, which appears to be the precondition for setting up the perineuronal net structures and overall stabilization. Additionally, the DNA damage still happened, but was not properly recognized and repaired in the absence of TLR9, creating an even more dangerous situation for the affected neurons- genomic instability amidst unrepaired DNA.

When TLR9 is knocked out, DNA repair is abolished. At bottom are wild-type cells, and at top are mouse neurons after fear learning that have had the gene TLR9 deleted. The red dye is against DNA repair proteins, as is the blue dye in the right-most frames. The top row is devoid of these repair activities.

This paper and its antecedent literature are making the case that memory formation (at least under these somewhat traumatic conditions- whether this is true for all kinds of memory formation remains to be seen) has commandeered ancient, diverse, and quite dangerous forms of cell stress response. It is no picnic in the park with madeleines. It is an all-hands-on-deck disaster scene that puts the cell onto a permanently altered trajectory, and carries a variety of long-term risks, such as cancer formation from all the DNA breakage and end-joining repair, which is not very accurate. They mention in passing that some drugs have recently been developed against TLR9, which are being used to dampen inflammatory activities in the brain. But this new work indicates that such drugs are likely double-edged swords that could impair both learning and the long-term health of treated neurons and brains.

Sunday, July 7, 2024

Living on the Edge of Chaos

 Consciousness as a physical process, driven by thalamic-cortical communication.

Who is conscious? This is not only a medical and practical question, but a deep philosophical and scientific one. Over the last century, and especially the last few decades, we have become increasingly comfortable assigning consciousness to other animals, in ever-wider circles of understanding and empathy. The complex lives of chimpanzees, as brought into our consciousness by Jane Goodall, are one example. Birds are increasingly appreciated for their intelligence and strategic maneuvering. Insects are another frontier, as we appreciate the complex communication and navigation strategies that honeybees, for one example, employ. Where does it end? Are bacteria conscious?

I would define consciousness as responsiveness crosschecked with a rapidly accessible model of the world. Responsiveness alone, such as in a thermostat, is not consciousness. But a Nest thermostat that checks the weather, knows its occupants' habits and schedules, and prices for gas and electricity ... that might be a little conscious(!) But back to the chain of being- are jellyfish conscious? They are quite responsive, and have a few thousand networked neurons that might well be computing the expected conditions outside, so I would count them as borderline conscious. That is generally where I would put the dividing line, with plants, bacteria, and sponges as not conscious, and organisms with brains, even quite decentralized ones like octopi and snails, as conscious, with jellyfish as slightly conscious. Consciousness is an infinitely graded condition, which in us reaches great heights of richness, but presumably starts at a very simple level.

These classifications imply that consciousness is a property of even very primitive brains, and thus, in our brains, is likely to be driven by the more primitive levels of our anatomy. And that brings us to the two articles for this week, about current theories and work centered on the thalamus as a driver of human consciousness. One paper relates detailed experimental and modeled tests of information transfer that characterize a causal back-and-forth between the thalamus and the cortex, in which there is a frequency division between thalamic 10-20 Hz oscillations, whose information is then re-encoded and reflected in cortical oscillations of much higher frequency, at about 50-150 Hz. It also continues a long-running theme in the field, characterizing the edge-of-chaos nature of electrical activity in these thalamus-cortex communications as just the kind of signaling suited to consciousness, and tracks the variation of chaoticity during anesthesia, waking, psychedelic drug usage, and seizure. A background paper provides a general review of this field, showing that the thalamus seems to be a central orchestrator of both the activation and maintenance of consciousness, as well as its contents and form.

The thalamus is at the very center of the brain, and, as is typical for primitive parts of the brain, it packs a lot of molecular diversity, cell types, and anatomy into a very small region. More recently evolved areas of the brain tend to be more anatomically and molecularly uniform, while supporting more complexity at the computational level. The thalamus has about thirty "nuclei", or anatomical areas that have distinct patterns of connections and cell types. It is known to relay sensory signals to the cortex, and to be central to sleep control and alertness. It sits right over the brain stem, and has radiating connections out to, and back from, the cerebral cortex, suggestive of a hub-like role.

The thalamus is networked with recurrent connections all over the cortex.


The first paper claims that electrical oscillations in the thalamus and the cortex are interestingly related. Mice, rats, and humans were all used as subjects, and gave consistent results over the testing, supporting the idea that, at the very least, we think alike, even if what we think about may differ. What is encoded in the awake brain at 1-13 Hz in the thalamus appears in correlated form (that is, in a transformed way) at 50-100+ Hz in the cortex. The authors study the statistics of recordings from both areas to claim that there is directional information flow, marked not just by the anatomical connections, but by active, harmonic entrainment and recoding. But this relationship fails to occur in unconscious states, even though the thalamus is at this time (in sleep, or anesthesia) helping to drive slow wave sleep patterns directly in the cortex. This supports the idea that there is a summary function going on, where richer information processed in the cortex is reduced in dimension into the vaguer hunches and impressions that make up our conscious experience. Even when our feelings and impressions are very vivid, they probably do not do justice to the vast processing power operating in the cortex, which is mostly unconscious.

Administering psychedelic drugs to their experimental animals caused greater information transfer backwards from the cortex to the thalamus, suggesting that the animal's conscious experience was being flooded. They also observe that these loops from the thalamus to cortex and back have an edge-of-chaos form. They are complex, ever-shifting, and information-rich. Chaos is an interesting concept in information science, quite distinct from noise. Chaos is deterministic, in that the same starting conditions should always produce the same results. But chaos is non-linear, where small changes to initial conditions can generate large swings in the output. Limited chaos is characteristic of living systems, which have feedback controls to limit the range of activity, but also have high sensitivity to small inputs, new information, etc., and thus are highly responsive. Noise is random changes to a signal that may not be reproducible, and are not part of a control mechanism.
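The distinction between chaos and noise is easy to demonstrate. Here is a classic toy example- the logistic map, not anything from the paper: the system is fully deterministic, yet a microscopic difference in starting conditions grows until the two trajectories are unrelated.

```python
# Logistic map in its chaotic regime: deterministic, but exquisitely
# sensitive to initial conditions.
def trajectory(x0, r=3.9, steps=41):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.4000000)
b = trajectory(0.4000001)          # perturbed by one part in four million
for step in (0, 10, 20, 30, 40):
    print(f"step {step:2d}: divergence = {abs(a[step] - b[step]):.6f}")
```

Rerunning either trajectory reproduces it exactly- that is the determinism- while the growing divergence between them is the sensitivity that makes edge-of-chaos systems so responsive to small inputs.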

Unfortunately, I don't think the figures from this paper support their claims very well, or at least not clearly, so I won't show them. It is exploratory work, on the whole. At any rate, they are working from, and contributing to, a by now quite well-supported paradigm that puts the thalamus at the center of conscious experience. For example, direct electrical stimulation of the center of the thalamus can bring animals immediately up from unconsciousness induced by anesthesia. Conversely, stimulation at the same place, with a different electrical frequency (10 Hz, rather than 50 Hz), causes immediate freezing and vacancy of expression in the animal, suggesting interference with consciousness. Secondly, the thalamus is known to be the place which gates what sensory data enters consciousness, based on a long history of attention, binocular rivalry, and blindsight experiments.

A schematic of how stimulation of the thalamus (in middle) interacts with the overlying cortex. CL stands for the central lateral nucleus of the thalamus, where these stimulation experiments were targeted. The Greek letters alpha and gamma stand for different frequency bands of the neural oscillations.

So, from both anatomical perspectives and functional ones, the thalamus appears at the center of conscious experience. This is a field that is not going to be revolutionized by a lightning insight or a new equation. It is looking very much like a matter of normal science slowly accumulating ever-more refined observations, simulations, better technologies, and theories that gain, piece by piece, on this most curious of mysteries.


Saturday, May 25, 2024

Nascent Neurons in Early Animals

Some of the most primitive animals have no nerves or neurons... how do they know what is going on?

We often think of our brains as computers, but while human-made computers are (so far) strictly electrical, our brains have a significantly different basis. The electrical component is comparatively slow, and confined to conduction along the membranes of single cells. Each of these neurons communicates with others using chemicals, mostly at specialized synapses, but also via other small compounds, neuropeptides, and hormones. That is why drugs have so many interesting effects, from anesthesia to anti-depression and hallucination. These properties suggest that the brain and its neurons began, evolutionarily speaking, as chemically excitable cells, before they became somewhat reluctant electrical conductors.

Thankfully, a few examples of early stages of animal evolution still exist. The main branches of the early divergence of animals are sponges (porifera), jellies and corals (ctenophora, cnidaria), bilaterians (us), and an extremely small family of placozoa. Neural-type functions appear to have evolved independently in each of these lineages, from origins that are clearest in what appears to be the most primitive of them, the placozoa. These are pancake-like organisms of three cell layers, hardly more complex than a single-celled paramecium. They have about six cell types in all, and glide around using cilia, engulfing edible detritus. They have no neurons, let alone synaptic connections between them, yet they have excitable cells that secrete what we would call neuropeptides, which tell nearby cells what to do. Substances like enkephalins, vasopressin, neurotensin, and the famous glucagon-like peptide are part of the menagerie of neuropeptides at work in our own brains and bodies.

A placozoan, about a millimeter wide. They are sort of a super-amoeba, attaching to and gliding over surfaces underwater and eating detritus. They are heavily ciliated, with only a few cell types divided in top, middle, and bottom cell layers. The proto-neural peptidergic cells make up ~13% of cells in this body.


The fact is that excitable cells long predate neurons. Even bacteria can sense things from outside, orient, and respond to them. As eukaryotes, placozoans inherited a complex repertoire of sense-and-response systems, such as G-protein coupled receptors (GPCRs) that link sensation of external chemicals with cascades of internal signaling. GPCRs are the dominant signaling platforms, along with activatable ion channels, in our nervous systems. So a natural hypothesis for the origin of nervous systems is that they began with chemical sensing and inter-cell chemical signaling systems that later gained electrical characteristics to speed things up, especially as more cells were added, body size increased, and local signaling could not keep up. Jellies, for instance, have neural nets that are quite unlike, and evolutionarily distinct from, the centralized systems of bilaterian animals, yet use a similar molecular palette of signaling molecules, receptors, and excitation pathways.

Placozoans, which date to maybe 800 million years ago, don't even have neurons, let alone neural nets or nervous systems. A recent paper labored to catalog what they do have, however, finding a number of pre-neural characteristics. For example, the peptidergic cell type, which secretes peptides that signal to neighboring cells, expresses 25 or more GPCRs- receptors for those same peptides and other environmental chemicals. The authors state that these GPCRs are not detectably related to those of other animals, so placozoans underwent their own radiation, evolving/diversifying a primordial receptor into the hundreds that exist in its genome today. The researchers even go so far as to employ the AI program AlphaFold to model which GPCRs bind to which endogenously produced peptides, in an attempt to figure out the circuitry that these organisms employ.

This peptidergic cell type also expresses other neuron-like proteins: neuropeptide processing enzymes, the transcription regulators Sox, Pax, Jun, and Fos, a neural-specific RNA polyadenylation enzyme, a suite of calcium-sensitive channels and signaling components, and many components of the presynaptic scaffold, which organizes the secretion of neuropeptides and other transmitters in neurons, and in placozoa presumably organizes the secretion of their quasi-neuropeptides. So of the six cell types, the peptidergic cell appears to be specialized for signaling, is present in low abundance, and expresses a bunch of proteins that in other lineages became far more elaborated into the neural system. Peptidergic cells do not make synapses or extended cell processes, for example. What they do is offer this millimeter-sized organism a primitive signaling and response capacity that, in response to environmental cues, prompts it to alter its shape and movement, by distributing neuropeptides to nearby effector cells that do the gliding and eating that the peptidergic cells can't do.

A schematic of neural-like proteins expressed in placozoa, characteristic of more advanced presynaptic secretory neural systems. These involve both secretion of neuropeptides (bottom left and middle), the expression of key ion channels used for cell activation (Ca++ channels), and the expression of cell-cell adhesion and signaling molecules (top right).

Why peptides? The workhorses of our brain synapses are simpler chemicals like serotonin, glutamate, and norepinephrine. Yet the chemical palette of such simple compounds is limited, and each one requires its own enzymatic machinery for synthesis. Neuropeptides, in contrast, are typically generated by cleavage of larger proteins encoded in the genome. Thus the same mechanism (translation and cleavage) can generate a virtually infinite variety of short and medium-sized peptide sequences, each of which can have its own meaning, and have a GPCR or other receptor tailored to detecting it. The scope for experimentation is much greater, given normal mutation and duplication events through evolutionary time, and the synthetic pipeline much easier to manage. Our nervous systems use a wide variety of neuropeptides, as noted above, and our immune system uses an even larger palette of cytokines and chemokines, upwards of a hundred, each of which has a particular regulatory meaning.
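The synthesis logic is simple enough to mimic: one precursor protein, cut at conventional dibasic (Lys/Arg) sites by prohormone convertases, yields several distinct peptides. The precursor sequence below is invented for illustration, except that one embedded fragment, YGGFM, is the real Met-enkephalin sequence.

```python
import re

# Hypothetical precursor protein; prohormone convertases typically cleave
# at dibasic motifs (KR, RR, KK, RK), releasing the peptides in between.
precursor = "MKTLFVALAGEPKR" "YGGFMKK" "FSWGAEAKR" "LDNSSPQRR" "AEDQ"

peptides = [p for p in re.split(r"KR|RR|KK|RK", precursor) if p]
print(peptides)   # one gene, several candidate signaling peptides
```

Mutation and duplication can then vary each fragment independently, which is exactly the cheap evolutionary experimentation the paragraph above describes.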


An evolutionary scheme describing the neural and proto-neural systems observed among primitive animals.


The placozoan relic lineages show that nervous systems arose in gradual fashion from already-complex systems of cell-cell signaling that relied on chemical rather than electrical signaling. But very quickly, with the advent of only slightly larger and more complex body plans, like those of hydra or jellies, the need for speed forced an additional mode of signaling- the propagation of electrical activity within cells (the proto-neurons), and their physical extension to capitalize on that new mode of rapid conduction. But never did nervous systems leave behind their chemical roots, as the neurons in our brains still laboriously conduct signals from one neuron to the next via the chemical synapse, secreting a packet of chemicals from one side, and receiving that signal across the gap on the other side.


  • The mechanics of bombing a population back into the stone age.
  • The Saudis and 9/11.
  • Love above all.
  • The lower courts are starting to revolt.
  • Brain worms, Fox news, and delusion.
  • Notes on the origins of MMT, as a (somewhat tedious) film about it comes out.

Sunday, July 30, 2023

To Sleep- Perchance to Inactivate OX2R

The perils of developing sleeping, or anti-sleeping, drugs.

Sleep- the elixir of rest and repose. While we know of many good things that happen during sleep- the consolidation of memories, the cardiovascular rest, the hormonal and immune resetting, the slow waves and glymphatic cleansing of the brain- we don't know yet why it is absolutely essential, and lethal if repeatedly denied. Civilized life tends to damage our sleep habits, given artificial light and the endless distractions we have devised, leading to chronic sleeplessness and a spiral of narcotic drug consumption. Some conditions and mutations, like narcolepsy, have offered clues about how sleep is regulated, which has led to new treatments, though to be honest, good sleep hygiene is by far the best remedy.

Genetic narcolepsy was found to be due to mutations in the second receptor for the hormone orexin (OX2R), or to auto-immune conditions that kill off a specialized set of neurons in the hypothalamus- a basal part of the brain that sits just over the brain stem. This region normally has ~50,000 neurons that secrete orexin (which also comes in two kinds, 1 and 2), and project to areas all over the brain, especially basal areas like the basal forebrain and amygdala, to regulate not just sleep but feeding, mood, reward, memory, and learning. Like any hormone receptor, the orexin receptors can be approached in two ways- by turning them on (agonist) or by turning them off (antagonist). Antagonist drugs were developed which turn off both orexin receptors, and thus promote sleep. The first was named suvorexant, using the "orex" and "ant" lexical elements to mark its functions, as is now standard for generic drug names.

This drug is moderately effective, and is a true sleep enhancer, promoting falling asleep, restful sleep, and length of sleep, unlike some other sleep aids. Suvorexant antagonizes both receptors, but researchers knew that only the deletion of OX2R, not OX1R (in dogs, mice, and other animals), generates narcolepsy, so they developed a drug more specific to OX2R only. But the result was that it was less effective. It turned out that binding and turning off OX1R was helpful to sleep promotion, and there were no particularly bad side effects from binding both receptors, despite the wide-ranging activities they appear to have. So while the trial of Merck's MK-1064 was successful, the drug was not better than their existing two-receptor drug, and its development was shelved. And we learned something intriguing about this system. While all animals have some kind of orexin, only mammals have the second orexin family member and receptor, suggesting that some interesting, but not complete, bifurcation happened in the functions of this system over evolution.
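The arithmetic behind receptor selectivity comes down to occupancy. A back-of-envelope sketch using the standard single-site binding isotherm- the Kd values here are made up for illustration, not measured values for any of these drugs:

```python
# Fractional occupancy from the standard single-site binding isotherm:
#   occupancy = [drug] / ([drug] + Kd)
def occupancy(drug_nM, kd_nM):
    return drug_nM / (drug_nM + kd_nM)

# Hypothetical Kd values: a dual antagonist binds both receptors similarly,
# while an OX2R-selective compound barely touches OX1R.
for dose in (3, 30, 300):
    print(f"{dose:>3} nM: dual OX1R/OX2R {occupancy(dose, 30):.2f}/{occupancy(dose, 30):.2f}; "
          f"selective OX1R/OX2R {occupancy(dose, 3000):.2f}/{occupancy(dose, 30):.2f}")
```

Selectivity, in other words, is not an on/off switch but a ratio of affinities, which is why a "selective" drug can still lose the extra OX1R-mediated benefit the dual antagonist turned out to provide.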

What got me interested in this topic was a brief article from yet another drug company, Takeda, which was testing an agonist of the orexin receptors in an effort to treat narcolepsy. They created TAK-994, which binds OX2R specifically, and which showed a lot of promise in animal trials. It is an orally taken pill, in contrast to the existing treatment, danavorexton, which must be injected. In the human trial, it was remarkably effective, virtually eliminating cataplectic / narcoleptic episodes. But there was a problem- it caused enough liver toxicity that the trial was stopped and the drug shelved. Presumably, this company will try again, making variants of this compound that retain affinity and activity but not the toxicity.

This brings up an underappreciated peril in drug design- where drugs end up. Drugs don't just go into our systems, hopefully slipping through the incredibly difficult gauntlet of our digestive system; they all need to go somewhere after they have done their jobs as well. Some drugs are hydrophilic enough, and generally inert enough, that they partition into the urine by dilution and don't undergo any further metabolic events. Most, however, are recognized by our internal detoxification systems as foreign (that is, hydrophobic, but not recognizable as the fats/lipids that are usual nutrients), and are derivatized by liver enzymes and sent out in the bile.

Structure of TAK-994, which treats narcolepsy, but at the cost of liver dysfunction.

As you can see from the chemical structure above, TAK-994 is not a normal compound that might be encountered in the body, or as food. The sulfonamide group is quite unusual, and the fluorines sprinkled about are totally unnatural. This would be a red-flag substance, like the various PFAS materials we hear about in the news. The rings and fluorines create a relatively hydrophobic substance, which would need to be modified so that it can be routed out of the body. That is what a key enzyme of the liver, CYP3A4, does. It (and the many family members that have arisen over evolutionary time) oxidizes all manner of foreign hydrophobic compounds, using a heme cofactor to handle the oxygen. It can add -OH groups (hydroxylation), oxidize double bonds (epoxidation), and oxidize aromatic ring structures (aromatic oxidation).

But then what? Evolution has matched most of the toxic substances we meet with in nature with appropriate enzymes and routes out of the body. But the novel compounds we are making with modern chemistry are something else altogether. Some drugs are turned on by this process, waiting till they get to the liver to attain their active form. Others, apparently such as this one, are made into toxic compounds (as yet unidentified) by this process, such that the liver is damaged. That is why animal studies and safety trials are so important. This drug binds to its target receptor, and does what it is supposed to do, but that isn't enough to be a good drug.

 

Saturday, May 20, 2023

On the Spectrum

Autism, broader autism phenotype, temperament, and families. It turns out that everyone is on the spectrum.

The advent of genomic sequencing and the hunt for disease-causing mutations has been notably unhelpful for most mental diseases. Possible or proven disease-causing mutations pile up, but they do little to illuminate the biology of what is going on, and even less towards treatment. Autism is a prime example, with hundreds of genes now identified as carrying occasional variants with causal roles. The strongest of these variants affect synapse formation among neurons, and a second class affects long-term regulation of transcription, such as turning genes durably on or off during developmental transitions. Very well- that all makes a great deal of sense, but what have we gained?

Clinically, we have gained very little. What is affected are neural developmental processes that can't be undone, or switched off in later life with a drug. So while some degree of understanding slowly emerges from these studies, translating that to treatment remains a distant dream. One aspect of the genetics of autism, however, is highly informative, which is the sheer number of low-effect and common mutations. Autism can be thought of as coming in two types, genetically- those due to a high effect, typically spontaneous or rare mutation, and those due to a confluence of common variants. The former tends to be severe and singular- an affected child in a family that is otherwise unaffected. The latter might be thought of as familial, where traits that have appeared (mildly) elsewhere in the family have been concentrated in one child, to a degree that it is now diagnosable.
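A toy liability-threshold simulation (all parameters invented for illustration) shows how the "familial" type works: many small-effect variants sum to a continuous trait, a diagnosis occurs only past a threshold, and children of high-trait but undiagnosed parents cross it far more often.

```python
import numpy as np

rng = np.random.default_rng(0)
n_families = 200_000
# Standardized polygenic "liability": the sum of many small genetic effects.
parents = rng.normal(0.0, 1.0, (n_families, 2))
# Child = mid-parent value plus segregation noise (keeps variance near 1).
children = parents.mean(axis=1) + rng.normal(0.0, np.sqrt(0.5), n_families)

threshold = 2.33                            # ~top 1% of liability gets a diagnosis
high_parent = parents.max(axis=1) > 1.5     # a high-trait, undiagnosed parent

print(f"population rate of diagnosis: {np.mean(children > threshold):.2%}")
print(f"rate given a high-liability parent: {np.mean(children[high_parent] > threshold):.2%}")
```

The simulation also makes plain that most of the liability distribution sits below the threshold- a continuum of temperament, not a binary condition- which is where the next paragraph picks up.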

This pattern has given rise to the very interesting concept of the "Broader Autism Phenotype", or BAP. This stems from the observation that, in families of autistic children, it is more often the case that ... "the parents, grandparents, and collaterals are persons strongly preoccupied with abstractions of a scientific, literary, or artistic nature, and limited in genuine interest in people." Thus there is not just a wide spectrum of autism proper, based on the particular confluence of genetic and other factors that lead to a diagnosis and its severity, but there is also, outside of the medical spectrum, quite another spectrum of traits or temperaments, which tend toward autism and comprise various eccentricities, but have not, at least to date, been medicalized.


The common nature of these variants leads to another question- why do they persist in the population? It is hard to believe that such a variety and number of variants are exclusively deleterious, especially when the BAP seems to have, well, rather positive aspects. No, I would suggest that an alternative way to describe the BAP is "an enhanced ability to focus", and to develop interests in salient topics. Ever meet people who are technically useless, but warm-hearted? They are way off on the non-autistic end of the spectrum, while the more technically inclined- the fixers of the world and scholars of obscure topics- sit closer to the "ability to focus" end. Only when such variants are unusually concentrated by the genetic lottery do children appear with frank autistic characteristics, totally unable to deal with social interactions, and given to obsessive focus and intense sensitivities.

Thus autism looks like a lens on human temperament and evolution more generally- the tip of a very interesting iceberg. As societies, we need the politicians, backslappers, networkers, and con men, but we also need- and increasingly so as our societies and technologies have developed over the centuries- people with the ability and desire to deal with reality, with technical and obscure issues, without social inflection but with highly focused attention. Militaries are a prime example, fusing the critical need to manage and motivate people with a modern technical base of vast scope, reliant on an army of specialists devoted to making all the machinery work. Why does there have to be this tradeoff? Why can't everyone be James Bond, both technically adept and socially debonair? That isn't really clear, at least to me. But one might speculate that, in the first place, dealing with people takes a great deal of specialized intelligence, and there may not be room for everything in one brain. Secondly, the enhanced ability to focus on technical or artistic topics may actively require, as is implicit in doing science and as was exemplified by Mr. Spock, an intentional disregard of social niceties and motivations, if one is to fully explore the logic of some other, non-human, world.


Saturday, May 6, 2023

The Development of Metamorphosis

Adulting as a fly involves a lot of re-organization.

Humans undergo a slight metamorphosis during adolescence. But imagine undergoing pupation as insects do, and coming out with a totally new body- with wings! Well, Kafka imagined it, and it wasn't very pleasant. Insects, though, do it all the time, and have been doing it for hundreds of millions of years, taking to the air and dominating the biosphere. What goes on during metamorphosis, how complete is its refashioning of the body, and how did it evolve? A recent paper (review) considered in detail how the brains of insects change during metamorphosis, finding a curious blend of birth, destruction, and reprogramming among their neurons.

Time is on the Y axis, and the emergence of later, more advanced insect types is on the X axis. This shows the progressive elaboration of non-metamorphosing (ametabolous), partially metamorphosing (hemimetabolous), and fully metamorphosing (holometabolous) forms. Dragonflies are only partially metamorphosing in this scheme, though their adult forms are often highly different from their larval (nymph) forms.


Insects evolved from crustaceans and took to land as small, silverfish-like creatures with exoskeletons, roughly 450 million years ago. Over the following 100 million years, they developed metamorphosis as a way to preserve the benefits of their original lifestyle- early development in moist locations- while conquering the air and distance as adults. The earliest insect types are termed ametabolous, meaning they have no metamorphosis at all, developing straight from egg to an adult-style form. They go through several molts to accommodate growth, but don't redesign their bodies. Next came hemimetabolous development, exemplified by grasshoppers and cockroaches, and also by dragonflies, which significantly refashion themselves at the last molt, gaining wings. In the nymph stage, those wings are carried around as small patches of flat embryonic tissue, which then suddenly grow out at the last molt. Dragonflies are extreme- most hemimetabolous insects don't undergo such dramatic change. Last came holometabolous development, which involves pupation and a total redesign of the body, one that can go from a caterpillar to a butterfly.

The benefit of having wings is pretty clear- it allows huge increases in range for feeding and mating. Dragonflies are premier flying predators. But for a larva wallowing in fruit juice or leaf sap, or living underwater as dragonfly nymphs do, wings and long legs would be a hindrance. This conundrum led to the innovation of metamorphosis, built on the already somewhat dramatic practice of periodically molting off the exoskeleton. If one can grow a whole new skeleton, why not put wings on it, or legs? And metamorphosis has been tremendously successful, used by over 98% of insect species.

The adult insect tissues do not come from nowhere- they are set up as arrested embryonic tissues called imaginal discs, small patches that sit in the larva at specific positions. During pupation, while much of the rest of the body refashions itself, the imaginal discs rapidly develop into adult tissues such as wings, legs, genitalia, antennas, and new mouth parts. These discs have a fascinating internal structure that prefigures the future organ: the leg disc is concentrically arranged, with the most distal future parts (the "toes") at its center. Transplanting a disc from one insect to another, or from one place to another, doesn't change its trajectory- it will still become a leg wherever it is put. So the larval stage is evidently an intermediate stage of organismal development, in which a set of adult features is primed but put on hold, while a simpler, much more primitive larval body plan is executed to serve early growth and a niche in tight, moist, hidden places.

The new paper focuses on the brain, which larvae need as well as adults. So the question is- how does the adult brain develop from the larval one? Is the larval brain thrown away? The answer is no: the brain is not thrown away at all, but undergoes its own quite dramatic metamorphosis. The adult brain is substantially bigger, so many neurons are added, and a few are killed off. But most larval neurons are reprogrammed- trimmed back and regrown out to new regions to perform new functions.

In this figure, the neurons are named as mushroom body output neuron (MBON), dopaminergic neuron (DAN, also MBIN, for mushroom body input neuron), mushroom body extrinsic neuron to calyx (MBE-CA), and mushroom body protocerebral posterior lateral 1 (PPL1). MBON-c1 is totally reprogrammed to serve new locations in the adult, MBON-d1 changes its projections substantially, as do the (teal) input neurons, and MBON-12 was not operational in the larval stage at all.

The mushroom body, the brain area these authors focus on, is situated below the antennas and mediates smell reception, learning, and memory. Fly biologists regard it as analogous to our cortex- the most flexible area of the brain. Larvae don't have antennas, so their smell/taste reception is far more primitive. The mushroom body in Drosophila starts with about a hundred neurons and continuously adds more over larval life, with a big push during pupation, ending up with ~2200 neurons in the adult. All this new circuitry has to wire into the antennas, for instance, as they develop.

The authors find, for instance, that no direct connections between input and output neurons of the mushroom body (MBIN and MBON, respectively) survive from the larval to the adult stage. Thus no simple memories of this kind can be preserved between life stages. While there are some signs of memory retention for a few things in flies, for the most part the slate is wiped clean. 

"These MBONs [making feedback connections] are more highly interconnected in their adult configuration compared to their larval one: their adult configuration shows 13 connections (31% of possible connections), while their larval configuration has only 7 (17%). Importantly, only three of these connections (7%) are present in both larva and adult. This percentage is similar to the 5% predicted if the two stages were wired up independently at their respective frequencies."


Interestingly, no neuron changed its type- that is, which neurotransmitter it uses to communicate. So while pruning and rewiring were pervasive, the cells did not fundamentally change their stripes. All this is driven by the hormonal system: juvenile hormone blocks adult development, while ecdysone drives molting and, in the absence of juvenile hormone, pupation. These hormones in turn drive a program of transcription factors that direct the genes needed for development. While a great deal is known about neuronal pathfinding and development, this paper doesn't address those downstream events- how selected neurons are pruned, turned around, and induced to branch out in totally new directions, for instance. That will be the topic of future work.


  • Corrupt business practices. Why is this lawful?
  • Why such easy bankruptcy for corporations, but not for poor countries?
  • Watch the world's mesmerizing shipping.
  • Oh, you want that? Let me jack up the price for you.
  • What transgender is like.
  • "China has arguably been the biggest beneficiary of the U.S. security system in Asia, which ensured the regional stability that made possible the income-boosting flows of trade and investment that propelled the country’s economic miracle. Today, however, General Secretary of the Chinese Communist Party Xi Jinping claims that China’s model of modernization is an alternative to “Westernization,” not a prime example of its benefits."

Saturday, April 1, 2023

Consciousness and the Secret Life of Plants

Could plants be conscious? What are the limits of consciousness and pain? 

Scientific American recently reviewed a book titled "Planta Sapiens". The title gives it all away, and the review was quite positive, with statements like: 

"Our senses can not grasp the rich communicative world of plants. We therefore lack language to describe the 'intelligence' of a root tip in conversation with the microbial life of the soil or the 'cognition' that emerges when chemical whispers ripple through a lacework of leaf cells."

This is provocative indeed! What if plants really do have a secret life and suffer pain with our every bite and swing of the scythe? What of our vaunted morals and ethics then?

I am afraid that I take a skeptical view of this kind of thing, so let's go through some aspects of consciousness and ask how widespread it really is. One traditional view, from ur-scientific types like Descartes, is that only humans have consciousness, and all other creatures have at best a mechanism- unfeeling and mechanical- that may look like consciousness, but isn't. This view, continued in a sense by B. F. Skinner in the 20th century, is a statement from ignorance: we can not fully communicate with animals, so we can not really participate in what looks like their consciousness, so let's just ignore it. The position has the added dividend of excusing our unethical treatment of animals, which was an enormous convenience, and it remains the core position of capitalism generally, at least regarding farm animals (though its view of humans is hardly more generous).

Well, this view is totally untenable, given our experience of animals- our ability to communicate with them to various degrees, our observation of them dreaming- not to mention the evolutionary standpoint: our consciousness did not arise from nothing, after all. So I think we can agree that mammals, at least, can be included in the community of conscious fellow-beings on the planet. The range of conscious preoccupations clearly varies tremendously, but whenever we have looked at the workings of memory, attention, vision, and the other components assumed to be part of, or contributors to, conscious awareness, they all exist in mammals. 

But what about other animals, like insects, jellyfish, or bacteria? Here we need a deeper look at the principles in play. As far as we understand it, consciousness is an activity that binds various senses and models of the world into an experience. It should be distinguished from mere responsiveness to stimuli. A thermostat is responsive. A bacterium is responsive. That does not constitute consciousness. Bacteria are highly responsive to chemical gradients in their environment, to food sources, and to the pheromones of fellow bacteria. They appear to have some measure of sensibility and will. But we can not say they have experience in the sense of conscious experience, even if they integrate many stimuli into a holistic and sensitive approach to their environment. 
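To sharpen that distinction with a deliberately trivial sketch: a thermostat maps stimulus to response with no memory, no integration, and no model of its world- which is all that "responsiveness" requires:

```python
# Pure stimulus-response, with no memory, integration, or world-model:
# the canonical "responsive but not conscious" system.
def thermostat(temp_c: float, setpoint_c: float = 20.0) -> str:
    # The entire "behavior" is one comparison
    return "heat on" if temp_c < setpoint_c else "heat off"

print(thermostat(18.0))   # -> heat on
print(thermostat(22.0))   # -> heat off
```

Bacteria are vastly more elaborate than this, but the argument here is that their chemotaxis and signaling are the same kind of process, scaled up.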


The same is true of our own cells, naturally. They too are highly responsive on an individual basis, working hard to figure out what the bloodstream is bringing them in terms of food, immune signals, pathogens, etc. Could each of our cells be conscious? I doubt it, because their responsiveness is mechanistic, rather than constituting an independent, integrated model of their world. Similarly, if we are under anaesthesia and a surgeon cuts off a leg, is that leg conscious? It has countless nerve cells and sensory apparatus, but it does not represent anything about its world. Rather, it is built to send all those signals to a modeling system elsewhere- our brain- which is where consciousness happens, and where (conscious) pain happens as well.

So I think the bottom line is that consciousness is a rather widely shared property of brains, and thus of organisms with brains, which arose over evolutionary time to provide the kind of integrated experience that a simple nerve net can not supply. Jellyfish, for instance, have nerve nets that feel pain, respond to food and mates, and swim exquisitely. They are highly responsive, but, I would argue, not conscious. Insects, on the other hand, have brains and would count as conscious, even if their level of consciousness is very primitive. Honey bees map out their world, navigate about it, select the delicacies they want from plants, and go home to a highly organized hive. They also remember experiences and learn from them.

This all makes it highly unlikely that consciousness is present in quantum phenomena, in rocks, in bacteria, or in plants. They just do not have the machinery it takes to bind something into an integrated and meaningful experience. Where exactly the line falls between highly responsive and conscious is probably not sharply defined; there are brains that are exceedingly small, and nerve nets that are very rich. But it is also clear that it doesn't take consciousness to experience pain or try to avoid it (which plants, bacteria, and jellyfish all do). So where is the limit of ethical care, if our criterion shifts from consciousness to pain? Wasn't our amputated leg in pain after the operation above, and didn't we callously ignore its feelings? 

I would suggest that the limit remains consciousness, not responsiveness to pain. Pain is not morally problematic as a reflex reaction: the doctor can tap our knee as often as he wants, perhaps causing pain to our tendon, but not to our consciousness. Pain is problematic because of suffering, which is a conscious construct built around memory, expectations, and models of how things "should" be. While one can easily see that a plant might have certain positive (light, air, water) and negative (herbivores, fungi) stimuli that shape its intrinsic responses to the environment, these are all reflexive, not reflective, and so do not appear (to an admittedly biased observer) to constitute suffering that rises to ethical consideration.