
Saturday, January 3, 2026

Tiny Tunings in a Buzz of Neural Activity

Lots of what neurons do adds up to zero: how the cerebellum controls muscle movement.

Our brains are always active. Meditating, relaxing, sleeping ... whatever we are up to, the brain doesn't take a holiday, except in the deepest levels of sleep, when slow waves help the brain to reset and heal itself. Otherwise, it is always going, with only slight variations based on computational needs and transient coalitions, which are extremely difficult to pick out of the background (task-related fMRI signal changes are typically under 3% of baseline). That leads to the general principle that, through most of the brain, action in a productive sense is merely a tuned overlay on top of a baseline of constant activity.

A recent paper discussed how this property manifests in the cerebellum, the "little" brain attached to our big brain, which is a fine-tuning engine especially for motion control, but also for many other functions. The cerebellum has relatively simple and massively parallelized circuitry, a bit like a GPU to the neocortex's CPU. It gets inputs from places like the spinal cord and sensory areas of the brain, and relays a tuned signal out to, in the case of motor control, the premotor cortex. The current authors focused on the control of eye movements ("saccades") in marmoset monkeys, a well-characterized and experimentally tractable system.

After poking a massive array of electrodes into their poor monkeys' brains, they recorded from hundreds of cells, including all the relevant neuron types (of which there are only about six). They found that inside the cerebellum, and inside the region already known to be devoted to eye movement, neurons form small-world groups that interact closely with each other, revealing a new level of organization for this organ.

More significantly, they were able to figure out the central tendency, or preferred direction vector, for the Purkinje (P) cells they ran across. These are the core cells of the cerebellar circuit, so their firing should in principle correlate with the eye movements that were being simultaneously tracked in these monkeys. So one cell might be an upward-directing cell, while another one might be leftward, and so forth. They also found that P cells come in two types: those (bursters) that fire more around the preferred direction, and others (pausers) that fire all the other times, but fire less when the eyes are heading in the "preferred" direction. These properties already tell us that the neural system does not much care to save energy. Firing most of the time, and then pausing when some preferred direction is hit, is perfectly OK.

Two representative cells are recorded, around the cardinal/radial directions. The first of each pair is recorded from 90 degrees vs its preferred direction, and adding up the two perpendicular vectors (bottom) gives zero net activity. The second of each pair is recorded from the preferred direction (or potent axis), vs 180 degrees away, and when these are subtracted, a net signal is visible (difference). Theta is the direction the eyes are heading.

Their main finding, though, was that there is a lot of vector arithmetic going on, in which cells fire all over their preferred fields, and only when you carefully net their activity over the radial directions can you discern a preferred direction. The figure above shows a couple of cells, firing away no matter which direction the eyes are going. But if you subtract the forward direction from the backward direction, a small net signal is left over, which these authors claim is the real signal from the cerebellum, encoding a change in direction. When this behavior is summed across a large population (below), the bursters and pausers cancel out, as do the signals going in stray directions. All you are left with is a relatively clean signal centered on the direction being instructed to the eye muscles. Wasteful? Yes. Effective? Apparently.

The same analysis as above, but on a population basis. Now, in net terms, after all the canceling activity is eliminated, and the pauser and burster activity is added up, strong signals arise from the cerebellum as a whole, in this anatomical region, directing eye saccades.
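To make the vector arithmetic concrete, here is a minimal toy model in Python (my own sketch for illustration- the cell counts, firing rates, and cosine tuning are assumptions, not the authors' data or code). Each cell fires at a high shared baseline with a small modulation around its preferred direction; subtracting opposite directions cancels the baseline per cell, and summing preferred-direction vectors across the population recovers the instructed saccade direction:

    import numpy as np

    # Toy model of the population readout described above. Cells fire at a
    # high baseline everywhere, with a small cosine tuning around a preferred
    # direction. "Bursters" modulate up near that direction, "pausers" down.
    rng = np.random.default_rng(0)
    n = 200
    preferred = rng.uniform(0, 2 * np.pi, n)   # each cell's preferred direction
    sign = rng.choice([1.0, -1.0], n)          # +1 = burster, -1 = pauser
    baseline, mod = 50.0, 5.0                  # spikes/s; tuning is tiny vs baseline

    def rates(theta):
        """Firing of all cells for a saccade toward angle theta (radians)."""
        return baseline + sign * mod * np.cos(theta - preferred)

    theta = np.pi / 3
    # Subtracting opposite directions cancels the baseline exactly, per cell,
    # leaving only the tuned modulation: 2 * sign * mod * cos(theta - preferred).
    diff = rates(theta) - rates(theta + np.pi)

    # Population vector: flip the pausers' sign, then weight each cell's
    # preferred direction by its net modulation and sum over the population.
    x = np.sum(diff * sign * np.cos(preferred))
    y = np.sum(diff * sign * np.sin(preferred))
    decoded = np.degrees(np.arctan2(y, x)) % 360
    print(f"true {np.degrees(theta):.0f} deg, decoded {decoded:.0f} deg")

Despite every cell firing constantly, the decoded direction lands on the true one, because everything not tuned to the saccade nets out to zero.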

Why does the system work this way? One idea is that, as for the rest of the brain, default activity is the norm, and learning is, perforce, a matter of tuning ongoing activity, not of turning on cells that are otherwise off. The learning (or error) signal is part of the standard cerebellar circuitry- a signal so strong that it temporarily shuts off the normal chatter of the P cell output and retunes it slightly, producing the subsequent changes seen in the net vector calculation.

A second issue raised by these authors is the nature of inputs to these calculations. In order to know whether and how to change the direction of eye movement, these cells must be getting information about the visual scene and also about the current state of the eyes and direction of movement. 

"The burst-pause pattern in the P cells implied a computation associated with predicting when to stop the saccade. For this to be possible, this region of the cerebellum should receive two kinds of information—a copy of the motor commands in muscle coordinates and a copy of the goal location in visual coordinates."

These inputs come from two types of mossy fiber neurons, which are the single inputs to the typical cerebellar circuit. One "state" type encodes the state of the motor system and the current position of the eye. The other "goal" type encodes, only in the case of a reward, where the eye "wants" to move, based on other cortical computations, such as the attention system. The two inputs go through separate individual cerebellar circuits, and then get added up on a population basis. When movement is unmotivated, the goal inputs are silent, and the resulting cerebellum-directed saccades are more sporadic, scanning the scene haphazardly.

The end result is that we are delving ever more deeply into the details of mental computation. This paper trumpets itself as having revealed "vector calculus" in the cerebellum. However, to me it looks much more like arithmetic than calculus, nor is the overall finding of small signals amid a welter of noise and default activity novel. All the same, gathering more detail, with ever deeper technical means, toward a greater understanding of exactly how all these billions of dumb cells somehow add up to smart activity continues to be a great, and fascinating, quest.


  • How to have Christmas without the religion.
  • Why is bank regulation run by banks? "Member banks ... elect six of the nine members of each Federal Reserve Banks' boards of directors." And the other three typically represent regional businesses as well- not citizens or consumers.

Sunday, December 28, 2025

Lipid Pumps and Fatty Shields- Asymmetry in the Plasma Membrane

The two leaflets that make up eukaryotic cell membranes are asymmetric.

Membranes are one of those incredibly elegant things in biology. Simple chemical forces are harnessed to create a stable envelope for the cell, with no need to encode the structure in complicated ways. Rather, it self-assembles, using the oil-vs-water forces of surface tension to form a huge structure with virtually no instruction. Eukaryotes decided to take membranes to the max, growing huge cells with an army of internal membrane-bound organelles, individually managed- each with its own life cycle and purposes.

Yet, there are complexities. How do proteins get into this membrane? How do they orient themselves? Does it need to be buttressed against rough physical insult, with some kind of outer wall? How do nutrients get across, while the internal chemistry is maintained as different from the outside? How does it choose which other cells to interact with, preventing fusion with some, but pursuing fusion with others? For all the simplicity of the basic structure, the early history of life had to come up with a lot of solutions to tough problems, before membrane management became such a snap that eukaryotes became possible.

The authors present their model (in atomic simulation) of a plasma membrane. Constituents are cholesterol, sphingomyelin (SM), phosphatidyl choline (PC), phosphatidyl serine (PS), phosphatidyl ethanolamine (PE), and phosphatidyl ethanolamine plasmalogen. Note how in this portrayal, there is far more cholesterol in the outer leaflet (top), facing the outside world, than there is in the inner leaflet (bottom).

The major constituents of the lipid bilayer are cholesterol, phospholipids, and sphingomyelin. The latter two have charged head groups and long lipid (fatty) tails. The head groups keep that side of the molecule (and the bilayer) facing water. The tails hate water and like to arrange themselves in the facing sheets that make up the inner part of the bilayer. Cholesterol, on the other hand, has only a mildly polar hydroxyl group at one end, and a very hydrophobic, stiff, and flat multi-ring body, which keeps strictly with the lipid tails. The lack of a charged head group means that cholesterol can easily flip between the bilayer leaflets- something that the other molecules with charged headgroups find very difficult. It has long been known that our genomes code for flippases and floppases: ATP-driven enzymes that can flip the charged phospholipids and sphingomyelin from one leaflet to the other. Why these enzymes exist, however, has been a conundrum.

Pumps that drive phospholipids against their natural equilibrium distribution, into one or the other leaflet.

It is not immediately apparent why it would be helpful to give up the natural symmetry and fluidity of the bilayer, and invest a lot of energy in keeping the compositions of each leaflet different. But that is the way it is. The outer leaflet of the plasma membrane tends to have more sphingomyelin and cholesterol, and the inner leaflet has more phospholipids. Additionally, those phospholipids tend to have unsaturated tails- that is, they have double bonds that break up the straight fatty tails that are typical of sphingomyelin. Membrane asymmetry has a variety of biological effects, especially when it is missing. Cells that lose their asymmetry are marked for cell suicide and intervention by the immune system, and also trigger coagulation in the blood. It is a signal that they have broken open or died. But these are doubtless later (maybe convenient) organismal consequences of universal membrane asymmetry. They do not explain its origin.

A recent paper delved into the question of how and why this asymmetry happens, particularly in regard to cholesterol. Whether cholesterol even is asymmetric is controversial in the field, since measuring its location is very difficult. Yet these authors carefully show, by direct measurement and also by computer simulation, that cholesterol, which makes up roughly forty percent of the membrane (its most significant single constituent, actually), is highly asymmetric in human erythrocyte membranes- about three-fold more abundant in the outer leaflet than in the cytoplasmic leaflet.

Cholesterol migrates to the more saturated leaflet. B shows a simulation where a polyunsaturated phospholipid (DAPC) with 4 double bonds (blue) is contrasted with a saturated phospholipid (DPPC) with straight lipid tails (orange). In this simulation, cholesterol naturally migrates to the DPPC side as more DAPC is loaded, relieving the tension (and extra space) on the inner leaflet. Panel D shows that in real cells, scrambling the leaflet composition leads to greater cholesterol migration to the inner leaflet. This is a complex experiment, where the fluorescent signal (on the right-side graph) comes from a dye in an introduced cholesterol analog, which is FRET-quenched by a second, experimenter-introduced dye confined to the outer leaflet. In the natural case (purple), the signal is more quenched, since more cholesterol is in the outer leaflet, while after phospholipid scrambling, less quenching of the cholesterol signal is seen. Scrambling is verified (left side) by fluorescently marking the erythrocytes with Annexin 5, which binds to phosphatidylserine, which is generally restricted to the inner leaflet.

But no cholesterol flippase is known. Indeed, such a thing would be futile, since cholesterol equilibrates between the leaflets so rapidly. (The rate is estimated at milliseconds, in very rough terms.) So what is going on? These authors argue via experiment and chemical simulation that it is the pumped phospholipids that drive the other asymmetries. It is the straight lipid tails of sphingomyelin that attract the cholesterol, as a much more congenial environment than the broken/bent tails of the other phospholipids that are concentrated in the cytoplasmic leaflet. In turn, the cholesterol also facilitates the extreme phospholipid asymmetry. The authors show that without the extra cholesterol in the outer leaflet, bilayers of that extreme phospholipid composition break down into lipid globs.
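As a back-of-envelope illustration of why no pump is needed (my own arithmetic, not from the paper): since cholesterol flips in milliseconds, its steady-state distribution is just an equilibrium ratio set by how favorable each leaflet's environment is, and the observed asymmetry implies only a very mild preference:

    import numpy as np

    # If cholesterol equilibrates freely, outer/inner = exp(-dG / RT).
    # What energy preference does the reported ~3:1 ratio imply?
    R = 8.314e-3        # gas constant, kJ/(mol*K)
    T = 310.0           # body temperature, K
    ratio = 3.0         # outer:inner cholesterol, per the paper
    dG = -R * T * np.log(ratio)
    print(f"implied preference for the outer leaflet: {dG:.1f} kJ/mol")
    # ~ -2.8 kJ/mol, barely more than thermal energy (RT ~ 2.6 kJ/mol). A mild
    # affinity for straight saturated tails suffices; no cholesterol pump needed.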

When treated (time course) with a chemical that scrambles the plasma membrane leaflet lipid compositions, a test protein (top series) that normally (0 minutes) attaches to the inner leaflet floats off and distributes all over the cell. The bottom series shows binding of a protein (from outside these cells) that only binds phosphatidylserine, showing that scrambling is taking place.

This sets up the major compositional asymmetry between the leaflets that creates marked differences in their properties. For example, the outer leaflet, due to the straight sphingomyelin tails and the cholesterol, is much stiffer, and packed much tighter, than the cytoplasmic leaflet. It forms a kind of shield against the outside world, which goes some way toward explaining the whole phenomenon. It is also almost twofold less permeable to water. Conversely, the cytoplasmic leaflet is more loosely packed, and indeed frequently suffers gaps (or defects) in its lipid integrity. This has significant consequences, because many cellular proteins, especially those involved in signaling from the surface into the rest of the cytoplasm, have small lipid tails or similar anchors that direct them (temporarily) to the plasma membrane. The authors show that such proteins localize to the inner leaflet precisely because that leaflet has this loose, accepting structure, and are bound less well if the leaflets are scrambled / homogenized.

When the fluid mosaic model of biological membranes was first conceived, it didn't enter into anyone's head that the two leaflets could be so different, or that cells would have an interest in making them so. Sure, proteins in those membranes are rigorously oriented, so that they point in the right direction. But the lipids themselves? What for? Well, they do, and now there are some glimmerings of reasons why. Whether this has implications for human disease and health is unknown, but just as a matter of understanding biology, it is deeply interesting.


Saturday, December 13, 2025

Mutations That Make Us Human

The ongoing quest to make biologic sense of genomic regions that differentiate us from other apes.

Some people are still, at this late date, taken aback by the fact that we are animals, biologically hardly more than cousins to fellow apes like the chimpanzee, and descendants through billions of years of other life forms far more humble. It has taken a lot of suffering and drama to get to where we are today. But what are those specific genetic endowments that make us different from the other apes? That, like much of genetics and genetic variation, is a tough question to answer.

At the DNA level, we are roughly one percent different from chimpanzees. A recent sequencing of great apes provided a gross overview of these differences. There are inversions, and larger changes in junk DNA that can look like bigger differences, but these have little biological importance, and are not counted in the sequence difference. A difference of one percent is really quite large. For a three-gigabase genome, that works out to 30 million differences. That is plenty of room for big things to happen.

Gross alignment of one chromosome between the great apes. [HSA- human, PTR- chimpanzee, PPA- bonobo, GGO- gorilla, PPY- orangutan (Borneo), PAB- orangutan (Sumatra)]. Fully aligned regions (not showing smaller single nucleotide differences) are shown in blue. Large inversions of DNA order are shown in yellow. Other junk DNA gains and losses are shown in red, pink, and purple. One large-scale jump of a DNA segment is shown in green. One can see that there has been significant rearrangement of genomes along the way, even as most of this chromosome (and others as well) is easily alignable and traceable through the evolutionary tree.


But most of those differences are totally unimportant. Mutations happen all the time, and most have no effect, since most positions (particularly the most variable ones) in our DNA are junk, like transposons, heterochromatin, telomeres, centromeres, introns, intergenic space, etc. Even in protein-coding genes, a third of the positions are "synonymous", with no effect on the coded amino acid, and even when an amino acid is changed, that protein's function is frequently unaffected. The next biggest group of mutations have bad effects, and are selected against. These make up the tragic pool of genetic syndromes and diseases, from mild to severe. Only a tiny proportion of mutations will have been beneficial at any point in this story. But those mutations have tremendous power. They can drag along their local DNA regions as they are positively selected, and gain "fixation" in the genome, which is to say, they are sufficiently beneficial to their hosts that they outcompete all others, with the ultimate result that the mutation becomes universal in the population- the new standard. This process happens in parallel, across all positions of the genome, all at the same time. So a process that seems painfully slow can actually add up to quite a bit of change over evolutionary time, as we see.
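The logic of fixation is easy to see in a toy simulation (a standard Wright-Fisher sketch of my own, not from the sequencing paper): a single beneficial mutation usually drifts to extinction, but once in a while it sweeps to universality, at a rate close to the classic approximation of twice its selective advantage:

    import numpy as np

    # One new mutant with selective advantage s, in a population of N.
    rng = np.random.default_rng(1)
    N, s = 10_000, 0.01

    def sweep():
        freq = 1.0 / N                             # a single new mutant copy
        while 0.0 < freq < 1.0:
            p = freq * (1 + s) / (1 + freq * s)    # selection shifts the odds
            freq = rng.binomial(N, p) / N          # drift resamples the population
        return freq == 1.0                         # True if fixed, False if lost

    trials = 500
    fixed = sum(sweep() for _ in range(trials))
    print(f"fixed in {fixed}/{trials} runs (theory: roughly 2s = {2 * s:.0%})")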

So the hunt was on to find "human accelerated regions" (HAR), which are parts of our genome that were conserved in other apes, but suddenly changed on the way to humans. There are roughly three thousand such regions, but figuring out what they might be doing is quite difficult, and there is a long tail from strong to weak effects. There are two general rationales for their occurrence. First, selection over a genomic region was lost, if its function became unimportant. That would allow faster mutation and divergence from the progenitors. Or second, some novel beneficial mutation happened there, bringing the region under positive selection and to fixation. Some recent work found, interestingly, that clusters of mutations in HAR segments often have countervailing effects, with one major mutation causing one change, and a few other mutations (vs the ancestral sequence) causing opposite changes, in a process hypothesized to amount to evolutionary fine tuning.

A second property of HARs is that they are overwhelmingly not in coding regions of the genome, but in regulatory areas. They constitute fine-tuning adjustments to the timing and amount of gene regulation, not so much changes in the proteins produced. That is, our evolution was more about subtle changes in the management of processes than in the processes themselves. A recent paper delved in detail into HAR5, one of the strongest such regions (that is, strongest prior conservation, compared with changes in the human sequence), which lies in the regulatory regions upstream of Frizzled8 (FZD8). FZD8 is a cell surface receptor, which receives signals from a class of signaling molecules called WNT (wingless and int). These molecules were originally discovered in flies, where they signal body development programs, allowing cells to know where they are and when they are in the developmental program, in relation to the cells next door, and then to grow or migrate as needed. They have central roles in embryonic development, in organ development, and also in cancer, where their function is misused.

For our story, the WNT/FZD8 circuit is important in fetal brain development. Our brains undergo massive cell division and migration during fetal development, and clearly this is one of the most momentous and interesting differences between ourselves and all other animals. The current authors made mutations in mice that reproduce some of the HAR5 sequences, and investigated their effects. 

Two mouse brains at three months of age, one with the human version of the HAR5 region. Hard to see here, but the latter brain is ~7% bigger.

The authors claim that these brains, one with native mouse sequence, and the other with the human sequences from HAR5, have about a seven percent difference in mass. Thus the HAR5 region, all by itself, explains about one fourteenth of the gross difference in brain size between us and chimpanzees. 

HAR5 is a 619 base-pair region with only four sequence differences between ourselves and chimpanzees. It lies 300,000 bases upstream of FZD8, in a vast region of over a million base pairs with no genes. While this region contains many regulatory elements (generally called enhancers or enhancer modules, only some of which are mapped), it is at the same time an example of junk DNA, where most of the individual positions in this vast sea of DNA are likely of little significance. The multifarious regulation by all these modules is of course important, because this receptor participates in so many different developmental programs, and has doubtless been fine-tuned over the millennia not just for brain development, but for every location and time point where it is needed.

Location of the FZD8 gene, in the standard view of the genome at NIH. I have added an arrow that points to the tiny (in relative terms) FZD8 coding region (green), and a star at the location of HAR5, far upstream among a multitude of enhancer sequences. One can see that this upstream region is a vast area (of roughly 1.5 million bases) with no other genes in sight, providing space for extremely complicated and detailed regulation, little of which is as yet characterized.

Diving into the HAR5 functions in more detail, the authors show that it directly increases FZD8 gene expression (about 2-fold, in very rough terms), while deleting the region strongly decreases expression in mice. Of the four individual base changes in the HAR5 region, two have strong (additive) effects increasing FZD8 expression, while the other two have weaker, but still activating, effects. Thus, no compensatory regulation here... it is full speed ahead at HAR5 for bigger brain size. Additionally, a variant in human populations that is associated with autism spectrum disorders also resides in this region, and the authors show that this change decreases FZD8 expression about 20%. Small numbers, sure, but for a process that directs cell division over many cycles in early brain development, this kind of difference can have profound effects.
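To see why 20% matters, consider the compounding (my own rough arithmetic, with made-up illustrative numbers): if FZD8 signaling nudges the fraction of progenitors that keep proliferating each division cycle, the effect grows exponentially over cycles.

    # Hypothetical per-cycle proliferation numbers, chosen only to illustrate
    # compounding; the real parameters of human neurogenesis differ.
    cycles = 12                    # assumed number of progenitor division cycles
    base, boosted = 1.90, 1.95     # avg proliferative daughters per division

    ratio = (boosted / base) ** cycles
    print(f"{100 * (ratio - 1):.0f}% more progenitors after {cycles} cycles")
    # A ~2.6% per-cycle edge compounds to ~37% more progenitors, the kind of
    # leverage by which a single enhancer variant can shift brain size.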


The HAR5 region causes increased transcription of FZD8, in mice, compared to the native version and a deletion.

The HAR5 region causes increased cell proliferation in embryonic day 14.5 brain areas, stained for neural markers.

"This reveals Hs-HARE5 modifies radial glial progenitor behavior, with increased self-renewal at early developmental stages followed by expanded neurogenic potential. ... Using these orthogonal strategies we show four human-specific variants in HARE5 drive increased enhancer activity which promotes progenitor proliferation. These findings illustrate how small changes in regulatory DNA can directly impact critical signaling pathways and brain development."

So there you have it. The nuts and bolts of evolution, from the molecular to the cellular, the organ, and then the organismal, levels. Humans do not just have bigger brains, but better brains, and countless other subtle differences all over the body. Each of these is directed by genetic differences, as the combined inheritance of the six million years since our divergence from chimpanzees. Only with the modern molecular tools can we see Darwin's vision come into concrete focus, as particular, even quantum, changes in the code, and thus the biology, of humanity. There is a great deal left to decipher, but the answers are all in there, waiting.


Saturday, October 18, 2025

When the Battery Goes Dead

How do mitochondria know when to die?

Mitochondria are the energy centers within our cells, but they are so much more. They are primordial bacteria that joined with archaea to collaborate in the creation of eukaryotes. They still have their own genomes, RNA transcription and protein translation. They play central roles in the life and death of cells, they divide and coalesce, they motor around the cell as needed, kiss other organelles to share membranes, and they can get old and die. When mitochondria die, they are sent to the great garbage disposal in the sky, the autophagosome, which is a vesicle that is constructed as needed, and joins with a lysosome to digest large bits of the cell, or of food particles from the outside.

The mitochondrion spends its life (only a few months) doing a lot of dangerous reactions and keeping an electric charge elevated over its inner membrane. It is this charge, built up from metabolic breakdown of sugars and other molecules, that powers the ATP-producing rotary enzyme. And the decline of this charge is a sign that the mitochondrion is getting old and tired. A recent paper described how one key sensor protein, PINK1, detects this condition and sets off the disposal process. It turns out that the membrane charge not only powers ATP synthesis, but powers protein import into the mitochondrion as well. Over the eons, most of the mitochondrion's genes have been taken over by the nucleus, so all but a few of the mitochondrion's proteins arrive via import- about 1500 different proteins in all. And this is a complicated process, since mitochondria have inner and outer membranes (just as many bacteria do), and proteins can be destined to any of four compartments- either membrane, the inside (matrix), or the inter-membrane space.

Textbook representation of mitochondrial protein import, with a signal sequence (red) at the front (N-terminus) of the incoming protein (green), helping it bind successively to the TOM and TIM translocators. 

The outer membrane carries a protein import complex called TOM, while the inner membrane carries an import complex called TIM. These can dock to each other, easing the whole transport process. The PINK1 protein is a somewhat weird product of evolution, spending its life being synthesized, transported across both mitochondrial membranes, and then partially chopped up in the mitochondrial matrix before its remains are exported again and fully degraded. That is when everything is working correctly! When the mitochondrial charge declines, PINK1 gets stuck, threaded through TOM, but unable to transit the TIM complex. PINK1 is a kinase, which phosphorylates itself as well as ubiquitin. So when it is stuck, two PINK1 kinases meet on the outside of the outer membrane, activate each other, and ultimately activate another protein, PARKIN, whose name derives from its importance in Parkinson's disease, which can be caused by an excess of defective mitochondria in sensitive tissues, specifically certain regions and neurons of the brain. PARKIN is a ubiquitin ligase, which attaches the degradation signal ubiquitin to many proteins on the surface of the aged mitochondrion, thus signaling the whole mess to be gobbled up by an autophagosome.
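The sensing logic, as described, is almost digital, and can be caricatured in a few lines (my own cartoon of the model, with a hypothetical threshold value- not code or numbers from the paper):

    # Membrane voltage gates PINK1 import; failed import IS the aging signal.
    def pink1_fate(membrane_potential_mv: float) -> str:
        HEALTHY_THRESHOLD = -140.0   # hypothetical cutoff, for illustration only
        if membrane_potential_mv <= HEALTHY_THRESHOLD:
            # Charged inner membrane: PINK1 is pulled through TOM and TIM,
            # clipped in the matrix, exported, and degraded. No signal is sent.
            return "imported and degraded (mitochondrion healthy)"
        # Depolarized membrane: PINK1 stalls in TOM, stuck kinases pair up,
        # cross-activate, and switch on PARKIN, which ubiquitinates the
        # surface and flags the organelle for autophagy.
        return "stuck in TOM -> dimerize -> activate PARKIN -> mitophagy"

    print(pink1_fate(-160.0))   # healthy, strongly charged inner membrane
    print(pink1_fate(-80.0))    # tired, depolarized inner membrane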

A data-rich figure 1 from the paper shows purification of the tagged complex (top), and then the EM structure at bottom. While the purifications (B, C) show the presence of TIM subunits, these did not show up in the EM structures, perhaps because they were not stable enough or frequent enough in proportion to the TOM subunits. But the PINK1+TOM+VDAC2 structures are stunning, helping explain how PINK1 dimerizes so easily when its translocation is blocked.

The current authors found that PINK1 has convenient cysteine residues that allowed it to be experimentally crosslinked in the paired state, thus freezing the PARKIN-activating conformation. They isolated large amounts of such arrested complexes from human cells, and used electron microscopy to determine the structure. They were amazed to see, not just PINK1 and the associated TOM complex, but also VDAC2, which is the major transporter that lets smaller molecules easily cross the outer membrane. The TOM complexes were beautifully laid out, showing the front end (N-terminus) of PINK1 threaded through each TOM complex, specifically the TOM40 ring structure.

What was missing, unfortunately, was any of the TIM complex, though some TIM subunits did co-purify with the whole complex. Nor was PARKIN or ubiquitin present, leaving out a good bit of the story. So what is VDAC2 doing there? The authors really don't know, though they note that reactive oxygen byproducts of mitochondrial metabolism would build up during loss of charge, acting as a second signal of mitochondrial age. These byproducts are known to encourage dimerization of VDAC channels, which naturally leads, via the complex seen here, to dimerization and activation of the PINK1 protein. Additionally, VDACs are very prevalent in the outer membrane and are prominent ubiquitination targets for autophagy signaling.

To actually activate PARKIN ubiquitination, PINK1 needs to dissociate again, a process that the authors speculate may be driven by binding of ubiquitin by PINK1, which might be bulky enough to drive the VDACs apart. This part was quite speculative, and the authors promise further structural studies to figure out this process in more detail. In any case, what is known is quite significant- that the VDACs template the joining of two PINK1 kinases in mid-translocation, which, when the inner membrane charge dies away, prompts the stranded PINK1 kinases to activate and start the whole disposal cascade. 

Summary figure from the authors, indicating some speculative steps, such as where the reactive oxygen species excreted by VDAC2 sensitize PINK1, perhaps by dimerizing the VDAC channel itself, and where ubiquitin binding by PINK1 and/or VDAC prompts dissociation, allowing PARKIN to come in, get activated by PINK1, and spread the death signal around the surface of the mitochondrion.

It is worth returning briefly to the PINK1 life cycle. This is a protein whose whole purpose, as far as we know, is to signal that mitochondria are old and need to be given last rites. But it has a curiously inefficient way of doing that, being synthesized, transported, and degraded continuously in a futile and wasteful cycle. Evolution could hardly have come up with a more cumbersome, convoluted way to sense the vitality of mitochondria. Yet there we are, doubtless trapped by some early decision which was surely convenient at the time, but results today in a constant waste of energy, only made possible by the otherwise amazingly efficient and finely tuned metabolic operations of PINK1's target, the mitochondrion.



Sunday, October 5, 2025

Cycles of Attention

 A little more research about how attention affects visual computation.

Brain waves are of enormous interest, and their significance has gradually resolved over recent decades. They appear to represent synchronous firing of relatively large populations of neurons, and thus the transfer of information from place to place in the brain. They also induce other neurons to entrain with them. The brain is an unstable apparatus, never entraining fully with any one particular signal (that way lies epilepsy). Rather, the default mode of the brain is humming along with a variety of transient signals and thus brain waves as our thoughts, both conscious and unconscious, wander over space and time.

A recent paper developed this growing insight a bit further, by analyzing forward and backward brainwave relations in visual perception. Perception takes place in a progressive way at the back of the brain in the visual cortex, which develops the raw elements of a visual scene (already extensively pre-processed by the retina) into more abstract, useful representations, until we ... see a car, or recognize a face. At the same time, we perceive very selectively, only attending to very small parts of the visual scene, always on the go to other parts and things of interest. There is a feedback process, once things in a scene are recognized, to either attend to them more, or go on to other things. The "spotlight of attention" can direct visual processing, not just by filtering what comes out of the sausage grinder, but actually reaching into the visual cortex to direct processing to specific things. And this goes for all aspects of our cognition, which are likewise a cycle of search, perceive, evaluate, and search some more.

Visual processing generates gamma waves of information in an EEG, directed to, among other areas, the frontal cortex, which does more general evaluation of visual information. Gamma waves are the highest-frequency brain oscillations (about 50-100 Hz), and thus are the most information-rich per unit time. This paper also confirmed that top-down oscillations, in contrast, are in the alpha / beta frequencies (about 5-20 Hz). What the authors attempted was to link these, showing that the top-down beta oscillations entrain and control the bottom-up gamma oscillations. The idea was to literally close the loop on attentional control over visual processing. This was all done in humans, using EEG to measure oscillations all over the brain, and TMS (transcranial magnetic stimulation) to experimentally induce top-down currents from the frontal cortex as the subjects looked at visual fields.

Correlation of frontal beta frequencies onto gamma frequencies from the visual cortex, while visual stimulus and TMS stimulation are both present. At top left is the overall data, showing how gamma cycles from the back of the brain fall into various portions of a single beta wave (bottom), after TMS induction on the forebrain. There is strong entrainment, a bit like AM radio amplitude modulation, where the higher-frequency signal (one example at top right) sits within the lower-frequency beta signal (bottom right).
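The AM-radio analogy is easy to reproduce with a synthetic signal (a toy illustration of entrainment, not the paper's data or analysis pipeline):

    import numpy as np

    # A slow top-down beta rhythm modulating the amplitude of fast gamma.
    fs = 1000                              # sample rate, Hz
    t = np.arange(0, 1.0, 1 / fs)          # one second of signal
    beta_f, gamma_f = 15, 70               # Hz, within the ranges quoted above

    beta = np.sin(2 * np.pi * beta_f * t)
    envelope = 0.5 * (1 + beta)            # gamma amplitude rides on beta phase
    gamma = envelope * np.sin(2 * np.pi * gamma_f * t)

    # The signature of entrainment: gamma power peaks at a consistent beta phase.
    i = int(np.argmax(gamma ** 2))
    print(f"gamma power peaks where beta = {beta[i]:+.2f} (near its crest)")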

I can not really speak to the technical details and quality of this data, but it is clear that the field is settling into this model of what brain waves are and how they reflect what is going on under the hood. Since we are doing all sorts of thinking all the time, it takes a great deal of sifting and analysis to come up with the kind of data shown here, out of raw EEG from electrodes merely placed all over the surface of the skull. But it also makes a great deal of sense, first that the far richer information of visual bottom-up data comes in higher frequencies, while the controlling information takes lower frequencies. And second, that brain waves are not just a passive reflection of passing reflections, but are used actively in the brain to entrain some thoughts, accentuating them and bringing them to attention, while de-emphasizing others, shunting them to unconsciousness, or to oblivion.


Saturday, September 27, 2025

Dopamine: Get up and Go, or Lie Down and Die

The chemistry of motivation.

A recent paper got me interested in the dopamine neurotransmitter system. There are a limited number of neurotransmitters (roughly a hundred), which are used for all communication at synapses between neurons. The more common transmitters are used by many cells and anatomical regions, making it hazardous in the extreme to say that a particular transmitter is "for" something or other. But there are themes, and some transmitters are more "niche" than others. Serotonin and dopamine are especially known for their motivational valence and involvement in depression, schizophrenia, addiction, and bipolar disorder, among many other maladies.

This paper described the reason why cancer patients waste away- a syndrome called cachexia. This can happen in other settings, like extreme old age, and in other illnesses. The authors ascribe cachexia (using mice implanted with tumors) to the immune system's production of IL6, one of scores of cytokines, or signaling proteins that manage the vast distributed organ that is our immune system. IL6 is pro-inflammatory, promoting inflammation, fever, and production of antibody-producing B cells, among many other things. These authors find that it binds to the area postrema in the brain stem, where many other blood-borne signals are sensed by the brain- signals that are generally blocked by the blood-brain barrier system.

The binding of IL6 at this location then activates a series of neuronal connections that these authors document, ending up inhibiting dopamine signaling out of the ventral tegmental area (VTA) in the lower midbrain, ultimately reducing dopamine action in the nucleus accumbens, where it is traditionally associated with reward, addiction, and schizophrenia. These authors use optically driven engineered neurons at an intermediate location, the parabrachial nucleus (PBN), to reproduce how neuron activation there drives inhibition downstream, as the natural IL6 signal also does.

Schematic of the experimental setup and anatomical locations. The graph shows how dopamine is strongly reduced under cachexia, consequent to the IL6 circuitry the authors reveal.

What is the rationale of all this? When we are sick, our body enters a quite different state- lethargic, barely motivated, apathetic, and resting. All this is fine if our immune system has things under control, uses our energy for its own needs, and returns us to health forthwith, but it is highly problematic if the illness goes on longer. This work shows in a striking and extreme way what had already been known- that prominent dopamine-driven circuits are core micro-motivational regulators in our brains. For an effective review of this area, one can watch a video by Robert Lustig, outlining at a very high level the relationship of the dopamine and serotonin systems.

Treatment of tumor-laden mice with an antibody to IL6 that reduces its activity relieves them of cachexia symptoms and significantly extends their lifespans.

It is something that the Buddhists understood thousands of years ago, and which the Rolling Stones and the advertising industry have taken up more recently. While meditation may not grant access to the molecular and neurological details, it seems to have convinced the Buddha that we are on a treadmill of desire, always unsatisfied, always reaching out for the next thing that might bring us pleasure, but which ultimately just feeds the cycle. Controlling that desire is the surest way to avoid suffering. Nowhere is that clearer than in addiction- real, clinical addictions that are all driven by the dopamine system. No matter what your drug of choice- gambling, sugar, alcohol, cocaine, heroin- the pleasure it gives is fleeting, and alerts the dopamine system to motivate the user to seek more of the same. There are a variety of dopamine pathways, including those affecting Parkinson's and reproductive functions, but the ones at issue here are the mesolimbic and mesocortical circuits, which originate in the midbrain VTA and extend respectively to the nucleus accumbens in the lower forebrain, and to the cerebral cortex. These are integrated with the rest of our cognition, enabling motivation to find the root causes of a pleasurable experience, and raise the priority of actions that repeat those root causes.

So, if you gain pleasure from playing a musical instrument, then the dopamine system will motivate you to practice more. But if you gain pleasure from cocaine, the dopamine system will motivate you to seek out a dealer, and spend your last dollar for the next fix. And then steal some more dollars. This system specifically shows the dampening behavior that is so tragic in addictions. Excess activation of dopamine-driven neurons can be lethal to those cells. So they adjust to keep activation in an acceptable range. That is, they keep you unsatisfied, in order to allow new stimuli to motivate you to adjust to new realities. No matter how much pleasure you give yourself, and especially the more intense that pleasure, it is never enough, because this system always adjusts the baseline to match. One might think of dopamine as the micro-manager, always pushing for the next increment of action, no matter how much you have accomplished before, no matter how rosy or bleak the outlook. It gets us out of bed and moving through our day, from one task to the next.
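The baseline resetting can be captured in a toy update rule (my own illustration, in the spirit of standard reward-prediction-error models- not from the cachexia paper): motivation tracks reward relative to an adaptive baseline, so a constant pleasure stops motivating.

    # Dopamine-like teaching signal: reward minus an adaptive baseline that
    # creeps up toward whatever reward level has become routine.
    def run(rewards, rate=0.3):
        baseline, signals = 0.0, []
        for r in rewards:
            signals.append(r - baseline)
            baseline += rate * (r - baseline)
        return signals

    print([round(s, 1) for s in run([10.0] * 8)])
    # [10.0, 7.0, 4.9, 3.4, 2.4, 1.7, 1.2, 0.8] -> toward zero: the same fix
    # delivers less and less, the treadmill that drives escalation.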

In contrast, the serotonin system is the macro-manager, conveying feelings of general contentment, after a life well-lived and a series of true accomplishments. Short-circuiting this system with SSRIs like Prozac carries its own set of hazards, like lack of general motivation and emotional blunting, but it does not have the risk of addiction, because serotonin, as Lustig portrays it, is an inhibitory neurotransmitter, with no risk of over-excitement. The brain does not re-set the baseline of serotonin the same way that it continually resets the baseline of dopamine.

How does all this play out in other syndromes? Depression is, like cachexia, at least in part a syndrome of insufficient dopamine. Conversely, bipolar disorder in its manic phase appears to involve excess dopamine, causing hyperactivity and wildly excessive motivation, flitting from one task to the next. But what have dopamine antagonists like haloperidol and clozapine been used for most traditionally? As anti-psychotics in the treatment of schizophrenia. And that is a somewhat weird story.

Everyone knows that the medication of schizophrenia is a haphazard affair, with serious side effects and limited efficacy. A tradeoff between therapeutic effects and others that make the recipient worse off. A paper from a decade ago outlined why this may be the case- the causal issues of schizophrenia do not lie in the dopamine system at all, but in circuits far upstream. These authors suggest that ultimately schizophrenia may derive from chronic stress in early life, as do so many other mental health maladies. It is a trail of events that raise the stress hormone cortisol, which diminishes cortical inhibition of hippocampal stress responses, and specifically diminishes the GABA (another neurotransmitter) inhibitory interneurons in the hippocampus. 

It is the ventral hippocampus that has a controlling influence over the VTA, which in turn originates the relevant dopamine circuitry. The theory is that the ventral hippocampus sets the contextual (emotional) tone for the dopamine system, on top of which episodic stimulation takes place from other, more cognitive and perception-based sources. Over-activity of this hippocampal regulation raises the gain of the other signals, raising dopamine far more than appropriate, and also lowering it at other times. Thus treating schizophrenia with dopamine antagonists counteracts the extreme highs of the dopamine system, which in the nucleus accumbens can lead to hallucinations, delusions, paranoia, and manic activity. But it is a blunt instrument, also impairing general motivation, and worsening the cognitive, affective, parkinsonian, and other problems caused by the low dopamine that occurs during schizophrenia in other systems, such as the meso-cortical and the nigrostriatal dopamine pathways.

Manipulation of neurotransmitters is always going to be a rough job, since they serve diverse cells and pathways in our brains. Wikipedia routinely shows tables of binding constants for drugs (clozapine, for instance) to dozens of different neurotransmitter receptors. Each drug has its own profile, hitting some receptors more and others less, sometimes in curious, idiosyncratic patterns, and (surprisingly) across different neurotransmitter types. While some of these may occasionally hit a sweet spot, the biology and its evolutionary background has little relation to our current needs for clinical therapies, particularly when we have not yet truly plumbed the root causes of the syndromes we are trying to treat. Nor is precision medicine in the form of gene therapies or single-molecule tailored drugs necessarily the answer, since the transmitter receptors noted above are not conveniently confined to single clinical syndromes either. We may in the end need specific, implantable and computer-driven solutions or surgeries that respect the anatomical complexity of the brain.


Saturday, September 20, 2025

Gold Standard

The politics and aesthetics of resentment. Warning: this post contains thought crime.

I can not entirely fathom thinking on the right these days. It used to be that policy disputes occurred, intelligent people weighed in from across a reasonable spectrum of politics, and legislation was hammered out to push some policy modestly forward (or backward). This was true for civil rights, environmental protection, deregulation, welfare reform, even gay marriage. That seems to be gone now. Whether it is the atomization of attention and thought brought on by social media, or the mercenary propaganda of organs like FOX news, the new mode of politics appears to be destructive, vindictive spite. A spiral of extremism.

It also has a definite air of resentment, as though policy is not the point, nor is power, entirely, but owning the libtards is the real point- doing anything that would be destructive of liberal accomplishments and ideals. We know that the president is a seething mass of resentments, but how did that transform alchemically into a political movement?

I was reading a book (Deep South) by Paul Theroux that provides some insight. It is generally sour and dismissive, full of a Yankee's disdain for the backwardness of the South. And it portrays the region as more or less third world. Time and again, towns are shadowed by factories closed due to off-shoring. What little industry the South had prior to NAFTA was eviscerated, leaving agriculture, which is increasingly automated and corporatized. It is an awful story of regression and loss of faith. And the author of this process was, ironically, a Southerner- Bill Clinton. Clinton went off to be a smarty-pants, learned the most advanced economic theories, and concluded that NAFTA was a good deal for the US, as it was for the other countries involved, and for our soft power in the post-World War 2 world. The South, however, and a good deal of the Rust Belt, became sacrifice zones for the cheaper goods coming in from off-shore.

What seemed so hopeful in the post-war era, that America would turn itself into a smart country, leading the world in science, technology, as well as in political and military affairs, has soured into the realization that all the smart kids moved to the coasts, leaving a big hole in the middle of the country. The meritocracy accomplished what it was supposed to, establishing a peerless educational system that raised over half the population into the ranks of college graduates. But it opened eyes in other ways as well, freeing women from the patterns of patriarchy, freeing minorities from reflexive submission, and opening our history to quite contentious re-interpretation. And don't get me started on religion!


So there has been a grand conjunction of resentment, between a population sick of the dividends of the educational meritocracy over a couple of generations, and a man instinctively able to mirror and goad those resentments into a destructive political movement. His aesthetic communicates volumes- garish makeup, obscene ties, and sharing with Vladimir Putin a love of gold-gilded surfaces. To the lower class, it may read expensive and successful, but to the well educated, it reeks of cheapness, focusing on surface over substance, a bullying, mob aesthetic, loudly anti-democratic.

Reading the Project 2025 plans for this administration, I had thought we would be looking at a return to the monetary gold standard. But no, gold has come up in many other guises, not that one. Gold crypto coins, Gold immigration card, Oval office gold, golden hair. But most insulting of all was the ordering up of gold standard science. The idea that the current administration is interested in, or capable of, sponsoring high quality personnel, information or policy of any kind has been thoroughly refuted by its first months in office. The resentment it channels is directed against, first and foremost, those with moral integrity. Whether civil servants, diplomats, or scientists, all who fail to bend the knee are enemies of this administration. This may not be what the voters had in mind, but it follows from the deeper currents of frustration with liberal dominance of the meritocracy and culture.

But what is moral integrity? I am naturally, as a scientist, talking about truth. A morality of truth, where people are honest, communicate in truthful fashion, and care about reality, including the reality of other people and their rights / feelings. As the quote has it, reality has a well-known liberal bias. But it quickly becomes apparent that there are other moralities. What we are facing politically could be called a morality of authority. However alien to my view of things, this is not an invalid system, and it is central to the human condition, modeled on the family. Few social systems are viable without some hierarchy and relation of submission and authority. How would a military work without natural respect for authority? And just to make this philosophical and temperamental system complete, one can posit a morality of nurture as well, modeled on mothering, unconditional love, and encouragement.

This triad of moralities is essential to human culture, each component in continual dynamic tension. Our political moment shows how hypertrophy of the morality of authority manifests. Lies and ideology are a major tool, insisting that people take their reality from the leader, not their own thoughts or from experts who hew to a morality of truth. Unity of the culture is valued over free analysis. As one can imagine, over the long run of human history, the moralities of nurture and authority have been dominant by far. They are the poles of the family system. It was the Enlightenment that raised the morality of truth as an independent pole in this system for the culture at large, not just for a few scholars and clerics. Not that truth has not always been an issue in people's lives, with honesty a bedrock principle, and people naturally caring whether predicted events really happen, whether rain really falls, the sun re-appears, etc. But as an organizing cultural principle that powers technological and thus social and cultural progress, it is a somewhat recent phenomenon.

It is notable that scientists, abiding by a morality of truth, tend to have very peaceful cultures. They habitually set up specialized organizations, mentor students, and collaborate nationally and internationally. Scientists may work for the military, but within their own cultures, have little interest in starting wars. It is however a highly competitive culture, with critical reviewing, publishing races, and relentless experimentation designed to prove or disprove models of reality. Authority has its place, as recognized experts get special privileges, and established facts tend to be hard to move. At risk of sounding presumptuous, the morality of truth represents an enormous advance in human culture, not to be lightly dismissed. And the recent decades of science in the US have been a golden age that have produced a steady stream of technological advance and international power, not to mention Nobel prizes and revelations of the beauty of nature. That is a gold standard. 


Saturday, September 6, 2025

How to Capture Solar Energy

Charge separation is handled totally differently by silicon solar cells and by photosynthetic organisms.

Everyone comes around sooner or later to the most abundant and renewable form of energy, which is the sun. The current administration may try to block the future, but solar power is the best power right now and will continue to gain on other sources. Likewise, life started by using some sort of geological energy, or pre-existing carbon compounds, but inevitably found that tapping the vast powers streaming in from the sun was the way to really take over the earth. But how does one tap solar energy? It is harder than it looks, since it so easily turns into heat and lost energy. Some kind of separation and control are required, to isolate the power (that is to say, the electron that was excited by the photon of light), and harness it to do useful work.

Silicon solar cells and photosynthesis represent two ways of doing this, and are fundamentally, even diametrically, different solutions to this problem. So I thought it would be interesting to compare them in detail. Silicon is a semiconductor, torn between trapping its valence electrons in silicon atoms and distributing them around in a conduction band, as in metals. With elemental doping, silicon can be manipulated to bias these properties, and that is the basis of the solar cell.

Schematic of a silicon solar cell. A static voltage exists across the N-type to P-type boundary, sweeping electrons freed by the photoelectric effect (light) up to the conducting electrode layer.


Solar cells have one side doped to N status, and the bulk set to P status. While the bulk material is neutral on both sides, at the boundary, a static charge scheme is set up where electrons are attracted into the P-side, and removed from the N-side. This static voltage has very important effects on electrons that are excited by incoming light and freed from their silicon atoms. These high-energy electrons enter the conduction band of the material, and can migrate. Due to the prevailing field, they get swept towards the N side, and thus are separated and can be siphoned off with wires. The current thus set up can exert a pressure of about 0.6 volts. That is not much, nor is it equivalent to the 2 to 3 electron volts received from each visible photon. So a great deal of energy is lost as heat.
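The arithmetic of that loss is stark (simple bookkeeping on the numbers above, nothing more):

    # How much of each photon's energy does a ~0.6 V silicon cell deliver?
    cell_voltage = 0.6                    # volts, i.e. ~0.6 eV per electron
    for photon_ev in (2.0, 3.0):          # visible photons, per the text
        captured = cell_voltage / photon_ev
        print(f"{photon_ev:.0f} eV photon: {captured:.0%} captured, "
              f"{1 - captured:.0%} lost as heat")
    # 30% captured at best, 20% at worst -- most of the energy becomes heat.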

Solar cells do not care about capturing each energized electron in detail. Their purpose is to harvest a bulk electrical voltage + current with which to do some work in our electrical grids. Photosynthesis takes an entirely different approach, however. This may be mostly for historical and technical reasons, but also because part of its purpose is to do chemical work with the captured electrons. Biology tends to take a highly controlling approach to chemistry, using precise shapes, functional groups, and electrical environments to guide reactions to exact ends. While some of the power of photosynthesis goes toward pumping protons across the membrane, setting up a gradient later used to make ATP, about half is used for other things, like splitting water to replace lost electrons, and making reducing chemicals like NADPH.

A portion of a poster about the core processes of photosynthesis. It provides a highly accurate portrayal of the two photosystems and their transactions with electrons and protons.

In plants, photosynthesis is a chain of processes focused around two main complexes, photosystems I and II, and all occurring within membranes- the thylakoid membranes of the chloroplast. Confusingly, photosystem II comes first, accepting light, splitting water, pumping some protons, and sending out a pair of electrons on mobile plastoquinones, which eventually find their way to photosystem I, which jacks up their energy again using another quantum of light, to produce NADPH. 

Photosystem II is full of chlorophyll pigments, which are what get excited by visible photons. But most of them are "antenna" chlorophylls, passing the excitation along to a pair of centrally located chlorophylls. Note that the light energy is at this point passed as a molecular excitation, not as a free electron. This passage may happen by Förster resonance energy transfer, but is so fast and efficient that stronger Redfield coupling may be involved as well. Charge separation only happens at the reaction center, where an excited electron is popped out to a chain of recipients. The chlorophylls are organized so that the pair at the reaction center have a slightly lower energy of excitation, thus serve as a funnel for excitation energy from the antenna system. These transfers are extremely rapid, on the picosecond time scale.

It is interesting to note tangentially that only the red excitation energy is used. Chlorophylls have two excitation states, excited by red light (680 nm = 1.82 eV) and blue light (400-450 nm, 2.8-3.1 eV) (note the absence of green absorbance). The significant extra energy from blue light is wasted, dissipated as the excited electron relaxes to the lower excitation state, which is then passed through the antenna complex as though it had come from red light.

Charge separation is managed precisely at the photosystem II reaction center through a series of pigments of graded energy capacity, sending the excited electron first to a neighboring chlorophyll, then to a pheophytin, then to a pair of iron-coordinated quinones, which then pass two electrons to a plastoquinone that is released to the local membrane, to float off to the cytochrome b6f complex. In photosystem II, each pair of these photon-driven charge separations also powers the splitting of one water molecule (giving two electrons to replace those sent away, and releasing two protons). So the whole process, just within photosystem II, yields, per four light quanta, four protons pumped from one side of the membrane to the other. Since the ATP synthase uses about three protons per ATP, this nets just over one ATP per four photons.
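
That last bit of arithmetic, spelled out with the post's own round numbers (the real thylakoid budget is messier, since the cytochrome b6f complex pumps additional protons downstream):

```python
# Rough bookkeeping of the photosystem II proton yield described above.
# Follows the post's own accounting (4 photons -> 4 protons, ~3 protons
# per ATP); the real thylakoid stoichiometry is messier.

photons = 4
protons_pumped = 4      # per 4 quanta, via water splitting and pumping
protons_per_atp = 3     # approximate cost at the ATP synthase

atp_yield = protons_pumped / protons_per_atp
print(f"{photons} photons -> {protons_pumped} protons -> "
      f"{atp_yield:.2f} ATP (~{atp_yield / photons:.2f} ATP per photon)")
```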

Some of the energetics of photosystem II. The orientations and structures of the reaction center paired chlorophylls (Pd1, Pd2), the neighboring chlorophyll (Chl), and then the pheophytin (Ph) and quinones (Qa, Qb) are shown in the inset. Energy of the excited electron is sacrificed gradually to accomplish the charge separation and channeling, down to the final quinone pairing, after which the electrons are released to a plastoquinone and sent to another complex in the chain.

So the principles of silicon and biological solar cells are totally different in detail, though each gives rise to a delocalized field- one of electrons flowing at a low potential, the other of protons used later for ATP generation. Each energy system must have a way to pop off an excited electron in a controlled, useful way that prevents it from recombining with the positive ion it came from. That is why there is such an ornate conduction pathway in photosystem II to carry that electron away. Overall, points go to the silicon cell for elegance and simplicity, and we in our climate crisis are the beneficiaries, if we care to use it.

But the photosynthetic enzymes are far, far older. A recent paper pointed out that not only are photosystems II and I clearly cousins of each other, but it is likely that, contrary to the consensus heretofore, photosystem II is the original version, at least of the various photosystems that currently exist. All the other photosystems (including those in bacteria that lack the oxygen-evolving ability) carry traces of the oxygen evolving center. It makes sense that getting electrons- in this case from water- is a fundamental part of the whole process, even though that chemistry is quite challenging.

That in turn raises a big question- if oxygen-evolving photosystems are primitive (originating very roughly with the last common ancestor of all life, about four billion years ago), then why was earth's atmosphere oxygenated only from two billion years ago onward? It had been assumed that this turn in Earth's history marked the evolution of photosystem II. The authors point out additionally that there is also evidence for the respiratory use of oxygen from these extremely early times, despite the lack of free oxygen in the atmosphere. Quite perplexing (and the authors decline to speculate), but one gets the distinct sense that life, while surprisingly complex and advanced from early times, was possibly not operating at the scale it does today. For example, colonization of land had to await the buildup of sufficient oxygen in the atmosphere to provide a protective ozone layer against UV light. It may have taken the advent of eukaryotes, including cyanobacteria-harnessing plants, to raise overall biological productivity sufficiently to overcome the vast reductive capacity of the early earth. On the other hand, speculation about the evolution of early life based on sequence comparisons (as these authors do) is notoriously prone to artifacts, since what evolves at vanishingly slow rates today (such as the photosystem core proteins) must originally have evolved at quite a rapid clip to attain the functions now so well conserved. We simply cannot project ancient ages (at the four-billion-year time scale) from current rates of change.


Saturday, August 23, 2025

Why Would a Bacterium Commit Suicide?

Our innate immune system, including suicide of infected cells, has antecedents in bacteria.

We have a wide variety of defenses against pathogens, from our skin and its coating of RNase and antimicrobial peptides, to the infinite combinatorial firepower of the adaptive immune system, which is what vaccines prime. In between is something called the innate immune system, which is built-in and static rather than adaptive, but very powerful nonetheless. It is largely built around particular proteins that recognize common themes in pathogens, like the free RNA and DNA of viral genomes, or the lipopolysaccharide that coats many bacteria. There are also internal damage signals, such as cellular proteins that have leaked out and are visible to wandering immune cells, that raise similar alarms. The alarms lead to inflammation, the gathering of immune cells, and hopefully to resolution of the problem.

One powerful defensive strategy our cells have is apoptosis, or cellular suicide. If the signals from an incoming infection are too intense, a cell, in addition to activating its specific antiviral defenses, goes a few steps further and generates a massive inflammasome that rounds up and turns on a battery of proteases that chew up the cell, destroying it from inside (strictly speaking, this inflammatory variant of suicide is called pyroptosis). The pieces are then strewn around to be picked up by the macrophages and other cleanup crews, which hopefully can learn something from the debris about the invading pathogen. Particular targets of these proteases are the gasdermins, which are activated by this proteolysis and then assemble into huge pores that plant themselves into the plasma membrane and mitochondrial membranes, rapidly killing the cell by collapsing all the ion gradients across these membranes.

A human cell committing apoptosis, and falling apart.

A recent paper showed that key parts of this apparatus are present in bacteria as well. It was both technically interesting, since the authors relied on a lot of AI tools to discern the rather distant relationship between our innate immune proteins and the bacterial receptors for their pathogens (that is to say, phages- the viruses of bacteria), and generally intriguing, because suicide is thought to be a civilized behavior of cells in multicellular organisms, protecting the rest of the body from spread of the pathogen. Bacteria, despite living in mucky biofilms and other kinds of colonies, are generally thought to be loners, only out for their own reproduction. Why would they kill themselves? Well, anytime they are in a community, that community is almost certainly composed of relatives, probably identical clones of a single founding cell. So it would be a highly related community indeed, and well worth protecting in this way.

A bacterial gasdermin outruns phages infecting the cell. Two kinds of cells are mixed together here, ones without a gasdermin (green) and ones with (black). All are infected at zero time, and a vital dye is added (pink) that only gets into cells through large pores, like the gasdermin pore. At 45 minutes and after, the green (control) cells are dying and getting blown apart by escaping phages. On the other hand, the gasdermin+ cells develop pores and get stained pink, showing that they are dead too. But they don't blow up, indicating that they have shut down phage propagation.

The researchers heard that some bacteria have gasdermins, so they wondered whether they have the other parts of the system- the proteases and the sensor proteins. And indeed, they do. While traditional sequence similarity analysis could not detect it, structural comparison, courtesy of the AlphaFold program, showed that a protease in the same operon as gasdermin had CARD domains. These domains are signatures of caspases and of caspase-interacting proteins, like the sensor proteins in the human innate immune system. They bind other CARD domains, thus mediating assembly of the large complexes that lead to inflammation and apoptosis.
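
As a sketch of how structure-over-sequence detection works in general- not the paper's actual pipeline- one can align a predicted model against a known CARD domain and read out the TM-score as a fold-similarity signal. This assumes the standard TMalign command-line tool is installed; the file names are placeholders:

```python
# Sketch of structure-based homology detection: align a predicted model
# against a reference fold and use the TM-score as a similarity signal.
# Assumes the TMalign CLI is installed; file names are placeholders.

import re
import subprocess

def tm_score(model_pdb: str, reference_pdb: str) -> float:
    """Run TMalign and pull out the first reported TM-score."""
    out = subprocess.run(["TMalign", model_pdb, reference_pdb],
                         capture_output=True, text=True, check=True).stdout
    match = re.search(r"TM-score=\s*([0-9.]+)", out)
    if match is None:
        raise ValueError("no TM-score found in TMalign output")
    return float(match.group(1))

# Scores above ~0.5 generally indicate the same fold, even when the
# sequences have drifted beyond recognition.
if __name__ == "__main__":
    print(tm_score("bacterial_protease_model.pdb", "human_card_domain.pdb"))
```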

Structure of the bacterial CARD domain, courtesy of AlphaFold, showing some similarity with a human CARD domain, which was not very apparent on the sequence level.

The operon of this bacterium, which encodes the whole system- gasdermin, protease (two of them), and sensor.

The researchers then raised their AI game by using another flavor of AlphaFold to predict interactions that the bacterial CARD/protease protein might have. This showed an interaction with another protein in the same operon, with similarity to NLR sensor proteins in humans- an interaction they later confirmed in vitro. This suggests that this bacterium, and many bacteria, have the full circuit of sensor for incoming phage, activatable caspase-like protease, and then cleavable gasdermin as the effector of cell suicide.
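
The screening logic here can be sketched hypothetically in a few lines. The predict_complex helper below is an assumed stand-in for an AlphaFold-Multimer-style predictor returning an ipTM-like interface confidence; it is not an API from the paper:

```python
# Hypothetical sketch of an all-vs-all interaction screen: predict a
# complex for each pair of operon proteins and keep pairs with high
# interface confidence. `predict_complex` is an assumed stand-in for an
# AlphaFold-Multimer-style predictor, not a real API from the paper.

from itertools import combinations

def predict_complex(seq_a: str, seq_b: str) -> float:
    """Placeholder: return an ipTM-like interface confidence in [0, 1]."""
    raise NotImplementedError("wrap your structure predictor here")

def screen_operon(proteins: dict[str, str], cutoff: float = 0.75):
    """Yield protein pairs whose predicted interface looks confident."""
    for (name_a, seq_a), (name_b, seq_b) in combinations(proteins.items(), 2):
        score = predict_complex(seq_a, seq_b)
        if score >= cutoff:
            yield name_a, name_b, score
```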

A comparison of related operons from several other bacteria.

Looking at other bacteria, they found that many have similar systems. Some link to other effectors, rather than a pore-forming gasdermin, but most share the sensor-to-protease circuit that is the core of this defense system. Lastly, they also asked what triggers this whole system from the incoming phage. The answer, in this case, is a phage protein called rIIB. Unfortunately, it is not clear either what rIIB does for the phage, or whether it triggers the CARD/gasdermin system by activating the bacterial NLR sensor protein, as would be assumed. What is known, however, is that rIIB has a function in defending phage against another bacterial defense system, called RexAB. Thus it looks as though this particular arms race has ramified into a complicated back and forth, as bacteria try as best they can to insure themselves against mass infection.