Showing posts with label article review. Show all posts

Saturday, April 11, 2026

Pumping Calcium

An ornate ion pump manages rapid outflow of calcium.

In the beginning, the egg cell experienced a wave of calcium release, triggered by union with a sperm cell. This blocked other sperm from entering, and prepared the egg to become a zygote and embark on embryogenesis. It is but one example of the pervasive role of calcium signaling among animals. Another is the muscle activation cycle, which relies on calcium release from the specialized sarcoplasmic reticulum (in response to nerve activation) to trigger contraction of the cell as a whole. Generally, calcium is kept very low in the cytoplasm, and high in the endoplasmic reticulum and outside the cell. Thus, channels gated by electrical activation or other signals can cause rapid cytoplasmic calcium spikes and signal widely within a cell. 

On the flip side, there have to be pumps that keep the cytoplasmic concentration low, and a recent paper elucidates the structure of one such pump that is remarkably fast, while also closely regulated. It is an impressive machine. PMCA2 is an ATP-using calcium pump that sits in the plasma membrane and carries out what is called the Post-Albers cycle. This is a flip-switch mechanism for pumping ions, where ATP drives conformational switches that alternately expose ion binding sites to each side of the membrane. When the pore is open to the cytoplasm, there is no competition from the higher concentrations outside, so the active site, which is high-affinity in this conformation, can bind one internal calcium ion. Then, after the conformational switch, the pore is exposed to the outside, and at the same time the site is reconfigured to be lower-affinity, releasing the calcium ion into a high-concentration environment. Neurons in particular use calcium signaling extensively to operate synapses and regulate growth and development. Their rapid and frequent signaling requires a pump with especially high capacity. PMCA2 operates at a maximal rate of several thousand Ca2+ ions pumped per second.

Cartoon of the Post-Albers cycle, which is shared by a large family of active ATP-using pumps that transfer ions against their chemical concentration gradient. M is the main transmembrane domain of the pump, where the ions traverse the membrane. The N, P, and A domains are regulatory, especially binding and cleaving ATP at an interface between the N, P, and A domains. The cycle links power steps (1,2) with conformational changes that carefully gate the pumping process.
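As a back-of-envelope summary, the cycle can be written as a four-state loop whose throughput is set by the turnover rate. This is only a sketch: the state names follow the standard E1/E2 nomenclature, the one-calcium-per-cycle stoichiometry is from the text, and the rate used is just the "several thousand per second" figure rounded, not a measured constant.

```python
# Toy sketch of the Post-Albers cycle (illustrative only; the turnover
# rate below is a rounded figure, not a measured rate constant).

POST_ALBERS_CYCLE = [
    "E1: pore open to cytoplasm; high-affinity site binds one Ca2+",
    "E1-P: ATP phosphorylates the P domain, occluding the ion",
    "E2-P: conformational flip; pore opens outward, affinity drops, Ca2+ released",
    "E2: dephosphorylation; counter-ions carried inward, pump resets to E1",
]

def ions_pumped(turnover_hz: float, seconds: float, ca_per_cycle: int = 1) -> int:
    """Ca2+ ions exported if every cycle completes at the given turnover rate."""
    return int(turnover_hz * seconds * ca_per_cycle)

# At ~3000 cycles/s, a single pump clears tens of ions during a
# 10-millisecond signaling event:
print(ions_pumped(turnover_hz=3000, seconds=0.010))  # 30
```

Multiplied over the many pump molecules in a neuronal membrane, this is how cytoplasmic calcium can be reset quickly between signaling events.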

And that is not all. Since calcium has a charge of 2+ and this pump does not alter net charge across the membrane, the pump simultaneously has binding sites for counter-ions (generally two OH-) that are transferred in the opposite direction from the calcium. Not only that, but every pump of this kind requires regulation of various kinds. PMCA2 is activated by phosphatidylinositol 4,5-bisphosphate (PIP2), which is another important signaling molecule, generated by specific PI kinases in response to activation of G-protein coupled receptors or protein kinase C, which may respond to external signals. In very general terms, these tend to be pro-growth or stress-induced pathways. These regulatory processes can tune the overall rate of recovery from rapid Ca2+ signaling events, by adjusting the level and activity of pumps like PMCA2. 

ATP binds at the N/P/A domain interface, and its hydrolysis (and loss of ADP) generates extensive shape changes, extending into the transmembrane M domain. At the very bottom, the calcium ion is shown in green, bound inside the M domain pumping channel. The motions here are subtle, but enough to dramatically reshape the calcium channel.

The authors, using various substrate variants and other tricks, were able to develop structures of PMCA2 in several steps of the pumping cycle, using cryo-electron microscopy. The ATPase site in the N domain (red) is far from the channel that conducts the calcium ion (brown, far bottom). They show extensive shape changes from binding or losing the ATP molecule, though these mostly concern the intracellular domains (red, blue, yellow). The effects on the transmembrane pore domain are rather subtle, shown at right. The authors claim that, compared to other pumps of this large family, the structural changes are significantly smaller, suggesting that evolution for speed has made the mechanism more efficient, with less wasted motion per conduction event, at least in the channel region itself.

Relation of the PIP2 binding domain (orange/red stick figures) to the calcium core binding site. PIP2 appears to be essential for rapid pump operation. At bottom is shown some schematics of the gating provided by PIP2 in bound and unbound states, especially via the D873 side chain (negatively charged aspartic acid).


They also find that the activating molecule PIP2 is neatly parked right next to the main calcium binding and conduction region, and is more or less essential for enzyme activity. In the graph above (e), they show that several single mutations made in the high-affinity calcium binding site, for example switching the negatively charged D873 for the positively charged K (lysine), kill ion pumping activity. Mutation of the PIP2 binding pocket (KKQ->TLL, around position 347) likewise kills enzyme activity.

Relation of the counter-ion channel (red dots) with the calcium channel. Both are essential parts of the mechanism. Closeups with the coordinating protein side chains shown on the right.

The whole mechanism is alluded to in the last figure, where the central calcium binding site is shown, with the general direction of calcium pumping. The counter-ion transport area is shown nearby as a flurry of red dots (standing for water molecules, which at this scale are interchangeable with OH- ions). Specific single mutations in either area, either changing negatively charged E412 to positively charged lysine at the calcium binding pocket, or changing polar S877 in the water/hydroxide binding area to the bulky and hydrophobic F (phenylalanine), each kill pumping activity (graph). 

While it would be ideal to have a more dynamic representation of what is going on, the new structures give tremendous detail, including the associated ATP, PIP2, calcium, and water molecules. The mutations also nail down several functional points. Obviously a rather intricate and well-oiled machine that keeps its bit of cellular calcium homeostasis on an even keel. It is hard to believe that the sum of thousands of machines like this one is life, but the deeper we look the more true that appears to be.


Saturday, April 4, 2026

Not Every Transcript is Golden

 Reflections on junk DNA, and junk transcripts.

Some time ago, a large project in molecular biology determined that most regions of the genome are transcribed. The authors and most observers took this to mean that most regions are functional, quite in contrast to the reigning theory up to that point, that our genomes host a smattering of genes floating in a sea of "junk" DNA. That theory was based on the now-ancient observations of reannealing curves for bulk DNA from humans and other species, which found that most of our DNA re-anneals very quickly because it is repetitive. Most of our genomes (60%) are taken up with LINE repeats, SINE repeats, old retro-transposons, stray duplications, and other repetitive material that, at first glance, seems like junk. There has been a battle ever since, between proponents of junk DNA and those who see function around every corner. As we learn more about the genome, many more functions have indeed come to light, like distant enhancers and regulatory RNAs of many flavors. But overall, there still seems to be a lot of junk. 

A recent paper took an oblique shot at this field, looking at the profusion of alternative gene transcripts, which can number into the hundreds for a single gene. (This was also reviewed.) These are generally called isoforms, and arise from the variable ways one gene's RNA products can be initiated, terminated, and spliced. So not only are most regions of the genome transcribed in some form, actively transcribed regions can be transcribed and processed in myriad ways to yield different RNA products. Here again, there has been an analogous argument, about whether every such isoform has a function, or whether isoforms might arise from more or less sporadic processes, often as unintended and non-functional sparks coming out of the machinery. The importance of isoforms is very well documented in many cases, so the possibility of function, sometimes highly conserved, is not in question- only the importance of every last variation in combinatorial collections of isoforms that can number into the hundreds.

Here is an image from the first page (of about six pages) of RNA transcripts coming off the notorious BRCA1 gene, which is intensely studied for its role in breast cancer. Each line is a distinct mRNA transcript. Each darker bar is an exon, which are separated by introns. The darker colored exons are in the protein coding region, while the lighter exons signify the untranslated upstream and downstream ends. I count about 315 transcripts described for this genetic locus. The idea that each of these has some evolutionarily constrained and important function is, on the face of it, absurd.

The authors took an interesting evolutionary approach, reasoning that species with larger population sizes experience more stringent purifying selection, and thus should (in theory) show tighter control over stray genomic products such as isoforms, if most transcript isoforms are neutral (or even deleterious) accidents, rather than intentional and functional forms. Thankfully, animals come in a wide range of population sizes, from insects to crocodiles and primates, spanning very large to very small. While population size is hard to calculate, several convenient proxies are known, like lifespan, body size, etc. When they totted everything up, they saw clear correlations between these proxies and the number of alternative RNA products per gene- also termed transcript diversity. They sliced up the data by the organ where the RNA was expressed, and by the source of the RNA variation- either different initiation, different termination, or different splicing. In all cases the trend was the same. In species with larger population sizes, the diversity of transcripts was lower, agreeing with their hypothesis that when greater selective force is available, the slop from the transcription and transcript processing machinery declines.

The authors draw correlations between alternative splicing diversity in an organism's cells and its population size. 
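The logic of that comparison can be sketched numerically. Below is a minimal, self-contained Spearman rank correlation (a standard statistic for such proxy comparisons, though the paper's exact methods may differ), run on entirely invented numbers purely to show the expected direction of the trend.

```python
# Sketch of a proxy-vs-diversity correlation test. The species data below
# are HYPOTHETICAL illustrative numbers, not from the paper.

def ranks(xs):
    """Rank values from smallest to largest (ties not handled; fine for toy data)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    return pearson(ranks(x), ranks(y))

# Body mass is a common inverse proxy for population size: bigger bodies,
# smaller populations, weaker purifying selection. A positive correlation
# with isoform counts would match the paper's claim.
body_mass_kg = [0.001, 0.02, 0.5, 5.0, 70.0, 4000.0]   # hypothetical species
isoforms_per_gene = [1.8, 2.1, 2.6, 3.0, 3.4, 4.1]     # hypothetical counts
print(round(spearman(body_mass_kg, isoforms_per_gene), 2))  # 1.0
```

The toy data are perfectly monotonic, so the correlation is 1.0; the real-world correlations reported, as noted below, are significant but far weaker.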

The authors additionally note that there is a similar relationship between alternative splice site usage and expression level of a gene. That is, the higher the gene expression, the less likely that minor splice sites are used, indicating that here again, higher selective pressure helps to clear out non-functional off-products of the transcription apparatus.

The correlations found here are only that- correlations. While significant, they are not terribly strong, let alone stark. So it is evident that our gene expression machinery has a lot of play in it, and this falls on a spectrum from deleterious to critically functional. It is, after all, machinery, not divine. It is also grist for evolution itself- it is useful to have some slop so that there is always some diversity in the gearing to accommodate new selective pressures. But the idea that just because a distinct transcript exists, it is biologically functional, or that, similarly, because a genomic region is transcribed, it is a "gene" rather than junk DNA... that does not hold water. Every nucleotide in the genome has its own unique selective constraints, and for many of them, that constraint is zero.



Saturday, March 28, 2026

Death and Resurrection ... Of a Gene

The SLAMF9 gene became non-functional in the human lineage, and then later was re-activated. Why?

Biology is amazingly intricate, but it is often also needlessly complex- evidence for the haphazard, if eventually pointed, mechanisms of the evolutionary process. We will take up the discussion of "junk" DNA again next week, but molecular biology is full of redundant and excessive processes, which should certainly be mystifying from a "design" perspective. At the frontier of natural selection are neutral and near-neutral genetic elements, which change over time due to chance, lacking selection pressure towards conservation. Pseudogenes (of which we have about 20,000- almost as many as functional genes) are one form of neutral element. They are typically remnants of functional genes that have been duplicated and inactivated by mutation. They are a lively area of genome annotation because it is hard to be sure that they are really dead. Despite what looks like an inactivating mutation, they typically still produce RNA transcripts, and may produce partial or alternative proteins as well. The literature is full of experiments finding products and activities from genes annotated elsewhere as pseudogenes. And what looks like a pseudogene from one sample might just be an allele, the same gene being whole and active in other people.

So, it is hard to know what any particular genetic region is doing without a lot of evolutionary, functional, and even population analysis. A recent paper looked deeply at one gene- a gene that seems to have flipped back and forth between functional and non-functional states in the human lineage. It is a rare example of a gene coming back from what is usually a one-way trip into mutational oblivion, once its function- and thus selective pressure for conservation- have disappeared.

SLAMF9 is one of a family (signaling lymphocyte activation molecule family) of surface receptors that occur in many cells of the immune system, help activate responses in these cells, and also recognize some viruses and bacteria. They bind to each other and to other components of the immune system, creating complex signaling networks. Genes involved in our immune systems are commonly subject to rapid evolution, the arms race against our many pathogens being relentless. Sometimes that takes the form of gene inactivation, if a particular receptor, for instance, has been turned against us by a pathogen that uses it for binding and cell entry. 

This week's authors were facing a conundrum. They were studying SLAMF9, and found the mouse version easy to clone and express in the lab. But the human version ... that was another story, frustratingly impossible to express in usable amounts. When they looked at the protein sequence, they were in for a big surprise:

At the front end of SLAMF9, there is very strong conservation across mammals... except when it comes to humans! The signal peptide is what directs this protein to be inserted into the plasma membrane, and is cleaved off the mature protein. In red is highlighted the region starkly different in humans, which naturally affects (not in a good way) the signal cleavage process. "a" and "b" point to important domains of the cytoplasmic side of the final protein, which are just barely preserved/conserved in the human form.

This alignment among various mammalian versions (orthologs) of SLAMF9 shows that they are all pretty much the same... except for the human version. All the way from mouse to chimpanzee, nothing has changed at the front end of this protein. That is amazing in itself, showing very strong conservation. But then, after our lineage split from chimpanzees, something weird happened. A small segment at the front of this protein is totally different. This area is important because it carries the cleavage site of the signal sequence. The signal sequence directs the protein to be sent to the membrane (as this is a trans-membrane receptor), and this cleavage site is degraded, explaining why the authors' attempts to express this protein went so poorly. It might be enough for modest expression in the natural setting, but not enough for their investigations.

At the DNA level, it is clear that what happened to the protein was a double frame shift in translation: out of frame at the first mutation, then back in frame at the second. The mutations must have been independent events, but the order of their occurrence is not known. The first intron trails off to the left, while the coding sequence trails off to the right.

When they looked at the DNA sequence, the reason for this change in the protein sequence became clearer. There was a frame shift, with only small changes in the DNA sequence that led to the bigger change in the protein sequence. On the left, there is a shift in the splice site at the end of the first intron (splice acceptor). This shifts the mRNA product by four bases (vs the start site of translation), creating a frame shift in translation, as portrayed in the amino acid codes given. On the right, there is a one nucleotide deletion, causing another frame shift that brings the translation back into the normal frame. 
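The arithmetic of these compensating shifts (+4 bases, then -1, for a net of +3, exactly one codon) can be checked with a toy sequence. Everything below is invented for illustration; it is not the real SLAMF9 sequence.

```python
# Toy demonstration of the double frameshift described above.
# The sequence is invented; it is NOT the actual SLAMF9 sequence.

def codons(seq: str) -> list[str]:
    """Split a coding sequence into triplets, dropping any incomplete tail."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

head, middle, tail = "ATG", "GCAGCAGCA", "AAGCCA"
ancestral = head + middle + tail
print(codons(ancestral))  # ['ATG', 'GCA', 'GCA', 'GCA', 'AAG', 'CCA']

# 1) The shifted splice acceptor adds four extra bases near the front,
#    knocking everything downstream out of frame (+4 = +1 mod 3):
mut1 = head + "GTGT" + middle + tail
print(codons(mut1)[-2:])  # ['AAA', 'GCC'] -- the tail is misread

# 2) A single-base deletion downstream brings the net change to +3 bases,
#    i.e. one whole codon, so the tail reads in the original frame again:
mut2 = head + "GTGT" + middle[:-1] + tail
print(codons(mut2)[-2:])  # ['AAG', 'CCA'] -- tail back in frame
```

Note that the stretch between the two mutation sites still reads in the wrong frame, which is exactly the "totally different" segment highlighted in red in the alignment figure above.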

They sampled all the available archaic genome sequences from the human lineage- Neanderthals and Denisovans- and each was the same as the current human sequence. So, whatever happened did so between the split from chimpanzees and the advent of these Homo species. And what happened were two distinct events- the first frame shift and the second frame shift are independent genetic mutations. 

Which happened first? That is uncertain, but the authors show that the right-most frame shift (called g.621delT) did not influence the change in the splice site. The splice site change was caused by a series of about six mutations within the first intron (not shown), which shifted the pattern of mRNA self-hybridization that helps direct splice site selection. So it is likely that the splice site change happened first, essentially killing the gene, and that the downstream frameshift happened later to rescue it in a partial, not very well-expressed way. In principle, though, either mutation could have happened first to functionally kill off this gene, followed by further mutation(s) that recovered its function. In any case, both events happened within the roughly six-million-year time span that generated our immediate lineage, becoming firmly fixed as the only version of this gene now in our collective genome.

What might cause these events? It all goes back to the function of SLAMF9. As shown above, it is very highly conserved. But, being part of the immune system and the interface we show to pathogens, it is also on the front line of the bio-warfare arms race. As humans started ranging far beyond their original habitats, they doubtless encountered many new pathogens. It seems likely that killing off this gene might have resolved one such fight, at least for a little while, perhaps by removing a pathogen entry point. But later on, it became beneficial to recover it, which is to say that new mutations that restored its function even a little bit were evidently selected for, and spread in the population. There was a race at this point between the accumulation of more (now neutral) mutations that would have permanently inactivated this gene, and the advent of that one special mutation that could save it. The overall conservation of SLAMF9 argues that saving it must have conferred significant benefits.


Saturday, March 7, 2026

How 5S rRNA Gets Into the Ribosome

For a minor component, it gets a lot of molecular love.

As mentioned several times in this space, the ribosome, which synthesizes proteins according to mRNA instructions, is an extremely ancient and complicated machine. Its core, including the catalytic site, is RNA. This marks it as a hold-over from the RNA world, as the thing that made proteins, (probably tiny proteins at first), before proteins had become a thing. But boy has there been a lot of duct-taping since then. In humans, there are four ribosomal RNAs, eighty proteins pasted on the outside, and hundreds of other proteins or RNAs involved in assembling the ribosome, not to mention dozens of initiation factors and other regulators that help during translation.

A recent paper discussed the maturation of 5S ribosomal RNA, which is the smallest rRNA, and one whose function is more peripheral than that of the large central rRNAs (16S and 23S in bacteria; 18S and 28S in eukaryotes). It is present in all life forms, though ribosomes inside mitochondria do without it. Its processing is an interesting case study of the complexity that has accumulated over the eons. Exactly what the 5S rRNA does remains a bit unclear, though it clearly contributes to the dynamics of the large ribosomal subunit, and occupies the "central protuberance". One group ligated it into the large subunit 23S rRNA, showing that translation still worked quite well with the 5S portion stably tacked into the structure. But they also found that these ribosomes fell with high frequency into an unproductive locked state, suggesting that the independent nature of the 5S rRNA plays an important role in the dynamics of the ribosome. 

At any rate, the assembly of 5S into the rest of the structure is a story in itself. There are multiple steps involved, some involving ATP-using enzymes. As it comes off the gene, 5S rRNA is bound by two proteins- the TFIIIA regulatory factor that activates its transcription, and also La protein (aka La antigen), which is a storage protein, named after systemic lupus, for which it is one immune target. To be incorporated into ribosomes, the RNA is next bound by a complex of Rpl5 and Rpl11, which will remain with the 5S RNA and become part of the eventual ribosome. Next come Rpf2 and Rrs1, which are two assembly facilitators that bind as a complex. Then comes Rsa4, which is similarly an assembly protein that helps the whole mess bind to the proper place on the (immature) large ribosomal subunit. Lastly, Rea1 (called MDN1 in humans) is an ATP-driven AAA+ motor protein that wrenches the whole 5S-containing protuberance into its final and quite different position. 

The authors provide a scheme for the stepwise processing and assembly of 5S rRNA into the ribosome, involving numerous assembly factors, ribosomal proteins, and the ATP-driven motor Rea1. 

It is quite an amazing story of progressive assembly, all to attach an element of the ribosome that is hardly central, but is rather a relatively late accretion on the machinery. Nevertheless, it evidently deserves specialized attention for correct placement. 

A less schematic view of various steps heading toward ribosomal assembly. 5S rRNA is in teal, and the Rea1 motor is in dark gold, mounted like a wrench at the top of the (late) structure.


Saturday, February 14, 2026

We Have Rocks in Our Heads ... And Everywhere Else, Too

On the evolution and role of iron-sulfur complexes.

Some of the more persuasive ideas about the origin of life have it beginning in the rocks of hydrothermal vents. Here was a place with plenty of energy, interesting chemistry, and proto-cellular structures available to host it. Some kind of metabolism would by this theory have come first, followed by other critical elements like membranes and RNA coding/catalysis. This early earth lacked oxygen, so iron was easily available, not prone to oxidation as now. Thus life at this early time used many minerals in its metabolic processes, as they were available for free. Now, on today's earth, they are not so free, and we have complex processes to acquire and manage them. One of the major minerals we use is the iron-sulfur complex, (similar to pyrite), which comes in a variety of forms and is used by innumerable enzymes in our cells. 

The more common iron-sulfur complexes, with sulfur in yellow, iron in orange.


The principal virtue of the iron-sulfur complex is its redox flexibility. With the relatively electronically "soft" sulfur, iron forms semi-covalent-style bonds, while being able to absorb or give up an electron safely, without destroying nearby chemicals as free iron typically does. Depending on the structure and liganding, the reduction potential of such complexes can be tuned all over the map, from -600 to +400 mV. Many other cofactors and metals are used in redox reactions, but iron-sulfur is the most common by far.

Reduction potentials (ability to take up an electron, given an electrical push) of various iron-sulfur complexes.
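That tuning range translates directly into usable energy via the standard electrochemical relation ΔG = -nFΔE. A quick calculation (the function name and example potentials are mine, chosen to span the iron-sulfur range quoted above) shows why the span matters:

```python
# Free energy available from an electron transfer between two redox centers:
# dG = -n * F * dE (standard electrochemistry; example values illustrative).

F = 96485.0  # Faraday constant, C/mol

def delta_g_kj(delta_e_volts: float, n_electrons: int = 1) -> float:
    """Gibbs free energy change (kJ/mol) for transferring n electrons."""
    return -n_electrons * F * delta_e_volts / 1000.0

# One electron dropping across the full iron-sulfur range quoted above,
# from a -0.6 V donor to a +0.4 V acceptor (dE = +1.0 V):
print(f"{delta_g_kj(1.0):.1f} kJ/mol")  # -96.5 kJ/mol
```

That is on the order of a few ATP hydrolyses' worth of energy, which is why chains of iron-sulfur centers with staggered potentials can shuttle electrons steadily downhill through metabolic electron transport pathways.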

Researchers had assumed that, given the abundance of these elements, iron-sulfur complexes were essentially freely acquired until the great oxidation event, about two to three billion years ago, when free oxygen started rising and free iron (and sulfur) disappeared, salted away into vast geological deposits. Life faced a dilemma: how to reliably construct minerals that were now getting scarce. The most common solution was a three-enzyme system in mitochondria that 1) strips a sulfur from the amino acid cysteine, a convenient source inside cells, 2) scaffolds the construction of the iron-sulfur complex, with iron coming from carrier proteins such as frataxin, and 3) employs several carrier proteins to transfer the resulting complexes to enzymes that need them. 

But a recent paper described work that alters this story, finding archaeal microbes that live anaerobically and make do with only the second of these enzymes. A deep phylogenetic analysis shows that the (#2) assembly/scaffold enzymes are the core of this process, and have existed since the last common ancestor of all life. So they are incredibly ancient, and it turns out that iron-sulfur complexes cannot just be gobbled up from the environment, at least not by any reasonably advanced life form. Rather, these complexes need to be built and managed under the care of an enzyme.

The presented structures of the dimer of SmsB (orange) and SmsC (blue), which dimerizes again to make up a full iron-sulfur scaffolding and production enzyme in the archaeon Methanocaldococcus jannaschii. Note the reaction scheme where ATP comes in and evicts the iron-sulfur cluster. At right is shown how ATP fits into the structure, and how it nudges the iron-sulfur binding area (blue vs green tracing).

A recent paper from this group extended their analysis to the structure of the assembly/scaffold enzyme. They find that, though it is a symmetrical dimer of a complex of two proteins, it only deals with one iron-sulfur complex at a time. It also binds and cleaves ATP. But ATP seems to have more of an inhibitory role than one that stimulates assembly directly. The authors suggest that high levels of ATP signal that less iron-sulfur complex is needed to sustain the core electron transport chains of metabolism, making this ATP inhibition an allosteric feedback control mechanism in these archaeal cells. I might add, however, that ATP binding may well also have a role in extricating the assembled iron-sulfur cluster from the enzyme, as that complex is quite well coordinated, and could use a push to pop out into the waiting arms of target enzymes.

"These ancestral systems were kept in archaea whereas they went through stepwise complexification in bacteria to incorporate additional functions for higher Fe-S cluster synthesis efficiency leading to SUF, ISC and NIF." - That is, the three-component systems present in eukaryotes, which come in three types.

In the authors' structure, the iron-sulfur complex is liganded by three cysteines within the SmsC protein. But note how, facing the viewer, the complex is quite exposed, ready to be taken up by some other enzyme that has a nice empty spot for it.

Additionally, these archaea, with this simple one-step iron cluster formation pathway, get their sulfur not from cysteine, but from ambient elemental sulfur. Which is possible, as they live only in anaerobic environments, such as deep sea hydrothermal vents. So they represent a primitive condition for the whole system as may have occurred in the last common ancestor of all life. This ancestor is located at the split between bacteria and archaea, so was a fully fledged and advanced cell, far beyond the earlier glimmers of abiogenesis, the iron sulfur world, and the RNA world.


Saturday, January 31, 2026

How do Anesthetics Work?

Ask a simple question, and get an answer that gets weirder the deeper you dig.

Anesthesia is wonderful. Quite simply, it makes bearable, even unnoticeable, what would otherwise be impossible or excruciating. It is also one of the most mysterious phenomena short of consciousness itself. Virtually all organisms can be anesthetized, from bacteria on up. All sorts of chemicals can be used, from xenon to simple ethers, to complex fluoride-substituted forever chemicals, all with similar effects. Yet there are also complex sub-branches of anesthesia, like pain relief, muscular immobilization, shutdown of consciousness, and amnesia against remembering what happened, that chemicals affect differentially. It resembles sleep, and shares some circuitry with that process, but is of course induced quite differently.

The first red herring was the Meyer–Overton rule, established back in 1899, which showed that anesthetic potency correlates closely with the lipophilicity of the chemical, from nitrogen (not very good) to xenon (pretty good) to chloroform (very good). All the forever (heavily fluorinated) chemicals used as modern anesthetics, like isoflurane and sevoflurane, have extremely high lipophilicity. This suggested that the mechanism of action was simply mixing into membranes somehow, altering their structure, and thus neuronal action... something along that line. 

Structures of several general anesthetics. 8 is isoflurane.

But when researchers looked more closely, there were some chemical differences that did not track with this hypothesis. Enantiomers of chiral anesthetics behaved differently in some cases, indicating that these chemicals do bind to something (that is, a protein) specifically. Variant genes also started cropping up that conferred resistance to anesthesia or were found to bind particular anesthetics at working concentrations. And more complex, injectable anesthetics like fentanyl and propofol have somewhat more defined targets and modes of action. So while anesthetics clearly partition to membranes, and the binding sites are often at protein-membrane interfaces, the modern theory of how they work is that they bind to ion channels and neurotransmitter receptors and affect their functions. Proteins generally have hydrophobic interiors, so the lipophilicity of these chemicals may track with binding to and disrupting protein interiors as much as membrane interiors. And other proteins such as microtubules have been drawn into the discussion as well (leading indirectly to some very unfortunate theories about consciousness). 

But which key protein do they bind? Here again, mysteries abound, as they bind not just one, but many. And not just that: they turn some of their targets on, others off. One target is the GABA receptor, which characterizes the major inhibitory neurons of the central nervous system. These are turned on. At high concentrations, anesthetics can even turn these receptors on without any GABA neurotransmitter present. Another is the NMDA receptor, which is the target of ketamine. These receptors are turned off. So, for some reason, still somewhat obscure, the net result of many specific bindings to an array of channels by an array of chemicals is ... anesthesia.

A recent paper raised my interest in this area, as its authors demonstrated yet another target for inhaled anesthetics like isoflurane, and dove with exquisite detail into its mechanism. They were working on the ryanodine receptor, which isn't even a cell surface protein, but sits in the endoplasmic reticulum (or sarcoplasmic reticulum in muscles) and conducts calcium out of these organelles. This receptor is huge- the largest known- comprising over five thousand amino acids (RYR1 of humans), due to numerous built-in regulatory structures. For example, it is sensitive to caffeine, but at a different location than where it is sensitive to isoflurane. Calcium is a very important signal within cells, key to muscle activity, and also to neuronal activation. The endoplasmic reticulum serves as a storehouse of calcium, from which signals can be sparked as needed by outside signals, including a spike in calcium itself (thus creating a positive feedback loop). These receptors (a family of three in humans) are named for an obscure chemical (indeed a poison) that activates these channels, and all three are expressed in the brain. 

The authors were led to this receptor because mutations in it were known to cause malignant hyperthermia, a side effect of a few common anesthesia drugs in which body temperature rises uncontrollably, driven from muscle tissue, where ryanodine receptors in the sarcoplasmic reticulum are particularly abundant and heavily used to regulate muscle activity and metabolism. That suggested that anesthetics such as isoflurane might bind this receptor directly, turning it on. That was indeed the case. They started with cultured cells expressing each receptor family member in turn, and tested each receptor's response to isoflurane. Internal (cytoplasmic) calcium rose especially with the family member RYR1. That led to various control experiments and a hunt (by mutating and doctoring the RYR1 protein) for the particular region bound by the anesthetic. After a lengthy search, they found that residue 4000 was a critical one: a mutation from methionine to phenylalanine reduced the isoflurane response about ten-fold. This residue is part of a binding pocket, as shown below.

Structure of isoflurane (B) bound to its RYR1 binding pocket- a pocket that happens to also bind another activator of this channel, 4-CMC. A layout of the whole binding pocket is given on the right. At bottom are calcium channel responses of the wild-type and point-mutant forms of RYR1, showing the dramatic effect these single-site mutations have on the isoflurane response.

Fine, but what about anesthesia? The next step was to test this mutation in whole mice, where, lo and behold, isoflurane anesthesia of otherwise normal mice was made slightly more difficult by this mutant form of RYR1. Additionally, these mice had no other observable problems- not in behavior, not in sleep. That is remarkable as a finding about anesthesia, but the effect was quite small- roughly a 10% shift in the needed concentration of isoflurane. The authors go on to mention that this is similar in scale to knockouts or mutations in other known targets of anesthetic drugs:

  • 10% shift in the curve from the M4000F mutation of RYR1
  • 14% shift in the isoflurane curve from a mutation in GABA receptor, GABAAR.
  • 5% shift in the isoflurane curve (though a 20% or more shift for halothane) for mutations in KCNK9, a potassium channel.
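To make concrete what a shift like this means, here is a back-of-envelope sketch (my own, not from the paper) using a generic Hill-type dose-response curve. The EC50 and Hill-slope values are arbitrary illustrations; the point is that a 10% rise in EC50 only modestly lowers the response at a fixed dose.

```python
# Sketch (my own, not from the paper): what a ~10% EC50 shift in an
# anesthetic dose-response curve looks like, using a generic Hill equation.
# All numbers here are arbitrary illustrations, not measured values.

def hill_response(dose, ec50, n=4.0):
    """Fraction of maximal response (or of subjects anesthetized) at a dose."""
    return dose**n / (dose**n + ec50**n)

ec50_wt = 1.0               # wild-type EC50, arbitrary units
ec50_mut = 1.10 * ec50_wt   # mutant needs ~10% more drug

dose = 1.0                  # dosing right at the wild-type EC50
wt = hill_response(dose, ec50_wt)    # exactly 0.5 by definition
mut = hill_response(dose, ec50_mut)  # dips below 0.5
print(f"wild-type: {wt:.2f}  mutant: {mut:.2f}")
```

In other words, the mutant animals still go under- they just need a somewhat higher dose, which is consistent with each individual target contributing only partially to the overall phenomenon.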

What this is telling us is that there are many targets for anesthetic drugs. They are spread over many neurotransmitter and physiological systems. They each contribute modestly (and variably, depending on the drug) to the net effect of any one drug. The various affected channels and membrane receptors curiously combine to achieve anesthesia across all animals and even microorganisms, which naturally also rely on channels and transmembrane receptors for their various sensing and motion activation needs. We are left with a blended hypothesis where yes, there are specific protein targets for each anesthetic that mediate their action. On the other hand, these targets are far from unique, spread across many proteins, yet are also highly conserved, looking almost like they are implicit in the nature of transmembrane proteins in general. 

One gets the distinct impression that there should be endogenous equivalents, as there are for opioids and cannabinoids- some internal molecule that provides sedation when needed, such as for deep illness or end-of-life crisis. That molecule has not yet been found, but the natural world abounds in sedatives (alcohol is certainly one), so the question of anesthesia becomes one of biological and evolutionary logic, as much as one of chemical mechanism.


Sunday, January 18, 2026

The Fire Inside: Eukaryotic Locomotion

The GTP-based Rho/Rac system of actin regulation runs in unseen waves of activation.

One of the amazing capabilities of eukaryotic cells, inherited in part from their archaeal ancestors, is free movement and phagocytosis. These cells have an internal cytoskeleton, plus methods to anchor to a substrate (via focal adhesions), which allow them to manipulate their membrane, their shape, and their locomotion. The cytoskeleton is composed of two main types of fibers, actin and microtubules. Microtubules are much larger than actin filaments and organize major trackways of organelle movement around the cell (including the movement of chromosomes in mitosis), and also form the core of cilia and flagella. But it is actin that does most of the work of moving cells around, with dynamic networks that generate the forces behind protrusions ranging from spiky filopodia to ruffly lamellipodia- forces that power things like the adventuresome pathfinding of neurons as they extend their axons to distant locations.

Schematic of the actin cytoskeleton of a typical eukaryotic cell.

Actin is an ATPase all by itself. ATP promotes its stability, and also its polymerization into filaments. So, cell edges can grow just by adding actin to filament ends. Actin cross-linking proteins also exist, creating the meshwork that supports extended filopodia. But obviously, actin all by itself is not a regulated solution to cell movement. There is an ornately complex system of control, not nearly understood, that revolves around GTP-binding proteins and their regulators. These proteins (mainly RhoA, Rac1, and Cdc42, though there are twenty related family members in humans) have knife-edge regulation, being on when binding GTP, and off after they cleave off the terminal phosphate and are left binding GDP (the typical, default state). Yet other proteins regulate these regulators- guanine nucleotide exchange factors (GEFs) encourage release of GDP and binding of GTP, while GTPase activating proteins (GAPs) encourage the cleavage of GTP to GDP. The GTP-binding proteins interact (depending on their GTP status) with a variety of effector proteins. One example is the formin family, which chaperones the polymerization of actin. At the head of the pathway, signals coming from external or internal conditions regulate the GTPases, creating (in extremely simplified terms) a pathway that gets the cell to respond by moving toward things it wants, and away from things it does not want. 
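The logic of this switch can be caricatured in a few lines (my own sketch; the rate constants are invented for illustration). Treating each GTPase as flipping between an off (GDP) and on (GTP) state, the steady-state active fraction is just the balance of GEF and GAP activity.

```python
# Minimal sketch (rate constants invented for illustration): the GTPase
# on/off cycle as a two-state system. GEFs catalyze GDP -> GTP exchange
# (rate k_gef); GAPs accelerate hydrolysis back to GDP (rate k_gap).
# At steady state, the active (GTP-bound) fraction is k_gef / (k_gef + k_gap).

def active_fraction(k_gef, k_gap):
    """Steady-state fraction of GTPase in the active, GTP-bound state."""
    return k_gef / (k_gef + k_gap)

# Default: GAP activity dominates, so the switch rests mostly off.
resting = active_fraction(k_gef=0.1, k_gap=1.0)   # ~0.09
# A signal recruiting GEFs to the membrane tips the balance on.
signaled = active_fraction(k_gef=2.0, k_gap=1.0)  # ~0.67
print(f"resting: {resting:.2f}  signaled: {signaled:.2f}")
```

The knife-edge quality comes from the fact that signals generally do not act on the GTPase itself, but tip this balance by recruiting or activating GEFs and GAPs.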

This is a very brief post, just touching on one experiment done on this system. Exploring its full complexity is way beyond my current expertise, though we may return to aspects of this fascinating biological pathway in the future. An important paper in the field hooked fluorescent dyes up to one of the effector protein domains that binds only GTP-bound, active RhoA. The researchers tethered this probe to the (inner face of the) membrane of their cultured cells, and took movies of what the cell looked like, using a microscopy method that images only a very thin section- essentially the membrane, not the rest of the cell. RhoA, though graced with a small lipid tail, is typically cytoplasmic when inactive, and travels to the membrane when activated. They were shocked to find that in resting cells, without much locomotion going on, there were recurring waves of RhoA activation sweeping hither and yon across the cell membrane. 

Four examples of RhoA getting bound in its active state in a wave-like way, over 7 1/2 minutes in a resting cell. GEF-H1 is ARHGEF2, one of the regulators that can turn RhoA on. The first three panels show cells with operational versions of ARHGEF2, but the fourth (bottom right) shows a cell treated with an interfering RNA against ARHGEF2, turning its expression down. In this cell, the waves of RhoA activation and recruitment to the membrane are substantially dampened.

These pulses were made even more intense if the cells were treated with nocodazole, which disrupts microtubules, destabilizes the cytoskeleton, and makes the actin regulatory and structural system work harder. They found that myosin (the motor protein that moves cargoes over actin filaments) was also rapidly relocalized, mirroring some of what happened with RhoA. They also found that ARHGEF2 contains two RhoA binding domains (one binding active RhoA, one binding inactive RhoA), enabling it to feedback-amplify the activation of RhoA, thereby explaining some of the extremely dynamic activity seen here. 

And they also found that the arrival of negative regulators such as ARHGAP35 was delayed by a couple of seconds relative to the activation of RhoA, providing the time window needed for wave formation: a mechanism of positive feedback followed by squelching by a negative regulator. Lastly, they found that these dynamics differed significantly when cells were grown on stiffer versus softer substrates. Stiffer substrates allowed the formation of stronger surface attachments, concentrating RhoA and myosin at these adhesion locations. 
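This combination- fast self-amplification plus a lagging negative regulator- is the classic recipe for an excitable system, and can be caricatured with a toy FitzHugh-Nagumo-style model (entirely my construction, not the paper's; all parameters are invented for illustration). A small nudge decays away, while a larger one triggers a full pulse that the slow inhibitor then quenches.

```python
# Toy excitable-dynamics simulation (entirely my construction, not the
# paper's model): a fast, self-amplifying activator (think active RhoA)
# paired with a slower negative regulator (think a GAP such as ARHGAP35),
# in FitzHugh-Nagumo form, integrated with the Euler method.
import numpy as np

def simulate(kick, t_end=60.0, dt=0.01):
    n = int(t_end / dt)
    a = np.empty(n)      # activator level
    g = np.empty(n)      # lagging inhibitor level
    a[0] = -1.2 + kick   # resting activator level plus a perturbation
    g[0] = -0.62         # resting inhibitor level
    eps = 0.08           # inhibitor responds ~10x more slowly
    for i in range(n - 1):
        da = a[i] - a[i]**3 / 3 - g[i]        # cubic positive feedback
        dg = eps * (a[i] + 0.7 - 0.8 * g[i])  # slow negative regulation
        a[i + 1] = a[i] + dt * da
        g[i + 1] = g[i] + dt * dg
    return a

small = simulate(kick=0.2)  # subthreshold nudge: decays back to rest
big = simulate(kick=0.8)    # suprathreshold nudge: a full pulse fires
print(f"peak after small kick: {small.max():.2f}")
print(f"peak after big kick:   {big.max():.2f}")
```

Couple many such patches of membrane through diffusion and you get traveling waves of the sort seen in the imaging.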

These researchers are clearly only scratching the surface of this system, as there are endless complexities left to investigate. The upshot of this one set of observations is that neurons are not the only excitable cells. With a bit of molecular / experimental magic, heretofore unseen intracellular dynamics can be visualized to show that eukaryotic cells have an exquisitely regulated internal excitation system that is part of what drives their shape-shifting capabilities, including processes like phagocytosis and neuronal growth / path-finding. 


Saturday, January 3, 2026

Tiny Tunings in a Buzz of Neural Activity

Lots of what neurons do adds up to zero: how the cerebellum controls muscle movement.

Our brains are always active. Meditating, relaxing, sleeping ... whatever we are up to, the brain doesn't take a holiday, except in the deepest levels of sleep, when slow waves help the brain to reset and heal itself. Otherwise, it is always going, with only slight variations based on computational needs and transient coalitions, which are extremely difficult to pick out of the background (task-related fMRI signal changes are typically under 3% of the baseline signal). That leads to the general principle that action in a productive sense is merely a tuned overlay on top of a baseline of constant activity, through most of the brain.

A recent paper discussed how this property manifests in the cerebellum, the "little" brain attached to our big brain, which is a fine-tuning engine especially for motion control, but also for many other functions. The cerebellum has relatively simple and massively parallelized circuitry, a bit like a GPU to the neocortex's CPU. It gets inputs from places like the spinal cord and sensory areas of the brain, and relays a tuned signal out to, in the case of motor control, the premotor cortex. The current authors focused on the control of rapid eye movements ("saccades"), a well-characterized and experimentally tractable system, in marmoset monkeys. 

After poking a massive array of electrodes into their poor monkeys' brains, they recorded from hundreds of cells, including all the relevant neuron types (of which there are only about six). They found that inside the cerebellum, and inside the region already known to be devoted to eye movement, neurons form small-world groups that interact closely with each other, revealing a new level of organization for this organ.

More significantly, they were able to figure out the preferred direction, or tuning vector, of the Purkinje (P) cells they ran across. These are the core cells of the cerebellar circuit, so their firing should correlate in principle with the eye movements that were simultaneously being tracked in these monkeys. So one cell might be an upward-directing cell, while another might be leftward, and so forth. They also found that P cells come in two types: those (bursters) that fire more around the preferred direction, and others (pausers) that fire all the other times, but fire less when the eyes are heading in the "preferred" direction. These properties are already telling us that the neural system does not much care to save energy. Firing most of the time, but then pausing when some preferred direction is hit, is perfectly OK. 

Two representative cells are recorded, around the cardinal/radial directions. The first of each pair is recorded from 90 degrees vs its preferred direction, and adding up the two perpendicular vectors (bottom) gives zero net activity. The second of each pair is recorded from the preferred direction (or potent axis), vs 180 degrees away, and when these are subtracted, a net signal is visible (difference). Theta is the direction the eyes are heading.

Their main finding, though, was that there is a lot of vector arithmetic going on, in which cells fire all over their preferred fields, and only when you carefully net their activity over the radial directions can you discern a preferred direction. The figure above shows a couple of cells firing away no matter which direction the eyes are going. But if you subtract the firing in the backward direction from that in the forward direction, a small net signal is left over, which these authors argue is the real output of the cerebellum, encoding a change in direction. When this behavior is summed across a large population (below), the bursters and pausers cancel out, as do the signals going in stray directions. All you are left with is a relatively clean signal centered on the direction being instructed to the eye muscles. Wasteful? Yes. Effective? Apparently. 

The same analysis as above, but on a population basis. Now, in net terms, after all the canceled activity is eliminated, and after adding up the pauser and burster activity, strong signals arise from the cerebellum as a whole, in this anatomical region, directing eye saccades.
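The arithmetic just described is simple enough to simulate (a sketch of my own, not the authors' analysis; cell counts, baseline rates, and tuning depths are invented). Each model cell fires at a high baseline with only a small cosine modulation around its preferred direction; differencing opposite directions cancels the baseline, and a sign-flipped population sum recovers the movement vector.

```python
# Population-vector sketch (my own, not the authors' analysis; cell counts,
# baseline rates, and tuning depths are invented). Every cell fires at a
# high baseline with only a small cosine modulation around its preferred
# direction; pausers are modulated with the opposite sign.
import numpy as np

rng = np.random.default_rng(0)
n_cells = 200
preferred = rng.uniform(0, 2 * np.pi, n_cells)  # preferred directions
is_burster = rng.random(n_cells) < 0.5          # half burst, half pause

def firing(theta):
    """Firing rates (spikes/s) for a saccade toward angle theta."""
    baseline = 50.0                              # always-on chatter
    tuning = 5.0 * np.cos(theta - preferred)     # small directional signal
    return baseline + np.where(is_burster, tuning, -tuning)

theta = 1.0  # actual saccade direction, radians
# Net opposite directions against each other: the baseline cancels cell
# by cell, leaving only the directional modulation.
diff = firing(theta) - firing(theta + np.pi)
# Flip the pausers' sign, then sum each cell's contribution along its
# preferred-direction unit vector.
signed = np.where(is_burster, 1.0, -1.0) * diff
vec = np.sum(signed[:, None] *
             np.column_stack([np.cos(preferred), np.sin(preferred)]), axis=0)
decoded = np.arctan2(vec[1], vec[0])
print(f"decoded direction: {decoded:.2f} rad (actual: {theta:.2f})")
```

Despite the directional modulation being a tenth of the baseline rate, the decoded angle comes out close to the actual saccade direction- essentially the kind of netting operation the authors describe.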

Why does the system work this way? One idea is that, as for the rest of the brain, default activity is the norm, and learning is, perforce, a matter of tuning ongoing activity, not of turning on cells that are otherwise off. The learning (or error) signal is part of the standard cerebellar circuitry- a signal so strong that it temporarily shuts off the normal chatter of the P cell output and retunes it slightly, producing the subsequent changes seen in the net vector calculation.

A second issue raised by these authors is the nature of inputs to these calculations. In order to know whether and how to change the direction of eye movement, these cells must be getting information about the visual scene and also about the current state of the eyes and direction of movement. 

"The burst-pause pattern in the P cells implied a computation associated with predicting when to stop the saccade. For this to be possible, this region of the cerebellum should receive two kinds of information—a copy of the motor commands in muscle coordinates and a copy of the goal location in visual coordinates."

These inputs come from two types of mossy fiber neurons, which are the main inputs to the typical cerebellar circuit. One "state" type encodes the state of the motor system and the current position of the eye. The other "goal" type encodes, only when a reward is in prospect, where the eye "wants" to move, based on other cortical computations, such as the attention system. The two inputs go through separate individual cerebellar circuits, and then get added up on a population basis. When movement is unmotivated, the goal inputs are silent, and the resulting cerebellum-directed saccades are more sporadic, scanning the scene haphazardly.

The end result is that we are delving ever more deeply into the details of mental computation. This paper trumpets itself as having revealed "vector calculus" in the cerebellum. However, to me it looks much more like arithmetic than calculus, nor is the overall finding of small signals among a welter of noise and default activity particularly novel. All the same, more detail, deeper technical means, and greater understanding of exactly how all these billions of dumb cells somehow add up to smart activity continues to be a great, and fascinating, quest.


  • How to have Christmas without the religion.
  • Why is bank regulation run by banks? "Member banks ... elect six of the nine members of each Federal Reserve Banks' boards of directors." And the other three typically represent regional businesses as well- not citizens or consumers.

Sunday, December 28, 2025

Lipid Pumps and Fatty Shields- Asymmetry in the Plasma Membrane

The two faces constituting eukaryotic cell membranes are asymmetric.

Membranes are one of those incredibly elegant things in biology. Simple chemical forces are harnessed to create a stable envelope for the cell, with no need to encode the structure in complicated ways. Rather, it self-assembles, using the oil-vs-water forces of the hydrophobic effect to form a huge structure with virtually no instruction. Eukaryotes decided to take membranes to the max, growing huge cells with an army of internal membrane-bound organelles, individually managed- each with its own life cycle and purposes.

Yet, there are complexities. How do proteins get into this membrane? How do they orient themselves? Does it need to be buttressed against rough physical insult, with some kind of outer wall? How do nutrients get across, while the internal chemistry is maintained as different from the outside? How does it choose which other cells to interact with, preventing fusion with some, but pursuing fusion with others? For all the simplicity of the basic structure, the early history of life had to come up with a lot of solutions to tough problems, before membrane management became such a snap that eukaryotes became possible.

The authors present their model (in atomic simulation) of a plasma membrane. Constituents are cholesterol, sphingomyelin (SM), phosphatidyl choline (PC), phosphatidyl serine (PS), phosphatidyl ethanolamine (PE), and phosphatidyl ethanolamine plasmalogen. Note how in this portrayal, there is far more cholesterol in the outer leaflet (top), facing the outside world, than in the inner leaflet (bottom).

The major constituents of the lipid bilayer are cholesterol, phospholipids, and sphingomyelin. The latter two have charged head groups and long lipid (fatty) tails. The head groups keep that side of the molecule (and the bilayer) facing water. The tails hate water and like to arrange themselves in the facing sheets that make up the inner part of the bilayer. Cholesterol, on the other hand, has only a mildly polar hydroxyl group at one end, and a very hydrophobic, stiff, and flat multi-ring body, which keeps strictly with the lipid tails. The lack of a charged head group means that cholesterol can easily flip between the bilayer leaflets- something that the other molecules with charged headgroups find very difficult. It has long been known that our genomes code for flippases and floppases: ATP-driven enzymes that can flip the charged phospholipids and sphingomyelin from one leaflet to the other. Why these enzymes exist, however, has been a conundrum.

Pumps that drive phospholipids against their natural equilibrium distribution, into one or the other leaflet.

It is not immediately apparent why it would be helpful to give up the natural symmetry and fluidity of the bilayer, and invest a lot of energy in keeping the compositions of the two leaflets different. But that is the way it is. The outer leaflet of the plasma membrane tends to have more sphingomyelin and cholesterol, and the inner leaflet has more phospholipids. Additionally, those phospholipids tend to have unsaturated tails- that is, tails with double bonds that put kinks in the straight fatty chains typical of sphingomyelin. Membrane asymmetry has a variety of biological effects, especially when it is missing. Cells that lose their asymmetry are marked for cell suicide and for intervention by the immune system, and also trigger coagulation in the blood. It is a signal that they have broken open or died. But these are doubtless later (maybe convenient) organismal consequences of universal membrane asymmetry. They do not explain its origin. 

A recent paper delved into the question of how and why this asymmetry happens, particularly in regard to cholesterol. Whether cholesterol even is asymmetric is controversial in the field, since measuring its location is very difficult. Yet these authors carefully show, by direct measurement and also by computer simulation, that cholesterol, which makes up roughly forty percent of the membrane (its most significant single constituent, actually), is highly asymmetric in human erythrocyte membranes- about three-fold more abundant in the outer leaflet than in the cytoplasmic leaflet. 

Cholesterol migrates to the more saturated leaflet. B shows a simulation where a poly-unsaturated phospholipid (DAPC) with 4 double bonds (blue) is contrasted with a saturated phospholipid (DPPC) with straight lipid tails (orange). In this simulation, cholesterol naturally migrates to the DPPC side as more DAPC is loaded, relieving the tension (and extra space) on the inner leaflet. Panel D shows that in real cells, scrambling the leaflet composition leads to greater cholesterol migration to the inner leaflet. This is a complex experiment, where the fluorescent signal (on the right-side graph) comes from a dye in an introduced cholesterol analog, which is FRET-quenched by a second dye, introduced by the experimenters, that is confined to the outer leaflet. In the natural case (purple), the signal is more quenched, since more cholesterol is in the outer leaflet, while after phospholipid scrambling, less quenching of the cholesterol signal is seen. Scrambling is verified (left side) by fluorescently marking the erythrocytes with Annexin 5, which binds to phosphatidyl serine, a lipid generally restricted to the inner leaflet. 

But no cholesterol flippase is known. Indeed, such a thing would be futile, since cholesterol equilibrates between the leaflets so rapidly. (The rate is estimated at milliseconds, in very rough terms.) So what is going on? These authors argue, via experiment and chemical simulation, that it is the pumped phospholipids that drive the other asymmetries. It is the straight lipid tails of sphingomyelin that attract the cholesterol, as a much more congenial environment than the broken, bent tails of the other phospholipids concentrated in the cytoplasmic leaflet. In turn, the cholesterol also facilitates the extreme phospholipid asymmetry: the authors show that without the extra cholesterol in the outer leaflet, bilayers of that extreme phospholipid composition break down into lipid globs.
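One consequence of such fast flip-flop is that cholesterol's distribution is a simple thermodynamic equilibrium: it sits wherever it is most comfortable. A back-of-envelope calculation (my own, not from the paper) shows that only a modest per-molecule energy preference for the saturated, sphingomyelin-rich outer leaflet is needed to produce the observed ~3:1 ratio.

```python
# Back-of-envelope equilibrium calculation (mine, not the paper's): since
# cholesterol flips between leaflets in milliseconds, its distribution is
# set thermodynamically. A two-state Boltzmann model shows how small an
# energy preference produces the observed ~3:1 outer:inner ratio.
import math

delta_E_kT = math.log(3.0)    # per-molecule preference for the outer leaflet
ratio = math.exp(delta_E_kT)  # outer:inner occupancy at equilibrium

# Express that preference in familiar units (kT ~ 0.593 kcal/mol at 25 C).
kcal_per_mol = delta_E_kT * 0.593
print(f"outer:inner ratio: {ratio:.1f}")
print(f"required preference: {delta_E_kT:.2f} kT = {kcal_per_mol:.2f} kcal/mol")
```

An energy difference well under that of a single hydrogen bond per molecule suffices- which is why no pump is needed: the pumped phospholipids set the stage, and cholesterol follows passively.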

When treated (time course) with a chemical that scrambles the lipid compositions of the plasma membrane leaflets, a test protein (top series) that normally (0 minutes) attaches to the inner leaflet floats off and distributes all over the cell. The bottom series shows binding of a protein (applied from outside these cells) that only binds phosphatidyl serine, showing that scrambling is taking place.

This sets up the major compositional asymmetry between the leaflets that creates marked differences in their properties. For example, the outer leaflet, due to the straight sphingomyelin tails and the cholesterol, is much stiffer and packed much tighter than the cytoplasmic leaflet. It forms a kind of shield against the outside world, which goes some way toward explaining the whole phenomenon. It is also only about half as permeable to water. Conversely, the cytoplasmic leaflet is more loosely packed, and indeed frequently suffers gaps (or defects) in its lipid integrity. This has significant consequences, because many cellular proteins, especially those involved in signaling from the surface into the rest of the cytoplasm, have small lipid tails or similar anchors that direct them (temporarily) to the plasma membrane. The authors show that such proteins localize to the inner leaflet precisely because that leaflet has this loose, accepting structure, and that they bind less well if the leaflets are scrambled and homogenized.

When the fluid mosaic model of biological membranes was first conceived, it didn't enter into anyone's head that the two leaflets could be so different, or that cells would have an interest in making them so. Sure, proteins in those membranes are rigorously oriented, so that they point in the right direction. But the lipids themselves? What for? Well, they are different, and now there are some glimmerings of reasons why. Whether this has implications for human disease and health is unknown, but just as a matter of understanding biology, it is deeply interesting.


Saturday, December 20, 2025

Man is Wolf to Man

The current administration's predatory and corrupt version of capitalism.

What is corruption? Isn't capitalism all about getting as much money as you can? Then doesn't it follow that there can be no such thing as corruption, which is defined as going against the rules? What rules?

We as a country go on a trip with every new president, learning about their nature and values as we accompany them through their brief span of history. Few presidents wear very well after their honeymoon, since the process of getting elected requires some shading of the truth, truth that inevitably comes out later on. The current administration is an odd example, since in his first term, Trump was not allowed (for very good reasons!) to be himself. The second time round has been a different story, and we are getting a deep look at his character. 

The US has always had a double relationship with capitalism, tilting between rampant competition / exploitation and reverence for rules and legal systems. Slavery, obviously, is the foremost example, with slaveholders enshrining in a document dedicated to human freedom their own legal rights to property in comprehensively oppressed people. The founders, on making their constitution, feverishly set to work creating institutions for the common good, such as the treasury, mail system, judicial system, patents, and military. But, at the same time, we have long had an ideology of free enterprise- of land, resource, and human exploitation almost without limit. 

Charles Ponzi was not working in Italy, after all, but in the US, as was Bernie Madoff. Now crypto is the popular mechanism of picking people's pockets, facilitating mundane crime such as money laundering and ransomware attacks at the same time that it provides flourishing vistas of direct fraud, in rug pulls, hacks, and market manipulation. A recent article reviewed the pathetic world of multilevel marketing, another model of predation where ambitious entrepreneurs are sucked into schemes that are engineered both to fail, and to induce the victims to blame themselves.

The administration has clearly made it its mission to celebrate these forms of business- the predators, the grifters, the destructive businessmen among us who think that taxes are for little people, and rules for someone else. Consumer protection agencies have been shuttered, the IRS eviscerated, investigations cancelled. Pardons have been going out, not only to the January 6th conspirators and their militias, but to the money launderers, the corrupt politicians, and crypto bros. It is a sustained campaign of norm and rule-breaking by a grossly tasteless, shockingly greedy and small-minded president, (and sexual predator), who cannot conceive of rational policy, uncorrupted institutions, or fairness, much less civility, as a principle. A person with deep psychological problems. And thus, is incapable of long-term policy that is the bedrock of durable, functional institutions, either commercial or governmental.

Following gold, like a cat following a laser pointer.

We are all worried about fascism, as that seems to be the aesthetic and the model of power the administration is tending towards. But what they have done so far doesn't even come up to the level of fascism, really. The president is not smart enough to have a coherent policy or ideological platform. The weave does not leave room for a program that would be attractive beyond the nihilistic base. There are inclinations, and moods, and tantrums, love for Putin, and a lot of nostalgia for policies of decades, if not centuries, ago. There is hate. But without a program that binds all these ingredients into even a marginally coherent approach to the future, it will inevitably fall apart. True believers don't do policies or reason- conspiracy theories are enough. Thinking, apparently, is for libtards. 

The fact of the matter is that capitalism is not equivalent to the law of the jungle. A legal system, and rules, are required to prevent capitalists from making military forays into each other's empires, and to prevent the workers from taking up their pitchforks, among much else. It is founded on the limited liability company, itself a legal construct, not to mention all the financial, educational, and physical infrastructure that forms its essential background. There is no going Galt here. Indeed, the whole point of capitalism is not to screw everyone and make a few people very rich. Rather, it is to diffuse labor, useful products, and productive technologies across society in a way that utilizes everyone's talents and supplies everyone's needs. The point is general prosperity, not inequality. 

Institutions are built on rules, and they can die two ways- either people disregard and lose faith in the rules, (maybe because they are made by corrupt processes), or the rules become so elaborate and sclerotic that the point of the institution is lost. These ways map roughly onto our political divide, which, when it compromises to the middle produces something akin to a functional mean. But the current administration, and its ideology of thorough-going corruption on personal, business, and governmental levels, is, with the connivance of an equally unhinged supreme court, creating a legacy of cruelty and destruction that is surely a sad way to mark next year's anniversary of our institutional founding.


  • How the wingnut evangelicals, rightist Catholics, and their funders have bought into the burn-it-all-down program of predation, with a little help from the Russians.
  • Being populist means lying, unfortunately.
  • We can and should give real help to Ukraine. How about blockading Russian rather than Venezuelan tankers?
  • A hiring hellscape, with AI battling on both sides.
  • Destroying science, at the NIH.