
Saturday, March 9, 2024

Getting Cancer Cells to Shoot Themselves

New chemicals that make novel linkages among cellular components can be powerful drugs.

One theme that has become prominent in molecular biology over the years is the prevalence of proteins whose only job is to bring other proteins together. Many proteins lack any of the usual jazzy functions- enzymatic catalysis, ion conduction, kinase signaling- and just serve as "conveners", bringing other proteins together. Typically they are regulated in some way, by phosphorylation, expression, or localization, and some of them serve as key "scaffolds" for the activation of some process, like G-protein activation, cell cycle control, or cell growth. 

Well, the drug industry has caught on, and is starting to think about chemicals that can do similar things, sometimes with powerful results. Conventional drug design has aimed to bind to whatever protein is responsible for some ill- an oncogene, say, or an over-active component of the immune system- and inhibit it. This has led to many great drugs, but has significant limitations. The chemical has to bind not just anywhere on the target, but at the particular spot (the active site) that is its business end, where its action happens. And it has to bind really well, since binding and inhibiting only half the target proteins in a cell (or the body) will typically have only a modest effect. These requirements are quite stringent and result in many protein targets being deemed difficult to drug, or "undruggable".

A paradigm for a new kind of chemical drug, which links two functions, is the PROTAC class, which combines one end that binds a target with another end that binds to the cell's protein destruction machinery, thereby not just inhibiting the target, but destroying it. A new paper describes an even more nuclear option along this line of drug development, linking an oncogene product with a second part that activates the cellular suicide machinery. One can imagine that this approach can have far more dramatic effects.

These researchers synthesized and characterized a chemical that binds on one end to BCL6, an oncogene whose mutations can cause B cell lymphoma. BCL6 encodes a transcription repressor, and orchestrates the development of particular immune cells called T follicular helper cells. One of its roles is to prevent the suicide of these cells when an antigen is present, which is when the cells are most needed. When it is over-expressed in cancer, these cells think they really need to protect the body and proliferate wildly.

The other end of this chemical, called TCIP1, binds to BRD4, which is another transcription regulator, but this one activates the cell suicide genes, instead of turning them off. Both ends of this molecule were based on previously known structures. The innovation was solely in linking them together. I should say parenthetically that BRD4 is itself recognized as an oncogene, as it can promote cell growth and prevent cell suicide in many settings. So it has ambivalent roles, (inviting a lot of vague writing), and it is somewhat curious that these researchers focused on BRD4 as an apoptosis driver.

"TCIP1 kills diffuse large B cell lymphoma cell lines, including chemotherapy-resistant, TP53-mutant lines, at EC50 of 1–10 nM in 72 h" 
Here EC50 means the effective concentration at which the effect is 50% of maximal. Values of one to ten nanomolar are very low concentrations for a drug, meaning it is highly potent. TP53 is another cancer-driving gene, commonly mutated in treatment-resistant cancers. The drug has a characteristic and curious dosage behavior, as its effect decreases at higher concentrations. This is because each individual end of the molecule starts to bind and saturate its own target independently, reducing the formation of bridges between the two target proteins, and thus the intended effect.
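To see why more drug is not always better for a two-headed molecule like this, here is a minimal equilibrium sketch in Python. The Kd values and the assumptions (non-cooperative binding, drug in excess over both proteins) are mine for illustration, not numbers from the paper:

    # Rough sketch of the "hook effect" for a bivalent linker drug like TCIP1.
    # Assumed, illustrative affinities for each end (not measured values):
    K_bcl6 = 10e-9   # Kd of the BCL6-binding end, 10 nM
    K_brd4 = 10e-9   # Kd of the BRD4-binding end, 10 nM

    def ternary_complex(drug):
        # Relative amount of BCL6-drug-BRD4 bridge at a given drug concentration,
        # assuming each end binds its target independently and drug is in excess.
        return drug / ((K_bcl6 + drug) * (K_brd4 + drug))

    for drug in (1e-10, 1e-9, 1e-8, 1e-7, 1e-6):
        print(f"{drug:7.0e} M drug -> relative bridging {ternary_complex(drug):.2e}")
    # The output rises and then falls, peaking near sqrt(K_bcl6 * K_brd4): past that
    # point each end saturates its own target separately and fewer bridges form.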

Chemical structure of TCIP1. The left side binds to BRD4, a regulator of cell suicide, while the right side binds to BCL6, an oncogene.

The authors did numerous controls with related chemicals, and tracked genes that were targeted by the novel chemical, all to show that the dramatic effects they were seeing were specifically caused by the linkage of the two chemical functions. Indeed, BCL6 represses its own transcription in the natural course of affairs, and the new drug reverses this behavior as well, inducing more BCL6 synthesis, which now potentiates the drug's lethal effect. While the authors did not show effectiveness in animals, they did show that TCIP1 is not toxic in mice. Nor did they show that TCIP1 is orally available; they administered it by injection. But even by this mode, it would, if effective, be a very exciting therapy. Not surprisingly, the authors report a long series of biotech industry ties (rooted at Stanford) and indicate that this technology is under license for drug development.

This approach is highly promising, and a significant advance in the field. It should allow increased flexibility in targeting all kinds of proteins that may or may not cause disease, but are specific to or over-expressed in disease states. It will allow increased flexibility in targeting apoptosis (cell suicide) pathways through numerous entry points, toward the same ultimate (and highly effective) therapeutic endpoint. It allows drugs to work at low concentrations, not needing to fully occupy or inhibit their targets. Many possible areas of therapy can be envisioned, but one is aging. By targeting and killing senescent cells, which are notorious for promoting aging, significant increases in lifespan and health are conceivable. 


  • Biden is doing an excellent job.
  • Annals of mental decline.
  • Maybe it is an anti-addiction drug.
  • One gene that really did the trick.
  • A winning issue.
  • It is hard to say yet whether nuclear power is a climate solution, or an expensive distraction.

Saturday, February 17, 2024

A New Form of Life is Discovered

An extremely short RNA is infectious and prevalent in the human microbiome.

While the last century might be called the DNA century, at least for molecular biology, the current century might be called that of RNA. A blizzard of new RNA types and functions has been discovered in the normal eukaryotic milieu, including miRNA, eRNA, and lincRNA. An RNA virus caused a pandemic, which was remedied by an RNA vaccine. Nobel prizes have been handed out in these fields, and we are also increasingly aware that RNA lies at the origin of life itself, as the first genetic and catalytic mechanism.

One of these Nobel prize winners recently undertook a hunt for small RNAs that might be lurking in the human microbiome- the soup of bacteria, fungi, and all the combined products that cover our surfaces, inside and out. What they found was astonishing- an RNA of merely 1164 nucleotides that folds up into a rigid, linear rod, a class they call "obelisks". This is not a product of the host genome, nor of any other known organism, but is rather some kind of extremely minimal pathogen that, like a transposon or self-splicing intron, is entirely nucleic-acid based. And the more they hunted, the more they found, ultimately turning up thousands of obelisk-like entities hidden in the world's many databases drawn from various environmental and microbiome samples. There is some precedent for this kind of structure, in the form of hepatitis D. This "viroid" of only 1682 nucleotides is a parasite of hepatitis B virus, depending on that virus for key replication functions. While normal viruses (like hepatitis B) encode many key functions of their own, like envelope proteins, genome packaging proteins, and replication enzymes, viroids tend not to encode anything, though hepatitis D does encode one antigenic protein, which exacerbates hepatitis B infections.

The obelisk RNA viroid-like species appear to encode one or two proteins, and possibly a ribozyme as well. The functions of all these are as yet unknown, but the RNAs necessarily rely entirely on functions of some host cell (currently unknown) to do their thing, such as an RNA polymerase to create copies of themselves. Unknown also is whether they depend on other viruses, or only on cells, for their propagation. With these entities just discovered, the researchers can do a great deal of bioinformatics, such as predicting the structure of the encoded protein, and the structure of the RNA genome. But key biology, like how they interact with host cells, what functions the host provides, and how they replicate, not to mention possible pathogenic consequences, remains unknown.

The highly self-complementary structure of one obelisk RNA sequence, leading to its identification and naming. In green is one reading frame, which codes for the main protein, of unknown function.

The curious thing about these new obelisk viroid-like RNAs is that, while repeatedly found in human microbiomes, both oral and gut-derived, they turn up in only 5-10% of samples, not in all of them. This suggests that they may account for some of the variability traceable to microbiomes, such as autoimmune issues, chronic ailments, nutritional variations, even effects on mood.

Once many databases were searched, obelisk RNAs turned up everywhere, even in some bacteria.

This work was done entirely in silico. Not a single wet-lab experiment was performed. It is a testament to the power of having a lot of genomes at our disposal, and of modern computational firepower. This lab just had the idea that novel small viroid-like RNAs might exhibit certain types of (circular, self-complementary) structure, which led to this discovery of a novel form of "life". Are these RNAs alive? Certainly not. They are mere molecules and parasites that feed off, and transport themselves between, more fully functional cells. But they are part of the tapestry of life, which itself is wholly molecular, with many amazing emergent properties. Whether these obelisks turn out to have any medical or ecological significance, they are one more example of the lengths (and shorts) to which Darwinian selection has gone in the struggle for existence. 
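To give a flavor of what such a screen involves, here is a toy Python sketch that scores how "rod-like" (self-complementary) a candidate RNA sequence is by folding it back on itself at its midpoint. This is a crude stand-in for the authors' actual pipeline, which used real secondary-structure prediction over assembled metatranscriptome data; it only illustrates the idea.

    # Toy scorer for the signature being hunted: a sequence largely complementary
    # to itself, so it can collapse into a rigid rod. Purely illustrative.
    def rod_score(rna):
        pairs = {("A","U"), ("U","A"), ("G","C"), ("C","G"), ("G","U"), ("U","G")}
        half = len(rna) // 2
        left, right = rna[:half], rna[half:][::-1]   # fold the sequence at its midpoint
        return sum((a, b) in pairs for a, b in zip(left, right)) / half

    # A hairpin-like toy sequence scores near 1; random sequences hover around 0.4.
    print(rod_score("GGGAUCCCAUAGGGAUCCC"))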


Sunday, July 30, 2023

To Sleep- Perchance to Inactivate OX2R

The perils of developing sleeping, or anti-sleeping, drugs.

Sleep- the elixir of rest and repose. While we know of many good things that happen during sleep- the consolidation of memories, the cardiovascular rest, the hormonal and immune resetting, the slow waves and glymphatic cleansing of the brain- we don't know yet why it is absolutely essential, and lethal if repeatedly denied. Civilized life tends to damage our sleep habits, given artificial light and the endless distractions we have devised, leading to chronic sleeplessness and a spiral of narcotic drug consumption. Some conditions and mutations, like narcolepsy, have offered clues about how sleep is regulated, which has led to new treatments, though to be honest, good sleep hygiene is by far the best remedy.

Genetic narcolepsy was found to be due to mutations in the second receptor of the hormone orexin (OX2R), or to auto-immune conditions that kill off a specialized set of neurons in the hypothalamus- a basal part of the brain that sits just over the brain stem. This region normally has ~50,000 neurons that secrete orexin (which itself comes in two kinds, 1 and 2), and project to areas all over the brain, especially basal areas like the basal forebrain and amygdala, to regulate not just sleep but feeding, mood, reward, memory, and learning. Like any hormone receptor, the orexin receptors can be approached in two ways- by turning them on (agonist) or by turning them off (antagonist). Antagonist drugs were developed which turn off both orexin receptors, and thus promote sleep. The first was named suvorexant, using the "orex" and "ant" lexical elements to mark its function, a convention now standard for generic drug names.

This drug is moderately effective, and is a true sleep enhancer, promoting falling asleep, restful sleep, and longer sleep, unlike some other sleep aids. Suvorexant antagonizes both receptors, but the researchers knew that only the deletion of OX2R, not OX1R (in dogs, mice, and other animals), generates narcolepsy, so they developed a drug specific to OX2R only. But the result was that it was less effective. It turned out that binding and turning off OX1R was helpful to sleep promotion, and there were no particularly bad side effects from binding both receptors, despite the wide-ranging activities they appear to have. So while the trial of Merck's MK-1064 was successful, it was not better than their existing two-receptor drug, and its development was shelved. And we learned something intriguing about this system. While all animals have some kind of orexin, only mammals have the second orexin family member and receptor, suggesting that some interesting, but not complete, bifurcation happened in the functions of this system in evolution. 

What got me interested in this topic was a brief article from yet another drug company, Takeda, which was testing an agonist of the orexin receptors in an effort to treat narcolepsy. They created TAK-994, which binds to OX2R specifically, and which showed a lot of promise in animal trials. It is an orally taken pill, in contrast to the existing treatment, danavorexton, which must be injected. In the human trial, it was remarkably effective, virtually eliminating cataplectic / narcoleptic episodes. But there was a problem- it caused enough liver toxicity that the trial was stopped and the drug shelved. Presumably, this company will try again, making variants of this compound that retain affinity and activity but not the toxicity. 

This brings up an underappreciated peril in drug design- where drugs end up. Drugs don't just go into our systems, hopefully slipping through the incredibly difficult gauntlet of our digestive system. But they all need to go somewhere after they have done their jobs, as well. Some drugs are hydrophilic enough, and generally inert enough, that they partition into the urine by dilution and don't have any further metabolic events. Most, however, are recognized by our internal detoxification systems as foreign, (that is, hydrophobic, but not recognizable as fats/lipids that are usual nutrients), and are derivatized by liver enzymes and sent out in the bile. 

Structure of TAK-994, which treats narcolepsy, but at the cost of liver dysfunction.

As you can see from the chemical structure above, TAK-994 is not a normal compound that might be encountered in the body, or as food. The amino sulfate is quite unusual, and the fluorines sprinkled about are totally unnatural. This would be a red flag substance, like the various PFAS materials we hear about in the news. The rings and fluorines create a relatively hydrophobic substance, which would need to be modified so that it can be routed out of the body. That is what a key liver enzyme, CYP3A4, does. It (and the many family members that have arisen over evolutionary time) oxidizes all manner of foreign hydrophobic compounds, using a heme cofactor to handle the oxygen. It can add -OH groups (hydroxylation), add oxygen across double bonds (epoxidation), and oxidize aromatic rings such as phenols (aromatic oxidation). 

But then what? Evolution has equipped us with appropriate enzymes and excretion routes for most of the toxic substances we encounter in nature. But the novel compounds we are making with modern chemistry are something else altogether. Some drugs are turned on by this process, waiting till they get to the liver to attain their active form. Others, apparently such as this one, are converted into toxic compounds (as yet unidentified) by this process, such that the liver is damaged. That is why animal studies and safety trials are so important. This drug binds to its target receptor, and does what it is supposed to do, but that isn't enough to be a good drug. 

 

Saturday, June 10, 2023

A Hard Road to a Cancer Drug

The long and winding story of the oncogene KRAS and its new drug, sotorasib.

After half a century of the "War on Cancer", new treatments are finally straggling into the clinic. It has been an extremely hard and frustrating road to study cancer, let alone treat it. We have learned amazing things, but mostly we have learned how convoluted a few billion years of evolution can make things. The regulatory landscape within our cells is undoubtedly the equal of any recalcitrant bureaucracy, full of redundant offices, multiple veto points, and stakeholders with obscure agendas. I recently watched a seminar in the field, which discussed one of the major genes mutated in cancer and what it has taken to develop a treatment against it. 

Cancer is caused by DNA mutations, and several different types need to occur in succession. There are driver mutations, which are the first step in the loss of normal cellular control. But additional mutations have to happen for such cells to progress through regulatory blocks, like escape from local environmental controls on cell type and cell division, past surveillance by the immune system, and past the reluctance of differentiated cells to migrate away from their resident organ. By the end, cancer cells typically have huge numbers of mutations, having incurred mutations in their DNA repair machinery in an adaptive effort to evade all these different controls.

While this means that many different targets exist that can treat some cancers, it also means that any single cancer requires a precisely tailored treatment, specific to its mutated genes. And that resistance is virtually inevitable given the highly mutable nature of these cells. 

One of the most common genes to be mutated to drive cancer (in roughly 20% of all cases) is KRAS, part of the RAS family of NRAS, KRAS, and HRAS. These were originally discovered through viruses that cause cancer in rats. These viruses (such as the Kirsten rat sarcoma virus) carry a copy of a rat gene, which they overproduce and use to overcome normal proliferation controls during infection. The viral gene was called an oncogene, and the original rat (or human) version was called a proto-oncogene, named KRAS. The RAS proteins occupy a central part of the signaling path that external events and stresses turn on to activate cell growth and proliferation, called the MAP kinase cascade. For instance, epidermal growth factor comes along in the blood, binds to a receptor on the outside of a cell, and turns on RAS, then RAF, MEK, MAPK, and finally transcription regulators that turn on genes in the nucleus, resulting in new proteins being expressed. "Turning on" means different things at each step in this cascade. The transcription regulators typically get phosphorylated by their upstream kinases like MAPK, which tag them for physical transport into the nucleus, where they can then activate genes. MAPK is turned on by being itself phosphorylated by MEK, and MEK is phosphorylated by RAF. RAF is turned on by binding to RAS, whose binding activity in turn is regulated by the state of a nucleotide (GTP) bound by RAS. When binding GTP, RAS is on, but when binding GDP, it is off.

A schematic of the RAS pathway, whereby extracellular growth signals are interpreted and amplified inside our cells, resulting in new gene expression as well as other more immediate effects. The cell surface receptor, activated by its ligand, activates associated SOS which activates RAS to the active (GTP) state. This leads to a kinase cascade through RAF, MEK, and MAPK and finally to gene regulators like MYC.

This whole system seems rather ornate, but it accomplishes one important thing, which is amplification. One turned-on RAF molecule or MEK molecule can turn on / phosphorylate many targets, so this cascade, though it appears linear in a diagram, is actually a chain reaction of sorts, amplifying as it goes along. And what governs the state of RAS and its bound GTP? The state of the EGF receptor (EGFR), of course. When KRAS is activated, the resident GDP leaves, and GTP comes to take its place. RAS is a weak GTPase enzyme itself, slowly converting itself from the active back to the inactive state with GDP. 
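As a back-of-the-envelope illustration of that amplification, consider the sketch below; the per-step number is invented for the example, since real turnover depends on kinetics, phosphatases, and scaffolding:

    # Each active kinase can phosphorylate many substrate molecules before being
    # switched off. Assume ~50 per step (an arbitrary illustrative figure).
    substrates_per_kinase = 50
    active = 1                                   # one RAF molecule activated by RAS
    for step in ("RAF -> MEK", "MEK -> MAPK", "MAPK -> substrates"):
        active *= substrates_per_kinase
        print(f"{step}: ~{active:,} molecules activated downstream")
    # A single activation event at the top can fan out to ~125,000 phosphorylations.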

Given all this, one would think that RAS, and KRAS in particular, might be "druggable", by sticking some well-designed molecule into the GTP/GDP binding pocket and freezing it in an inactive state. But the sad fact of the matter is that the affinity KRAS has for GTP is incredibly high- so high it is hard to measure, with a binding constant of about 20 pM. That is, GTP occupancy only falls to half when the ambient GTP concentration drops to an infinitesimal 0.02 nanomolar; in cells, where GTP is vastly more abundant, the pocket is essentially always occupied. This means that nothing else is likely to be designed that can displace GTP or GDP from the KRAS protein, which means that in traditional terms, it is "undruggable". What is the biological logic of this? Well, it turns out that the RAS enzymes are managed by yet other proteins, which have the specific roles of prying GDP off (guanine nucleotide exchange factor, or GEF) and of activating the GTPase activity of RAS to convert GTP to GDP (GTPase activating protein, or GAP). It is the GEF protein that is stimulated by receptors like EGFR to induce RAS activity. 
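To put that number in perspective, here is the standard occupancy arithmetic; the cellular GTP concentration is an assumed textbook-range figure, not a value from the seminar:

    # Fraction of KRAS with nucleotide bound at equilibrium: [GTP] / (Kd + [GTP]).
    Kd  = 20e-12    # ~20 picomolar, the binding constant quoted above
    gtp = 0.5e-3    # ~0.5 millimolar free GTP in cells (assumed, textbook-range value)
    occupancy = gtp / (Kd + gtp)
    print(f"fraction of KRAS occupied by nucleotide: {occupancy:.8f}")   # ~0.99999996
    # A competitor aimed at the nucleotide pocket would have to beat odds like these,
    # which is why that pocket is considered undruggable.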

So we have to be cleverer in finding ways to attack this protein. Incidentally, most of the oncogenic mutations of KRAS are at the twelfth residue, glycine, which occupies a key part of the GAP binding site. As glycine is the smallest amino acid, any other amino acid here is bulkier, and blocks GAP binding, which means that KRAS with any of these mutations can not be turned off. It just keeps on signaling and signaling, driving the cell to think it needs to grow all the time. This property of gain of function and the ability of any mutation to fit the bill is why this particular defect in KRAS is such a common cancer-driving mutation. It accounts for ~90% of pancreatic cancers, for instance. 

The seminar went on a long tangent, which occupied the field (of those looking for ways to inhibit KRAS with drugs) for roughly a decade. RAS proteins are not intrinsically membrane proteins, but they are covalently modified with a farnesyl lipid tail, which keeps them stuck in the cell's plasma membrane. Indeed, if this modification is prevented, RAS proteins don't work. So great- how to prevent that? Several groups developed inhibitors of the farnesyl transferase enzyme that carries out this modification. The inhibitors worked great, since farnesyl transferase has a nice big pocket for its large substrate to bind, and doesn't bind it too tightly. But they didn't inhibit the RAS proteins, because there was a backup system- geranylgeranyl transferase, which steps into the breach and can attach an even bigger lipid tail to RAS proteins. Arghhh!

While some are working on inhibiting both enzymes, the presenter, Kevan Shokat of UCSF, went in another direction. As a chemist, he figured that for the fraction of KRAS mutants at position 12 that change glycine to cysteine, some very specific chemistry (that is, easy methods of cross-linking) can be brought to bear. Given the nature of the genetic code, the fraction of mutations that go from glycine to cysteine is small: the position-12 codon GGT can reach six other amino acids by a single base change, and only one of them is cysteine. So at best, this approach is going to have a modest impact. Nevertheless, there was little choice, so they forged ahead with a complicated chemical scheme to make a small molecule that could chemically crosslink to that cysteine, with selectivity determined by a modest shape fit to the surface of the KRAS protein near this GEF binding site. 
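The codon arithmetic can be checked directly; the sketch below uses an abbreviated genetic-code table containing only the codons reachable from GGT:

    # Which amino acids can a single base change produce from the glycine codon GGT?
    CODE = {"GGT": "Gly", "GGA": "Gly", "GGC": "Gly", "GGG": "Gly",
            "AGT": "Ser", "CGT": "Arg", "TGT": "Cys",
            "GAT": "Asp", "GCT": "Ala", "GTT": "Val"}
    codon, reachable = "GGT", set()
    for i in range(3):
        for base in "ACGT":
            if base != codon[i]:
                reachable.add(CODE[codon[:i] + base + codon[i+1:]])
    reachable.discard("Gly")             # third-position changes are synonymous
    print(sorted(reachable))             # ['Ala', 'Arg', 'Asp', 'Cys', 'Ser', 'Val']
    # Of these six, only cysteine offers the reactive thiol that covalent chemistry needs.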

A structural model of KRAS, with its extremely tightly-bound substrate GDP in orange. The drug sotorasib is below in teal, bound in another pocket, with a tail extending upwards to the (mutant) cysteine 12, which is not differentiated by color, but sits over a magnesium ion (green) being coordinated by GDP. The main job of sotorasib is to interfere with the binding of the guanine exchange factor (GEF) which happens on the surface to its left, and would reset KRAS to an active state.

This approach worked surprisingly well, as the KRAS protein obligingly offered a cryptic nook that the chemists took advantage of to make this hybrid compound, now the drug sotorasib. This is an FDA-approved treatment for cancers that are specifically driven by this particular KRAS mutation of position 12 from glycine to cysteine. That research group is currently trying to extend the method to other mutant forms, with modest success. 

So let's take a step back. This new treatment requires, obviously, the patient's tumor to be sequenced to figure out its molecular nature. That is pretty standard these days. And then, only a small fraction of patients will get the good news that this drug may help them. Lung cancers are the principal candidates currently (about 15% of them have this mutation), while only about 1-2% of other cancers have it. This drug has some toxicity- while it is a magic bullet, its magic is far from perfect (which is odd given the exquisite selectivity it has for the mutated form of KRAS, which should only exist in cancer tissues). And lastly, it gives, on average, under six months of reprieve from cancer progression, compared to four and a half months with a more generic drug. As mentioned above, tumors at this stage are riven with other mutations and evolve resistance to this treatment with appalling relentlessness.

While it is great to have developed a new class of drugs like this one against a very recalcitrant target, and done so on a highly rational basis driven by our growing molecular knowledge of cancer biology, this result seems like a bit of a let-down. And note also that this achievement required decades of publicly funded research, and doubtless a billion dollars or more of corporate investment to get to this point. Costs are about twenty-five thousand dollars per patient, and overall sales are maybe two hundred million dollars per year, expected to increase steadily.

Does this all make sense? I am not sure, but perhaps the important part is that things can not get worse. The patent on this drug will eventually expire and its costs will come down. And the research community will keep looking for other, better ways to attack hard targets like KRAS, and will someday succeed.


Saturday, February 4, 2023

How Recessive is a Recessive Mutation?

Many relationships exist between mutation, copy number, and phenotype.

The traditional setup of Mendelian genetics is that an allele of a gene is either recessive or dominant. Blue eyes are recessive to brown eyes, for the simple reason that blue arises from the absence of an enzyme, due to a loss of function mutation. So having some of that enzyme, from even one "brown" copy of that gene, is dominant over the defective "blue" copy. You need two "blue" alleles to have blue eyes. This could be generalized to most genes, especially essential genes, where lacking both copies is lethal, while having one working copy will get you through, and cover for a defective copy. Most gene mutations are, by this model, recessive. 

But most loci and mutations implicated in disease don't really work like that. Some recent papers delved into the genetics of such mutations, and observed that their recessiveness was all over the map- a spectrum, really, of effects from fully recessive to dominant, with most in the middle ground. This is informative for clinical genetics, but also for evolutionary studies, suggesting that evolution is not, after all, blind to the majority of mutations, which are mostly deleterious, exist most of the time in the heterozygous (one-copy) state, and would be wholly recessive by the usual assumption.

The first paper describes a large study of the Finnish population, which benefited from several advantages. First, Finns have a good health system with thorough records, which are housed in a national biobank; the study used 177,000 health records and 83,000 variants in coding regions of genes collected from sequencing studies. Second, the Finnish population is relatively small and has experienced bottlenecks from smaller founding populations, which amplifies the prevalence of variants that those founders carried. That allows those variants to rise to higher rates of appearance, especially in the homozygous state, which generally causes more noticeable disease phenotypes. Both the detectability and the statistics were powered by this higher incidence of some deleterious mutations (while others, naturally, would have been rarer than the world-wide average, or absent altogether).
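The arithmetic behind that founder advantage is just Hardy-Weinberg: under random mating, a recessive variant at allele frequency q appears homozygous at frequency q squared, so even a modest founder-driven rise in q pays off quadratically in detectable cases. The particular frequencies below are illustrative, not taken from the paper:

    f_{\text{hom}} = q^{2}: \qquad q = 0.001 \;\Rightarrow\; q^{2} = 10^{-6}, \qquad q = 0.01 \;\Rightarrow\; q^{2} = 10^{-4}

A ten-fold enrichment of the allele thus yields a hundred-fold enrichment of homozygotes.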

Third, the authors emphasize that they searched for various levels of recessive effect, contrary to the usual practice of just assuming a linear effect. A linear model says that one copy of a mutation has half the effect of two copies- which is true sometimes, but not most of the time, especially in more typical cases of recessive effect where one copy has a good deal less effect, if not zero. Returning to eye color, if one looks in detail, there are many shades of eyes, even of blue eyes, so it is evident that the alleles that affect eye color are various, and express to different degrees (have variable penetrance, in the parlance). While complete recessiveness happens frequently, it is not the most common case, since we generally do not routinely express excess amounts of protein from our genes, making loss of one copy noticeable most of the time, to some degree. This is why the lack of a whole chromosome, or an excess of a whole chromosome, has generally devastating consequences. Trisomies of only three chromosomes are viable (that is, not lethal), and they confer various severe syndromes.

A population proportion plot vs age of disease diagnosis for three different diseases and an associated genetic variant. In blue is the normal ("wild-type") case, in yellow the heterozygote, and in red the homozygote with two variant alleles. For "b", the total lack of XPA causes skin cancer with juvenile onset, and the homozygous case is not shown. The Finnish data allowed detection of rather small recessive effects from variations that are common in that population. For instance, "a" shows the barely discernible advancement of age of diagnosis for a disease (hearing loss) that in the homozygous state is universal by age 10, caused by mutations in GJB2.

The second paper looked more directly at the fitness cost of variations over large populations, in the heterozygous state. They looked at loss-of-function (LOF) mutations of over 17,000 genes, studying their rate of appearance and loss from human populations, as well as in pedigrees. These rates were turned, by a modeling system, into fitness costs, which are stated in percentage terms, vs wild type. A fitness cost of 1% is pretty mild, (though highly significant over longer evolutionary time), while a fitness cost of 10% is quite severe, and one of 100% is immediately lethal and would never be observed in the population. For example, a mutation that is seen rarely, and in pedigrees only persists for a couple of generations, implies a fitness cost of over 10%.

They come up with a parameter "hs", which is the fitness cost "s" of losing both copies of a gene, multiplied by "h", a measure of the dominance of the mutation in a single copy.
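In the standard population-genetics notation (which this "hs" follows), the three genotypes are assigned fitnesses:

    w_{+/+} = 1, \qquad w_{+/\text{LOF}} = 1 - hs, \qquad w_{\text{LOF}/\text{LOF}} = 1 - s

Here h = 0 is the fully recessive case, h = 1/2 is the additive or "linear" model mentioned above, and h = 1 is full dominance; the paper's exercise is to estimate hs, the heterozygous cost, gene by gene.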


In these graphs, human genes are stacked up on the Y axis, sorted by their computed "hs" fitness cost in the heterozygous state. Error bars are in blue, showing that this is naturally a rather error-prone exercise in estimation. But what is significant is that most genes are somewhere on the spectrum, with very few having negligible effects (bottom), and many having highly significant effects (top). Genes on the X chromosome are naturally skewed to much higher significance when mutated, since in males there is no other copy, and even in females, one X chromosome is (randomly) inactivated to provide dosage compensation- that is, to match the male dosage of production of X genes- which results in much higher penetrance for females as well.


So the bottom line is that while diploidy helps to hide a lot of variation in sexual organisms, and in humans in particular, it does not hide it completely. We are each estimated to receive, at birth, about 70 new mutations, of which 1/1000 are the kind of total loss of gene function studied here. This work then estimates that 20% of those loss-of-function mutations have a severe fitness effect of >10%, meaning that about one in seventy zygotes carries such a new mutation, not counting what it has inherited from its parents, and will suffer ill effects immediately, even though it has a wild-type copy of that gene as well.
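That "one in seventy" follows directly from the numbers given:

    70 \times \tfrac{1}{1000} \times 0.2 = 0.014 \approx \tfrac{1}{70}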

Humans, like other organisms, have a large mutational load that is constantly under surveillance by natural selection. The fact that severe mutations routinely still have significant effects in the heterozygous state is both good and bad news. Good in the sense that natural selection has more to work with and can gradually whittle down their frequency without waiting for the chance of two copies meeting in an unfortunate homozygous state. But bad in the sense that it adds to our overall phenotypic variation and health difficulties a whole new set of deficiencies that, while individually and typically minor, are also legion.


Saturday, December 3, 2022

Senescent, Cancerous Cannibals

Tumor cells not only escape normal cell proliferation controls, but some of them eat nearby cells.

Our cells live in an uneasy truce. Cooperation is prized and specialization into different cell types, tissues, and organs is pervasive. But deep down, each cell wants to grow and survive, prompting many mechanisms of control, such as cell suicide (apoptosis) and immunological surveillance (macrophages, killer T-cells). Cancer is the ultimate betrayal, not only showing disregard for the ruling order, but in its worst forms killing the whole organism in a pointless drive for growth.

A fascinating control mechanism that has come to prominence recently is cellular senescence. In petri dishes, cells can only be goosed along for a few dozen cycles of division until they give out, and become senescent. Which is to say, they cease replicating but remain alive. It was first thought that this was another mechanism to keep cancer under control, restricting replication to "stem" cells and their recent progeny. But a lot of confusing and interesting observations indicate that the deeper meaning of senescence lies in development, where it appears to function as an alternate form of cell suicide, delayed so that tissues are less disrupted. 

Apoptosis is used very widely during development to reshape tissues, and senescence is used extensively as well in these programs. Senescent cells are far from quiescent, however. They have high metabolic activity and are particularly notorious for secreting a witches' brew of inflammatory cytokines and other proteins- the senescence-associated secretory phenotype, or SASP. In the normal course of events, this attracts immune system cells which initiate repair and clearance operations that remove the senescent cells and make sure the tissue remains on track to fulfill its program. These SASP products can turn nearby cells to senescence as well, and form an inflammatory micro-environment that, if resolved rapidly, is harmless, but if persistent, can lead to bad, even cancerous local outcomes. 

The significance of senescent cells has been highlighted in aging, where they are found to be immensely influential. To quote the wiki site:

"Transplantation of only a few (1 per 10,000) senescent cells into lean middle-aged mice was shown to be sufficient to induce frailty, early onset of aging-associated diseases, and premature death."

The logic behind all this seems to be another curse of aging: while we are young, senescent cells are cleared with very high efficiency. But as the immune system ages, a small proportion of senescent cells is missed. These escapees- evolutionarily speaking, an afterthought- gradually accumulate with age and powerfully push the aging process along, not least by fueling the chronic inflammation we are otherwise so anxious to reduce. A quest for "senolytic" therapies to clear senescent cells is becoming a big theme in academia and the drug industry and may eventually have very significant benefits. 

Another odd property of senescent cells is that their program, and the effects they have on nearby cells, partially resemble those of stem cells. That is, the prevention of cell death is a common property, as is the loosening of certain controls that enforce differentiation. This brings us to tumor cells, which frequently enter senescence under stress, like that of chemotherapy. This fate is highly ambivalent. It would have been better for such cells to die outright, of course. Most senescent tumor cells stay in senescence, which is bad enough, given their SASP effects in the local environment. But a few tumor cells emerge from senescence (whether due to further mutations or other sporadic properties is as yet unknown), and they do so with a more stem-like character that makes them more proliferative and malignant.

A recent paper offered a new wrinkle on this situation, finding that senescent tumor cells have a novel property- that of eating neighboring cells. As mentioned above, senescent cells have high metabolic demands, as do tumor cells, so finding enough food is always an issue. But in the normal body, only very few cells are empowered to eat other cells- i.e. those of the immune system. To find other cells doing this is highly unusual, interesting, and disturbing. It is one more item in the list of bad things that happen when senescence and cancer combine forces.

A senescent tumor cell (green) phagocytoses and digests a normal cell (red).


  • Shockingly, some people are decent.
  • Tangling with the medical system carries large risks.
  • Is stem cell therapy a thing?
  • Keep cats indoors.

Saturday, September 17, 2022

Death at the Starting Line- Aneuploidy and Selfish Centromeres

Mammalian reproduction is unusually wasteful, due to some interesting processes and tradeoffs.

Now that we have settled the facts that life begins at conception and abortion is murder, a minor question arises. There is a lot of murder going on in early embryogenesis, and who is responsible? Probably god. Roughly two-thirds of embryos that form are aneuploid (have an extra chromosome or lack a chromosome) and die, usually very soon. Those that continue to later stages of pregnancy cause a high rate of miscarriages-about 15% of pregnancies. A recent paper points out that these rates are unusual compared with most eukaryotes. Mammals are virtually alone in exhibiting such high wastefulness, and the author proposes an interesting explanation for it.

First, some perspective on aneuploidy. Germ cells go through a two-stage process of meiosis where their DNA is divided two ways, first by homolog pairs (that is, the sets inherited from each parent, with some amount of crossing-over that provides random recombination), and second by individual chromosomes. In more primitive organisms (like yeast) this is an efficient, symmetrical, and not-at-all wasteful process. Any loss of genetic material would be abhorrent, as the cells are putting every molecule of their being into the four resulting spores, each of which is viable.

A standard diagram of meiosis. Note that the microtubules (yellow) engage in a gradual and competitive process of capturing centromeres of each chromosome to arrive at the final state of regular alignment, which can then be followed by even division of the genetic material and the cell.


In animals, on the other hand, meiosis of egg cells is asymmetric, yielding one ovum / egg and three polar bodies, which  have various roles in some species to assist development, but are ultimately discarded. This asymmetric division sets up a competition between chromosomes to get into the egg, rather than into a polar body. One would think that chromosomes don't have much say in the matter, but actually, cell division is a very delicate process that can be gamed by "strong" centromeres.

Centromeres are the central structures on chromosomes that form attachments to the microtubules forming the mitotic spindle. This attachment process is highly dynamic and even competitive, with microtubules testing out centromere attachment sites, and using tension ultimately as the mark of having a properly oriented chromosome with microtubules from each side of the dividing cell (i.e. each microtubule organizing center) attached to each of the centromeres, holding them steady and in tension at the midline of the cell. Well, in oocytes, this does not happen at the midline, but lopsidedly towards one pole, given that one of the product cells is going to be much larger than the others. 

In oocytes, cell division is highly asymmetric with a winner-take-all result. This opens the door to a mortal competition among chromosomes to detect which side is which and to get on the winning side. 

One of the mysteries of biology is why centromeres are highly degenerate, and also speedily evolving, structures. They are made up of huge regions of monotonously repeated DNA, which have been especially difficult to sequence accurately. Well, this competition to get into the next generation can go some way to explain this structure, and also why it changes rapidly (on evolutionary time scales), as centromeric repeats expand to capture more microtubules and get into the egg, while other portions of the machinery evolve to dampen this unsociable behavior and keep everyone in line. It is a veritable arms race. 

But the funny thing is that it is only mammals that show a particularly wasteful form of this behavior, in the form of frequent aneuploidy. The competition is so brazen that some centromeres force their way into the egg when there is already another copy there, generating at best a syndrome like Down, but for all other chromosomes than #21, certain death. This seems rather self-defeating. Or does it?

The latest paper observes that mammals devote a great deal of care to their offspring, making them different from fish, amphibians, and even birds, which put most of their effort into producing the very large egg, and relatively less (though still significant amounts) into care of infants. This huge maternal investment means that, from the standpoint of the rudely intruding extra chromosome, causing a miscarriage or earlier termination is not a total loss at all. No, it allows resource recovery in the form of another attempt at pregnancy, typically quite soon thereafter, at which point the pushy chromosome gets another chance to form a proper egg. It is a classic case of extortion at the molecular scale. 


  • Do we have rules, or not?
  • How low will IBM go, vs its retirees?

Saturday, September 10, 2022

Sex in the Brain

The cognitive effects of gonadotropin-releasing hormone.

If you watch the lesser broadcast TV channels, there are many ads for testosterone- elixir of youth, drive, and manliness, with blaring sales pitches. Is it any good? Curiously, taking testosterone can cause a lot of sexual dysfunction, due to feedback loops that carefully tune its concentration. So generally no, it isn't much good. But that is not to say that it isn't a powerful hormone. A cascade of other events and hormones leads to the production of testosterone, and a recent paper (review) discussed the cognitive effects of one of its upstream inducers, gonadotropin-releasing hormone, or GnRH. 

The story starts on the male Y chromosome, which carries the gene SRY. This is a transcription activator that (working with and through a blizzard of other regulators and developmental processes) is ultimately responsible for switching the primitive gonad to the testicular fate, from its default, which is female / ovarian. This newly hatched testis contains Sertoli cells, which secrete anti-Mullerian hormone (AMH, a gene that is activated by SRY directly), which in the embryo drives the regression of female characteristics. At the same time, testosterone from testicular Leydig cells drives development of male physiology. The initial Y-driven setup of testosterone is quickly superseded by hormones of the gonadotropin family, one form of which is provided by the placenta. Gonadotropins continue to be essential through development and life to maintain sexual differentiation. The placental source declines by the third trimester, by which time the pituitary has formed and takes over gonadotropin secretion. It secretes two gonadotropin family members, follicle-stimulating hormone (FSH) and luteinizing hormone (LH), each of which, despite their names, actually has key roles in male as well as female reproductive development and function. After birth, testosterone levels decline and everything is quiescent until puberty, when the hormonal axis driven by the pituitary reactivates.

Some of the molecular/genetic circuitry leading to very early sex differentiation. Note the leading role of SRY in driving male development. Later, ongoing maintenance of this differentiation depends on the gonadotropin hormones.

This pituitary secretion is in turn stimulated by gonadotropin releasing hormone (GnRH), which is the subject of the current story. GnRH is produced by neurons that, in embryogenesis, originate in the nasal / olfactory epithelium and migrate to the hypothalamus, close enough to the pituitary to secrete directly into its blood supply. This circuit is what revs up in puberty and continues in fine-tuned fashion throughout life to maintain normal (or declining) sex functions, getting feedback from the final sex hormones like estrogen and testosterone in general circulation. The interesting point that the current paper brings up is that GnRH is not just generated by neurons pointing at the pituitary. There is a whole other set of neurons in the hypothalamus that also secrete GnRH, but which project (and secrete GnRH) into the cortex and hippocampus- higher regions of the brain. What are these neurons, and this hormone, doing there?

The researchers note that people with Down Syndrome characteristically have both cognitive and sexual defects resembling incomplete development (among many other issues), the latter of which resemble or reflect a lack of GnRH, suggesting a possible connection. Puberty is a time of heightened cognitive development, and they guessed that this is perhaps what is missing in Down Syndrome. Down Syndrome typically leads to early-onset Alzheimer disease, which is also characterized by a lack of GnRH, as is menopause, and perhaps other conditions. After going through a bunch of mouse studies, the researchers supplemented seven men affected by Down Syndrome with extra GnRH via miniature pumps to their brains, aimed at target areas of this hormone in the cortex. It is noteworthy that GnRH secretion is highly pulsatile, with a roughly 2 hour period, which they found to be essential for a positive effect. 

Results from the small-scale intervention with GnRH injection. Subjects with Down Syndrome had higher cortical connectivity (left) and could draw from a 3-D model marginally more accurately.

The result (also seen in mouse models of Down Syndrome and of Alzheimer's Disease) was that the infusion significantly raised cognitive function over the ensuing months. It is an amazing and intriguing result, indicating that GnRH drives significant development and supports ongoing higher function in the brain, which is quite surprising for a hormone thought to be confined to sexual functions. Whether it can improve cognitive functions in fully developed adults lacking impeding developmental syndromes remains to be seen. Such a finding would be quite unlikely, though, since the GnRH circuit is presumably part of the normal program that establishes the full adult potential of each person, which evolution has strained to refine to the highest possible level. It is not likely to be a magic controller that can be dialed beyond "max" to create super-cognition.

Why does this occur in Down Syndrome? The authors devote a good bit of the paper to an interesting further series of experiments, focusing on regulatory micro-RNAs, several of which are encoded in genomic regions duplicated in Down Syndrome. microRNAs are typically regulators that repress the expression of their target genes, so extra copies of them would explain how this whole circuitry of normal development, now including key brain functions, is under-activated in those with Down Syndrome.

The authors offer a subset of regulatory circuitry focusing on micro-RNA repressors of which several are encoded on the trisomic chromosome regions.

"HPG [hypothalamus / pituitary / gonadal hormone] axis activation through GnRH expression at minipuberty (P12; [the phase of testoserone expression in late mouse gestation critical for sexual development]) is regulated by a complex switch consisting of several microRNAs, in particular miR-155 and the miR-200 family, as well as their target transcriptional repressor-activator genes, in particular Zeb1 and Cebpb. Human chromosome 21 and murine chromosome 16 code for at least five of these microRNAs (miR-99a, let-7c, miR-125b-2, miR-802, and miR-155), of which all except miR-802 are selectively enriched in GnRH neurons in WT mice around minipuberty" - main paper

So, testosterone (or estrogen, for that matter) isn't likely to unlock better cognition, but a hormone a couple of steps upstream just might- GnRH. And it acts not through the bloodstream, but through direct delivery by neurons projecting into key areas of the brain, both during development and on an ongoing basis through adulthood. Biology as a product of evolution comprises systems that are highly integrated, not to say jury-rigged, which makes biology as a science difficult, being the quest to separate all the variables and delineate what each component and process is doing.


Sunday, July 10, 2022

Tooth Development and Redevelopment

Wouldn't it be nice to regrow teeth? Sharks do.

Imagine for a minute if instead of fillings, crowns, veneers, posts, bridges, and all the other advanced technologies of dental restoration, a tooth could be removed and an injection could prompt the growth of a complete replacement tooth. That would be amazing, right? Other animals, such as sharks and fish, regrow teeth all the time. But we only get two sets- our milk teeth and mature teeth. While mature mammalian teeth are incredibly tough and generally last a lifetime, modern agriculture and other conditions have thrown a wrench into human dental health, which modern dentistry has only partially restored. As evolution proceeded into the mammalian line, tooth development became increasingly restricted and specialized, so that, in contrast to the generic teeth that sharks spit out throughout their lives, our teeth have become tailored for various needs across the mouth, firmly anchored into the jaw bone, and precisely shaped to fit against each other. But the price for this high-level feature set seems to be that we have lost the ability to replace them.

So researchers are studying tooth development in other animals- wondering how similar they are to human development, and whether some of their tricks can be atavistically re-stimulated in our own tissues. While the second goal remains a long way off, the first has been productively pursued, with teeth forming a model system of complex tissue development. A recent paper (with review) looked at similarities between molecular details of shark and mammalian tooth development.

Teeth are the result of an interaction between epithelial tissues and mesenchymal tissues- two of the three fundamental tissues of early embryogenesis. Patches of epithelium form dental arches around the two halves of the future mouth. Spots around these arches expand into dental placodes, which grow into buds, and as they interact continuously with the inner mesenchyme, form enamel knots. The epithelial cells of the knot then eventually start producing enamel as they pull away from the interface, while the mesenchymal cells produce dentin and then the pulp and other bone-anchoring tissues of the inner tooth and root as they pull away in the opposite direction. 

Embryonic tooth development, which depends heavily on the communication between epithelial tissue (white) and mesenchymal tissue (pink). An epithelial "enamel knot" (PEK/ SEK) develops at the future cusp(s), where enamel will be laid down by the epithelial cells, and dentin by the mesenchymal cells. Below are some of the molecules known to orchestrate the activities of all these cells. Some of these molecules are extracellular signals (BMP, FGF, WNT), while others are cell-internal components of the signaling systems (LEF, PAX, MSX).

Naturally, all this doesn't happen by magic, but by a symphony of gene expression and molecular signals going back and forth. These signals are used in various combinations in many developmental processes, but given the cell types located here, due to the prior location-based patterning of the embryo in larger coordinate schemes, and the particular combination of signals, they orchestrate tooth development. Over evolution, these signals have been diverse in the highest degree across mammals, creating teeth of all sorts of conformations and functions, from whale baleen to elephant tusks. The question these researchers posed was whether sharks use the same mechanisms to make their teeth, which across that phylum are also highly diverse in form, including complicated cusp patterns. Indeed, sharks even develop teeth on their skin- miniature teeth called denticles.

Shark skin is festooned with tiny teeth, or denticles.

These authors show detailed patterns of expression of a variety of the known gene-encoded components of tooth development, in a shark. For example, WNT11 (C) is expressed right at the future cusp, also known as the enamel knot, an organizing center for tooth development. Dental epithelium (de) and dental mesenchyme (dm) are indicated. Cell nuclei are stained with DAPI, in gray. Dotted lines indicate the dental lamina composed of the dental epithelium, and large arrows indicate the presumptive enamel knot, which prefigures the cusp of the tooth and future enamel deposition.

The answer- yes indeed. For instance, sharks use the WNT pathway (panel C) and associated proteins (panels A, B, D) in the same places as mammals do, to determine the enamel knot, cusp formation, and the rest. The researchers use some chemical enhancers and inhibitors of WNT signaling to demonstrate relatively mild effects, with the inhibitor reducing tooth size and development, and the enhancer causing bigger teeth, occasionally with additional cusps. While a few differences were seen, overall, tooth development in sharks and mammals is quite similar in molecular detail. 

The researchers even went on to deploy a computer model of tooth development that incorporates twenty six gene and cellular parameters, which had been developed for mammals. They could use it to model the development of shark teeth quite well, and also model their manipulations of the WNT pathway to come out with realistic results. But they did not indicate that the overall differences in detail between mouse and shark tooth development were recapitulated faithfully by these model alterations. So it is unlikely that strict correspondence of all the network functions could be achieved, even though the overall system works similarly.

The authors offer a general comparison of mouse and shark tooth development, centered around the dental epithelium, with mesenchyme in gray. Most genes are the same (that is, orthologous) and expressed in the same places, especially including an enamel knot organizing center. For mouse, a WNT analog is not indicated, but does exist and is an important class of signal.

These authors did not, however, touch on the question of why tooth production stops in mammals but is continuous in sharks. That is probably determined at an earlier point in the tissue identity program. Another paper indicated that a few of the epithelial stem cells that drive tooth development remain in our mouths through adulthood. Indeed, these cells cause rare cancers (ameloblastomas). It is these cells that might be harnessed, if they could be prodded to multiply and re-enter their developmental program, to create new teeth.


  • Boring, condescending, disposable, and modern architecture is hurting us.
  • Maybe attacking Russia is what is needed here.

Sunday, May 29, 2022

Evolution Under (Even in) Our Noses

The Covid pandemic is a classic and blazingly fast demonstration of evolution.

Evolution has been "controversial" in some precincts. While tradition told the fable of Genesis, evolution told a very different story of slow yet endless change and adaptation: a mechanistic story of how humans ultimately arose. The stark contrast between these stories, touching both on the family tree we are heir to and on the overall point and motivation behind the process, caused a lot of cognitive dissonance, and is a template for how a fact can be drawn into the left/right, blue/red, traditional/progressive cultural vortex.

This all came to a head a couple of decades ago when, in the process of strategic retreat, anti-evolution forces latched onto some rather potent formulations, like "just a theory" and "intelligent design". These were given a lot of think tank support and right-wing money, as ways to keep doubt alive in a field that had been scientifically settled and endlessly ramified for decades. To scientists it was the height of absurdity, but it forced them to wade into the cultural sphere in ways that didn't always connect with their intended audience. Eventually the tide turned: courts recognized that religion was behind it all and kept it out of the schools. Evolution has more or less successfully receded from hot-button status.

One of the many rearguard arguments of anti-evolutionists was that sure, there is short-term evolution, like that of microbes or viruses, but that doesn't imply that larger organisms are the way they are due to evolution and selection. That would be simply beyond the bounds of plausibility, so we should search for explanations elsewhere. At this point they were a little gun-shy and didn't go so far in public as to say that elsewhere might be in a book like the Bible. This line of argument was a little ironic, since Darwin himself hardly knew about microbes, let alone viruses, when he wrote his book. The evidence that he adduced (in some profusion) described the easily visible signs of geology, and of animals and plants around the world (including familiar domestic animals), which all led to the subtle, yet vast, implications he drew about evolution by selection.

So it has been notable that the vistas of biology that opened up since that time, in microbiology, paleontology, genetics, molecular biology, et al., have all been guided by these original insights and have in turn supported them without fail. No fossils are found out of order in the strata, no genes or organisms parachute in without antecedents, and no chicken happens without an egg. Evolution makes sense of all of biology, including our current pandemic.

But you wouldn't know it from the news coverage. New variants keep rising into the headlines, and we are told to "brace" for the next surge, or the next season. Well, what has happened is that the SARS-COV2 virus has adapted to us, as we have to it, and we are getting along pretty well at this point. Our adaptation to it began as a social (or antisocial!) response that was very effective in frustrating transmission. But of late, it has been more a matter of training our immune systems, which have an internal selective principle of their own. Between rampant infections and the amazing vaccines, we have put up significant protective barriers to severe illness, though not, notably, to transmission.

But what about the virus? It has adapted in the most classic of ways, by experiencing a wide variety of mutations that address its own problems of survival. It is important to remember that this virus originated in some other species (likely a bat) and was not very well adapted to humans. Bats apparently harbor countless viruses of this kind that don't do them much harm. Similarly, HIV originated in chimpanzee viruses that didn't do their hosts much harm either. Viruses are not inherently interested in killing us. No, they survive and transmit best if they keep us walking around, happily breathing on other people, with maybe an occasional sneeze. The ultimate goal of every virus is to stay under the radar, not causing its host to either isolate or die. (I can note parenthetically that viruses that do not hew to this paradigm, like smallpox, are typically less able to mutate, and thus less adaptable, or have some route of transmission other than upper respiratory spread.)

And that is clearly what has happened with SARS-COV2. Local case rates in my area are quite high, and wastewater surveillance indicates even higher prevalence. Isolation and mask mandates are history. Yet hospitalizations remain very low, with no one in the ICU right now. Something wonderful has happened. Part of it is our very high local vaccination rate (96% of the population), but another part is that the virus has become less virulent as it has adapted to our physiology, immune systems, media environment, and social practices, on its way to becoming endemic, and increasingly innocuous. All this in a couple of years of world-wide spread, after billions of infections and transmissions.

The succession (i.e. evolution) of variants detected in my county

The trend of local wastewater virus detection, which currently shows quite high levels, despite mild health outcomes.

So what has the virus been doing? While it has many genes and interactions with our physiology, the major focus has been on the spike protein, which is most prominent on the viral surface, is the first protein to dock to specific human proteins (the ACE2 cell surface receptor), and is the target of all the mRNA and other specific subunit vaccines. (As distinct from the killed virus vaccines that are made from whole viruses.) It is the target of 40% of the antibodies we naturally make against the whole virus, if we are infected. It is also, not surprisingly, the most heavily mutated portion of the virus, over the last couple of years of evolution. One paper counts 45 mutations in the spike protein that have risen to the level of "variants of concern" at WHO. 

"We found that most of the SARS-COV-2 genes are undergoing negative purifying selection, while the spike protein gene (S-gene) is undergoing rapid positive selection."


Structure of the spike protein, in its normal virus-surface conformation (B, C), and in its post-triggering extended conformation, which reaches down into the target cell's membrane and later pulls the two together. The top (in B, C) is where it binds the ACE2 target on respiratory cells, and the bottom is its anchor in the viral membrane coat (D shows it upside-down). At top (A) is the overall domain structure of the protein in its linear form as synthesized, especially the RBD (receptor binding domain) and the two protease cleavage sites that prepare it for eventual triggering.


The spike protein is a machine, not just a blob. As shown in this video, it starts as a pyramidal blob flexibly tethered to the viral surface. Binding the ACE2 proteins in our respiratory tracts triggers a dramatic re-organization whereby this blob turns into a thin rope, which drops into the target cell. Meanwhile, the portion stuck to the virus unfolds as well and turns into threads that wind back around the newly formed rope, thereby pulling the virus and the target cell membrane together and ultimately fusing them. This is, mechanistically, how the virus gets inside our cells.

The triggering of the spike protein is a sensitive and adjustable process. In related viruses, triggering is more difficult, and waits until the virus is engulfed in a vesicle that is taken into the cell and acidified in the normal lysosomal process of ingesting and destroying outside materials. The acidification triggers these viral spike proteins to fire and release the virus into the cell. Triggering also requires cleavage of the spike protein by proteases that cut it at two locations. Other related viruses sometimes wait for a target-host protease to do the honors, but the SARS-COV2 spike protein apparently is mostly cleaved during production by its originating host. This raises the stakes, since it can then trigger more readily, by accident or once it finds proper ACE2 receptors on a target host. One theme of recent SARS-COV2 evolution is that triggering has become slightly easier, allowing the virus to infect higher up in the respiratory system. The original strains set up infections deep in the lung, but recent variants infect higher up, which lessens the systemic risks of infection to the host, promotes transmissibility, and speeds the infection and transmission process.

The mutations G339D, N440K, L452R, S477N, T478K, and E484K, in the spike region that binds to ACE2 (the RBD, or receptor binding domain), promote this interaction, raising transmissibility. (The nomenclature is: the first letter gives the original amino acid in one-letter code, the number gives its position in the linear protein sequence, and the last letter gives the mutated amino acid.) Overall, mutations have shifted the net charge of the spike protein significantly in the positive direction, which encourages binding to the negatively charged ACE2 protein. D614G is not in this region, but is nearby and seems to have similar effects, stabilizing the protein. The P681 mutation, in one of the cleaved regions, promotes proteolysis by the enzyme furin, thus making the virus more trigger-able.
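As a rough illustration of both the nomenclature and the charge argument, one can parse the mutation names listed above and tally the change in side-chain charge. This is only a sketch: the charges are the textbook approximations at physiological pH, histidine and local structural context are ignored, and only the RBD mutations named above are counted.

import re

# Approximate side-chain charges at physiological pH (one-letter amino acid codes);
# residues not listed are treated as neutral, and histidine is ignored for simplicity.
CHARGE = {"D": -1, "E": -1, "K": +1, "R": +1}

def parse_mutation(name):
    """Split e.g. 'E484K' into (original residue, position, new residue)."""
    original, position, new = re.fullmatch(r"([A-Z])(\d+)([A-Z])", name).groups()
    return original, int(position), new

def charge_change(name):
    original, _, new = parse_mutation(name)
    return CHARGE.get(new, 0) - CHARGE.get(original, 0)

rbd_mutations = ["G339D", "N440K", "L452R", "S477N", "T478K", "E484K"]
for m in rbd_mutations:
    print(m, charge_change(m))
print("net charge change:", sum(charge_change(m) for m in rbd_mutations))  # +4: more positive overall

Crude as it is, the tally comes out clearly positive, consistent with the drift toward a more positively charged spike that favors binding the negatively charged ACE2.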

What are some other constraints on the spike protein? It needs to evade our vaccines and natural immunity, but has seemingly adapted to a here-and-gone infection style, though with periodic re-infection, like other colds. So any change is good for the purpose of camouflage, as long as its essential functions remain intact. The N-terminal, or front, domain of the spike protein, which is not involved directly in ACE2 binding, has experienced a series of mutations of this kind. An additional function it seems to have is to mimic a receptor for the cytokine interleukin 8, which attracts neutrophils and encourages activation of macrophages. Such mimicry may reduce this immune reaction, locally. 

In comparison to all these transmissibility-enhancing mutations, it is not yet clear where the mutations that decrease virulence are located. They are probably more widely distributed, outside the gene encoding the spike protein. SARS-COV2 has a remarkable number of genes with various interactions with our immune systems, so the scope for tuning is prodigious. If all this can be accomplished in a couple of years, imagine what a million, or a billion, years can do for other organisms that, while they have slower reproduction cycles and more complicated networks of internal and external relations, still obey that great directive to adapt to their circumstances.


  • Late link, on receptor binding vs immune evasion tradeoffs.
  • Yes, chimpanzees can talk.
  • The rich are getting serious about destroying democracy.
  • Forced arbitration is, generally, unconscionable and should be illegal.
  • We could get by with fewer nuclear weapons.
  • Originalism would never allow automatic or semiautomatic weapons.