Saturday, January 7, 2023

A New Way of Doing Biology

Structure prediction of proteins is now so good that computers can do a lot of the work of molecular biology.

There are several royal roads to knowledge in molecular biology. First, and most traditional, is purification and reconstitution of biological molecules and the processes they carry out, in the test tube. Another is genetics, where mutational defects, observed in whole-body phenotypes or individually reconstituted molecules, can tell us about what those gene products do. Over the years, genetic mapping and genomic sequencing allowed genetic mutations to be mapped to precise locations, making them increasingly informative. Likewise, reverse genetics became possible, where mutational effects are not generated randomly by chemical or radiation treatment of organisms, but are precisely engineered to find out what a chosen mutation in a chosen molecule could reveal. Lastly, structural biology contributed the essential ground truth of biology, showing how detailed atomic interactions and conformations lead to the observations made at higher levels- such as metabolic pathways, cellular events, and diseases. The paradigmatic example is DNA, whose structure immediately illuminated its role in genetic coding and inheritance.

Now the protein structure problem has been largely solved by the newest generations of artificial intelligence, allowing protein sequences to be confidently modeled into the three dimensional structures they adopt when mature. A recent paper makes it clear that this represents not just a convenience for those interested in particular molecular structures, but a revolutionary new way to do biology, using computers to dig up the partners that participate in biological processes. The model system these authors chose to show this method is the bacterial protein export process, which was briefly discussed in a recent post. They are able to find and portray this multi-step process in astonishing detail by relying on a lot of past research including existing structures and the new AI searching and structure generation methods, all without dipping their toes into an actual lab.

The structure revolution has had two ingredients. First is a large corpus of already-solved structures of proteins of all kinds, together with oceans of sequence data of related proteins from all sorts of organisms, which provide a library of variations on each structural theme. Second is the modern neural networks from Google and other institutions that have solved so many other data-intensive problems, like language translation and image matching / searching. They are perfectly suited to this problem of "this thing is like something else, but not identical". This resulted in the AlphaFold program, which has pretty much solved the problem of determining the 3D structure of novel protein sequences.

"We validated an entirely redesigned version of our neural network-based model, AlphaFold, in the challenging 14th Critical Assessment of protein Structure Prediction (CASP14), demonstrating accuracy competitive with experimental structures in a majority of cases and greatly outperforming other methods."

The current authors realized that the determination of single protein structures is not very different from the determination of complex structures- the interfaces and combinations between different proteins. Many already-solved structures are complexes of several proteins, and more fundamentally, the way two proteins interact is pretty much the same as the way a protein folds on itself- the same kinds of detailed secondary motifs and atomic complementarity come into play. So they used the AlphaFold core, essentially unchanged, to create AF2Complex, which searches specifically through a corpus of protein sequences for those that interact in real life.

This turned out to be a very successful project, (though a supercomputer was required), and they now demonstrate it for the relatively simple case of bacterial protein export. The corpus they are working with is about 1500 E. coli periplasmic and membrane proteins. They proceed step by step, asking what interacts with the first protein in the pathway, then what interacts with the next one, and so on, until they hit the exporter at the outer membrane. While this pathway has been heavily studied and several structures were already known, they reveal several new structures and interactions along the way.
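The iterative screen itself has a simple logic, sketched below in Python. This is a toy illustration, not the authors' code: the function names, cutoff, and hard-coded scores are all invented stand-ins for what, in the real AF2Complex pipeline, would be a structure prediction and interface-confidence score computed for each candidate pair.

```python
# Toy sketch of an iterative partner screen (invented names and scores;
# not the authors' AF2Complex code). In the real pipeline, each candidate
# pair would be folded together and scored by interface confidence.

def score_interface(query, candidate, scores):
    """Stand-in for a predicted interface-confidence score."""
    return scores.get(frozenset((query, candidate)), 0.0)

def screen_partners(query, proteome, scores, cutoff=0.5):
    """Rank candidate partners of `query` scoring above a cutoff."""
    hits = [(c, score_interface(query, c, scores)) for c in proteome]
    hits = [(c, s) for c, s in hits if s >= cutoff]
    return sorted(hits, key=lambda h: -h[1])

# Invented scores mimicking the pathway walk described in the text.
toy_scores = {
    frozenset(("SecY", "PpiD")): 0.90,
    frozenset(("PpiD", "DsbA")): 0.80,
    frozenset(("SurA", "BamA")): 0.85,
    frozenset(("SurA",)): 0.70,   # SurA's homodimer
}
proteome = ["PpiD", "DsbA", "SurA", "BamA"]

print(screen_partners("SecY", proteome, toy_scores))  # [('PpiD', 0.9)]
print(screen_partners("SurA", proteome, toy_scores))  # [('BamA', 0.85), ('SurA', 0.7)]
```

The pathway walk then amounts to re-running the screen with each newly found partner as the next query.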

Getting proteins from inside the cell to the outside is quite complicated, since they have to traverse two membranes and the intermembrane space, (the periplasm), all without getting fouled up or misdirected. This is done by an organized sequence of chaperone and transport proteins that hand the new proteins off to each other. Proteins are recognized by this machinery by virtue of sequence-encoded signals, typically at their front/leading ends. This "export signal" is, in some instances, recognized right as it comes out of the ribosome, and the protein captured by the SecA/B/E/Y/G machinery at the inner bacterial membrane. Most exported proteins, however, are not recognized right away, but only after they are fully synthesized.

The inner membrane (IM) is below, and the outer membrane (OM) is above, showing the steps of bacterial protein export to the outer membrane. The target protein being transported is the yellow thread, (OmpA), and the various exporting machines are shown in other colors, either in cartoon form or in ribbon structures from the authors' computer predictions. Notably, SurA is the main chaperone that carries OmpA in partially unfolded form across the periplasm to the outer membrane.

SecA is the ATP-using pump that forces the new protein through the SecY channel, which has several other accessory partners. SecB, for example, is thought to be mostly responsible for recognizing the export signal on the target protein. The authors start with a couple of accessory chaperones, PpiD and YfgM, which were strongly suspected to be part of the SecA/B/E/Y/G complex, and which their program easily identifies as interacting with each other, providing new structures along the way. PpiD is an important chaperone that helps proline residues twist around, (a proline isomerase), which they do not readily do on their own, helping the exported proteins fold correctly as they emerge. It also interacts with SecY, providing chaperone assistance (that is, helping proteins fold correctly) right as proteins pass out of SecY and into the periplasm. The second step the authors take is to ask what interacts with PpiD, and they find DsbA, along with its structure. DsbA is a disulfide isomerase, which performs another vital function: shuffling the cysteine bonds of proteins coming into the periplasmic space, (which is less reducing than the cytoplasm, allowing stable cysteine bonds to form). Helping those bonds form at the right places is one more essential chaperone-like function needed for relatively complicated secreted proteins, and it is the role of DsbA, which transiently docks right at the exit port from SecY.

The authors' computers generate structures for the interactions of the Sec complex with PpiD, YfgM, and the disulfide isomerase DsbA, illuminating their interactions and respective roles. DsbA helps refold proteins right as they come out of the transporter pore from the cytoplasm.

Once the target protein has all been pumped through the SecY complex pore, it sticks to PpiD, which does its thing and then dissociates, allowing two other proteins to approach, the signal peptidase LepB, which cleaves off the export signal, and then SurA, which is the transporting chaperone that wraps the new protein around itself for the trip across the periplasm. Specific complex structures and contacts are revealed by the authors for all these interactions. Proteins destined for the outer membrane are characterized by a high proportion of hydrophobic amino acids, some of which seem to be specifically recognized by SurA, to distinguish them from other proteins whose destination is simply to swim around in the periplasm, such as the DsbA protein mentioned above. 

The authors' computers spit out a ranking of predicted interactions using SurA as a query, finding SurA itself among the interactors (it forms a dimer), and also BamA, which is the central part of the outer membrane transporting pore. Nothing is said about the other high-scoring interacting proteins identified, which may not have held immediate interest.

"In the presence of SurA, the periplasmic domain [of transported target protein OmpA] maintains the same fold, but remarkably, the non-native β-barrel region completely unravels and wraps around SurA ... the SurA/OmpA models appear physical and provide a hypothetical basis for how the chaperone SurA could prevent a polypeptide chain from aggregating and present an unfolded polypeptide to BAM for its final assembly."

At the other end of the journey, at the outer membrane, there is another channel protein called BamA, where SurA docks, as was also found by the authors' interaction-hunting program. BamA is part of a large channel complex that evidently receives many other proteins via its other periplasm-facing subunits, BamB, C, and D. The authors went on to do a search for proteins that interact with BamA, finding BepA, a previously unsuspected partner, which, by their model, wedges itself in between BamC and BamB. BepA turns out to have a crucial function in quality control. Conduction of target proteins through the Bam complex seems to be powered only by diffusion, not by ATP or ion gradients, so things can get fouled up and stuck pretty easily. BepA is a protease, and appears, from its structure, to have a finger that gets flipped, turning the protease on, when a protein transiting through the pore goes awry / sideways.


The authors' computers provide structures of the outer membrane Bam complex, where SurA binds with its cargo. The cargo, unstructured, is not shown here, but some of the detailed interface between SurA and BamA is shown at bottom left. The beta-barrel of BamA provides the obvious route out of the cell, or in some cases sideways into the membrane.

While filling in some new details of the outer membrane protein export system is interesting, what was really exciting about this paper was the ease with which this new way of doing biology proceeded. Intimate physical interactions among proteins and other molecules are absolutely central to molecular biology, as this example illustrates. To have a new method that not only reveals such interactions in a reliable way, from sequences of novel proteins, but also presents structurally detailed views of them, is astonishing. Extending this approach to bigger genomes and collections of targets, versus the relatively small set of about 1500 periplasm-related proteins tested here, remains a challenge, but doubtless one that more effort and more computers will be able to solve.


Saturday, December 31, 2022

Hand-Waving to God

A decade on, the Discovery Institute is still cranking out skepticism, diversion, and obfuscation.

A post a couple of weeks ago mentioned that the Discovery Institute offered a knowledgeable critique of the lineages of the Ediacaran fauna. They have raised their scientific game significantly, and so I wanted to review what they are doing these days, focusing on two of their most recent papers. The Discovery Institute has a lineage of its own, from creationism. It has adapted to the derision that creationism attracted by retreating to "intelligent design", which is creationism without naming the creators, nailing down the schedule of creation, or providing any detail of how and from where creation operates. Their review of the Ediacaran fauna raised some highly skeptical points about whether these organisms were animals at all. Particularly, they suggested that cholesterol is not really restricted to animals, so the chemical traces of cholesterol that were so clearly found in the Dickinsonia fossil layers might not really mean that these were animals- they might instead be unusual protists of gigantic size, or odd plant forms, etc. While the critique is not unreasonable, it does not alter the balance of the evidence, which does indeed point to an animal affinity. These fauna are so primitive and distant that it is fair to say that we can not be sure, and particularly we can not be sure that they had any direct ancestral relationship to any later organisms of the ensuing Cambrian period, when recognizable animals emerged.

Fair enough. But what of their larger point? The Discovery Institute is trying to make a point, I believe, about the suddenness of early Cambrian animal evolution, and thus its implausibility under conventional evolutionary theory. But we are traversing tens of millions of years through these intervals, which is a long time, even in evolutionary terms. Secondly, the Ediacaran period, though now represented by several exquisite fossil beds, spanned a hundred million years and is still far from completely characterized paleontologically, even supposing that early true animals would have fossilized, rather than being infinitesimal and very soft-bodied. So the Cambrian biota could easily have had predecessors in the Ediacaran that may or may not yet have been observed- it is not easy to say. But what we can not claim is the negative: that no predecessors existed before some time X- say the 540 MYA point at the base of the Cambrian. So the implication that the Discovery Institute is attempting to suggest has very little merit, particularly since everything that they themselves cite about the molecular and paleontological sequence is so clearly progressive and in proper time sequence, in complete accord with the overall theory of evolution.

For we should always keep in mind that an intelligent designer has a free hand, and can make all of life in a day (or in six, if absolutely needed). The fact that this designer works in the shadows of slightly altered mutation rates, or in a few million years rather than twenty million, and never puts fossils out of sequence in the sedimentary record, is an acknowledgement that this designer is a bit dull, and bears a strong resemblance to evolution by natural selection. To put it in psychological terms, the institute is in the "bargaining" stage of grief- over the death of god.

Saturday, December 24, 2022

Brain Waves: Gaining Coherence

Current thinking about communication in the brain: the Communication Through Coherence framework.

Eyes are windows to the soul. They are visible outposts of the brain that convey outwards what we are thinking, as they gather in the riches of our visible surroundings. One of their less appreciated characteristics is that they flit from place to place as we observe a scene, never resting in one place. These movements are called saccades, and they represent an involuntary redirection of attention all over a visual scene that we are studying, in order to gather high resolution impressions from places of interest. Saccades happen at a variety of rates, centered around 0.1 second. And just as the raster scanning of a TV or monitor can tell us something about how it or its signal works, the eye saccade is thought, by the theory presented below, to reflect a theta rhythm in the brain that is responsible for resetting attention- here, in the visual system.

That theory is Communication Through Coherence (CTC), which appears to be the dominant theory of how neural oscillations (aka brain waves) function. (This post is part of what seems like a yearly series of updates on the progress in neuroscience in deciphering what brain waves do, and how the brain works generally.) This paper appeared in 2014, but it expressed ideas that were floating around for a long time, and has since been taken up by numerous other groups that provide empirical and modeling support. A recent paper (titled "Phase-locking patterns underlying effective communication in exact firing rate models of neural networks") offers full-throated support from a computer modeling perspective, for instance. But I would like to go back and explore the details of the theory itself.

The communication part of the theory is how thoughts get communicated within the brain. Communication and processing are simultaneous in the brain, since it is physically arranged to connect processing chains (such as visual processing) together as cells that communicate consecutively, for example creating increasingly abstract representations during sensory processing. While the anatomy of the brain is pretty well set in a static way, it is the dynamic communication among cells and regions of the brain that generates our unconscious and conscious mental lives. Not all parts can be talking at the same time- that would be chaos. So there must be some way to control mental activity to manageable levels of communication. That is where coherence comes in. The theory (and a great deal of observation) posits that gamma waves in the brain, which run from about 30 Hz upwards all the way to 200 Hz, link together neurons and larger assemblages / regions into transient co-firing coalitions that send thoughts from one place to another, precisely and rapidly, insulated from the noise of other inputs. This is best studied in the visual system which has a reasonably well-understood and regimented processing system that progresses from V1 through V4 levels of increasing visual field size and abstraction, and out to cortical areas of cognition.

The basis of brain waves is that neural firing is rapid, and is followed by a refractory period where the neuron is resistant to another input, for a few milliseconds. Then it can fire again, and will do so if there are enough inputs to its dendrites. There are also inhibitory cells all over the neural system, damping down activity so that the system is tuned not to run to epileptic extremes of universal activation. So if one set of cells entrains the next set of cells in a rhythmic firing pattern, those cells tend to stay entrained for a while, and then get reset by way of slower oscillations, such as the theta rhythm, which runs at about 4-8 Hz. Those entrained cells are, during their refractory periods, also resistant to inputs that are not synchronized, essentially blocking out noise. In this way trains of signals can selectively travel up from lower processing levels to higher ones, over large distances and over multiple cell connections in the brain.
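The entrainment idea can be caricatured with a minimal phase-oscillator model. This is a sketch under stated assumptions, not the biophysics of the CTC papers: the frequencies and coupling constant below are illustrative, and the model only shows the core phenomenon that a driving rhythm pulls a receiver into phase-lock when their frequencies are close relative to the coupling strength, and slips against it otherwise.

```python
import math

# Toy phase-entrainment sketch (illustrative numbers, not the CTC model):
# a driving rhythm continuously nudges a receiver toward its own phase.

def simulate(drive_hz, receiver_hz, coupling=6.0, dt=1e-4, t_end=2.0):
    """Return how much the drive-receiver phase difference drifts
    over the second half of the run (small = phase-locked)."""
    theta_d = theta_r = 0.0
    diffs = []
    steps = int(t_end / dt)
    for i in range(steps):
        theta_d += 2 * math.pi * drive_hz * dt
        theta_r += (2 * math.pi * receiver_hz
                    + coupling * math.sin(theta_d - theta_r)) * dt
        if i > steps // 2:
            diffs.append((theta_d - theta_r) % (2 * math.pi))
    return max(diffs) - min(diffs)

# A 40 Hz gamma-like drive locks a receiver at 39.5 Hz (drift near zero),
# but slips hopelessly against one at 37 Hz (drift spans a full cycle).
```

Note that this symmetric model does not capture the faster-beats-slower asymmetry discussed below, which depends on refractory dynamics; it only illustrates locking itself.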

An interesting part of the theory is that frequency is very important. There is a big difference between slower and faster entraining gamma rhythms. Ones that run slower than the going rate do not get traction and die out, while those that run faster hit the optimal post-refractory excitable state of the receiving cells, and tend to gain traction in entraining them downstream. This sets up a hierarchy where increasing salience, whether established through intrinsic inputs, or through top-down attention, can be encoded in higher, stronger gamma frequencies, winning this race to entrain downstream cells. This explains to some degree why EEG patterns of the brain are so busy and chaotic at the gamma wave level. There are always competing processes going on, with coalitions forming and reforming in various frequencies of this wave, chasing their tails as they compete for salience.

There are often bidirectional processes in the brain, where downstream units talk back to upstream ones. While originally imagined to be bidirectionally entrained in the same gamma rhythm, the CTC theory now recognizes that the distance / lag in signaling would make this impossible, and separates them as distinct streams, observing that the cellular targets of backwards streams are typically not identical to those generating the forward streams. So a one-cycle offset, with a few intermediate cells, would account for this type of interaction, still in gamma rhythm.

Lastly, attention remains an important focus of this theory, so to speak. How are inputs chosen, if not by their intrinsic salience, such as flashes in a visual scene? How does a top-down, intentional search of a visual scene, or a desire to remember an event, work? CTC posits that two other wave patterns are operative. First is the theta rhythm of about 4-8 Hz, which is slow enough to encompass many gamma cycles and offer a reset to the system, overpowering other waves with its inhibitory phase. The idea is that salience needs to be re-established freshly each theta cycle, (such as in eye saccades), with maybe a dozen gamma cycles within each theta that can grow and entrain the necessary higher-level processing. Note how this agrees with our internal sense of thoughts flowing and flitting about, with our attention rapidly darting from one thing to the next.

"The experimental evidence presented and the considerations discussed so far suggest that top-down attentional influences are mediated by beta-band synchronization, that the selective communication of the attended stimulus is implemented by gamma-band synchronization, and that gamma is rhythmically reset by a 4 Hz theta rhythm."

Attention itself, as a large-scale backward-flowing process, is hypothesized to operate in the alpha/beta bands of oscillations, about 8 - 30 Hz. It reaches backward over distinct connections (indeed, distinct anatomical layers of the cortex) from the forward connections, into lower areas of processing, such as locations in the visual scene, or colors sought after, or a position on a page of text. This slower rhythm could entrain selected lower-level regions, setting some to have in-phase and stronger gamma rhythms vs other areas not activated in this way. Why the theta and the alpha/beta rhythms have dramatically different properties is not dwelt on by this paper. One can speculate that each can entrain other areas of the brain, but the theta rhythm is long and strong enough to squelch ongoing gamma rhythms and start many off at the same time in a new competitive race, while the alpha/beta rhythms are brief enough, and perhaps weak enough and focused enough, to start off new gamma rhythms in selected regions that quickly form winning coalitions heading upstream.

Experiments on the nature of attention. The stimulus shown to a subject (probably a monkey) is in A. In E, the monkey was trained to attend to the same spots as in A, even though both were visible. V1 refers to the lowest level of the visual processing area of the brain, which shows activity when stimulated (B, F) whether or not attention is paid to the stimulus. On the other hand, V4 is a much higher level in the visual processing system, subject to control by attention. There, (C, G), the gamma rhythm shows clearly that only one stimulus is being fielded.

The paper discussing this hypothesis cites a great deal of supporting empirical work, and much more has accumulated in the ensuing eight years. While plenty of loose ends remain and we can not yet visualize this mechanism in real time, (though faster MRI is on the horizon), this seems the leading hypothesis that both explains the significance and prevalence of neural oscillations, and goes some distance to explaining mental processing in general, including abstraction, binding, and attention. Progress has not been made by great theoretical leaps by any one person or institution, but rather by the slow process of accumulation of research that is extremely difficult to do, but of such great interest that there are people dedicated enough to do it (with or without the willing cooperation of countless poor animals) and agencies willing to fund it.


  • Local media is a different world now.
  • Florida may not be a viable place to live.
  • Google is god.

Saturday, December 17, 2022

The Pillow Creatures That Time Forgot

Did the Ediacaran fauna lead to anything else, or was it a dead end?

While to a molecular biologist, the evolution of the eukaryotic cell is probably the greatest watershed event after the advent of life itself, most others would probably go with the rise of animals and plants, after about three billion years of exclusively microbial life. This event is commonly located at the base of the Cambrian, (i.e. the Cambrian explosion), which is where the fossils that Darwin and his contemporaries were familiar with began, about 540 million years ago. Darwin was puzzled by this sudden start of the fossil record, from apparently nothing, and presciently held (as he did in the case of the apparent age of the sun) that the data were faulty, and that the ancient character of life on earth would leave other traces much farther back in time.

That has indeed proved to be the case. There are signs of microbial life going back over three billion years, and whole geologies in the subsequent time dependent on its activity, such as the banded iron formations prevalent around two billion years ago that testify to the slow oxygenation of the oceans by photosynthesizing microbes. And signs of animal life prior to the Cambrian, going back roughly to 600 million years ago, have also turned up, after much deeper investigations of the fossil record. This immediately pre-Cambrian period is labeled the Ediacaran, for one of its fossil-bearing sites in Australia. A recent paper looked over this whole period to ask whether the evolution of proto-animals during this time was a steady process, or punctuated by mass extinction event(s). They conclude that, despite the patchy record, there is enough to say that there was a steady (if extremely slow) march of ecological diversification and specialization through the time, until the evolution of true animals in the Cambrian literally ate up all the Ediacaran fauna.

Fossil impression of Dickinsonia, with trailing impressions that some think might be a trail from movement. Or perhaps just friends in the neighborhood.
 
For the difference between the Ediacaran fauna and that of the Cambrian is stark. The Ediacaran fauna is beautiful, but simple. There are no backbones, no sensory organs. No mouth, no limbs, no head. In developmental terms, they seem to have had only two embryological cell layers, rather than our three, which makes all the difference in terms of complexity. How they ate remains a mystery, but they are assumed to have simply absorbed nutrients from their environment, thanks to their apparently flat forms. A bit like sponges today. As they were the most complex animals at the time, (and some were large, up to 2 meters long), they may have had an easy time of it, simply plopping themselves on top of rich microbial mats, oozing with biofilms and other nutrients.

The paper provides a schematic view of the ecology at single locations, and also of longer-term evolution, from a sequence of views (i.e. fossils) obtained from different locations around the world of roughly ten million year intervals through the Ediacaran. One noticeable trend is the increasing development or prevalence of taller fern-like forms that stick up into the water over time, versus the flatter bottom-dwelling forms. This may reflect some degree of competition, perhaps after the bottom microbial mats have been over-"grazed". A second trend is towards slightly more complexity at the end of the period, with one very small form (form C (a) in the image below) even marked by shell remains, though what its animal inhabitant looked like is unknown. 

Schematic representation of putative animals observed during the Ediacaran epoch, from early, (A, ~570 MYA, Avalon assemblage), middle, (B, ~554 MYA, White Sea and other assemblages), and late (C, ~545 MYA, Nama assemblage). The A panel is also differentiated by successional forms from early to mature ecosystems, while the C panel is differentiated by ocean depth, from shallow to deep. The persistence of these forms is quite impressive overall, as is their common simplicity. But lurking somewhere among them are the makings of far more complicated animals.

Very few of these organisms have been linked to actual animals of later epochs, so virtually all of them seem to have been superseded by the wholly different Cambrian fauna- much of which itself remains perplexing. One remarkable study used mass-spec chemical analysis on some Dickinsonia fossils from the late Ediacaran to determine that they bore specific traces of cholesterol, marking them as probable animals, rather than overgrown protists or seaweed. But beyond that, there is little that can be said. (Note a very critical and informed review of all this from the Discovery Institute, of all places.) Their preservation is often remarkable, considering the age involved, and they clearly form the sole fauna known from pre-Cambrian times.

But the core question of how the Cambrian (and later) animals came to be remains uncertain, at least as far as the fossil record is concerned. One relevant observation is that there is no sign of burrowing through the sediments of the Ediacaran epoch. So the appearance of more complex animals, while it surely had some kind of precedent deep in the Ediacaran, or even before, did not make itself felt in any macroscopic way then. It is evident that once the triploblastic developmental paradigm arose, out of the various geologic upheavals that occurred at the bases of both the Ediacaran and the Cambrian, its new design including mouths, eyes, spines, bones, plates, limbs, guts, and all the rest that we are now so very familiar with, utterly over-ran everything that had gone before.

Some more fine fossils from Canada, ~ 580 MYA.


  • A video tour of some of the Avalon fauna.
  • An excellent BBC podcast on the Ediacaran.
  • We need to measure the economy differently.
  • Deep dive on the costs of foreign debt.
  • Now I know why chemotherapy is so horrible.
  • Waste on an epic scale.
  • The problem was not the raids, but the terrible intelligence... by our intelligence agency.

Saturday, December 10, 2022

Mechanics of the ATP Synthesizing Machine

ATP synthase is a generator with two rotors, just like any other force-transducing generator.

Protein structure determination has progressed tremendously with the advent of cryo-electron microscopy, which allows much faster determination of more complex structures than was previously possible. One beneficiary is the enzyme at the heart of the mitochondrion that harnesses the proton motive force (pmf; the difference of pH and charge across the inner mitochondrial membrane) to make ATP. The pmf is created by the electron transport chains of respiration, powered by the breakdown of our food, and ATP is the most general currency of energy in our cells. And in bacteria as well. The work discussed today was all done using E. coli, which in this ancient and highly conserved respect is a very close stand-in for our own biology.

The ATP synthase is a rotary device. Just like a water wheel has one wheel that harnesses a running stream, linked by gears or another mechanism to a second wheel that grinds corn or generates electricity, the ATP synthase has one wheel that is powered by protons flowing inwards, linked to another wheel that synthesizes ATP. The second wheel doesn't turn. Rather, the linking rotor from the proton wheel (called Fo) has an asymmetric cam at the end that pokes into the center of the ATP synthase wheel, (called F1), and deforms that second wheel as it rotates around inside. The deformations are what induce the ATP synthase to successively (1) bind ADP and phosphate, (2) close access and join them together into ATP, and lastly (3) release the ATP back out. This wheel has three sections, thus one full turn yields three ATPs, and it takes 120 degrees of rotation to create one ATP. This mechanism is nicely illustrated in a few videos.

The ATP synthase has several parts. The top rotor (yellow, orange; the proton rotor, or "c" rotor) is embedded in the inner mitochondrial membrane, and rotates as it conducts protons from outside (top) inwards. The center rotor (white, red) is attached to it and also rotates as it sticks into the bottom ATP-synthesizing subunits (green, khaki). That three-fold symmetric protein complex is static, (held in place by the non-moving stator subunits, blue and teal), and synthesizes ATP as its conformation is progressively banged around by the rotor. At the bottom are diagrams of the ATP-generating strokes (three per rotation), with pauses (green) reflecting the strain of synthesizing ATP. All this was detected from single molecules tracked by polarized light coming from polarizing gold rods attached to the proton rotor (AuNR- gold nanorod).


Some recent papers focus on the other end of the machine- the proton rotor. It has ten subunits (termed "c", so this is also called the c rotor), each of which binds a proton. Thus the ultimate stoichiometry is that 10 protons yield 3 ATP, or 3.33 protons per ATP. (The pH difference needs to be about 3 units or more, a 1000- to 5000-fold difference in proton concentration, to create sufficient pmf.) But there are certain asymmetries involved. For one, there is a "stator" that holds the ATP synthase steady relative to the proton rotor and spans across them, attaching stably to the former and gliding along the rotations of the latter. This stator creates some variation in how the rotors at both ends operate. Also, the 10:3 ratio means that some power strokes that force the ATP synthase along will behave differently, with more power either at the beginning or at the end of the 120 degree arc.
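These stoichiometric relationships are simple enough to state as code. A minimal sketch, just restating the numbers above for the E. coli enzyme (ten "c" subunits driving a three-fold F1 head):

```python
# Stoichiometry of the E. coli ATP synthase, as described above:
# a 10-subunit "c" rotor (one proton each) drives a 3-fold F1 head.
C_SUBUNITS = 10      # protons translocated per full rotation
ATP_PER_TURN = 3     # one ATP per 120 degrees of F1 rotation

def protons_per_atp(c=C_SUBUNITS, atp=ATP_PER_TURN):
    return c / atp   # the 3.33 H+/ATP ratio quoted above

def step_angles(c=C_SUBUNITS, atp=ATP_PER_TURN):
    # angular step of the proton rotor vs. the ATP power stroke
    return 360 / c, 360 / atp

print(protons_per_atp())   # 3.333...
print(step_angles())       # (36.0, 120.0)
```

The 36-degree proton step against the 120-degree power stroke is the source of the asymmetries discussed next.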

These papers posit that there is enough flexibility in the linkage to smooth out these ebbs and flows. Within the stator is a critical subunit ("a") which conducts the protons in both directions, both from outside onto the "c" rotor, and then off the "c" rotor and into the mitochondrial matrix. Interestingly, the proton rotor of "c" subunits ferries those protons all the way around, so that they come in and go back off at nearly the same point, at the "a" subunit surface. This means that they are otherwise stably bound to the proton rotor as it flies around in the membrane, a hydrophobic environment that presumably offers no encouragement for those protons to leave. So in summary, the protons from outside (the intermembrane space of the mitochondrion) enter by the outer "a" channel, land on one of the proton rotor's "c" subunits, take one trip around the rotor, and then exit via the inner "a" channel.

One question is the nature of these channels. There are, elsewhere in biology, channels that find ways to conduct protons in specific fashion, despite their extremely small size and similarity to other cations like sodium and potassium. But a more elegant way has been devised, called the Grotthuss mechanism. The current authors conduct extensive analysis of key mutations in these channels to show that this mechanism is used by the "a" subunit of the Fo protein. By this mechanism, a chain of water molecules is very carefully lined up through the protein. The natural hydrogen-exchange property of water, the basis of pH and so many other properties of water, then allows an incoming proton to set off a chain reaction of protonations and de-protonations along the water chain (nicely illustrated on the Wikipedia page) that, without really moving any of the water molecules, or requiring much movement of the protons either, effectively conducts a net proton inwards with astonishing efficiency.
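A toy cartoon of that relay (purely illustrative, not from the paper): each water hands a proton to its neighbor, so the proton that exits the chain is not the one that entered, and nothing travels the length of the channel.

```python
# Grotthuss-style relay along a chain of waters: the incoming proton
# protonates the first water, each water passes one along, and the last
# water releases its proton. Net: one proton conducted, none transported.
def grotthuss_relay(chain, incoming="H+"):
    released = chain[-1]                 # proton released at the far end
    return [incoming] + chain[:-1], released

chain = ["p1", "p2", "p3", "p4"]         # labeled protons on four waters
chain, released = grotthuss_relay(chain)
print(released, chain)                   # p4 ['H+', 'p1', 'p2', 'p3']
```

Note that "p4" exits while "H+" has only entered the near end- the conduction is a bucket brigade, not a transit.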

It is evident that the interface of the "a" and "c" subunits is such that a force-fed sequence of protons creates the power that induces rotation and, eventually, through the rotor linkage, the energy to synthesize ATP against its concentration gradient. It should be said parenthetically that this enzyme complex can be driven in reverse, and E. coli do occasionally run it backwards, using up ATP to re-establish their pmf gradient, which is used for many other processes.

One technical note is of interest. The authors of the main paper used single molecules of the whole ATP synthase, embedded in nano-membranes that they could observe optically and treat with different pH levels on each side to drive their activity. They also attached tiny gold bars (35 × 75 nm) to the top of each proton rotor to track its rotation by polarized light. This allowed very fine observations, which they used to look at the various pauses induced by the jump of each ATP synthesis event, and by each proton as it hopped on/off. Then they mutated selected amino acids in the supposed water channels that conduct protons through the "a" subunit, which created greater delays, diagnostic of the Grotthuss mechanism. The channel is not lined with ions or ionizable groups, but is simply polar, to accommodate a string of waters threading through the membrane and the "a" protein. Additionally, they estimate that an "antenna" of considerable size, composed of a "b" subunit and some of the "a" subunit of Fo, is exposed to the outside and by its negatively charged nature attracts and lines up plenty of protons, ready to transit through the rotor.

Another presentation of the proton rotor's behavior. The stator "a" subunit is orange, and the "c" subunits are circles arranged in a rotor, seen from the top. The graph at right shows some of the matches or mismatches between the three-fold ATP-synthesizing rotor (F1) and the ten-fold symmetric proton rotor (Fo, or "c"), leading to quite variable coupling of their power strokes. Yet there is enough elastic give in their coupling to allow continuous and reasonably rapid rotation (about 100 rotations per second).
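The symmetry mismatch in that graph can be sketched in a few lines of code (an idealized illustration, assuming perfectly regular 36-degree proton steps and 120-degree power strokes): no two power strokes see the same pattern of proton steps.

```python
# Where each 36-degree proton step of the ten-fold "c" rotor lands
# within the three 120-degree power strokes of the three-fold F1 head.
def proton_steps_in_stroke(stroke):
    """Return the rotor angles of the proton steps falling in one stroke."""
    lo, hi = stroke * 120, (stroke + 1) * 120
    return [s * 36 for s in range(10) if lo <= s * 36 < hi]

for k in range(3):
    print(k, proton_steps_in_stroke(k))
# 0 [0, 36, 72, 108]   -> 4 steps, aligned at the start of the stroke
# 1 [144, 180, 216]    -> 3 steps, first one 24 degrees into the stroke
# 2 [252, 288, 324]    -> 3 steps, first one 12 degrees into the stroke
```

Each stroke gets a different number and phasing of proton steps- exactly the uneven coupling that the elastic give in the linkage has to smooth out.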

In the end, incredible technical feats of optics, chemistry, and molecular biology are needed to decipher increasing levels of detail about the incredible feat of evolution that is embodied in this tiny powerhouse.


Saturday, December 3, 2022

Senescent, Cancerous Cannibals

Tumor cells not only escape normal cell proliferation controls, but some of them eat nearby cells.

Our cells live in an uneasy truce. Cooperation is prized and specialization into different cell types, tissues, and organs is pervasive. But deep down, each cell wants to grow and survive, prompting many mechanisms of control, such as cell suicide (apoptosis) and immunological surveillance (macrophages, killer T-cells). Cancer is the ultimate betrayal, not only showing disregard for the ruling order, but in its worst forms killing the whole organism in a pointless drive for growth.

A fascinating control mechanism that has come to prominence recently is cellular senescence. In petri dishes, cells can only be goosed along for a few dozen cycles of division until they give out, and become senescent. Which is to say, they cease replicating but remain alive. It was first thought that this was another mechanism to keep cancer under control, restricting replication to "stem" cells and their recent progeny. But a lot of confusing and interesting observations indicate that the deeper meaning of senescence lies in development, where it appears to function as an alternate form of cell suicide, delayed so that tissues are less disrupted. 

Apoptosis is used very widely during development to reshape tissues, and senescence is used extensively in these programs as well. Senescent cells are far from quiescent, however. They have high metabolic activity and are particularly notorious for secreting a witches' brew of inflammatory cytokines and other proteins- the senescence-associated secretory phenotype, or SASP. In the normal course of events, this attracts immune system cells which initiate repair and clearance operations that remove the senescent cells and make sure the tissue remains on track to fulfill its program. These SASP products can turn nearby cells to senescence as well, and form an inflammatory micro-environment that, if resolved rapidly, is harmless, but if persistent, can lead to bad, even cancerous, local outcomes.

The significance of senescent cells has been highlighted in aging, where they are found to be immensely influential. To quote the wiki site:

"Transplantation of only a few (1 per 10,000) senescent cells into lean middle-aged mice was shown to be sufficient to induce frailty, early onset of aging-associated diseases, and premature death."

The logic behind all this seems to be another curse of aging: while we are young, senescent cells are cleared with very high efficiency. But as the immune system ages, a small proportion of senescent cells are missed. These are, evolutionarily speaking, an afterthought, but they gradually accumulate with age and powerfully push the aging process along- chronic inflammation, for example, being something we are all anxious to reduce. A quest for "senolytic" therapies to clear senescent cells is becoming a big theme in academia and the drug industry, and may eventually have very significant benefits.

Another odd property of senescent cells is that their program, and the effects they have on nearby cells, partially resemble those of stem cells. That is, the prevention of cell death is a common property, as is the relaxation of certain controls that enforce differentiation. This brings us to tumor cells, which frequently enter senescence under stress, like that of chemotherapy. This fate is highly ambivalent. It would have been better for such cells to die outright, of course. Most senescent tumor cells stay in senescence, which is bad enough, given their SASP effects on the local environment. But a few tumor cells emerge from senescence, (whether due to further mutations or other sporadic properties is as yet unknown), and they do so with more stem-like character that makes them more proliferative and malignant.

A recent paper offered a new wrinkle on this situation, finding that senescent tumor cells have a novel property- that of eating neighboring cells. As mentioned above, senescent cells have high metabolic demands, as do tumor cells, so finding enough food is always an issue. But in the normal body, only very few cells are empowered to eat other cells- i.e. those of the immune system. To find other cells doing this is highly unusual, interesting, and disturbing. It is one more item in the list of bad things that happen when senescence and cancer combine forces.

A senescent tumor cell (green) phagocytoses and digests a normal cell (red).


  • Shockingly, some people are decent.
  • Tangling with the medical system carries large risks.
  • Is stem cell therapy a thing?
  • Keep cats indoors.

Saturday, November 26, 2022

California Rooftop Revolution

Clawing back benefits for rooftop solar proves politically and programmatically perilous.

California is a leader in fighting climate change. You just have to look at a trace of an average day's electrical power sources to see what is going on:

One day in the life of the California electricity grid. Solar dominates during the day, but fossil fuels are still used heavily. Imports from other states are a mix of solar, nuclear, and fossil fuel sources, even some coal.

Solar power now dominates electricity production during mid-day periods. California has 1.5 million solar roof installations (supplying 12 GW of capacity), which are, however, dwarfed by about 24 GW of utility-scale solar in huge desert arrays. California doesn't have the kind of wind power that Texas and the Midwest have, but we have gone all in for solar, which supplies one fourth of our electricity.
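As a back-of-envelope check on that one-fourth figure (the capacity factor and statewide demand below are my own round-number assumptions, not figures from this post):

```python
# Rough sanity check of California's solar share of electricity.
# Assumptions (mine, not the post's): ~25% average capacity factor
# for solar, and statewide annual consumption on the order of 280 TWh.
ROOFTOP_GW, UTILITY_GW = 12, 24   # capacities quoted above
CAPACITY_FACTOR = 0.25            # assumed average output fraction
STATE_TWH = 280                   # assumed annual consumption

solar_twh = (ROOFTOP_GW + UTILITY_GW) * CAPACITY_FACTOR * 8760 / 1000
share = solar_twh / STATE_TWH
print(round(solar_twh), round(share, 2))   # ~79 TWh, ~0.28
```

With these round numbers the arithmetic lands near the quoted one-fourth, which is reassuring.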

But we are only beginning to reshape the grid. Addressing climate change means getting to zero fossil fuel use. In the graph above, you can see that, even at mid-day, we have natural gas plants humming along, supplying one third of electrical needs. Ironically, it is the most efficient natural gas power plants (combined cycle) that cannot be used as peaker plants; they need to be on more or less full time. And then we have a huge fleet of peaker plants (plus imported power from those in other states) to provide the mad rush of power required to transition every night from daytime solar to evening demand. The daily peak of power use comes not at noon, but at 6 PM. On top of this, decarbonizing transportation and the heating/HVAC of buildings will add further electrical loads to the grid, not all at convenient times.

California utilities are not state-run, but are heavily regulated. For instance, they make no profit from retail electricity sales, but from grid maintenance and power plant provision, via "decoupling" rules. The California Public Utilities Commission (CPUC) is the regulator, and has the job of planning this enormous transition. One of its biggest headaches is what to do about rooftop solar. Back in 2006, the state set its million solar roofs initiative, with the central component being net metering, or NEM. This said that retail customers get the same rate paid to them for electricity sent into the grid by their solar panels as they pay for electricity they take out of the grid. Back then, with solar barely viable and a tiny proportion of production, this subsidy made a lot of sense. And the program became increasingly attractive as solar costs came down.


But today, NEM is less economical. Solar customers make up 11.5% of total households, an increasingly significant share of the utility customer base. Solar power at mid-day is decreasingly valuable to the grid, failing to relieve the real crunch that happens into the evening. The lost revenue represents not only money for electrical generation, but for grid services and maintenance, of which solar households make particularly significant use, as the grid is essentially their battery and longer-term backup. And who pays the freight? The other rate payers, who on average are less rich than those who can foot the bill for solar. Extrapolate to all residences having solar power, and the utilities would no longer have anyone paying for the system at all. The CPUC made a proposal in 2021, inspired by these imbalances (not to mention the loathing utilities have for NEM), to drastically cut rates paid for exported solar energy, and also to charge a grid connection fee of roughly $60 per month for solar households.

This caused a firestorm, needless to say. In the proud American tradition of the powerful rebelling against any reduction of their ill-gotten gains or structural advantages, the proposal was mercilessly attacked in the press, and the governor forced the CPUC to go back to the drawing board. While the policy was indeed bad and would have destroyed the rooftop solar industry, the reaction was reminiscent of the Prop 13 "revolution", where property owners relieved themselves of taxes, or even the Boston tea party and American Revolution, where taxes meant to maintain the colonies were declared abhorrent impositions, later to be replaced by the far more ruinous costs of war and independence. 

In the newest iteration, just released, the CPUC follows the Prop 13 blueprint and grandfathers in all existing solar installations, selling only future installations down the river of reduced payback rates. It also drops the particularly galling grid connection fee, which was, in a policy sense, entirely appropriate. Someone needs to pay for the grid, after all. For my part, I would support a new grid electricity rate, charged on both import and export of energy. That way, solar users would be charged for the outgoing loads they put on the system, and encouraged to install (and use) batteries to reduce all loads on the grid. Extrapolated to universal participation, this would, instead of killing the grid by underfunding, make it smaller, as a backup resource, with funding aligned with usage.
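A toy illustration of that two-way proposal (all the rates here are hypothetical, chosen only to show the mechanics, not anything the CPUC has actually proposed):

```python
# Hypothetical two-way billing: a per-kWh grid fee assessed on every
# kWh crossing the meter, in either direction, on top of the energy
# price itself. All rates below are made-up round numbers.
GRID_FEE = 0.05        # $/kWh grid charge on both import and export
IMPORT_PRICE = 0.30    # $/kWh paid for energy taken from the grid
EXPORT_PRICE = 0.30    # $/kWh credited for energy sent out (NEM-style)

def monthly_bill(kwh_imported, kwh_exported):
    energy = kwh_imported * IMPORT_PRICE - kwh_exported * EXPORT_PRICE
    grid = (kwh_imported + kwh_exported) * GRID_FEE
    return energy + grid

# A solar household importing 300 kWh at night and exporting 300 by day
# nets zero on energy, but still pays its share of grid upkeep:
print(monthly_bill(300, 300))   # 30.0
```

Under such a scheme, adding a battery cuts both flows through the meter, and so cuts the grid charge- which is exactly the incentive argued for above.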

Anyhow, in a masterful show of bureaucratic obfuscation, the CPUC in its newest proposal sets the future payback rate of solar to be a mysterious function of overall costs and climate impacts over a regional and hourly basis through the year- called the avoided cost calculator, or ACC. From the simplicity of NEM, we now go to a system that is both impenetrable and unpredictable, since the yearly calculations have been wildly varying, and result from a byzantine mechanism that takes 100 pages to explain.

How the ACC was set in three recent years. The variability is evident.

This seems like a case of the road to hell being paved with good intentions. It makes sense (in an ideal economic world) to relate the payback rate of exported solar power to the actual value of that power. The wholesale electricity market deals in such rates on a minute-to-minute basis. But the solar installation industry relies on predictability- the predictability of solar power coming out of solar panels for twenty or more years, and the predictability of its value, which was so neatly supplied by the NEM concept. This variable pricing proposal might be as dangerous for the rooftop solar industry as the grid connection fee was going to be. 

That may have been the plan, I don't know. But the CPUC needs to figure out what it wants at a deeper level. We can grant that on a global basis, residential rooftop solar is a relatively inefficient way to supply electricity and fight climate change. It is roughly 30% more expensive than utility-built mass solar installations. In compensation, the distributed nature of residential solar lowers the typical grid load, evens out production, and supplies important backup capability to customers, when paired with battery storage. Battery storage is something the CPUC aims to make default for solar installations, by using differential rates to offer low rates for electricity export at peak solar times, and paying much higher (if unknown!) rates at peak grid demand and demand growth times. The need to stabilize the grid and get rid of all those natural gas peaker plants is palpable. The CPUC plans for a future of utility scale battery installations that will provide that time-of-day balancing.

The California climate plan- how will we get there?

The key problem is that the state (and CPUC) plan is to grow residential solar, almost doubling penetration over the next decade, and more than tripling over the longer term. The combination of an ever-expanding grid to electrify the state plus decarbonization means that everything needs to be pursued- utility scale, residential, and storage, all as fast as possible. That is not going to happen with reduced incentives and greater uncertainty for the residential solar sector. After all, the whole regulatory mechanism exists not to precisely calibrate the lowest cost source of power- markets can do that. It exists to shift costs around, so that the future (and externalized) catastrophic planetary, social, and economic costs of climate change can be brought forward to influence today's choices.

So there is a solution to this mess, which is to tax fossil fuels at much higher rates. Rather than fighting over marginal solar export rates and which kind of solar installation is better for the grid, we need to assess costs where the real costs lie- at the fossil fuels that are poisoning the biosphere and keeping us from a sustainable future. Since the CPUC is in control of the economics of electricity in the state, it should turn its formidable bureaucratic skills (in collaboration with the other entities that are in charge of fuel taxation, principally the legislature), in a more useful direction, such as raising the costs of the fuels we don't want while lowering those we do. That might mean charging a climate mitigation fee over all fossil fuels (beyond the very modest cap and trade fee California already participates in) that would pay for a moderated NEM bonus for residential solar, plus the batteries, grid strengthening, and everything else needed to put us on a deeper decarbonization path. Raising rates for electricity overall while redistributing prices in this way to favor clean energy would be difficult to sell, but quite justifiable. Especially if it did not penalize electricity use over the direct use of fossil fuels for heating and transportation, by taxing those fuels uniformly.


  • But taxing fossil fuels, even a little, is contentious.
  • Who among us is not corrupt, after all?
  • Crypto really doesn't make much sense.
  • Scientist rebellion.
  • How work from home is reshaping real estate (prices).
  • A universal influenza vaccine is on the way.

Sunday, November 20, 2022

Skype for Cells, and Valves for Mitochondria

Connexin is one of those proteins that get more roles the harder people look.

One reason genetics is hard is that genetic products (proteins, generally) can have complicated roles in the organism- i.e. phenotype. For every simple case like sickle cell anemia, there are ten complicated cases where mutations are covered by duplicative functions, or have effects only revealed under unusual circumstances, or have multiple effects that are hard to connect and understand. Today's post focuses on a protein called connexin, which joins up into hexameric (6-member) complexes to form membrane half-channels, or hemichannels. When two such hemichannels on neighboring cells meet up, they join to form a gap junction, which is a small pore, permeable to molecules up to about 1000 Daltons. So ions, water, hormones, and other small molecules get through, but not most proteins or nucleic acids. They also help align the electrical charge or electrical propagation of neighboring cells. These junctions are surprisingly common in our bodies, functioning a bit like Skype, keeping neighboring cells in close touch. They have big roles in development, adhesion, neuron activity, and even cancer. Cancer cells tend to turn their gap junctions off in order to go their own way.

Structures of some connexin complexes, as hemichannels (below), and as full gap junctions (above).  "In" denotes inside each cell, "Ex" denotes the extra-cellular space, and the lipid bilayer would be the plasma membrane in this normal case.

But recent work has shown that hemichannels by themselves have regulated conductance properties and a variety of roles, some of which do not even rely on their channel properties. One recent paper (though with precedent work going back to 2006) raised the prospect of connexin hemichannels functioning in mitochondria, on the inner membrane, as regulated potassium channels. The mitochondrial inner membrane is notoriously impermeable, in order to accumulate the proton-motive force (pmf), which is the product of respiration / catabolism of food, and used for the synthesis of ATP. Protons are pumped out of the innermost mitochondrial matrix and into the inter-membrane space, setting up a pH and charge gradient, usable as energy. The ultimate concentration of protons is not terribly high, however, and the overall ionic (and osmotic) balance remains governed by more conventional gradients of ions like sodium, potassium, and chloride. Thus it is important to regulate those balances using channels that are specific for those ions and don't conduct (i.e. leak) protons.
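For reference, the pmf combines exactly those two terms, charge and pH, across the inner membrane. A minimal sketch of the textbook relation (the voltage and pH values below are illustrative round numbers, not measurements from this work):

```python
# Textbook decomposition of the pmf into its electrical and chemical
# terms (magnitudes): pmf ≈ |Δψ| + (2.303 * R * T / F) * ΔpH
R, F = 8.314, 96485.0        # gas constant J/(mol*K), Faraday C/mol

def pmf_mv(delta_psi_mv, delta_ph, temp_c=37.0):
    T = temp_c + 273.15
    z = 2.303 * R * T / F * 1000.0   # ~61.5 mV per pH unit at 37 C
    return abs(delta_psi_mv) + z * delta_ph

# Illustrative round numbers: 150 mV of charge, 0.8 pH units
print(round(pmf_mv(150, 0.8)))       # ~199 mV
```

The point is that even a modest pH difference adds tens of millivolts to the electrical term, which is why any leak across this membrane is costly.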

Aside from the connexin channel discussed here, there are numerous other potassium conducting channels in the mitochondrial inner membrane, with individual regulation.

For potassium, there are at least seven channels now known, each regulated differently. That is amazing, really, for a membrane that should be so impermeable, though most are present at low levels. There is a K+/H+ antiporter that extrudes K+ to maintain osmotic balance, and then the other channels are all leakage channels that allow K+ back in, under various regulated conditions. The logic seems to be that the electron transport chains of respiration can easily run too "hot", giving off poisonous reactive oxygen species like peroxide, instead of the coordinated O2 reduction to water and proton export, as intended. So the major transport chain stations appear to have associated potassium channels that are inducible by reactive oxygen species, among other things, in order to fine-tune the local membrane potential and respiration rate. It seems like a curious way to run things, reducing the efficiency of a system that would be better inhibited earlier in the respiration process. 

Another model is that the fine-tuning by these channels forestalls the more catastrophic activation of PTP channels (mitochondrial permeability transition pore). These are more like gap junctions, totally non-discriminating in their channel characteristics, and are induced by high levels of stress from reactive oxygen species. When induced, these can totally leak away the proton motive force, and if sustained, can kill the mitochondrion and even the whole cell. This would lead to what the current researchers found, which was that genetic reduction of the levels of the connexin Cx43 caused all kinds of bad outcomes in cells treated with peroxide, such as lower proton motive force, lower electron transport chain coupling, and lower ATP production.

ATP production is raised by leaking K+ into the mitochondrial matrix, and lowered conversely by reduction of connexin levels. "CxKD" denotes a genetic knock-down, or reduced expression, of the Cx43 connexin protein. They note, however, that expression of one ATP synthase component is reduced as well in this setting, so there may be yet other (gene regulation) effects going on here.
 

At any rate, the finding that one connexin that participates in cell surface gap junctions, called Cx43, is also specifically transported to the inner membrane of mitochondria during reactive oxygen species stress, (a stress that has very wide-ranging occurrence and effects, at the cellular and organismal levels, not just in mitochondria), and then acts there as a regulated K+ leakage channel, is quite unexpected. While the gap junction is a promiscuous, wide channel, this activity must be far more discriminating. Add to that the fact that it associates specifically with the H+-consuming ATP synthase, at the matrix side of the inner mitochondrial membrane, and we have a protein with a double life.


Saturday, November 12, 2022

The Politics of Resentment

Anne Applebaum has seen where all this Trumpism is going ... in Eastern Europe.

Liberals in America are baffled. How could anyone vote for Republican candidates at this point? How could anyone, let alone half the electorate, vote for Trump? We are befuddled and anxious for the future of America, which, far from becoming great again, is turning into a banana republic before our eyes, if, hopefully, not worse. We in California are particularly dissociated, as Democrats run the whole state, and Republican voter registration continues to decline year after year and is now under one quarter of the electorate. What does the rest of the country see that we do not? Or vice versa?

Anne Applebaum has written a trenchant book on the matter, "Twilight of Democracy". She lives in Poland, so has had a front-row seat to the illiberalization of a political system, both in Poland and in nearby Hungary, which seems farther advanced. Eastern Europe has more reason than most, perhaps, to be disillusioned with the capitalist orthodoxy, after its rather rough transition from Communism. But this is a world-wide phenomenon, sweeping fringe rightists into power from Brazil to Sweden. What is going on? Applebaum posits that the whole structure of meritocratic representative democracy, with its open competition for (good) public policy, and its use of educated expertise over vast areas of state interest from foreign affairs to monetary regulation and education policy, has come under fundamental critique. And this critique comes partly from those who have been shut out of that system: the not-well-educated, not-bicoastal, not-rich, not-acronymed-minority, not-hopeful about the American future. It is, in short, a politics of resentment.

How have the elites done over the post-world war 2 period? They won the cold war, but lost virtually every battle in it, from Vietnam to Afghanistan. They let the lower classes of the US sink into relative poverty and powerlessness vs business and the well-educated classes, in a rather brutal system of collegiate competition, de-unionization, off-shoring and worker suppression. They have let the economy fester through several crushing recessions, particularly the malaise of the 70's and the real estate meltdown of 2008. While the US has done pretty well overall, the lower middle and poor classes have not done well, and live increasingly precarious lives that stare homelessness in the face daily. In the heartland, parents at best saw their children fly off to coastal schools and cultures, becoming different people who would not dream of coming home again to live.

America is heavily red, geographically.

And the elite-run state has become increasingly sclerotic, continually self-criticizing and regulating its way to inaction. A thousand well-meaning regulations have paved the way to a bloated government that can not build a high-speed rail line in California, or solve the homelessness crisis. Everyone is a critic, including yours truly- it is always easier to raise objections, cover one's ass, and not get anything done. So one can sympathize with evident, if inchoate, desires for strength- for someone to break the barriers, bring the system to heel, and build that wall. Or get Brexit done. Or whatever the baying right wing media want at the moment.

The elite party in this sense is the Democratic party - capturing the coastal and well-educated, plus public employee unions. The Republican party, the party of money and the rich, (not the elite at all!), has conversely become the party of the downtrodden, feeding them anti-immigrant, anti-elite, anti-state red meat. It was a remarkably easy transformation, that required only shamelessness and lying to make hay out of the vast reserves of resentment seething in middle America. 

But Applebaum's point is not that the elites have messed things up and it may be time to do things differently. No, she suggests that the new protofascists have reframed the situation fundamentally. The elites in power have, through the hard work of meritocratic institutions, set up pipelines and cultures that reproduce their position in power almost as hermetically as the ancien régime of France and its nobility. That anyone can (theoretically) enter this elite and that it is at least somewhat vetted for competence and rationality is disregarded, or actively spat upon as "old" thinking- definitely not team thinking. The path to power now is to stoke resentment, overturn the old patterns of respect for competence and empathy, discard this meritocratic system in favor of one based on loyalty and fealty, and so bring about a new authoritarianism that brooks no "softness", exercises no self-criticism, has no respect for the enemy or for compromise, and has no room for intellectuals. 

But Hungary is way ahead of us, in the one-party rule department.

A second angle on all this is that conservatives feel resentful for another good reason- they have lost the culture war. Despite all their formal power, winning the presidency easily half the time, and regularly running legislative and judicial branches in the US, their larger cultural project to keep progress at bay and fight moral "decadence" and all the other hobby horses has gone nowhere. The US is increasingly woke, diverse, and cosmopolitan, and the "blood and soil" types (including especially conservative Catholics and Evangelicals) are despondent about it. Or apoplectic, or rabid, etc., depending on temperament. Their triumph in overturning Roe may allow some backwater states to turn back the clock, but on the whole, it looks like a rearguard action.

This is what feeds disgust with the system, and with democracy itself. Republicans who used to sing the praises of the US government, the flag, and democracy now seem to feel the opposite, that the US is a degenerate wasteland, no better than other countries, not exceptional, not dedicated to serious ideals that others should also aspire to. Democracy has failed, for them. And Applebaum points out how this feeling licenses the loss of civility, the lying, the anything-goes demagoguery which characterizes our new right-wing politics. Naturally the internet and its extremism-feeding algorithms have a lot to do with it as well. Applebaum is conservative herself. She spent a career working in the Tory media in Britain, but is outraged at what Tory-ism, and conservatism internationally, has become. She sees a dramatic split in conservatism, between those that still buy into the democratic, liberal system, and those who have become its opponents, in their revolutionary, Trumpy fervor. In the US, the fever may possibly have broken, after a very close brush with losing our institutions during the last administration, as election after election has made losers of the far right.

Over the long haul, Applebaum sees this as a cyclical process, with ample precedent from ancient Egyptian times through today, with a particularly interesting stop in the viciously polarized Dreyfusard period in France. But I see one extra element, which is our planetary and population crisis. We had very good times over the last few centuries building the human population and its comforts on the back of colonization, fossil fuels, and new technologies. The US of the mid to late-20th century exemplified the good times of such growth. Now the ecological bells are ringing, and the party is coming to an end. Denial has obviously been the first resort of the change-averse, and conservatives have distinguished themselves in their capabilities in that department. But as reality gradually sets in, something more sinister and competitive may be in the offing, as exemplified by the slogan "America First". Not first as in a leader of international institutions, liberal democracies and enlightenment values, but first as in looking out for number one, and devil take the rest.

Combined with a rejuvenated blood and soil nationalism, which we see flourishing in so many places, these attitudes threaten to send us back into a world resembling that before World War 1 or 2 (and, frankly, all the rest of history), when nationalism was the coin of international relations, and national competition knew no boundaries- mercantile or military. We are getting a small foretaste of this in Russia's war on Ukraine, which is a product of precisely this Russia-first, make-Russia-great-again mind-set. Thankfully, it is accompanied by large helpings of stupidity and mismanagement, which may save us yet.


Saturday, November 5, 2022

LPS: Bacterial Shield and Weapon

Some special properties of the super-antigen lipopolysaccharide.

Bacteria don't have it easy in their tiny Brownian world. While we have evolved large size and cleverness, they have evolved miracles of chemistry and miniaturization. One of the key classifications of bacteria is between Gram positive and negative, which refers to the Gram stain. This chemical stains the peptidoglycan layer of all bacteria, which is their "cell wall", wrapped around the cell membrane and providing structural support against osmotic pressure. For many bacteria, a heavy layer of peptidoglycan is all they have on the outside, and they stain strongly with the Gram stain. But other bacteria, like the paradigmatic E. coli, stain weakly, because they have a thin layer of peptidoglycan, outside of which is another membrane, the outer membrane (OM, whereas the inner membrane is abbreviated IM).

Structure of the core of LPS, not showing the further "poly" saccharide tails that would go upwards, hitched to the red sugars. At bottom are the lipid tails that form a strong membrane barrier. These, plus the blue sugar core, form the lipid-A structure that is highly antigenic.

This outer membrane doesn't do much osmotic regulation or active nutrient trafficking, but it does face the outside world, and for that, Gram-negative bacteria have developed a peculiar membrane component called lipopolysaccharide, or LPS for short. The outer membrane is asymmetric, with normal phospholipids used for the inner leaflet, and LPS used for the outer leaflet. Maintaining such asymmetry is not easy, requiring special "flippases" that know which side is which and continually send the right lipid type to its correct side. LPS is totally different from other membrane lipids, using a two-sugar core to hang six lipid tails (a structure called lipid-A), which is then decorated with chains of additional sugars (the polysaccharide part) going off in the other direction, towards the outside world.

The long, strange trip that LPS takes to its destination. Synthesis starts on the inner leaflet of the inner membrane, at the cytoplasm of the bacterial cell. The lipid-A core is then flipped over to the outer leaflet, where extra sugar units are added, sometimes in great profusion. Then a train of proteins (LptA, B, C, D, E, F, G) extract the enormous LPS molecule out of the inner membrane, ferry it through the periplasm, through the peptidoglycan layer, and through to the outer leaflet of the outer membrane.

A recent paper provided the structural explanation behind one transporter, LptDE, from Neisseria gonorrhoeae. This is the protein that receives LPS after its synthesis inside the cell and its transport through the inner membrane and inter-membrane space (including the peptidoglycan layer), and places LPS on the outer leaflet of the outer membrane. It is an enormous barrel, with a dynamic crack in its side where LPS can squeeze out, to the right location. It is a structure that neatly explains how directionality can be imposed on this transport, which is driven by ATP hydrolysis (by LptB) at the inner membrane, loading a sequence of transporters that send LPS outward.

Some structures of LptD (teal or red), and LPS (teal, lower) with LptE (yellow), an accessory protein that loads LPS into LptD. This structure is on the outer leaflet of the outer membrane, and releases LPS (bottom center) through its "lateral gate" into the right position to join other LPS molecules on the outer leaflet.

LPS shields Gram-negative bacteria from outside attack, particularly from antibiotics and antimicrobial peptides. These are molecules made by all sorts of organisms, from other bacteria to ourselves. The peptides typically insert themselves into bacterial membranes, assemble into pores, and kill the cell. LPS is resistant to this kind of attack, due to its structure being so different from that of normal phospholipids, which have only two lipid tails each. Additionally, the large, charged sugar decorations outside fend off large hydrophobic compounds. LPS can be (and is) altered in many additional ways- chemical modifications, changes to the sugar decorations, extra lipid attachments, etc.- to fend off newly evolved attacks. Thus LPS is the result of a slow-motion arms race, and differs in its detailed composition between different species of bacteria. One way that LPS can be further modified is with extra hydrophobic groups such as lipids, which allow the bacteria to clump together into biofilms. Biofilms are increasingly understood as a key mode of pathogenesis, allowing bacteria both to physically stick around in very dangerous places (such as catheters), and to form a further protective shield against attack, whether by antibiotics or whatever else their host throws at them.

In any case, the lipid-A core has been staunchly retained through evolution and forms a super-antigen that organisms such as ourselves have evolved to sense at incredibly low levels. We encode a small protein, called LY96 (or MD-2), that binds the lipid-A portion of LPS very specifically at infinitesimal concentrations, complexes with cell surface receptor TLR4, and sets off alarm bells through the immune system. Indeed, this chemical was originally called "endotoxin", because cholera bacteria, even after being killed, caused an enormous and toxic immune response- a response that was later, through painstaking purification and testing, isolated to the lipid-A molecule.

LPS (in red) as it is bound and recognized by human protein MD-2 (LY96) and its complex partner TLR4. TLR4 is one of our key immune system alarm bells, detecting LPS at picomolar levels. 
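To get a feel for what "picomolar levels" means, here is a back-of-the-envelope calculation (a sketch in Python; the only inputs are Avogadro's number and unit conversions, nothing from the structural work itself):

```python
# Back-of-the-envelope: how many LPS molecules are floating around at
# picomolar concentration? Just Avogadro's number and unit conversions.

AVOGADRO = 6.022e23  # molecules per mole

def molecules_per_microliter(concentration_molar):
    """Molecules in one microliter of solution at the given molar concentration."""
    molecules_per_liter = concentration_molar * AVOGADRO
    return molecules_per_liter * 1e-6  # one microliter is 1e-6 liters

# 1 picomolar = 1e-12 mol/L: about 600,000 molecules in a single microliter,
# yet a vanishingly small amount by mass.
print(f"{molecules_per_microliter(1e-12):.0f}")
```

The point is that even "infinitesimal" molar concentrations still translate to many individual molecules per droplet of blood, which is what an alarm system like MD-2/TLR4 exploits: a handful of binding events is enough to trip it.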

LPS is the kind of antigen that is usually great to detect with high sensitivity- we don't even notice that our immune system has found, moved in on, and resolved all sorts of minor infections. But if bacteria gain a foothold in the body and pump out a lot of this antigen, the result can be overwhelmingly different- cytokine storm, septic shock, and death. Rats and mice, for instance, have a fraction of our sensitivity to LPS, sparing them from systemic breakdown from common exposures brought on by their rather more gritty lifestyles.


  • Econometrics gets some critique.
  • Clickbait is always bad information, but that is the business model.
  • Monster bug wars.
  • Customer-blaming troll due to lose a great deal of money.