
Saturday, October 18, 2025

When the Battery Goes Dead

How do mitochondria know when to die?

Mitochondria are the energy centers within our cells, but they are so much more. They are primordial bacteria that joined with archaea to collaborate in the creation of eukaryotes. They still have their own genomes, RNA transcription and protein translation. They play central roles in the life and death of cells, they divide and coalesce, they motor around the cell as needed, kiss other organelles to share membranes, and they can get old and die. When mitochondria die, they are sent to the great garbage disposal in the sky, the autophagosome, which is a vesicle that is constructed as needed, and joins with a lysosome to digest large bits of the cell, or of food particles from the outside.

The mitochondrion spends its life (only a few months) doing a lot of dangerous reactions and keeping an electric charge elevated over its inner membrane. It is this charge, built up from metabolic breakdown of sugars and other molecules, that powers the ATP-producing rotary enzyme. And the decline of this charge is a sign that the mitochondrion is getting old and tired. A recent paper described how one key sensor protein, PINK1, detects this condition and sets off the disposal process. It turns out that the membrane charge not only powers ATP synthesis, but also powers protein import into the mitochondrion. Over the eons, most of the mitochondrion's genes have been taken over by the nucleus, so all but a few of the mitochondrion's proteins arrive via import- about 1500 different proteins in all. And this is a complicated process, since mitochondria have inner and outer membranes (just as many bacteria do), and proteins can be destined for any of four compartments- either membrane, the inside (matrix), or the inter-membrane space.
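To get a feel for how much energy that inner-membrane charge represents, here is a minimal back-of-the-envelope sketch in Python. The membrane potential, pH difference, and ATP cost used below are generic textbook estimates, not values from the paper, and the true proton-per-ATP stoichiometry varies by organism.

```python
# Rough energetics of the mitochondrial inner-membrane charge.
# All numbers are generic textbook estimates, not values from the paper.

FARADAY = 96485.0      # C per mole of elementary charges
R = 8.314              # J / (mol K)
T = 310.0              # K, body temperature

delta_psi = 0.160      # V, assumed membrane potential (matrix negative)
delta_pH = 0.5         # assumed pH difference (matrix more alkaline)

# Proton-motive force: electrical term plus chemical (pH) term, in volts
pmf = delta_psi + (2.303 * R * T / FARADAY) * delta_pH
energy_per_mol_H = pmf * FARADAY / 1000.0   # kJ per mole of protons crossing

print(f"proton-motive force ~ {pmf*1000:.0f} mV")
print(f"energy released per mole of H+ ~ {energy_per_mol_H:.0f} kJ/mol")

# ATP synthesis costs roughly 50 kJ/mol under cellular conditions,
# so several protons must cross per ATP made.
print(f"implied protons per ATP ~ {50.0 / energy_per_mol_H:.1f}")
```

The same gradient that drives the rotary ATP synthase is what pulls positively charged import signal sequences across the inner membrane, which is why a dying charge stalls both processes at once.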

Figure 12-26. Protein import by mitochondria.
Textbook representation of mitochondrial protein import, with a signal sequence (red) at the front (N-terminus) of the incoming protein (green), helping it bind successively to the TOM and TIM translocators. 

The outer membrane carries a protein import complex called TOM, while the inner membrane carries an import complex called TIM. These can dock to each other, easing the whole transport process. The PINK1 protein is a somewhat weird product of evolution, spending its life being synthesized, transported across both mitochondrial membranes, and then partially chopped up in the mitochondrial matrix before its remains are exported again and fully degraded. That is when everything is working correctly! When the mitochondrial charge declines, PINK1 gets stuck, threaded through TOM but unable to transit the TIM complex. PINK1 is a kinase that phosphorylates itself as well as ubiquitin, so when it is stuck, two PINK1 kinases meet on the outside of the outer membrane, activate each other, and ultimately activate another protein, PARKIN. PARKIN's name derives from its importance in Parkinson's disease, which can be caused by an excess of defective mitochondria in sensitive tissues, specifically certain regions and neurons of the brain. PARKIN is a ubiquitin ligase, which attaches the degradation signal ubiquitin to many proteins on the surface of the aged mitochondrion, thus signaling the whole mess to be gobbled up by an autophagosome.

A data-rich figure 1 from the paper shows purification of the tagged complex (top), and then the EM structure at bottom. While the purification (B, C) shows the presence of TIM subunits, they did not show up in the EM structures, perhaps because they were not stable enough or frequent enough in proportion to the TOM subunits. But the PINK1-TOM-VDAC2 structures are stunning, helping explain how PINK1 dimerizes so easily when its translocation is blocked.

The current authors found that PINK1 had convenient cysteine residues that allowed it to be experimentally crosslinked in the paired state, and thus freeze the PARKIN-activating conformation. They isolated large amounts of such arrested complexes from human cells, and used electron microscopy to determine the structure. They were amazed to see, not just PINK1 and the associated TOM complex, but also VDAC2, which is the major transporter that lets smaller molecules easily cross the outer membrane. The TOM complexes were beautifully laid out, showing the front end (N-terminus) of PINK1 threaded through each TOM complex, specifically the TOM40 ring structure.

What was missing, unfortunately, was any of the TIM complex, though some TIM subunits did co-purify with the whole complex. Nor was PARKIN or ubiquitin present, leaving out a good bit of the story. So what is VDAC2 doing there? The authors really don't know, though they note that reactive oxygen byproducts of mitochondrial metabolism would build up during loss of charge, acting as a second signal of mitochondrial age. These byproducts are known to encourage dimerization of VDAC channels, which naturally leads, via the complex seen here, to dimerization and activation of the PINK1 protein. Additionally, VDACs are very prevalent in the outer membrane and are prominent ubiquitination targets for autophagy signaling.

To actually activate PARKIN ubiquitination, PINK1 needs to dissociate again, a process that the authors speculate may be driven by binding of ubiquitin by PINK1, which might be bulky enough to drive the VDACs apart. This part was quite speculative, and the authors promise further structural studies to figure out this process in more detail. In any case, what is known is quite significant- that the VDACs template the joining of two PINK1 kinases in mid-translocation, which, when the inner membrane charge dies away, prompts the stranded PINK1 kinases to activate and start the whole disposal cascade. 

Summary figure from the authors, indicating some speculative steps, such as where the reactive oxygen species excreted by VDAC2 sensitize PINK1, perhaps by dimerizing the VDAC channel itself, and where ubiquitin binding by PINK1 and/or VDAC prompts dissociation, allowing PARKIN to come in, get activated by PINK1, and spread the death signal around the surface of the mitochondrion.

It is worth returning briefly to the PINK1 life cycle. This is a protein whose whole purpose, as far as we know, is to signal that mitochondria are old and need to be given last rites. But it has a curiously inefficient way of doing that, being synthesized, transported, and degraded continuously in a futile and wasteful cycle. Evolution could hardly have come up with a more cumbersome, convoluted way to sense the vitality of mitochondria. Yet there we are, doubtless trapped by some early decision which was surely convenient at the time, but results today in a constant waste of energy, only made possible by the otherwise amazingly efficient and finely tuned metabolic operations of PINK1's target, the mitochondrion.



Saturday, September 6, 2025

How to Capture Solar Energy

Charge separation is handled totally differently by silicon solar cells and by photosynthetic organisms.

Everyone comes around sooner or later to the most abundant and renewable form of energy, which is the sun. The current administration may try to block the future, but solar power is the best power right now and will continue to gain on other sources. Likewise, life started by using some sort of geological energy, or pre-existing carbon compounds, but inevitably found that tapping the vast powers streaming in from the sun was the way to really take over the earth. But how does one tap solar energy? It is harder than it looks, since it so easily turns into heat and lost energy. Some kind of separation and control are required, to isolate the power (that is to say, the electron that was excited by the photon of light), and harness it to do useful work.

Silicon solar cells and photosynthesis represent two ways of doing this, and are fundamentally, even diametrically, different solutions to this problem. So I thought it would be interesting to compare them in detail. Silicon is a semiconductor, torn between trapping its valence electrons in silicon atoms, or distributing them around in a conduction band, as in metals. With elemental doping, silicon can be manipulated to bias these properties, and that is the basis of the solar cell.

Schematic of a silicon solar cell. A static voltage exists across the N-type to P-type boundary, sweeping electrons freed by the photoelectric effect (light) up to the conducting electrode layer.


Solar cells have one side doped N-type, and the bulk doped P-type. While the bulk material is neutral on both sides, at the boundary a static charge scheme is set up where electrons are attracted into the P-side and removed from the N-side. This static voltage has very important effects on electrons that are excited by incoming light and freed from their silicon atoms. These high energy electrons enter the conduction band of the material, and can migrate. Due to the prevailing field, they get swept towards the N side, and thus are separated and can be siphoned off with wires. The current thus set up can exert an electrical pressure of about 0.6 volts. That is not much, nor is it anywhere near the 2 to 3 electron volts received from each visible photon. So a great deal of energy is lost as heat.
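As a rough illustration of how much of each photon's energy survives as electrical output, here is a small Python sketch. The 0.6 V operating voltage comes from the text; the representative photon energies are generic assumptions, and real cell losses depend on many other factors.

```python
# Fraction of a visible photon's energy captured by a silicon cell
# operating at ~0.6 V. Photon energies are representative values only.

cell_voltage = 0.6   # volts, i.e. ~0.6 eV delivered per collected electron
photon_energies_eV = {"red (~680 nm)": 1.8, "green (~550 nm)": 2.3, "blue (~450 nm)": 2.8}

for color, e_photon in photon_energies_eV.items():
    captured = cell_voltage / e_photon
    print(f"{color}: {captured:.0%} captured, {1 - captured:.0%} lost as heat")
```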

Solar cells do not care about capturing each energized electron in detail. Their purpose is to harvest a bulk electrical voltage + current with which to do some work in our electrical grids. Photosynthesis takes an entirely different approach, however. This may be mostly for historical and technical reasons, but also because part of its purpose is to do chemical work with the captured electrons. Biology tends to take a highly controlling approach to chemistry, using precise shapes, functional groups, and electrical environments to guide reactions to exact ends. While some of the power of photosynthesis goes toward pumping protons across the membrane, setting up a gradient later used to make ATP, about half is used for other things like splitting water to replace lost electrons, and making reducing chemicals like NADPH.

A portion of a poster about the core processes of photosynthesis. It provides a highly accurate portrayal of the two photosystems and their transactions with electrons and protons.

In plants, photosynthesis is a chain of processes focused around two main complexes, photosystems I and II, and all occurring within membranes- the thylakoid membranes of the chloroplast. Confusingly, photosystem II comes first, accepting light, splitting water, pumping some protons, and sending out a pair of electrons on mobile plastoquinones, which eventually find their way to photosystem I, which jacks up their energy again using another quantum of light, to produce NADPH. 

Photosystem II is full of chlorophyll pigments, which are what get excited by visible photons. But most of them are "antenna" chlorophylls, passing the excitation along to a pair of centrally located chlorophylls. Note that the light energy is at this point passed as a molecular excitation, not as a free electron. This passage may happen by Förster resonance energy transfer, but is so fast and efficient that stronger Redfield coupling may be involved as well. Charge separation only happens at the reaction center, where an excited electron is popped out to a chain of recipients. The chlorophylls are organized so that the pair at the reaction center have a slightly lower energy of excitation, and thus serve as a funnel for excitation energy from the antenna system. These transfers are extremely rapid, on the picosecond time scale.

It is interesting to note, tangentially, that only red light energy is used. Chlorophylls have two excitation states, excited by red light (680 nm = 1.82 eV) and blue light (400-450 nm, 2.76 eV) (note the absence of green absorbance). The significant extra energy from blue light is wasted, radiated away to let the excited electron relax to the lower excitation state, which is then passed through the antenna complex as though it had come from red light.
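The wavelength-to-energy conversion behind those numbers is just E = hc/λ; a quick check in Python, using the wavelengths quoted above:

```python
# E(eV) = hc / wavelength; hc ~ 1240 eV*nm
HC_EV_NM = 1239.8

red_eV = HC_EV_NM / 680    # ~1.82 eV, the lower chlorophyll excitation
blue_eV = HC_EV_NM / 450   # ~2.76 eV, the higher excitation
print(f"red photon:  {red_eV:.2f} eV")
print(f"blue photon: {blue_eV:.2f} eV")
print(f"excess energy shed from a blue photon: {blue_eV - red_eV:.2f} eV")
```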

Charge separation is managed precisely at the photosystem II reaction center through a series of pigments of graded energy capacity, sending the excited electron first to a neighboring chlorophyll, then to a pheophytin, then to a pair of iron-coordinated quinones, which then pass two electrons to a plastoquinone that is released to the local membrane, to float off to the cytochrome b6f complex. In photosystem II, another two photons of light are separately used to power the splitting of one water molecule (giving two electrons and pumping two protons). So the whole process, just within photosystem II, yields, per four light quanta, four protons pumped from one side of the membrane to the other. Since the ATP synthase uses about three protons per ATP, this nets just over one ATP per four photons.
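That bookkeeping can be written out explicitly. The four-protons-per-four-photons and roughly-three-protons-per-ATP figures are the ones quoted above; the true stoichiometries are still debated, so this is only the arithmetic, not a settled number.

```python
# Photon-to-ATP bookkeeping for photosystem II, using the figures quoted above.
photons = 4
protons_pumped = 4        # per 4 photons, per the text
protons_per_atp = 3       # approximate cost at the ATP synthase

atp_per_4_photons = protons_pumped / protons_per_atp
print(f"~{atp_per_4_photons:.2f} ATP per {photons} photons (just over one)")
```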

Some of the energetics of photosystem II. The orientations and structures of the reaction center paired chlorophylls (Pd1, Pd2), the neighboring chlorophyll (Chl), and then the pheophytin (Ph) and quinones (Qa, Qb) are shown in the inset. Energy of the excited electron is sacrificed gradually to accomplish the charge separation and channeling, down to the final quinone pairing, after which the electrons are released to a plastoquinone and sent to another complex in the chain.

So the principles of silicon and biological solar cells are totally different in detail, though each gives rise to a delocalized field, one of electrons flowing with a low potential, and the other of protons used later for ATP generation. Each energy system must have a way to pop off an excited electron in a controlled, useful way that prevents it from recombining with the positive ion it came from. That is why there is such an ornate conduction pathway in photosystem II to carry that electron away. Overall, points go to the silicon cell for elegance and simplicity, and we in our climate crisis are the beneficiaries, if we care to use it. 

But the photosynthetic enzymes are far, far older. A recent paper pointed out that not only are photosystems II and I clearly cousins of each other, but it is likely that, contrary to the consensus heretofore, photosystem II is the original version, at least of the various photosystems that currently exist. All the other photosystems (including those in bacteria that lack oxygen-stripping ability) carry traces of the oxygen evolving center. It makes sense that getting electrons is a fundamental part of the whole process, even though that chemistry is quite challenging.

That in turn raises a big question- if oxygen evolving photosystems are primitive (originating very roughly with the last common ancestor of all life, about four billion years ago) then why was earth's atmosphere oxygenated only from two billion years ago onward? It had been assumed that this turn in Earth history marked the evolution of photosystem II. The authors point out additionally that there is also evidence for the respiratory use of oxygen from these extremely early times as well, despite the lack of free oxygen. Quite perplexing, (and the authors decline to speculate), but one gets the distinct sense that possibly life, while surprisingly complex and advanced from early times, was not operating at the scale it does today. For example, colonization of land had to await the buildup of sufficient oxygen in the atmosphere to provide a protective ozone layer against UV light. It may have taken the advent of eukaryotes, including cyanobacterial-harnessing plants, to raise overall biological productivity sufficiently to overcome the vast reductive capacity of the early earth. On the other hand, speculation about the evolution of early life based on sequence comparisons (as these authors do) is notoriously prone to artifacts, since what evolves at vanishingly slow rates today (such as the photosystem core proteins) must have originally evolved at quite a rapid clip to attain the functions now so well conserved. We simply can not project ancient ages (at the four billion year time scales) from current rates of change.


Saturday, August 2, 2025

The Origin of Life

What do we know about how it all began? Will we ever know for sure?

Of all the great mysteries of science, the origin of life is maybe the one least likely to ever be solved. It is a singular event that happened four billion years ago in a world vastly different from ours. Scientists have developed a lot of ideas about it and increased knowledge of this original environment, but in the end, despite intense interest, the best we will be able to do is informed speculation. Which is, sure, better than uninformed speculation, (aka theology), but still unsatisfying. 

A recent paper about sugars and early metabolism (and a more fully argued precursor) piqued my interest in this area. It claimed that there are non-enzymatic ways to generate most or all of the core carbohydrates of glycolysis and CO2 fixation around pentose sugars, which are at the core of metabolism and the supply of sugars like ribose that form RNA, ATP, and other key compounds. The general idea is that at the very beginning of life, there were no enzymes and proteins, so our metabolism is patterned on reactions that originally happened naturally, with some kind of kick from environmental energy sources and mineral catalysts, like iron, which was very abundant. 

That is wonderful, but first, we had better define what we mean by life, and figure out what the logical steps are to cross this momentous threshold. Life is any chemical process that can accomplish Darwinian evolution. That is, it replicates in some fashion, and it has to encode those replicated descendants in some way that is subject to mutation and selection. With those two ingredients, we are off to the races. Without them, we are merely complex minerals. Crystals replicate, sometimes quite quickly, but they do not encode descendant crystals in a way that is complex at all- you either get the parent crystal, or you get a mess. This general theory is why the RNA world hypothesis was, and remains, so powerful.

The RNA world hypothesis is that RNA was likely the first genetic material, before DNA (which is about 200 times more stable) was devised. RNA also has catalytic capabilities, so it could encode in its own structure some of the key mechanisms of life, thereby embodying both of the critical characteristics of life specified above. The fact that some key processes remain catalyzed by RNA today, such as ribosomal synthesis of proteins, spliceosomal re-arrangement of RNAs, and cutting of RNAs by RNAse P, suggests that proteins (as well as DNA) were the Johnny-come-latelies of the chemistry of life, after RNA had, in its lumbering, inefficient way, blazed the trail.


In this image of the ribosome, RNA is gray, proteins are yellow. The active site is marked with a bright light. Which came first here- protein or RNA?


But what kind of setting would have been needed for RNA to appear? Was metabolism needed? Does genetics come first, or does metabolism come first? If one means a cyclic system of organic transformations encoded by protein or RNA enzymes, then obviously genetics had to come first. But if one means a mess of organic chemicals that allowed some RNA to be made and provide modest direction to its own chemical fate, and to a few other reactions, then yes, those chemicals had to come first. A great deal of work has been done speculating what kind of peculiar early earth conditions might have been conducive to such chemistries. Hydrothermal vents, with their constant input of energy, and rich environment of metallic catalysts? Clay particles, with their helpful surfaces that can faux-crystalize formation of RNAs? Warm ponds, hot ponds, UV light.... the suggestions are legion. The main thing to realize is that early earth was surely highly diverse, had a lot of energy, and had lots of carbon, with a CO2-rich atmosphere. UV would have created a fair amount of carbon monoxide, which is the feedstock of the Fischer-Tropsch reactions that create complex organic compounds, including lipids, which are critical for formation of cells. Early earth very likely had pockets that could produce abundant complex organic molecules.

Thus early life was surely heterotrophic, taking in organic chemicals that were given by the ambient conditions for free. And before life really got going, there was no competition- there was nothing else to break those chemicals down, so in a sort of chemical pre-Darwinian setting, life could progress very slowly (though RNA has some instability in water, so there are limits). Later, when some of the scarcer chemicals were eaten up by other already-replicating life forms, then the race was on to develop those enzymes, of what we now recognize as metabolism, which could furnish those chemicals out of more common ingredients. Onwards the process then went, hammering out ever more extensive metabolic sequences to take in what was common and make what was precious- those ribose sugars, or nucleoside rings that originally had arrived for free. The first enzymes would have been made of RNA, or metals, or whatever was at hand. It was only much later that proteins, first short, then longer, came on the scene as superior catalysts, extensively assisted by metals, RNAs, vitamins, and other cofactors.

Where did the energy for all this come from? To cross the first threshold, only chemicals (which embodied outside energy cycles) were needed, not energy. Energy requirements accompanied the development of metabolism, as the complex chemicals became scarcer and needed to be made internally. Only when the problem of making complex organic chemicals from simpler ones presented itself did it also become important to find some separate energy source to do that organic chemistry. Of course, the first complex chemicals absolutely needed were copies of the original RNA molecules. How that process was promoted, through some kind of activated intermediates, remains particularly unclear.

All this happened long before the last universal common ancestor, termed "LUCA", which was already an advanced cell just prior to the split into the archaeal and bacterial lineages, (much later to rejoin to create the most amazing form of life- eukaryotes). There has been quite a bit of analysis of LUCA to attempt to figure out the basic requirements of life, and what happened at the origin. But this ("top-down") approach is not useful. The original form of life was vastly more primitive, and was wholly re-written in countless ways before it became the true bacterial cell, and later still, LUCA. Only the faintest traces remain in our RNA-rich biochemistry. Just think about the complexity of the ribosome as an RNA catalyst, and one can appreciate the ragged nature of the RNA world, which was probably full of similar lumbering catalysts for other processes, each inefficient and absurdly wasteful of resources. But it could reproduce in Darwinian fashion, and thus it could improve. 

Today we find on earth a diversity of environments, from the bizarre mineral-driven hydrothermal vents under the ocean to the hot springs of Yellowstone. The geology of earth is wondrously varied, making it quite possible to credit one or more of the many theories of how complex organic molecules may have become a "soup" somewhere on the early Earth. When that soup produces ribose sugars and the other rudiments of RNA, we have the makings of life. The many other things that have come to characterize it, such as lipid membranes and metabolism of compounds are fundamentally secondary, though critically important for progress beyond that so-pregnant moment. 


Saturday, July 12, 2025

What Happens When You Go on Calorie Restriction?

New research pinpoints some very odd metabolites that communicate between the gut and other tissues.

The only surefire way to extend lifespan, at least so far, is to go on a calorie restricted diet. Not just a little restricted- really restricted. The standard protocol is seventy percent of a normal, at-will diet. In most organisms tested, this maintains health and extends lifespan by significant amounts, ten to forty percent. There has naturally been a great deal of interest in the mechanism. Is it due to clearance of senescent cells? Increased autophagy and disposal of sub-optimal cell components? Prevention of cancer? Enhancement or moderation of the immune system? The awesome austerities of the ancient yogis and other religious hermits no longer look so far out.

A recent pair of papers took a meticulous approach to finding one of the secret ingredients that conveys calorie restriction to the body. They find, to begin with, that blood serum from calorie-restricted mice can confer longevity properties to cultured cells. Blood from calorie-restricted mice can also confer relevant properties to other mice. This serum could be heated to 133 degrees without detriment, indicating that the factor involved is not a large protein. On the other hand, dialysis to remove small molecules renders it inactive. Lastly, they put the serum through a lipid-binding purification system, which diminished its activity only slightly. Thus they were looking for a mostly water-soluble metabolite in the blood.

Traditionally, the next step would have been to conduct further fractionation and purification of the various serum components. But times have changed, and their next step was to conduct a vast chemical (mass-spectrometry) analysis of blood from calorie restricted vs control mice. Out of that came over a thousand differential metabolites. Taking the most differential and water-soluble metabolites, they assayed them on their cultured cells for a major hallmark of calorie restriction, activation of the AMPK kinase, a master metabolic regulator. And out of this came only one metabolite that had this effect at concentrations that are found in the calorie restricted serum: lithocholic acid.
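The winnowing step, from thousands of differential metabolites down to a handful of water-soluble candidates worth assaying, is conceptually just a filter and a sort. Here is a minimal sketch of that kind of triage; the column names, thresholds, and toy data are illustrative assumptions, not the authors' actual pipeline.

```python
import pandas as pd

# Hypothetical metabolomics table: one row per metabolite, with fold-change
# (calorie-restricted vs control), a significance value, and a crude
# polarity flag. All names and numbers are invented for illustration.
df = pd.DataFrame({
    "metabolite":    ["lithocholic acid", "metabolite_B", "metabolite_C"],
    "fold_change":   [4.2, 1.1, 3.5],
    "p_value":       [0.001, 0.40, 0.02],
    "water_soluble": [True, True, False],
})

candidates = (
    df[(df["fold_change"] > 2) & (df["p_value"] < 0.05) & df["water_soluble"]]
    .sort_values("fold_change", ascending=False)
)
print(candidates)
# Each surviving candidate would then be tested on cultured cells for
# AMPK activation at serum-like concentrations.
```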

Now this is a very odd finding. Lithocholic acid is what is known as a secondary bile acid. It is not natively made by us at all, but is a byproduct of bacterial metabolism in our guts, from a primary bile salt secreted by the liver, chenodeoxycholic acid. These bile salts are detergent-like molecules made from cholesterol that have charged groups stuck on them, and can thus emulsify fats in our digestive tract. Bile salts are extensively recycled through the liver, so that we make a little each day, but use a lot of them in digestion. How such a chemical became the internal signal of starvation is truly curious.

The researchers then demonstrate that lithocholic acid can be administered directly to mice to extend lifespan and have all the expected intermediary effects, including decreased blood glucose levels. Indeed, it did so in flies and worms as well, even though these species do not even see lithocholic acid naturally- their digestion is different from ours. They evidently still have key shared receptors that can recognize this metabolite, perhaps because they use a very similar metabolite internally. 

And what are those receptors and the molecular pathway of this effect? The second paper in this pair shows that lithocholic acid binds to TULP3, a member of the tubby protein family of transcription regulators, which binds to lipids in the plasma membrane and shuttles back and forth to the nucleus to provide feedback on the membrane condition or other events at the cell surface. TULP3 also binds to sirtuins, which are notoriously involved in metabolic regulation, aging, and cell suicide. These latter were all the rage when red wine was thought to extend life span, as sirtuins are activated by resveratrol. At any rate, sirtuins inhibit (by de-acetylation) the lysosomal acidification pump, which has been known (curiously enough) to go on to inhibit AMPK and to play basic roles in cell energy balance regulation.

So we are finally at AMPK, a protein kinase that responds to AMP, which is the low-energy converse of ATP, the energy currency of the cell. High levels of AMP indicate the cell is low on energy, and the kinase accordingly phosphorylates, and thus regulates, a huge variety of targets, which generally increase energy uptake from the blood (glucose and fats), reduce internal energy storage, and increase internal recycling of materials and energy, among much else. At any rate, the researchers above show that AMPK activation is absolutely required for the effects of lithocholic acid, and also for life extension more generally from calorie restriction. On the other hand, lithocholic acid does not touch the AMP level in the cell, but, as mentioned above, activates AMPK through a much more circuitous route- one that goes through lysosomes, which are sort of the stomach of the cell, and are increasingly recognized as a central player in its energy regulation.

Some beautiful antibody staining of different types of myosin- the key motor protein in muscles- in muscle cells from various places in the mouse body (left to right). The antibodies are to slow-twitch myosin (red), which marks the more efficient slow muscle fibers, and blue and green, which mark fast-twitch myosins- generally less efficient and markers of aging. The bottom two rows are from an AMPK knockout mouse, where there is little difference between the two rows. Each pair of rows is from a normal mouse (top) and from (bottom) a specific mutant of the lysosomal proton pump that has its acetylation sites all mutated to inactivity, thus mimicking in strongest form the action of the sirtuin/lithocholic acid pathway. Here (second row, especially last frame), there is a higher level of red vs blue, and of blue vs green staining/fibers, suggesting that these muscles have had a durable uptick in efficiency and, implicitly, reduced atrophy and higher longevity.

These are some long and winding roads to get to positive effects from calorie restriction. What is going on? Firstly, biology does not owe it to us to be concise or easy to understand. Lithocholic acid is going to always be around, given that we eat food, but it has mostly, heretofore, been regarded as toxic and even carcinogenic. It, along with the other bile acids, binds to dedicated transcription regulators that provide feedback control over their synthesis in the liver. So they are no strangers to our physiology. The authors do not delve into why calorie restriction durably raises the level of lithocholic acid in the gut or the blood, but that is an important place to start. It might well be that our digestive system squeezes every last drop of nutrition out of meagre meals by ramping up bile acid production. It is a recyclable resource, after all. This could have then become a general starvation signal to tissues, in advance of actual energy deficits, which one would rather not face if at all possible. So it is basically a matter of forewarned is forearmed. One is yet again amazed by the depth of biological systems, which have complexity and robustness far beyond our current knowledge, let alone our ability to replicate or model in artificial terms.


Saturday, July 5, 2025

Water Sensing by WNKs

WNK kinases sense osmotic condition as well as chloride concentration to keep us hydrated.

"Water, water, everywhere, nor any drop to drink." This line from Coleridge evokes the horror of thirst on the ghost ship, as its crew can not drink salt water. Other species can, but ocean water is too strong for us, roughly four times as salty as our blood. Nevertheless, our bodies have exquisite mechanisms to manage salt concentrations, with each cell managing its own traffic, and the kidneys managing most electrolytes in the blood. It is a very difficult task that has led to clever evolutionary solutions like counter-current exchange across the nephron loops, and stark differences in those nephron cell membranes, over water or salt permeability, to maximize use of passive ion gradients. But at the heart of the system, one has to know what is going on- one has to monitor all of the electrolyte levels and overall osmotic stress.

One such monitoring thermostat for chemical balances turns out to be the WNK kinases- a family of four proteins in humans that control (by phosphorylating them) a secondary set of regulators, which in turn control many salt transporters, such as SLC12A2 and SLC12A4. These latter are passive, though regulated, co-transporters that allow chloride across the membrane when combined with a matching cation like sodium or potassium. The cations drive the process, because they are normally kept (pumped) to strong gradients across cell membranes, with high sodium outside, and high potassium inside. Thus when these co-transporters are turned on (or off), they use the cation gradients to control the chloride level in the cell, in either direction, depending on the particular transporter involved. Since the sodium and potassium levels are held at relatively static, pumped levels, it is the chloride level that helps control the overall osmotic pressure in a finely tuned way. 
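The "pumped gradients" doing the driving here can be made concrete with the Nernst equation. The ion concentrations below are generic textbook values for a mammalian cell, not measurements from the paper; the point is just the sign and rough size of each gradient.

```python
import math

R, F, T = 8.314, 96485.0, 310.0   # J/(mol K), C/mol, body temperature in K

def nernst_mV(z, outside_mM, inside_mM):
    """Equilibrium potential (mV) for an ion of charge z across the membrane."""
    return 1000.0 * (R * T / (z * F)) * math.log(outside_mM / inside_mM)

# Typical textbook concentrations (charge, outside mM, inside mM)
ions = {"Na+": (1, 145, 12), "K+": (1, 4, 140), "Cl-": (-1, 110, 10)}
for name, (z, out, inn) in ions.items():
    print(f"{name}: equilibrium potential ~ {nernst_mV(z, out, inn):+.0f} mV")
# Sodium sits far from equilibrium (it "wants" in), potassium wants out;
# co-transporters borrow those gradients to move chloride either way.
```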

A few of the ionic transactions done in the kidney.


The WNK kinases were discovered genetically, in families that showed hypertension and raised levels of chloride and potassium in the blood. These syndromes mirrored complementary syndromes caused by mutations in SLC12A2, the Na/Cl co-transporter, indicating that the WNK kinases inhibit SLC12A2. It turns out that the WNK kinases, which are named for an unusual catalytic site (with no lysine [K]), are sensors both for chloride, which inhibits them, and for osmotic pressure, which activates them. They are expressed in different locations and have slightly different activities, (and control many more transporters and processes than discussed here), but I will treat them interchangeably here. The logic of all this is that, if osmotic pressure is low, that means that internal salt levels are low, and chloride needs to be let into the cell, by activating the cation/chloride co-transporters. Likewise, if chloride levels inside the cell are high, the WNK kinase needs to be inhibited, reducing chloride influx.

A recent paper (and prior work from the same lab) discussed structures of the WNK regulators that explain some of this behavior. WNK kinases are dimers at rest, and in that state mutually inhibit their auto-phosphorylation. It is separation and auto-phosphorylation that turns them on, after which they can then phosphorylate their target proteins, such as the secondary kinases STK39 and OSR1. The authors had previously found a chloride binding site right at the active site of the enzyme that promotes dimerization. In the current paper, they reveal a couple of clusters of water molecules which similarly affect the dimerization, and thus activity, of the enzyme.

Location of the inhibitory chloride (green) binding site in WNK1. This is right in the heart of the protein, near the active kinase site and dimerization interface with the other WNK1 partner.

While X-ray crystal structures rarely show or care much about water molecules, (they are extremely small and hard to track), here those waters were hypothesized to be important, since WNK kinases are responsive to osmotic pressure. One way to test this is to add PEG400 to the reaction. This is a polymer (400 molecular weight) that is water-like and inert, but large in a way that crowds out water molecules from the solution. At 15% or 25% of the volume, PEG400 displaces a lot of water, lowers the water activity of a solution, and thus increases the osmotic pressure- that is, its tendency to draw water in from outside. Plants use osmotic pressure as turgor pressure, and our cells, not having cell walls, need to always be at an osmotic pressure similar to the outside, lest they swell up, or conversely shrink away. Anyhow, WNK kinases can be switched from an inactive to active state just by adding PEG400- a sure sign that they are sensors for osmotic pressure.
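To put a rough number on "displaces a lot of water", here is an idealized van't Hoff estimate for 15% PEG400. Real PEG solutions deviate substantially from ideal behavior and the density is assumed, so treat this only as an order-of-magnitude sketch.

```python
# Idealized (van't Hoff) osmotic pressure of 15% v/v PEG400.
# PEG solutions are far from ideal, so this is order-of-magnitude only.
R, T = 8.314, 298.0              # J/(mol K), room temperature in K
volume_fraction = 0.15
peg_density_g_per_mL = 1.13      # assumed density of PEG400
molar_mass = 400.0               # g/mol

grams_per_L = volume_fraction * peg_density_g_per_mL * 1000.0
molarity = grams_per_L / molar_mass        # mol/L
pressure_kPa = molarity * R * T            # Pi = cRT; c in mol/L gives kPa
print(f"~{molarity:.2f} M PEG400, ideal osmotic pressure ~ {pressure_kPa/101.3:.0f} atm")
```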


Water network (blue dots) within the WNK1 kinase protein. Most of the protein is colored teal, while the active site kinase area is red, and a tiny amount of the dimer partner is colored green. When this crystal is osmotically challenged, the water network collapses from 14 waters to 5, changing the structure and promoting dissociation of the dimer. In B is shown a sequence alignment over a wide evolutionary range, where the amino acids that coordinate the water network (yellow) are clearly very well conserved, thus quite important.

Above is shown a closeup of the WNK1 protein, showing in teal the main backbone, including the catalytic loop. In red is the activation loop of the kinase, and in green is a little bit from the other WNK1 protein in the dimer pair. The chloride, if bound, would be located right at top center, at K375. Shown in blue are a series of fourteen water molecules that make up one so-called water network. Another smaller one was found at the interface between the two WNK1 proteins. The key finding was that, if crystallized with PEG400, this water network collapsed to only five water molecules, thereby changing the structure of the protein significantly and accounting for the dissolution of the dimer.

Superposition of WNK1 with PEG400 (purple) and activated vs WNK1 without, in an inactive state (teal). Most of the blue waters would be gone in the purple state as well. This shows the significant structural transition, particularly in the helixes above the active site, which induce (in the purple state) dissociation of the dimer, auto-phosphorylation, and activation.

Thus there is a delicate network of water molecules tentatively held together within this protein that is highly sensitive to the ambient water activity (aka osmotic pressure). This dynamic network provides the mechanism by which the WNK proteins sense and transmit the signal that the cell requires a change in ionic flows. Generally the point is to restore homeostatic balance, but in the kidney these kinases are also used to control flows for the benefit of the organism as a whole, by regulating different transporters in different parts of the same cell- either on the blood side, or the urine side.


Saturday, June 14, 2025

Sensing a Tiny Crowd

DCP5 uses a curious phase transition to know when things are getting tight inside a cell.

Regulation is the name of the game in life. Once the very bare bones of metabolism and replication were established, the race was on to survive better. And that often means turning on a dime- knowing when to come, when to go, when to live it up, and when to hunker down. Plants can't come and go, so they have rather acute problems of how to deal with the hand (that is, the location) they have been dealt. A big aspect of which is getting water and dealing with lack of water. All life forms have ways to adapt to osmotic stress- that is, any imbalance of water that can be caused by too much salt outside, or drought, or membrane damage. Membranes are somewhat permeable to water, so accommodation to osmotic stress is all about regulating balances of salts and larger ions using active pumps. For example, plant cells usually have pretty high turgor pressure, (often up to 30 psi), to pump up their strong cell walls, which helps them stay upright.

Since cells are filled with large, impermeable molecules like proteins, nucleic acids, and metabolites, the default setting is inward water migration, and thus some turgor pressure. But if excess salts build up outside, or a drought develops, the situation changes. A smart plant cell needs to know what is going on, so it can pump salts inwards (or build up other osmolyte balances) to restore the overall water balance. While osmosensing in other kinds of cells like yeast is understood to some extent, what does the sensing in plants has been somewhat mysterious. (Lots of temperature sensors are known, however.) Yet a recent article laid out an interesting mechanism, based on protein aggregation.

Some cells have direct sensors of membrane tension. Others have complex signaling systems with GPCR proteins that sense osmotic stress. But this new plant mechanism is rather simple, if also trendy. DCP5 looks like an RNA-binding protein that participates in stress responses and RNA processing / storage. But in plants, it appears to have an additional power- that of rapid hyper-aggregation when crowded by loss of turgor pressure and cell volume. These aggregates are now understood to be a distinct phase of matter, a macromolecular condensate, somewhere between liquid and solid. And importantly, they segregate from their surroundings, like oil droplets do from water. The authors did not really discuss what got them interested in this, but this is a known stress-related protein, so once they labeled and visualized it, they must have been struck by its dynamic behavior.

"This condensation was highly dynamic; newly assembled condensates became apparent within 2 min of stress exposure, and the condensation extent is positively correlated with stress severity. ... In cells subjected to continuous hyperosmotic-isosmotic cycles, DCP5 repeatedly switched between condensed and dispersed states."

Titration of an external large molecule, (polyethylene glycol of weight 8000 Daltons, or PEG), which draws water out of a plant cell. The green molecule is DCP5, labeled with GFP, and shows its dramatic condensation with water loss.

Well, great. So the researchers have found a protein in plants that dramatically aggregates under osmotic stress, due to an interesting folding / bistable structure that it has, including a disordered region. Does that make it a water status sensor? And if so, how does it then regulate anything else? 

"In test tube, recombinant DCP5 formed droplets under various artificial crowding conditions generated by polymeric or proteinic crowders. DCP5 droplets emerged even at very low concentrations of PEG8000 (0.01 to 1%), indicating an unusual crowding sensitivity of DCP5."

It turns out that DCP5, aside from binding with its own kind, also drags other proteins into its aggregates, some of which in turn bind to key mRNAs. The ultimate effect is to shield key proteins such as transcription regulators from getting into the nucleus, and shield various mRNAs from translation. In this way, the aggregation system rapidly reprograms the cell to adapt to the new stress.
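A cartoon of the underlying sensing logic: when the cell loses water and volume, every macromolecule's concentration rises in lockstep, so a protein whose saturation concentration sits just above its resting concentration will condense almost as soon as turgor drops. The numbers below are invented purely to illustrate that principle, not measured values for DCP5.

```python
# Toy model of crowding-triggered condensation. All numbers are invented.
resting_conc = 1.0        # arbitrary units, DCP5-like protein at full turgor
saturation_conc = 1.2     # concentration above which droplets form (assumed)

for volume_loss in (0.0, 0.1, 0.2, 0.3):          # fraction of cell volume lost
    conc = resting_conc / (1.0 - volume_loss)     # same molecules, smaller volume
    state = "condensed" if conc > saturation_conc else "dispersed"
    print(f"{volume_loss:.0%} volume lost -> concentration {conc:.2f}, {state}")
```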

As one often says in biology and evolution ... whatever it takes! Whether the regulation is by phosphorylation, or proteolysis, or sequestration, or building an enormous brain, any and all roads to regulation are used somewhere to get to where we want to go, which is exquisite sensitivity and responsiveness to our environment, so that we can adapt to it ever faster- far faster than the evolutionary process does.


Saturday, January 18, 2025

Eking Out a Living on Ammonia

Some archaeal microorganisms have developed sophisticated nano-structures to capture their food: ammonia.

The earth's nitrogen cycle is a bit unheralded, but critical to life nonetheless. Gaseous nitrogen (N2) is all around us, but inert, given its extraordinary chemical stability. It can be broken down by lightning, but little else. It must have been very early in the history of life that the nascent chemical-biological life forms tapped out the geologically available forms of nitrogen, despite being dependent on nitrogen for countless critical aspects of organic chemistry, particularly of nucleic acids, proteins, and nucleotide cofactors. The race was then on to establish a way to capture it from the abundant, if tenaciously bound, dinitrogen of the air. It was thus very early bacteria that developed a way (heavily dependent, unsurprisingly, on catalytic metals like molybdenum and iron) to fix nitrogen, meaning breaking up the triple N≡N bond, and making ammonia, NH3 (or ammonium, NH4+). From there, the geochemical cycle of nitrogen is all down-hill, with organic nitrogen being oxidized to nitric oxide (NO), nitrite (NO2-), nitrate (NO3-), and finally denitrified back to N2. Microorganisms obtain energy from all of these steps, some living exclusively on either nitrite or nitrate, oxidizing them as we oxidize carbon with oxygen to make CO2.
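One way to see the shape of that cycle is to track the formal oxidation state of nitrogen at each step. These oxidation numbers are standard chemistry, laid out here as a small Python table for convenience.

```python
# Formal oxidation state of nitrogen along the cycle described above.
oxidation_states = {
    "ammonia / ammonium (NH3 / NH4+)": -3,
    "dinitrogen (N2)":                  0,
    "nitric oxide (NO)":               +2,
    "nitrite (NO2-)":                  +3,
    "nitrate (NO3-)":                  +5,
}
for species, state in oxidation_states.items():
    print(f"{species:<35} N oxidation state: {state:+d}")
# Fixation (N2 -> NH3) is the hard, energy-demanding reduction;
# the oxidations back up toward nitrate are what other microbes live on.
```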

Nitrosopumilus, as imaged by the authors, showing its corrugated exterior, a layer entirely composed of ammonia collecting elements (can be hexameric or pentameric). Insets show an individual hexagonal complex, in face-on and transverse views. Note also the amazing resolution of other molecules, such as the ribosomes floating about.

A recent paper looked at one of these denizens beneath our feet, an archaeal species that lives on ammonia, converting it to nitrite (NO2-). It is a dominant microbe in its field- in the oceans, in soils, and in sewage treatment plants. The irony is that after we spend prodigious amounts of fossil fuels fixing huge amounts of nitrogen for fertilizer- most of which is wasted, and which today exceeds the entire global budget of naturally fixed nitrogen- we are faced with excess and damaging amounts of nitrogen in our effluent, which is then processed in complex treatment plants by our friends the microbes, down the chain of oxidized states, back to gaseous N2.

Calculated structure of the ammonia-attracting pore. At right are various close-up views including the negatively charged amino acids (D, E) concentrated at the grooves of the structure, and the pores where ammonium can transit to the cell surface. 

The Nitrosopumilus genus is so successful because it has a remarkable way to capture ammonia from the environment, a way that is roughly two hundred times more efficient than that of its bacterial competitors. Its surface is covered by a curious array of hexagons, which turn out to be ammonia capture sites. In effect, its skin is a (relatively) enormous chemical antenna for ammonia, which is naturally at low concentration in sea water. These authors do a structural study, using the new methods of particle electron microscopy, to show that these hexagons have intensely negatively charged grooves and pores, to which positively charged ammonium ions are attracted. Within this outer shell, but still outside the cell membrane, enzymes at the cell surface transform the captured ammonium to other species such as hydroxylamine, which enforces the ammonium concentration gradient towards the cell surface, and which are then pumped inside.

Cartoon model of the ammonium attraction and transit mechanisms of this cell wall. 

It is a clever nano-material and micro-energetic system for concentrating a specific chemical- a method that might inspire human applications for other chemicals that we might need- chemicals whose isolation demands excessive energy, or whose geologic abundance may not last forever.


Sunday, September 15, 2024

Road Rage Among the Polymerases

DNA polymerase is faster than RNA polymerase. RNA polymerase also leaves detritus in its wake. What happens when they collide?

DNA is a country road- one lane, two directions. Yet in our cells it can be extremely busy, with transcription (RNA synthesis) happening all the time, and innumerable proteins hanging on as signposts, chemical modifications, and even RNA hybridized into sections, creating separated DNA structures called R-loops. When it is time for DNA replication, what happens when all these things collide? One might think that biology had worked all this out by now, but these collisions can be quite dangerous, sending the RNA polymerase careering into the other (new) DNA strand, causing the DNA polymerase to stall or miss sections, and causing DNA breaks, which set off loud cellular alarm bells and can lead to mutations.

Despite decades of work, this area of biology is still not very well understood, since the conditions are difficult to reproduce and study. So I can only give a few hints of what is going on, based on current work in the field. A couple of decades ago, a classic experiment showed that in bacteria, DNA polymerases can be stopped cold by a collision with an RNA polymerase going in the opposite direction. However, this stall is alleviated by a DNA helicase enzyme, which can pry apart the DNA strands and anything attached, and the DNA replication complex sails through, after a pause of a couple of seconds. The RNA polymerase, meanwhile, is not thrown off completely, but switches its template from the complementary strand it was using previously to the newly synthesized DNA strand just made by the passing DNA polymerase. This was an amazing result, since the elongating RNA polymerase is a rather tightly attached complex. But here, it jumps ship to the new DNA strand, even though the old DNA strand remains present, and will shortly be replicated by the lagging strand DNA polymerase complex.
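To see why these encounters are routine rather than rare, compare typical polymerase speeds. The rates and distances below are round textbook numbers for a bacterium like E. coli, assumed for illustration rather than taken from the experiments described here.

```python
# How quickly a bacterial replication fork overtakes a co-directional
# RNA polymerase. Rates are round textbook estimates (E. coli-like).
replisome_nt_per_s = 800      # assumed DNA replication speed
rnap_nt_per_s = 50            # assumed transcription elongation speed
head_start_nt = 10_000        # assumed gap when the fork enters the neighborhood

closing_speed = replisome_nt_per_s - rnap_nt_per_s
time_to_collide_s = head_start_nt / closing_speed
print(f"fork closes a {head_start_nt} nt gap in ~{time_to_collide_s:.0f} s")
# With forks moving ~15x faster than RNA polymerases, any long, busy gene
# being transcribed during replication is likely to be overtaken.
```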

General schematic of encounters between replication forks and RNA polymerases (pink, RNAP). Only co-directional, not head-on, collisions are shown here. Ribosomes (yellow) in bacteria operate directly on the nascent mRNA, and can helpfully nudge the RNA polymerase along. In this scheme, DNA damage happens after the nascent RNA is used as a primer by a new DNA polymerase (bottom), which will require special repair. 

The ability of the RNA polymerase to switch template strands, along with the nascent RNA it was making, suggests very intriguing flexibility in the system. Indeed, DNA polymerases that come up from behind the RNA polymerase (using the same strand as their template) have a much easier time of it, passing with hardly a pause, and only temporarily displacing the RNA polymerase. But things are different when the RNA polymerase has just found an error and has back-tracked to fix it. Then, the DNA polymerase complex is seriously impeded. It may even use the nascent RNA hanging off the polymerase and hybridized to the local DNA as a primer to continue synthesis, after it has bumped off the RNA polymerase that made it. This leads in turn to difficulties in repair and double strand breaks in that DNA, which is the worst kind of mutation. 

The presence of RNA in the mix, in the form of single strands of RNA hybridized to one of the DNA strands, (that is, R-loops), turns out to be a serious problem. These can arise either from nascent transcription, as above, or from hybridization of non-coding RNAs that are increasingly recognized as significant gene regulators. RNA forms a slightly stronger hybrid with DNA than DNA itself does, in fact. Such R-loops (displacing one DNA strand) are quite common over active genomes, and apparently present a block to replication complexes. One would think that such fork complexes would be supplied with the kinds of helicases that could easily plow through such structures, but that is not quite the case. R-loops cause replication complex stalling, and can invoke DNA damage responses, for reasons that are not entirely clear yet. 

A recent paper that piqued my interest in all this studied an ATPase motor protein that occurs at stalled replication forks and helps them restart, presumably by acting as a DNA or RNA pump of some kind, and forcing the replication complex through obstructions. It is named WRNIP1, for WRN-interacting protein 1, since it interacts with the Werner syndrome protein (WRN), another interesting protein at the replication fork. WRN is another ATPase that is a helicase and also a backwards 3' -> 5' exonuclease that cleans up DNA ends around DNA repair sites, helping to remove mismatched and damaged DNA so the repair can be as accurate as possible. As one can guess, mutations in this gene cause Werner syndrome, a striking progeria syndrome of early aging and susceptibility to cancer.

While the details of R-loop toxicity and repair are still being worked out, it is fascinating that such conflicts still exist after several billion years to figure them out. It is apparent that the design of DNA, while exceedingly elegant, results in intrinsic conflicts between expression and replication that are resolved amicably most of the time. But when either process gets overly congested, or encounters unexpected roadblocks, then tempers can flare, and an enormous apparatus of DNA damage signaling and repair is called in, sirens blaring, to do what it can to cut through the mess.



Saturday, June 8, 2024

A Membrane Transistor

Voltage sensitive domains can make switches out of ion channels, antiporters, and other enzymes.

The heart of modern electronics is the transistor. It is a valve or switch, using a small electrical signal to control the flow of other electrical signals. We have learned that the simple logic this mechanism enables can be elaborated into hugely complex, even putatively intelligent, computers, databases, applications, and other paraphernalia of modernity. The same mechanism has a very long history in biology, quite apart from its use in neurons and brains, since membranes are typically charged, well-poised to be sensitive to changes in charge for all sorts of signaling.

The voltage sensitive domain (VSD) in proteins is an ancient (going back to archaea) bundle of four alpha helices that were first found attached to voltage-sensitive ion channels, including sodium, potassium, and calcium channels. But later it became fascinatingly apparent that it can control other protein activities as well. A recent paper discussed the mechanism and structure of a sodium/hydrogen antiporter with a role in sperm navigation, which uses a VSD to control its signaling. But there are also voltage-sensitive phosphatases, and other kinds of effectors hooked up to VSD domains. 

Schematic of a basic VSD, with helix 4 in pink, moving against the other three helices colored teal. Imagine a membrane going horizontally over these embedded proteins. When voltage across the local membrane changes (hyperpolarized or depolarized), helix 4 can plunge by one helical repeat unit in either direction, up or down.

One of the helices (#4) in the VSD bundle has positive charges, while the others have specifically positioned negative charges. This creates a structure where changes in the ambient voltage across the membrane it sits in can cause helix #4 to plunge down by one or two steps (that is, turns of the alpha helix) versus its partners. This movement can then be propagated out along extensions of helix #4 to other domains of the protein in order to switch their activities on or off.
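Those "steps" can be translated into distance using standard alpha-helix geometry (about 3.6 residues per turn and roughly 1.5 angstroms of rise per residue); a quick check, which also matches the ~10 angstrom movement estimated later for SLC9C1:

```python
# Standard alpha-helix geometry: ~3.6 residues per turn, ~1.5 A rise per residue.
residues_per_turn = 3.6
rise_per_residue_A = 1.5

rise_per_turn_A = residues_per_turn * rise_per_residue_A
print(f"one helical turn  ~ {rise_per_turn_A:.1f} A of vertical travel")
print(f"two helical turns ~ {2 * rise_per_turn_A:.1f} A")
# A two-turn plunge of helix 4 is therefore ~1 nm, consistent with the
# ~10 A movement estimated below for the SLC9C1 voltage sensor.
```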

The helices of numerous proteins that have a VSD domain (in red) are drawn out, showing the diversity of how this domain is used.

While the studied protein, SLC9C1, is essential in mammalian sperm for motility, the paper studied its workings in sea urchin sperm, a common model system. The logic (as illustrated below) is that (female) chemoattractants bind to receptors on the sperm surface. These receptors generate cyclic GMP, which turns on potassium channels that increase the voltage across the membrane. This broadcasts the signal locally, and is received by the SLC9C1 transporter, which does two things. It activates a linked soluble adenylate cyclase enzyme, making the further signaling molecule cAMP. And it also activates the transporter itself, pumping protons out (in 1:1 exchange for sodium ions in) and causing cytoplasmic alkalinization. The cAMP activates sodium ion channels to cancel the high membrane voltage (a fast process), and the alkalinization activates calcium channels that direct the sperm's directional swimming- the ultimate response. The latter is relatively slow, so the whole cascade has timing characteristics that allow the signal to be dampened, but the response to persist a bit longer as the sperm moves through a variable and stochastic gradient.

A schematic of the logic of this pathway, and of the SLC9C1 anti-porter. At top, the transport mechanism is crudely illustrated as a rocking motion that ensures that only one H+ is exchanged for one Na+ for each cycle of transport. The transport is driven thermodynamically by the higher concentration of Na+ outside.


But these researchers weren't interested in what the sperm were thinking, but rather how this widely used protein domain became hitched to this unusual protein and how it works there, turning on a sodium/hydrogen antiporter rather than the usual ion channel. They estimate that the #4 helix of the VSD moves by 10 angstroms, or 1 nm, upon voltage activation, which is a substantial movement, roughly equivalent to the width of these helices. In their final model, this movement significantly reshapes the intracellular domain of the transporter, which in turn releases its hold on the transporter's throat, allowing it to move cyclically as it needs to exchange hydrogen ions for sodium ions. This protein is known to bind and activate an adenylyl cyclase, which produces cAMP, which is one key next actor in the signaling cascade. This activation may be physically direct, or it may be through the local change in pH- that part is as yet unknown. cAMP also, incidentally, binds to and turns up the activity of this transporter, providing a bit of positive feedback.

Model of the SLC9C1 protein, with the VSD in teal and a predicted activation mechanism illustrated (only the third panel is activated/open). Upon voltage activation, the very long helix 4 dips down and changes orientation, dramatically opening the intracellular portion of the transporter (purple and orange). This in turn lets go of the bottom of the actual transporter portion of the protein (gray), allowing the transport cycle, and thus alkalinization of the cytoplasm, to go forward. At the bottom sides, in brown, is the cAMP binding domain, which lowers the voltage threshold for activation.

There are a variety of interesting lessons from this work. One is that useful protein domains like the VSD are often duplicated and propagated to unexpected places to regulate new processes. Another is that the new cryo-electron microscopy methods have made structural biology like this far easier and more common than it used to be, especially for membrane proteins, which are exceedingly difficult to crystallize. A third is that signaling systems in biology are shockingly complex. One would think that getting sperm cells to where they are going would take a bare minimum of complexity, yet here we are studying a cascade of five or more steps involving two cyclic nucleotides, four ions, intricate proteins to manage them all, and who knows what else thrown into the mix. It is difficult to account for all this, other than to say that when you have a few billion years to tinker with things, and eons of desperate races to the egg for selective pressure, they tend to get more ornate. And a fourth is that it is regulatory switches all the way down.


Saturday, May 25, 2024

Nascent Neurons in Early Animals

Some of the most primitive animals have no nerves or neurons... how do they know what is going on?

We often think of our brains as computers, but while human-made computers are (so far) strictly electrical, our brains have a significantly different basis. The electrical component is comparatively slow, and confined to conduction along the membranes of single cells. Each of these neurons communicates with others using chemicals, mostly at specialized synapses, but also via other small compounds, neuropeptides, and hormones. That is why drugs have so many interesting effects, from anesthesia to anti-depression and hallucination. These properties suggest that the brain and its neurons began, evolutionarily speaking, as chemically excitable cells, before they became somewhat reluctant electrical conductors.

Thankfully, a few examples of early stages of animal evolution still exist. The main branches of the early divergence of animals are sponges (porifera), jellies and corals (ctenophora, cnidaria), bilaterians (us), and an extremely small family of placozoa. Neural-type functions appear to have evolved independently in each of these lineages, from origins that are clearest in what appears to be the most primitive of them, the placozoa. These are pancake-like organisms of three cell layers, hardly more complex than a single-celled paramecium. They have about six cell types in all, and glide around using cilia, engulfing edible detritus. They have no neurons, let alone synaptic connections between them, yet they have excitable cells that secrete what we would call neuropeptides, which tell nearby cells what to do. Substances like enkephalins, vasopressin, neurotensin, and the famous glucagon-like peptide are part of the menagerie of neuropeptides at work in our own brains and bodies.

A placozoan, about a millimeter wide. They are sort of a super-amoeba, attaching to and gliding over surfaces underwater and eating detritus. They are heavily ciliated, with only a few cell types divided in top, middle, and bottom cell layers. The proto-neural peptidergic cells make up ~13% of cells in this body.


The fact is that excitable cells long predate neurons. Even bacteria can sense things from outside, orient, and respond to them. As eukaryotes, placozoans inherited a complex repertoire of sense and response systems, such as G-protein coupled receptors (GPCRs) that link sensation of external chemicals with cascades of internal signaling. GPCRs are the dominant signaling platforms, along with activatable ion channels, in our nervous systems. So a natural hypothesis for the origin of nervous systems is that they began with chemical sensing and inter-cell chemical signaling systems that later gained electrical characteristics to speed things up, especially as more cells were added, body size increased, and local signaling could not keep up. Jellies, for instance, have neural nets that are quite unlike, and evolutionarily distinct from, the centralized nervous systems of bilaterian animals, yet use a similar molecular palette of signaling molecules, receptors, and excitation pathways.

Placozoans, which date to maybe 800 million years ago, don't even have neurons, let alone neural nets or nervous systems. A recent paper labored to catalog what they do have, however, finding a number of pre-neural characteristics. For example, the peptidergic cell type, which secretes peptides that signal to neighboring cells, expresses 25 or more GPCRs, receptors for those same peptides and other environmental chemicals. The authors state that these GPCRs are not detectably related to those of other animals, so placozoans underwent their own radiation, evolving/diversifying a primordial receptor into the hundreds that exist in their genomes today. The researchers even go so far as to employ the AI program AlphaFold to model which GPCRs bind to which endogenously produced peptides, in an attempt to figure out the circuitry these organisms employ.
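The bookkeeping behind that kind of screen is simple, even if the structure prediction itself is not. The sketch below only illustrates the all-against-all matching logic, with hypothetical receptor and peptide names and made-up scores standing in for whatever confidence metric the structural models would actually provide.

```python
# All-against-all matching sketch. Names and scores are hypothetical placeholders;
# in practice the scores would come from structure prediction of each
# receptor-peptide complex.
predicted_scores = {
    ("GPCR_A", "pep1"): 0.82, ("GPCR_A", "pep2"): 0.31, ("GPCR_A", "pep3"): 0.44,
    ("GPCR_B", "pep1"): 0.12, ("GPCR_B", "pep2"): 0.77, ("GPCR_B", "pep3"): 0.58,
    ("GPCR_C", "pep1"): 0.25, ("GPCR_C", "pep2"): 0.39, ("GPCR_C", "pep3"): 0.91,
}

receptors = sorted({r for r, _ in predicted_scores})
peptides = sorted({p for _, p in predicted_scores})

# for each receptor, keep the peptide with the highest predicted score
for receptor in receptors:
    best = max(peptides, key=lambda pep: predicted_scores[(receptor, pep)])
    print(f"{receptor}: best predicted ligand = {best} "
          f"(score {predicted_scores[(receptor, best)]:.2f})")
```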

This peptidergic cell type also expresses other neuron-like proteins: neuropeptide processing enzymes, the transcription regulators Sox, Pax, Jun, and Fos, a neural-specific RNA polyadenylation enzyme, a suite of calcium-sensitive channels and signaling components, and many components of the presynaptic scaffold, which organizes the secretion of neuropeptides and other transmitters in neurons, and in placozoa presumably organizes the secretion of their quasi-neuropeptides. So of the six cell types, the peptidergic cell appears to be specialized for signaling: it is present in low abundance and expresses a suite of proteins that in other lineages became far more elaborated into the nervous system. Peptidergic cells do not make synapses or extended cell processes, for example. What they do is offer this millimeter-sized organism a primitive signaling and response capacity that, in response to environmental cues, prompts it to alter its shape and movement by distributing neuropeptides to nearby effector cells that do the gliding and eating that the peptidergic cells can't do.

A schematic of neural-like proteins expressed in placozoa, characteristic of more advanced presynaptic secretory neural systems. These involve the secretion of neuropeptides (bottom left and middle), the expression of key ion channels used for cell activation (Ca++ channels), and the expression of cell-cell adhesion and signaling molecules (top right).

Why peptides? The workhorses of our brain synapses are simpler chemicals like serotonin, glutamate, and norepinephrine. Yet the chemical palette of such simple compounds is limited, and each one requires its own enzymatic machinery for synthesis. Neuropeptides, in contrast, are typically generated by cleavage of larger proteins encoded in the genome. Thus the same mechanism (translation and cleavage) can generate a virtually infinite variety of short and medium-sized peptide sequences, each of which can have its own meaning and have a GPCR or other receptor tailored to detecting it. The scope for experimentation is much greater, given normal mutation and duplication events through evolutionary time, and the synthetic pipeline is much easier to manage. Our nervous systems use a wide variety of neuropeptides, as noted above, and our immune system uses an even larger palette of cytokines and chemokines, upwards of a hundred, each of which has a particular regulatory meaning.
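The combinatorial advantage is easy to put in numbers. The snippet below just counts how many distinct peptide sequences are possible at a few lengths, given the 20 standard amino acids; even short peptides dwarf the handful of classical small-molecule transmitters.

```python
# How many distinct peptides are possible, given 20 standard amino acids?
for length in (5, 10, 20, 30):
    print(f"length {length:2d}: 20**{length} = {20 ** length:.2e} possible sequences")
```

A five-residue peptide already has over three million possible sequences, so evolution has an essentially unlimited space of potential signals to sample from, all produced by the same translation-and-cleavage machinery.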


An evolutionary scheme describing the neural and proto-neural systems observed among primitive animals.


The placozoan relic lineages show that nervous systems arose in gradual fashion from already-complex systems of cell-cell signaling that relied on chemical rather than electrical signaling. But very quickly, with the advent of only slightly larger and more complex body plans, like those of hydra or jellies, the need for speed forced an additional mode of signaling- the propagation of electrical activity within cells (the proto-neurons), and their physical extension to capitalize on that new mode of rapid conduction. But never did nervous systems leave behind their chemical roots, as the neurons in our brains still laboriously conduct signals from one neuron to the next via the chemical synapse, secreting a packet of chemicals from one side, and receiving that signal across the gap on the other side.


  • The mechanics of bombing a population back into the stone age.
  • The Saudis and 9/11.
  • Love above all.
  • The lower courts are starting to revolt.
  • Brain worms, Fox news, and delusion.
  • Notes on the origins of MMT, as a (somewhat tedious) film about it comes out.