
Saturday, February 14, 2026

We Have Rocks in Our Heads ... And Everywhere Else, Too

On the evolution and role of iron-sulfur complexes.

Some of the more persuasive ideas about the origin of life have it beginning in the rocks of hydrothermal vents. Here was a place with plenty of energy, interesting chemistry, and proto-cellular structures available to host it. By this theory, some kind of metabolism came first, followed by other critical elements like membranes and RNA coding/catalysis. The early earth lacked oxygen, so iron was easily available, not prone to oxidation as it is now. Thus life at this early time used many minerals in its metabolic processes, as they were available for free. Now, on today's earth, they are not so free, and we have complex processes to acquire and manage them. One of the major minerals we use is the iron-sulfur complex (similar to pyrite), which comes in a variety of forms and is used by innumerable enzymes in our cells.

The more common iron-sulfur complexes, with sulfur in yellow, iron in orange.


The principal virtue of the iron-sulfur complex is its redox flexibility. With the relatively electronically "soft" sulfur, iron forms semi-covalent-style bonds, while being able to absorb or give up an electron safely, without destroying nearby chemicals as iron alone typically does. Depending on the structure and liganding, the reduction potential of such complexes can be tuned all over the map, from -600 to +400 mV. Many other cofactors and metals are used in redox reactions, but iron-sulfur is the most common by far.

Reduction potentials (ability to take up an electron, given an electrical push) of various iron-sulfur complexes.
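
To get a feel for what that tuning range means energetically, the free energy of a one-electron transfer is ΔG = -nFΔE. Here is a minimal back-of-envelope sketch using only the Faraday constant and the -600 to +400 mV range quoted above, not values from any particular enzyme:

```python
# Free energy of electron transfer between two redox centers: dG = -n*F*dE.
# Illustrative only: the potentials below simply span the -600 to +400 mV
# range mentioned above, not any specific iron-sulfur cofactor.

F = 96485.0  # Faraday constant, C/mol

def delta_g_kj(donor_mv: float, acceptor_mv: float, n: int = 1) -> float:
    """dG (kJ/mol) for n electrons moving from the donor to the acceptor."""
    delta_e_volts = (acceptor_mv - donor_mv) / 1000.0
    return -n * F * delta_e_volts / 1000.0

# An electron dropping from a -600 mV cluster to a +400 mV one releases ~96 kJ/mol:
print(delta_g_kj(-600, +400))   # about -96.5
# A gentler 200 mV step releases about 19 kJ/mol:
print(delta_g_kj(-300, -100))   # about -19.3
```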

Researchers had assumed that, given the abundance of these elements, iron-sulfur complexes were essentially freely acquired until the great oxidation event, about two to three billion years ago, when free oxygen started rising and free iron (and sulfur) disappeared, salted away into vast geological deposits. Life faced a dilemma- how to reliably construct minerals that were now getting scarce. The most common solution was a three-enzyme system in mitochondria that 1) strips a sulfur from the amino acid cysteine, a convenient source inside cells, 2) scaffolds the construction of the iron-sulfur complex, with iron coming from carrier proteins such as frataxin, and 3) employs several carrier proteins to transfer the resulting complexes to enzymes that need them.

But a recent paper described work that alters this story, finding archaeal microbes that live anaerobically and make do with only the second of these enzymes. A deep phylogenetic analysis shows that the (#2) assembly/scaffold enzymes are the core of this process, and have existed since the last common ancestor of all life. So they are incredibly ancient, and it turns out that iron-sulfur complexes cannot just be gobbled up from the environment, at least not by any reasonably advanced life form. Rather, these complexes need to be built and managed under the care of an enzyme.

The presented structures of the dimer of SmsB (orange) and SmsC (blue), which dimerizes again to make up a full iron-sulfur scaffolding and production enzyme in the archaean Methanocaldococcus jannaschii. Note the reaction scheme where ATP comes in and evicts the iron-sulfur cluster. On the right is shown how ATP fits into the structure, and how it nudges the iron-sulfur binding area (blue vs green tracing).

A recent paper from this group extended their analysis to the structure of the assembly/scaffold enzyme. They find that, though it is a symmetrical dimer of a complex of two proteins, it only deals with one iron-sulfur complex at a time. It also binds and cleaves ATP. But ATP seems to have more of an inhibitory role than one that stimulates assembly directly. The authors suggest that high levels of ATP signal that less iron-sulfur complex is needed to sustain the core electron transport chains of metabolism, making this ATP inhibition an allosteric feedback control mechanism in these archaeal cells. I might add, however, that ATP binding may well also have a role in extricating the assembled iron-sulfur cluster from the enzyme, as that complex is quite well coordinated, and could use a push to pop out into the waiting arms of target enzymes.

"These ancestral systems were kept in archaea whereas they went through stepwise complexification in bacteria to incorporate additional functions for higher Fe-S cluster synthesis efficiency leading to SUF, ISC and NIF." - That is, the three-component systems present in eukaryotes, which come in three types.

In the authors' structure, the iron-sulfur complex is liganded by three cysteines within the SmsC protein. But note how, facing the viewer, the complex is quite exposed, ready to be taken up by some other enzyme that has a nice empty spot for it.

Additionally, these archaea, with this simple one-step iron cluster formation pathway, get their sulfur not from cysteine, but from ambient elemental sulfur. Which is possible, as they live only in anaerobic environments, such as deep sea hydrothermal vents. So they represent a primitive condition for the whole system as may have occurred in the last common ancestor of all life. This ancestor is located at the split between bacteria and archaea, so was a fully fledged and advanced cell, far beyond the earlier glimmers of abiogenesis, the iron sulfur world, and the RNA world.


Saturday, January 31, 2026

How do Anesthetics Work?

Ask a simple question, and get an answer that gets weirder the deeper you dig.

Anesthesia is wonderful. Quite simply, it makes bearable, even unnoticeable, what would be impossible or excruciating. It is also one of the most mysterious phenomena short of consciousness itself. All organisms can be anesthetized, from bacteria on up. All sorts of chemicals can be used, from xenon to simple ethers, to complex fluoride-substituted forever chemicals, all with similar effects. Yet there are also complex sub-branches of anesthesia, like pain relief, muscular immobilization, shutdown of consciousness, and amnesia for what happened, that chemicals affect differentially. It resembles sleep, and shares some circuitry with that process, but is of course induced quite differently.

The first red herring was the Meyer–Overton rule, established back in 1899, which showed that anesthetic potency correlates closely with the lipophilicity of the chemical, from nitrogen (not very good) to xenon (pretty good) to chloroform (very good). All the forever (heavily fluorinated) chemicals used as modern anesthetics, like isoflurane and sevoflurane, have extremely high lipophilicity. This suggested that the mechanism of action was simply mixing into membranes somehow, altering their structure, and thus neuronal action... something along those lines.
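
The correlation itself is easy to illustrate numerically: potency (the minimum alveolar concentration, or MAC) times the oil:gas partition coefficient comes out roughly constant across wildly different agents. The figures below are approximate textbook values for several inhaled anesthetics, included only to show the shape of the relationship, not data from any particular study:

```python
# Meyer-Overton in one loop: potency (MAC) falls as lipophilicity (oil:gas
# partition coefficient) rises, so their product stays roughly constant.
# All numbers are approximate textbook values, for illustration only.

agents = {
    # name: (MAC in % of 1 atm, oil:gas partition coefficient)
    "nitrous oxide": (104.0, 1.4),
    "desflurane":    (6.0,  19.0),
    "sevoflurane":   (2.0,  47.0),
    "isoflurane":    (1.2,  91.0),
    "halothane":     (0.75, 224.0),
}

for name, (mac, oil_gas) in agents.items():
    # despite a >100-fold spread in potency, the product stays within ~2-fold
    print(f"{name:14s} MAC x oil:gas = {mac * oil_gas:6.0f}")
```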

Structures of several general anesthetics. 8 is isoflurane.

But when researchers looked more closely, there were some chemical differences that did not track with this hypothesis. Enantiomers of some anesthetics behaved differently, indicating that these chemicals do bind to something (that is, a protein) specifically. Variant genes started cropping up that conferred resistance to anesthesia or were found to bind particular anesthetics at working concentrations. And more complex, injectable anesthetics like fentanyl and propofol have slightly more defined targets and modes of action. So while anesthetics clearly partition to membranes, and the binding sites are often at protein-membrane interfaces, the modern theory of how they work is that they bind to ion channels and neurotransmitter receptors and affect their functions. Proteins generally have hydrophobic interiors, so the lipophilicity of these chemicals may track with binding / disrupting protein interiors as much as membrane interiors. And other proteins such as microtubules have been drawn into the discussion as well (leading indirectly to some very unfortunate theories about consciousness).

But which key protein do they bind? Here again, mysteries abound, as they do not bind just one, but many. And not just that, they turn some of their targets on, others off. One target is the GABA receptor, the receptor for the major inhibitory neurotransmitter of the central nervous system. These are turned on. At high concentrations, anesthetics can even turn these receptors on without any GABA neurotransmitter present. Another is the NMDA receptor, the target of ketamine and related dissociative drugs. These receptors are turned off. So, for some reason, still somewhat obscure, the net result of many specific bindings to an array of channels by an array of chemicals results in ... anesthesia.

A recent paper raised my interest in this area, as its authors demonstrated yet another target for inhaled anesthetics like isoflurane, and dove with exquisite detail into its mechanism. They were working on the ryanodine receptor, which isn't even a cell surface protein, but sits in the endoplasmic reticulum (or sarcoplasmic reticulum in muscles) and conducts calcium out of these organelles. This receptor is huge- the largest known- running to over five thousand amino acids (human RYR1), due to numerous built-in regulatory structures. For example, it is sensitive to caffeine, but at a different location than where it is sensitive to isoflurane. Calcium is a very important signal within cells, key to muscle activity, and also to neuronal activation. The endoplasmic reticulum serves as a storehouse of calcium, from which signals can be sparked as needed by outside signals, including a spike in calcium itself (thus creating a positive feedback loop). These receptors (a family of three in humans) are named for an obscure chemical (indeed a poison) that activates these channels, and all three are expressed in the brain.

The authors were led to this receptor because mutations were known to cause malignant hyperthermia, a side effect of a few of the common anesthesia drugs where body temperature rises uncontrollably, driven from muscle tissue, where ryanodine receptors in the sarcoplasmic reticulum are particularly common and heavily used to regulate muscle activity and metabolism. That suggested that anesthetics such as isoflurane might bind to this receptor directly, turning it on. That was indeed the case. They started with cultured cells expressing each receptor family member in turn, and tested each receptor's response to isoflurane. Internal (cytoplasmic) calcium rose especially with the family member RYR1. That led to various control experiments and a hunt (by mutating and doctoring the RYR1 protein) for the particular region being bound by the anesthetic. After a lengthy search, they found residue 4000 was a critical one, as a mutation from methionine to phenylalanine reduced the isoflurane response about ten-fold. This is part of a binding pocket as shown below.

Structure of isoflurane, (B), bound to the RYR1 protein pocket. This is a pocket that happens to also bind another activator of this channel, 4-CMC. A layout of the whole active binding pocket is given on the right. At bottom are calcium channel responses of the wild-type and point mutant forms of RYR1, showing the dramatic effect these single site mutations have on isoflurane response.

Fine, but what about anesthesia? The next step was to test this mutation in whole mice, where, lo and behold, isoflurane anesthesia of otherwise normal mice was made slightly more difficult by this mutant form of RYR1. Additionally, these mice had no other observable problems- not in behavior, not in sleep. That is remarkable as a finding about anesthesia, but the effect was quite small- a shift of about 10% in the needed concentration of isoflurane. They go on to mention that this is similar in scale to knockouts or mutations in other known targets of anesthetic drugs (a sketch after the list below illustrates what shifts of this size look like):

  • 10% shift in the curve from the M4000F mutation of RYR1
  • 14% shift in the isoflurane curve from a mutation in GABA receptor, GABAAR.
  • 5% shift in the isoflurane curve (though a 20% or more shift for halothane) for mutations in KCNK9, a potassium channel.
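
Here is a minimal sketch (not the authors' analysis) of what a shift of that size does to a generic sigmoidal dose-response curve; the EC50 and Hill slope are arbitrary illustrative values:

```python
# What does a ~10% shift in effective concentration look like on a generic
# dose-response curve? Hill equation with arbitrary illustrative parameters,
# not the paper's fitted values.

def fraction_anesthetized(conc: float, ec50: float, hill_slope: float = 4.0) -> float:
    return conc**hill_slope / (conc**hill_slope + ec50**hill_slope)

ec50_wildtype = 1.00   # arbitrary units (think % isoflurane)
ec50_mutant   = 1.10   # the ~10% right-shift reported for RYR1 M4000F

for c in (0.8, 0.9, 1.0, 1.1, 1.2):
    wt = fraction_anesthetized(c, ec50_wildtype)
    mut = fraction_anesthetized(c, ec50_mutant)
    print(f"concentration {c:.1f}: wild-type {wt:.2f}, mutant {mut:.2f}")

# Near the middle of the curve, even a 10% shift is readily measurable
# (about 0.50 vs 0.41 anesthetized at concentration 1.0).
```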

What this is telling us is that there are many targets for anesthetic drugs. They are spread over many neurotransmitter and physiological systems. They each contribute modestly (and variably, depending on the drug) to the net effect of any one drug. The various affected channels and membrane receptors curiously combine to achieve anesthesia across all animals and even microorganisms, which naturally also rely on channels and transmembrane receptors for their various sensing and motion activation needs. We are left with a blended hypothesis where yes, there are specific protein targets for each anesthetic that mediate their action. On the other hand, these targets are far from unique, spread across many proteins, yet are also highly conserved, looking almost like they are implicit in the nature of transmembrane proteins in general. 

One gets the distinct impression that there should be endogenous equivalents, as there are for opioids and cannabinoids- some internal molecule that provides sedation when needed, such as for deep illness or end-of-life crisis. That molecule has not yet been found, but the natural world abounds in sedatives, (alcohol is certainly one), so the logic of anesthesia becomes one of biological and evolutionary logic, as much as one of chemical mechanism.


Sunday, December 28, 2025

Lipid Pumps and Fatty Shields- Asymmetry in the Plasma Membrane

The two leaflets of the eukaryotic plasma membrane are asymmetric.

Membranes are one of those incredibly elegant things in biology. Simple chemical forces are harnessed to create a stable envelope for the cell, with no need to encode the structure in complicated ways. Rather, it self-assembles, using the oil-vs-water forces of surface tension to form a huge structure with virtually no instruction. Eukaryotes decided to take membranes to the max, growing huge cells with an army of internal membrane-bound organelles, individually managed- each with its own life cycle and purposes.

Yet, there are complexities. How do proteins get into this membrane? How do they orient themselves? Does it need to be buttressed against rough physical insult, with some kind of outer wall? How do nutrients get across, while the internal chemistry is maintained as different from the outside? How does it choose which other cells to interact with, preventing fusion with some, but pursuing fusion with others? For all the simplicity of the basic structure, the early history of life had to come up with a lot of solutions to tough problems, before membrane management became such a snap that eukaryotes became possible.

The authors present their model (in atomic simulation) of a plasma membrane. Constituents are cholesterol, sphingomyelin (SM), phosphatidyl choline (PC), phosphatidyl serine (PS), phosphatidyl ethanolamine (PE), and phosphatidyl ethanolamine plasmalogen. Note how in this portrayal, there is far more cholesterol in the outer leaflet (top), facing the outside world, than there is in the inner leaflet (bottom).

The major constituents of the lipid bilayer are cholesterol, phospholipids, and sphingomyelin. The latter two have charged head groups and long lipid (fatty) tails. The head groups keep that side of the molecule (and the bilayer) facing water. The tails hate water and like to arrange themselves in the facing sheets that make up the inner part of the bilayer. Cholesterol, on the other hand, has only a mildly polar hydroxyl group at one end, and a very hydrophobic, stiff, and flat multi-ring body, which keeps strictly with the lipid tails. The lack of a charged head group means that cholesterol can easily flip between the bilayer leaflets- something that the other molecules with charged headgroups find very difficult. It has long been known that our genomes code for flippases and floppases: ATP-driven enzymes that can flip the charged phospholipids and sphingomyelin from one leaflet to the other. Why these enzymes exist, however, has been a conundrum.

Pumps that drive phospholipids against their natural equilibrium distribution, into one or the other leaflet.

It is not immediately apparent why it would be helpful to give up the natural symmetry and fluidity of the bilayer, and invest a lot of energy in keeping the compositions of each leaflet different. But that is the way it is. The outer leaflet of the plasma membrane tends to have more sphingomyelin and cholesterol, and the inner leaflet has more phospholipids. Additionally, those phospholipids tend to have unsaturated tails- that is, they have double bonds that break up the straight fatty tails that are typical in sphingomyelin. Membrane asymmetry has a variety of biological effects, especially when it is missing. Cells that lose their asymmetry are marked for cell suicide and for attack by the immune system, and they also trigger coagulation in the blood. It is a signal that they have broken open or died. But these are doubtless later (maybe convenient) organismal consequences of universal membrane asymmetry. They do not explain its origin.

A recent paper delved into the question of how and why this asymmetry happens, particularly in regard to cholesterol. Whether cholesterol even is asymmetrically distributed has been controversial in the field, since measuring its location is very difficult. Yet these authors carefully show, by direct measurement and also by computer simulation, that cholesterol, which makes up roughly forty percent of the membrane (its most significant single constituent, actually), is highly asymmetric in human erythrocyte membranes- about threefold more abundant in the outer leaflet than in the cytoplasmic leaflet.

Cholesterol migrates to the more saturated leaflet. B shows a simulation where a poly-unsaturated phospholipid (DAPC) with 4 double bonds (blue) is contrasted with a saturated phospholipid (DPPC) with straight lipid tails (orange). In this simulation, cholesterol naturally migrates to the DPPC side as more DAPC is loaded, relieving the tension (and extra space) on the inner leaflet. Panel D shows that in real cells, scrambling the leaflet composition leads to greater cholesterol migration to the inner leaflet. This is a complex experiment, where the fluorescent signal (on the right-side graph) comes from a dye in an introduced cholesterol analog, which is FRET-quenched by a second dye that the experimenters introduced, confined to the outer leaflet. In the natural case (purple), the signal is more quenched, since more cholesterol is in the outer leaflet, while after phospholipid scrambling, less quenching of the cholesterol signal is seen. Scrambling is verified (left side) by fluorescently marking the erythrocytes with Annexin 5, which binds phosphatidylserine, a lipid generally restricted to the inner leaflet.

But no cholesterol flippase is known. Indeed, such a thing would be futile, since cholesterol equilibrates between the leaflets so rapidly. (The rate is estimated at milliseconds, in very rough terms.) So what is going on? These authors argue via experiment and chemical simulation that it is the pumped phospholipids that drive the other asymmetries. It is the straight lipid tails of sphingomyelin that attract the cholesterol, as a much more congenial environment than the broken/bent tails of the other phospholipids that are concentrated in the cytoplasmic leaflet. In turn, the cholesterol also facilitates the extreme phospholipid asymmetry. The authors show that without the extra cholesterol in the outer leaflet, bilayers of that extreme phospholipid composition break down into lipid globs.

When treated (time course) with a chemical that scrambles the plasma membrane leaflet lipid compositions, a test protein (top series) that normally (0 minutes) attaches to the inner leaflet floats off and distributes all over the cell. The bottom series shows binding of a protein (from outside these cells) that only binds phosphatidylserine, normally confined to the inner leaflet, showing that scrambling is taking place.

This sets up the major compositional asymmetry between the leaflets that creates marked differences in their properties. For example, the outer leaflet, due to the straight sphingomyelin tails and the cholesterol, is much stiffer, and packed much tighter, than the cytoplasmic leaflet. It forms a kind of shield against the outside world, which goes some way to explain the whole phenomenon. It is also only about half as permeable to water. Conversely, the cytoplasmic leaflet is more loosely packed, and indeed frequently suffers gaps (or defects) in its lipid integrity. This has significant consequences because many cellular proteins, especially those involved in signaling from the surface into the rest of the cytoplasm, have small lipid tails or similar anchors that direct them (temporarily) to the plasma membrane. The authors show that such proteins localize to the inner leaflet precisely because that leaflet has this loose, accepting structure, and are bound less well if the leaflets are scrambled / homogenized.

When the fluid mosaic model of biological membranes was first conceived, it didn't enter into anyone's head that the two leaflets could be so different, or that cells would have an interest in making them so. Sure, proteins in those membranes are rigorously oriented, so that they point in the right direction. But the lipids themselves? What for? Well, they are asymmetric too, and now there are some glimmerings of reasons why. Whether this has implications for human disease and health is unknown, but just as a matter of understanding biology, it is deeply interesting.


Saturday, October 18, 2025

When the Battery Goes Dead

How do mitochondria know when to die?

Mitochondria are the energy centers within our cells, but they are so much more. They are primordial bacteria that joined with archaea to collaborate in the creation of eukaryotes. They still have their own genomes, RNA transcription and protein translation. They play central roles in the life and death of cells, they divide and coalesce, they motor around the cell as needed, kiss other organelles to share membranes, and they can get old and die. When mitochondria die, they are sent to the great garbage disposal in the sky, the autophagosome, which is a vesicle that is constructed as needed, and joins with a lysosome to digest large bits of the cell, or of food particles from the outside.

The mitochondrion spends its life (only a few months) doing a lot of dangerous reactions and keeping an electric charge elevated over its inner membrane. It is this charge, built up from metabolic breakdown of sugars and other molecules, that powers the ATP-producing rotary enzyme. And the decline of this charge is a sign that the mitochondrion is getting old and tired. A recent paper described how one key sensor protein, PINK1, detects this condition and sets off the disposal process. It turns out that the membrane charge powers not only ATP synthesis, but protein import into the mitochondrion as well. Over the eons, most of the mitochondrion's genes have been taken over by the nucleus, so all but a few of the mitochondrion's proteins arrive via import- about 1500 different proteins in all. And this is a complicated process, since mitochondria have inner and outer membranes (just as many bacteria do), and proteins can be destined to any of these four compartments- in either membrane, in the inside (matrix), or in the inter-membrane space.
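
For reference, the "charge" in question is the protonmotive force across the inner membrane, usually written Δp = Δψ - (2.303RT/F)·ΔpH. A back-of-envelope sketch with generic textbook values (not measurements from this paper) shows why its collapse is such a decisive signal:

```python
# Protonmotive force across the inner mitochondrial membrane:
#   delta_p = delta_psi - (2.303*R*T/F) * delta_pH
# The numbers below are generic textbook values for healthy vs depolarized
# mitochondria, used only to illustrate the scale of the signal PINK1 reads out.

R = 8.314      # gas constant, J/(mol*K)
T = 310.0      # body temperature, K
F = 96485.0    # Faraday constant, C/mol

def protonmotive_force_mv(delta_psi_mv: float, delta_ph: float) -> float:
    mv_per_ph_unit = 2.303 * R * T / F * 1000.0   # ~61.5 mV per pH unit at 37 C
    return delta_psi_mv - mv_per_ph_unit * delta_ph

# Healthy: membrane potential ~ -160 mV, matrix ~0.8 pH units more alkaline
print(protonmotive_force_mv(-160.0, 0.8))   # ~ -209 mV: ample driving force for import
# Illustrative depolarized case: the potential has largely collapsed
print(protonmotive_force_mv(-40.0, 0.3))    # ~ -58 mV: import stalls and PINK1 piles up
```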

Figure 12-26. Protein import by mitochondria.
Textbook representation of mitochondrial protein import, with a signal sequence (red) at the front (N-terminus) of the incoming protein (green), helping it bind successively to the TOM and TIM translocators. 

The outer membrane carries a protein import complex called TOM, while the inner membrane carries an import complex called TIM. These can dock to each other, easing the whole transport process. The PINK1 protein is a somewhat weird product of evolution, spending its life being synthesized, transported across both mitochondrial membranes, and then partially chopped up in the mitochondrial matrix before its remains are exported again and fully degraded. That is when everything is working correctly! When the mitochondrial charge declines, PINK1 gets stuck, threaded through TOM, but unable to transit the TIM complex. PINK1 is a kinase that phosphorylates itself as well as ubiquitin, so when it is stuck, two PINK1 kinases meet on the outside of the outer membrane, activate each other, and ultimately activate another protein, PARKIN. PARKIN's name derives from its importance in Parkinson's disease, which can be caused by an excess of defective mitochondria in sensitive tissues, specifically certain regions and neurons of the brain. PARKIN is a ubiquitin ligase, which attaches the degradation signal ubiquitin to many proteins on the surface of the aged mitochondrion, thus signaling the whole mess to be gobbled up by an autophagosome.

A data-rich figure 1 from the paper shows purification of the tagged complex (top), and then the EM structure at bottom. While the purifications (B, C) show the presence of TIM subunits, these did not show up in the EM structures, perhaps because they were not stable enough or frequent enough in proportion to the TOM subunits. But the PINK1+TOM+VDAC2 structures are stunning, helping explain how PINK1 dimerizes so readily when its translocation is blocked.

The current authors found that PINK1 had convenient cysteine residues that allowed it to be experimentally crosslinked in the paired state, and thus freeze the PARKIN-activating conformation. They isolated large amounts of such arrested complexes from human cells, and used electron microscopy to determine the structure. They were amazed to see, not just PINK1 and the associated TOM complex, but also VDAC2, which is the major transporter that lets smaller molecules easily cross the outer membrane. The TOM complexes were beautifully laid out, showing the front end (N-terminus) of PINK1 threaded through each TOM complex, specifically the TOM40 ring structure.

What was missing, unfortunately, was any of the TIM complex, though some TIM subunits did co-purify with the whole complex. Nor was PARKIN or ubiquitin present, leaving out a good bit of the story. So what is VDAC2 doing there? The authors really don't know, though they note that reactive oxygen byproducts of mitochondrial metabolism would build up during loss of charge, acting as a second signal of mitochondrial age. These byproducts are known to encourage dimerization of VDAC channels, which, via the complex seen here, naturally leads to dimerization and activation of the PINK1 protein. Additionally, VDACs are very prevalent in the outer membrane and prominent ubiquitination targets for autophagy signaling.

To actually activate PARKIN ubiquitination, PINK1 needs to dissociate again, a process that the authors speculate may be driven by binding of ubiquitin by PINK1, which might be bulky enough to drive the VDACs apart. This part was quite speculative, and the authors promise further structural studies to figure out this process in more detail. In any case, what is known is quite significant- that the VDACs template the joining of two PINK1 kinases in mid-translocation, which, when the inner membrane charge dies away, prompts the stranded PINK1 kinases to activate and start the whole disposal cascade. 

Summary figure from the authors, indicating some speculative steps, such as where the reactive oxygen species excreted by VDAC2 sensitize PINK1, perhaps by dimerizing the VDAC channel itself, and where ubiquitin binding by PINK1 and/or VDAC prompts dissociation, allowing PARKIN to come in, get activated by PINK1, and spread the death signal around the surface of the mitochondrion.

It is worth returning briefly to the PINK1 life cycle. This is a protein whose whole purpose, as far as we know, is to signal that mitochondria are old and need to be given last rites. But it has a curiously inefficient way of doing that, being synthesized, transported, and degraded continuously in a futile and wasteful cycle. Evolution could hardly have come up with a more cumbersome, convoluted way to sense the vitality of mitochondria. Yet there we are, doubtless trapped by some early decision which was surely convenient at the time, but results today in a constant waste of energy, only made possible by the otherwise amazingly efficient and finely tuned metabolic operations of PINK1's target, the mitochondrion.



Saturday, September 6, 2025

How to Capture Solar Energy

Charge separation is handled totally differently by silicon solar cells and by photosynthetic organisms.

Everyone comes around sooner or later to the most abundant and renewable form of energy, which is the sun. The current administration may try to block the future, but solar power is the best power right now and will continue to gain on other sources. Likewise, life started by using some sort of geological energy, or pre-existing carbon compounds, but inevitably found that tapping the vast powers streaming in from the sun was the way to really take over the earth. But how does one tap solar energy? It is harder than it looks, since it so easily turns into heat and lost energy. Some kind of separation and control are required, to isolate the power (that is to say, the electron that was excited by the photon of light), and harness it to do useful work.

Silicon solar cells and photosynthesis represent two ways of doing this, and are fundamentally, even diametrically, different solutions to this problem. So I thought it would be interesting to compare them in detail. Silicon is a semiconductor, torn between trapping its valence electrons in silicon atoms, or distributing them around in a conduction band, as in metals. With elemental doping, silicon can be manipulated to bias these properties, and that is the basis of the solar cell.

Schematic of a silicon solar cell. A static voltage exists across the N-type to P-type boundary, sweeping electrons freed by the photoelectric effect (light) up to the conducting electrode layer.


Solar cells have one side doped N-type, with the bulk doped P-type. While the bulk material is neutral on both sides, at the boundary a static charge scheme is set up where electrons are attracted into the P-side, and removed from the N-side. This static voltage has very important effects on electrons that are excited by incoming light and freed from their silicon atoms. These high energy electrons enter the conduction band of the material, and can migrate. Due to the prevailing field, they get swept towards the N side, and thus are separated and can be siphoned off with wires. The current thus set up can exert an electrical pressure of about 0.6 volts. That is not much, nor is it anywhere near the 2 to 3 electron volts received from each visible photon. So a great deal of energy is lost as heat.
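
That loss is easy to put numbers on, since each photon carries E = hc/λ while the cell extracts only about 0.6 eV per electron. A quick calculation with physical constants only:

```python
# Energy per photon, E = h*c/wavelength, versus the ~0.6 eV per electron that
# a silicon cell actually delivers. Physical constants only; the 0.6 V figure
# is the rough value quoted above.

h = 6.626e-34       # Planck constant, J*s
c = 2.998e8         # speed of light, m/s
eV = 1.602e-19      # joules per electron volt

def photon_energy_ev(wavelength_nm: float) -> float:
    return h * c / (wavelength_nm * 1e-9) / eV

for wavelength in (450, 550, 680):          # blue, green, red
    e = photon_energy_ev(wavelength)
    print(f"{wavelength} nm photon: {e:.2f} eV, of which ~0.6 eV is harvested ({0.6/e:.0%})")

# Roughly two thirds to three quarters of each visible photon's energy is lost as heat.
```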

Solar cells do not care about capturing each energized electron in detail. Their purpose is to harvest a bulk electrical voltage + current with which to do some work in our electrical grids. Photosynthesis takes an entirely different approach, however. This may be mostly for historical and technical reasons, but also because part of its purpose is to do chemical work with the captured electrons. Biology tends to take a highly controlling approach to chemistry, using precise shapes, functional groups, and electrical environments to guide reactions to exact ends. While some of the power of photosynthesis goes toward pumping protons across the membrane, setting up a gradient later used to make ATP, about half is used for other things like splitting water to replace lost electrons, and making reducing chemicals like NADPH.

A portion of a poster about the core processes of photosynthesis. It provides a highly accurate portrayal of the two photosystems and their transactions with electrons and protons.

In plants, photosynthesis is a chain of processes focused around two main complexes, photosystems I and II, and all occurring within membranes- the thylakoid membranes of the chloroplast. Confusingly, photosystem II comes first, accepting light, splitting water, pumping some protons, and sending out a pair of electrons on mobile plastoquinones, which eventually find their way to photosystem I, which jacks up their energy again using another quantum of light, to produce NADPH. 

Photosystem II is full of chlorophyll pigments, which are what get excited by visible photons. But most of them are "antenna" chlorophylls, passing the excitation along to a pair of centrally located chlorophylls. Note that the light energy is at this point passed as a molecular excitation, not as a free electron. This passage may happen by Förster resonance energy transfer, but is so fast and efficient that stronger Redfield coupling may be involved as well. Charge separation only happens at the reaction center, where an excited electron is popped out to a chain of recipients. The chlorophylls are organized so that the pair at the reaction center have a slightly lower energy of excitation, thus serve as a funnel for excitation energy from the antenna system. These transfers are extremely rapid, on the picosecond time scale.
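
The sixth-power distance dependence of Förster transfer is what makes this antenna relay workable; here is a toy calculation, with an assumed, generic Förster radius (not one measured for chlorophyll pairs):

```python
# Forster transfer efficiency falls off as the sixth power of distance:
#   E = 1 / (1 + (r/R0)**6)
# R0, the distance at which transfer is 50% efficient, is an assumed generic
# value here; closely packed antenna pigments sit well inside it.

def fret_efficiency(r_nm: float, r0_nm: float = 5.0) -> float:
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

for r in (1.0, 2.0, 5.0, 8.0, 10.0):
    print(f"separation {r:4.1f} nm -> transfer efficiency {fret_efficiency(r):.3f}")

# 1-2 nm spacing gives essentially lossless transfer; by 10 nm the efficiency
# has collapsed to a couple of percent.
```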

It is interesting to note tangentially that only red light energy is used. Chlorophylls have two excitation states, excited by red light (680 nm = 1.82 eV) and blue light (400-450 nm, 2.76 eV) (note the absence of green absorbance). The significant extra energy from blue light is wasted, dissipated mostly as heat as the excited electron relaxes to the lower excitation state, which is then passed through the antenna complex as though it had come from red light.

Charge separation is managed precisely at the photosystem II reaction center through a series of pigments of graded energy capacity, sending the excited electron first to a neighboring chlorophyll, then to a pheophytin, then to a pair of iron-coordinated quinones, which then pass two electrons to a plastoquinone that is released to the local membrane, to float off to the cytochrome b6f complex. In photosystem II, another two photons of light are separately used to power the splitting of one water molecule (giving two electrons and pumping two protons). So the whole process, just within photosystem II, yields, per four light quanta, four protons pumped from one side of the membrane to the other. Since the ATP synthase uses about three protons per ATP, this nets just over one ATP per four photons.
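
Written out, the bookkeeping in that last sentence looks like this (using the approximate 3 H+/ATP figure from above; real chloroplast ATP synthases are thought to need somewhat more):

```python
# Photon -> proton -> ATP bookkeeping for photosystem II, using the rough
# figures quoted above (4 protons per 4 photons, ~3 protons per ATP).
# Real stoichiometries are debated and somewhat less favorable.

photons = 4
protons = 4            # per 4 photons, in this simplified accounting
protons_per_atp = 3.0

atp = protons / protons_per_atp
print(f"{photons} photons -> {protons} protons -> {atp:.2f} ATP")
# About 1.3 ATP per four photons, i.e. roughly three photons per ATP from
# photosystem II alone; photosystem I adds NADPH on top of this.
```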

Some of the energetics of photosystem II. The orientations and structures of the reaction center paired chlorophylls (Pd1, Pd2), the neighboring chlorophyll (Chl), and then the pheophytin (Ph) and quinones (Qa, Qb) are shown in the inset. The energy of the excited electron is sacrificed gradually to accomplish the charge separation and channeling, down to the final quinone pairing, after which the electrons are released to a plastoquinone and sent to another complex in the chain.

So the principles of silicon and biological solar cells are totally different in detail, though each gives rise to a delocalized field, one of electrons flowing with a low potential, and the other of protons used later for ATP generation. Each energy system must have a way to pop off an excited electron in a controlled, useful way that prevents it from recombining with the positive ion it came from. That is why there is such an ornate conduction pathway in photosystem II to carry that electron away. Overall, points go to the silicon cell for elegance and simplicity, and we in our climate crisis are the beneficiaries, if we care to use it. 

But the photosynthetic enzymes are far, far older. A recent paper pointed out that not only are photosystems II and I clearly cousins of each other, but it is likely that, contrary to the consensus heretofore, photosystem II is the original version, at least of the various photosystems that currently exist. All the other photosystems (including those in bacteria that lack oxygen-stripping ability) carry traces of the oxygen evolving center. It makes sense that getting electrons is a fundamental part of the whole process, even though that chemistry is quite challenging.

That in turn raises a big question- if oxygen evolving photosystems are primitive (originating very roughly with the last common ancestor of all life, about four billion years ago), then why was earth's atmosphere oxygenated only from two billion years ago onward? It had been assumed that this turn in Earth history marked the evolution of photosystem II. The authors additionally point out that there is evidence for the respiratory use of oxygen from these extremely early times, despite the lack of free oxygen. Quite perplexing (and the authors decline to speculate), but one gets the distinct sense that possibly life, while surprisingly complex and advanced from early times, was not operating at the scale it does today. For example, colonization of land had to await the buildup of sufficient oxygen in the atmosphere to provide a protective ozone layer against UV light. It may have taken the advent of eukaryotes, including cyanobacterial-harnessing plants, to raise overall biological productivity sufficiently to overcome the vast reductive capacity of the early earth. On the other hand, speculation about the evolution of early life based on sequence comparisons (as these authors do) is notoriously prone to artifacts, since what evolves at vanishingly slow rates today (such as the photosystem core proteins) must have originally evolved at quite a rapid clip to attain the functions now so well conserved. We simply cannot project ancient ages (at the four billion year time scale) from current rates of change.


Saturday, August 2, 2025

The Origin of Life

What do we know about how it all began? Will we ever know for sure?

Of all the great mysteries of science, the origin of life is maybe the one least likely to ever be solved. It is a singular event that happened four billion years ago in a world vastly different from ours. Scientists have developed a lot of ideas about it and increased knowledge of this original environment, but in the end, despite intense interest, the best we will be able to do is informed speculation. Which is, sure, better than uninformed speculation (aka theology), but still unsatisfying.

A recent paper about sugars and early metabolism (and a more fully argued precursor) piqued my interest in this area. It claimed that there are non-enzymatic ways to generate most or all of the core carbohydrates of glycolysis and CO2 fixation around pentose sugars, which are at the core of metabolism and the supply of sugars like ribose that form RNA, ATP, and other key compounds. The general idea is that at the very beginning of life, there were no enzymes and proteins, so our metabolism is patterned on reactions that originally happened naturally, with some kind of kick from environmental energy sources and mineral catalysts, like iron, which was very abundant. 

That is wonderful, but first, we had better define what we mean by life, and figure out what the logical steps are to cross this momentous threshold. Life is any chemical process that can accomplish Darwinian evolution. That is, it replicates in some fashion, and it has to encode those replicated descendants in some way that is subject to mutation and selection. With those two ingredients, we are off to the races. Without them, we are merely complex minerals. Crystals replicate, sometimes quite quickly, but they do not encode descendent crystals in a way that is complex at all- you either get the parent crystal, or you get a mess. This general theory is why the RNA world hypothesis was, and remains, so powerful. 

The RNA world hypothesis is that RNA was likely the first genetic material, before DNA (which is about 200 times more stable) was devised. RNA also has catalytic capabilities, so it could encode in its own structure some of the key mechanisms of life, therefore embodying both of the critical characteristics of life specified above. The fact that some key processes remain catalyzed by RNA today, such as ribosomal synthesis of proteins, spliceosomal re-arrangement of RNAs, and cutting of RNAs by RNAse P, suggests that proteins (as well as DNA) were the Johnny-come-latelies of the chemistry of life, after RNA had, in its lumbering, inefficient way, blazed the trail.


In this image of the ribosome, RNA is gray, proteins are yellow. The active site is marked with a bright light. Which came first here- protein or RNA?


But what kind of setting would have been needed for RNA to appear? Was metabolism needed? Does genetics come first, or does metabolism come first? If one means a cyclic system of organic transformations encoded by protein or RNA enzymes, then obviously genetics had to come first. But if one means a mess of organic chemicals that allowed some RNA to be made and provide modest direction to its own chemical fate, and to a few other reactions, then yes, those chemicals had to come first. A great deal of work has been done speculating what kind of peculiar early earth conditions might have been conducive to such chemistries. Hydrothermal vents, with their constant input of energy, and rich environment of metallic catalysts? Clay particles, with their helpful surfaces that can faux-crystalize formation of RNAs? Warm ponds, hot ponds, UV light.... the suggestions are legion. The main thing to realize is that early earth was surely highly diverse, had a lot of energy, and had lots of carbon, with a CO2-rich atmosphere. UV would have created a fair amount of carbon monoxide, which is the feedstock of the Fischer-Tropsch reactions that create complex organic compounds, including lipids, which are critical for formation of cells. Early earth very likely had pockets that could produce abundant complex organic molecules.

Thus early life was surely heterotrophic, taking in organic chemicals that were given by the ambient conditions for free. And before life really got going, there was no competition- there was nothing else to break those chemicals down, so in a sort of chemical pre-Darwinian setting, life could progress very slowly (though RNA has some instability in water, so there are limits). Later, when some of the scarcer chemicals were eaten up by other already-replicating life forms, then the race was on to develop those enzymes, of what we now recognize as metabolism, which could furnish those chemicals out of more common ingredients. Onwards the process then went, hammering out ever more extensive metabolic sequences to take in what was common and make what was precious- those ribose sugars, or nucleoside rings that originally had arrived for free. The first enzymes would have been made of RNA, or metals, or whatever was at hand. It was only much later that proteins, first short, then longer, came on the scene as superior catalysts, extensively assisted by metals, RNAs, vitamins, and other cofactors.

Where did the energy for all this come from? To cross the first threshold, only chemicals (which embodied outside energy cycles) were needed, not a separate energy source. Energy requirements accompanied the development of metabolism, as the complex chemicals became scarcer and needed to be made internally. Only when the problem of making complex organic chemicals from simpler ones presented itself did it also become important to find some separate energy source to do that organic chemistry. Of course, the first complex chemicals absolutely needed were copies of the original RNA molecules. How that process was promoted, through some kind of activated intermediates, remains particularly unclear.

All this happened long before the last universal common ancestor, termed "LUCA", which was already an advanced cell just prior to the split into the archaeal and bacterial lineages (much later to rejoin to create the most amazing form of life- eukaryotes). There has been quite a bit of analysis of LUCA to attempt to figure out the basic requirements of life, and what happened at the origin. But this ("top-down") approach is not useful. The original form of life was vastly more primitive, and was wholly re-written in countless ways before it became the true bacterial cell, and later still, LUCA. Only the faintest traces remain in our RNA-rich biochemistry. Just think about the complexity of the ribosome as an RNA catalyst, and one can appreciate the ragged nature of the RNA world, which was probably full of similar lumbering catalysts for other processes, each inefficient and absurdly wasteful of resources. But it could reproduce in Darwinian fashion, and thus it could improve.

Today we find on earth a diversity of environments, from the bizarre mineral-driven hydrothermal vents under the ocean to the hot springs of Yellowstone. The geology of earth is wondrously varied, making it quite possible to credit one or more of the many theories of how complex organic molecules may have become a "soup" somewhere on the early Earth. When that soup produces ribose sugars and the other rudiments of RNA, we have the makings of life. The many other things that have come to characterize it, such as lipid membranes and metabolism of compounds are fundamentally secondary, though critically important for progress beyond that so-pregnant moment. 


Saturday, July 12, 2025

What Happens When You Go on Calorie Restriction?

New research pinpoints some very odd metabolites that communicate between the gut and other tissues.

The only surefire way to extend lifespan, at least so far, is to go on a calorie restricted diet. Not just a little restricted- really restricted. The standard protocol is seventy percent of a normal, at-will diet. In most organisms tested, this maintains health and extends lifespan by significant amounts, ten to forty percent. There has naturally been a great deal of interest in the mechanism. Is it due to clearance of senescent cells? Increased autophagy and disposal of sub-optimal cell components? Prevention of cancer? Enhancement or moderation of the immune system? The awesome austerities of the ancient yogis and other religious hermits no longer look so far out.

A recent pair of papers took a meticulous approach to finding one of the secret ingredients that conveys calorie restriction to the body. They find, to begin with, that blood serum from calorie-restricted mice can confer longevity properties to cultured cells. Blood from calorie-restricted mice can also confer relevant properties to other mice. This serum could be heated to 133 degrees without detriment, indicating that the factor involved is not a large protein. On the other hand, dialysis to remove small molecules renders it inactive. Lastly, they put the serum through a lipid-binding purification system, which diminished its activity only slightly. Thus they were looking for a mostly water-soluble metabolite in the blood.

Traditionally, the next step would have been to conduct further fractionation and purification of the various serum components. But times have changed, and their next step was to conduct a vast chemical (mass-spectrometry) analysis of blood from calorie restricted vs control mice. Out of that came over a thousand differential metabolites. Taking the most differential and water-soluble metabolites, they assayed them on their cultured cells for a major hallmark of calorie restriction, activation of the AMPK kinase, a master metabolic regulator. And out of this came only one metabolite that had this effect at concentrations found in the calorie-restricted serum: lithocholic acid.
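
The winnowing logic of that screen can be sketched in a few lines; the file name, column names, and cutoffs below are hypothetical placeholders, not the authors' actual pipeline:

```python
# Sketch of the metabolomic winnowing described above: start from the table of
# differential metabolites, keep the most differential, water-soluble ones, and
# hand a short list to the cell-based AMPK assay. File, columns, and thresholds
# are hypothetical placeholders.

import pandas as pd

metabolites = pd.read_csv("cr_vs_control_metabolomics.csv")   # hypothetical file

candidates = (
    metabolites
    .query("abs(log2_fold_change) > 1 and q_value < 0.05")    # differential in CR serum
    .query("logP < 2")                                        # mostly water-soluble
    .sort_values("log2_fold_change", ascending=False)
    .head(20)                                                 # short list for the AMPK assay
)

print(candidates[["name", "log2_fold_change", "logP"]])
# The assay for AMPK activation at physiological serum concentrations is what
# finally singled out lithocholic acid from a list like this.
```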

Now this is a very odd finding. Lithocholic acid is what is known as a secondary bile acid. It is not natively made by us at all, but is a byproduct of bacterial metabolism in our guts, from a primary bile salt secreted by the liver, chenodeoxycholic acid. These bile salts are detergent-like molecules made from cholesterol that have charged groups stuck on them, and can thus emulsify fats in our digestive tract. Bile salts are extensively recycled through the liver, so that we make a little each day, but use a lot of them in digestion. How such a chemical became the internal signal of starvation is truly curious.

The researchers then demonstrate that lithocholic acid can be administered directly to mice to extend lifespan and have all the expected intermediary effects, including decreased blood glucose levels. Indeed, it did so in flies and worms as well, even though these species do not even see lithocholic acid naturally- their digestion is different from ours. They evidently still have key shared receptors that can recognize this metabolite, perhaps because they use a very similar metabolite internally. 

And what are those receptors and the molecular pathway of this effect? The second paper in this pair shows that lithocholic acid binds to TULP3, a member of the tubby protein family of transcription regulators, which binds to lipids in the plasma membrane and shuttles back and forth to the nucleus to provide feedback on the membrane condition or other events at the cell surface. TULP3 also binds to sirtuins, which are notoriously involved in metabolic regulation, aging, and cell suicide. These were all the rage when red wine was thought to extend life span, as sirtuins are activated by resveratrol. At any rate, sirtuins inhibit (by de-acetylation) the lysosomal acidification pump, which in its active state has been known (curiously enough) to inhibit AMPK; both play basic roles in cell energy balance regulation.

So we are finally at AMPK, a protein kinase that responds to AMP, which is the low-energy converse of ATP, the energy currency of the cell. High levels of AMP indicate the cell is low on energy, and the kinase then phosphorylates, and thus regulates, a huge variety of targets, which generally increase energy uptake from the blood (glucose and fats), reduce internal energy storage, and increase internal recycling of materials and energy, among much else. At any rate, the researchers above show that AMPK activation is absolutely required for the effects of lithocholic acid, and also for life extension more generally from calorie restriction. On the other hand, lithocholic acid does not touch the AMP level in the cell, but, as mentioned above, activates through a much more circuitous route- one that goes through lysosomes, which are sort of the stomach of the cell, and are increasingly recognized as a central player in its energy regulation.
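
A side note on why AMP is such a sensitive readout: adenylate kinase holds 2 ADP ⇌ ATP + AMP near equilibrium, so AMP rises roughly as the square of the ATP drawdown. The concentrations below are made-up but plausible, purely to illustrate the point:

```python
# Adenylate kinase keeps 2 ADP <-> ATP + AMP near equilibrium (K ~ 1), so
#   [AMP] ~ K * [ADP]^2 / [ATP]
# and AMP rises much faster than ATP falls -- which is what makes it a good
# input signal for AMPK. Concentrations are illustrative, not measured values.

def amp_from_adenylate_kinase(atp_mM: float, adp_mM: float, K: float = 1.0) -> float:
    return K * adp_mM**2 / atp_mM

for label, atp, adp in [("rested", 5.0, 0.5), ("energy-stressed", 4.0, 1.0)]:
    amp = amp_from_adenylate_kinase(atp, adp)
    print(f"{label}: ATP {atp} mM, ADP {adp} mM -> AMP ~{amp:.2f} mM")

# A 20% drop in ATP (5 -> 4 mM) raises AMP about five-fold (0.05 -> 0.25 mM).
```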

Some beautiful antibody staining of different types of myosin- the key motor protein in muscles- in muscle cells from various places in the mouse body (left to right). The antibodies are to slow-twitch myosin (red), which marks the more efficient slow muscle fibers, and blue and green, which mark fast-twitch myosins- generally less efficient and markers of aging. The bottom two rows are from an AMPK knockout mouse, where there is little difference between the two rows. The pairs of rows are from normal mice (top) and (bottom) a specific mutant of the lysosomal proton pump that has its acetylation sites all mutated to inactivity, thus mimicking in strongest form the action of the sirtuin/lithocholic acid pathway. Here (second row, especially the last frame), there is a higher level of red vs blue, and of blue vs green staining/fibers, suggesting that these muscles have had a durable uptick in efficiency and, implicitly, reduced atrophy and higher longevity.

These are some long and winding roads to get to positive effects from calorie restriction. What is going on? Firstly, biology does not owe it to us to be concise or easy to understand. Lithocholic acid is going to always be around, given that we eat food, but has mostly, heretofore, been regarded as toxic and even carcinogenic. It, along with the other bile acids, binds to dedicated transcription regulators that provide feedback control over their synthesis in the liver. So they are no strangers to our physiology. The authors do not delve into why calorie restriction durably raises the level of lithocholic acid in the gut or the blood, but that is an important place to start. It might well be that our digestive system squeezes every last drop of nutrition out of meagre meals by ramping up bile acid production. It is a recyclable resource, after all. This could have then become a general starvation signal to tissues, in advance of actual energy deficits, which one would rather not face if at all possible. So it is basically a matter of forewarned is forearmed. One is yet again amazed by the depth of biological systems, which have complexity and robustness far beyond our current knowledge, let alone our ability to replicate or model in artificial terms.


Saturday, July 5, 2025

Water Sensing by WNKs

WNK kinases sense osmotic condition as well as chloride concentration to keep us hydrated.

"Water, water, everywhere, nor any drop to drink." This line from Coleridge evokes the horror of thirst on the ghost ship, as its crew can not drink salt water. Other species can, but ocean water is too strong for us, roughly four times as salty as our blood. Nevertheless, our bodies have exquisite mechanisms to manage salt concentrations, with each cell managing its own traffic, and the kidneys managing most electrolytes in the blood. It is a very difficult task that has led to clever evolutionary solutions like counter-current exchange across the nephron loops, and stark differences in those nephron cell membranes, over water or salt permeability, to maximize use of passive ion gradients. But at the heart of the system, one has to know what is going on- one has to monitor all of the electrolyte levels and overall osmotic stress.

One such monitoring thermostat for chemical balances turns out to be the WNK kinases- a family of four proteins in humans that control (by phosphorylating them) a secondary set of regulators, which in turn control many salt transporters, such as SLC12A2 and SLC12A4. These latter are passive, though regulated, co-transporters that allow chloride across the membrane when combined with a matching cation like sodium or potassium. The cations drive the process, because they are normally kept (pumped) to strong gradients across cell membranes, with high sodium outside, and high potassium inside. Thus when these co-transporters are turned on (or off), they use the cation gradients to control the chloride level in the cell, in either direction, depending on the particular transporter involved. Since the sodium and potassium levels are held at relatively static, pumped levels, it is the chloride level that helps control the overall osmotic pressure in a finely tuned way. 
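
The driving forces involved can be put in rough numbers with the Nernst equation, E = (RT/zF)·ln([out]/[in]). The concentrations below are generic textbook values for a mammalian cell, not measurements from this work:

```python
# Nernst equilibrium potentials, E = (R*T / (z*F)) * ln([out]/[in]), for the
# major ions. Concentrations (mM) are generic textbook values for a mammalian
# cell, included only to show the gradients the SLC12 co-transporters can tap.

import math

R, T, F = 8.314, 310.0, 96485.0

def nernst_mv(z: int, out_mM: float, in_mM: float) -> float:
    return (R * T / (z * F)) * math.log(out_mM / in_mM) * 1000.0

ions = {
    "Na+": (+1, 145.0, 12.0),
    "K+":  (+1,   4.0, 140.0),
    "Cl-": (-1, 110.0, 10.0),
}

for name, (z, out, inside) in ions.items():
    print(f"{name}: equilibrium potential ~{nernst_mv(z, out, inside):+.0f} mV")

# Roughly +67 mV for Na+, -95 mV for K+, -64 mV for Cl-: the steep, pumped Na+
# and K+ gradients are the batteries that move chloride in or out, depending on
# which co-transporter is switched on.
```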

A few of the ionic transactions done in the kidney.


The WNK kinases were discovered genetically, in families that showed hypertension and raised levels of chloride and potassium in the blood. These syndromes mirrored complementary syndromes caused by mutations in SLC12A2, the Na/Cl co-transporter, indicating that the WNK kinases inhibit SLC12A2. It turns out that the WNKs, which are named for an unusual catalytic site (With No lysine [K]), are sensors both for chloride, which inhibits them, and for osmotic pressure, which activates them. They are expressed in different locations and have slightly different activities (and control many more transporters and processes than discussed here), but I will treat them interchangeably here. The logic of all this is that, if osmotic pressure is low, that means that internal salt levels are low, and chloride needs to be let into the cell, by activating the cation/chloride co-transporters. Likewise, if chloride levels inside the cell are high, the WNK kinase needs to be inhibited, reducing chloride influx.
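
That regulatory logic can be written as a tiny decision rule; this is purely a conceptual sketch of the paragraph above, with arbitrary thresholds, not a model of the real kinase kinetics:

```python
# Conceptual sketch of the WNK feedback logic described above: low internal
# chloride, or osmotic stress (the cell losing water), switches WNK on; WNK
# then (via STK39/OSR1) turns on cation/chloride co-transporters and chloride
# flows in. High internal chloride switches WNK off. Thresholds are arbitrary.

def wnk_active(chloride_in_mM: float, osmotic_stress: float,
               chloride_threshold: float = 30.0, stress_threshold: float = 0.5) -> bool:
    chloride_low = chloride_in_mM < chloride_threshold
    shrinking = osmotic_stress > stress_threshold
    return chloride_low or shrinking

def chloride_import_on(chloride_in_mM: float, osmotic_stress: float) -> bool:
    # active WNK -> phosphorylated co-transporters -> chloride influx
    return wnk_active(chloride_in_mM, osmotic_stress)

print(chloride_import_on(10.0, 0.1))   # chloride low: import more -> True
print(chloride_import_on(60.0, 0.1))   # replete and unstressed -> False
print(chloride_import_on(60.0, 0.9))   # hypertonic challenge -> True
```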

A recent paper (and prior work from the same lab) discussed structures of the WNK regulators that explain some of this behavior. WNK kinases are dimers at rest, and in that state mutually inhibit their auto-phosphorylation. It is separation and auto-phosphorylation that turns them on, after which they can then phosphorylate their target proteins, such as the secondary kinases STK39 and OSR1. The authors had previously found a chloride binding site right at the active site of the enzyme that promotes dimerization. In the current paper, they reveal a couple of clusters of water molecules which similarly affect the dimerization, and thus activity, of the enzyme.

Location of the inhibitory chloride (green) binding site in WNK1. This is right in the heart of the protein, near the active kinase site and dimerization interface with the other WNK1 partner.

While X-ray crystal structures rarely show or pay much attention to water molecules (they are extremely small and hard to track), here those waters were hypothesized to be important, since WNK kinases are responsive to osmotic pressure. One way to test this is to add PEG400 to the reaction. This is a polymer (molecular weight ~400) that is water-like and inert, but large enough to crowd water molecules out of the solution. At 15% or 25% of the volume, PEG400 displaces a lot of water, lowers the water activity of the solution, and thus increases its osmotic pressure- that is, its tendency to draw water in from outside. Plants use osmotic pressure as turgor pressure, and our cells, not having cell walls, need to stay at an osmotic pressure similar to that of the outside, lest they swell up or, conversely, shrivel away. Anyhow, WNK kinases can be switched from an inactive to an active state just by adding PEG400- a sure sign that they are sensors of osmotic pressure.
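
For a sense of scale, here is an ideal (van 't Hoff) estimate of the osmotic contribution of 15% (v/v) PEG400, assuming a PEG400 density of about 1.13 g/mL and ideal-solution behavior; both assumptions are mine, and real PEG solutions are strongly non-ideal (their osmotic pressure runs higher than the ideal estimate), so this is only a lower-bound ballpark.

```python
# Ideal van 't Hoff estimate for 15% v/v PEG400 (assumed density ~1.13 g/mL, MW ~400 g/mol).
R = 8.314      # J/(mol*K)
T = 298.0      # K
volume_fraction = 0.15
density_g_per_mL = 1.13
mw_g_per_mol = 400.0

grams_per_L = volume_fraction * 1000.0 * density_g_per_mL   # ~170 g PEG400 per liter
molarity = grams_per_L / mw_g_per_mol                       # ~0.42 mol/L
pressure_Pa = molarity * 1000.0 * R * T                     # pi = c*R*T, with c in mol/m^3

print(f"~{pressure_Pa / 1e6:.1f} MPa (~{pressure_Pa / 101325:.0f} atm), as an ideal-solution floor")
```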


Water network (blue dots) within the WNK1 kinase protein. Most of the protein is colored teal, while the active-site kinase area is red, and a small portion of the dimer partner is colored green. When this crystal is osmotically challenged, the water network collapses from 14 waters to 5, changing the structure and promoting dissociation of the dimer. In B is shown a sequence alignment over a wide evolutionary range, in which the amino acids that coordinate the water network (yellow) are clearly very well conserved, and thus quite important.

Above is shown a closeup of the WNK1 protein, with the main backbone, including the catalytic loop, in teal. In red is the activation loop of the kinase, and in green is a little bit of the other WNK1 protein in the dimer pair. The chloride, if bound, would be located right at top center, at K375. Shown in blue is a series of fourteen water molecules that make up one so-called water network. Another, smaller network was found at the interface between the two WNK1 proteins. The key finding was that, when crystallized with PEG400, this water network collapsed to only five water molecules, changing the structure of the protein significantly and accounting for the dissociation of the dimer.

Superposition of WNK1 crystallized with PEG400, in the activated state (purple), and WNK1 without PEG400, in an inactive state (teal). Most of the blue waters are gone in the purple state as well. This shows the significant structural transition, particularly in the helices above the active site, which (in the purple state) promotes dissociation of the dimer, auto-phosphorylation, and activation.

Thus there is a delicate network of water molecules, tentatively held together within this protein, that is highly sensitive to the ambient water activity (and hence to osmotic pressure). This dynamic network provides the mechanism by which the WNK proteins sense, and transmit, the signal that the cell requires a change in its ionic flows. Generally the point is to restore homeostatic balance, but in the kidney these kinases are also used to control flows for the benefit of the organism as a whole, by regulating different transporters in different parts of the same cell- either on the blood side or the urine side.


Saturday, June 14, 2025

Sensing a Tiny Crowd

DCP5 uses a curious phase transition to know when things are getting tight inside a cell.

Regulation is the name of the game in life. Once the very bare bones of metabolism and replication were established, the race was on to survive better. And that often means turning on a dime- knowing when to come, when to go, when to live it up, and when to hunker down. Plants can't come and go, so they have rather acute problems of how to deal with the hand (that is, the location) they have been dealt. A big part of this is getting water, and dealing with the lack of it. All life forms have ways to adapt to osmotic stress- that is, any imbalance of water, whether caused by too much salt outside, or drought, or membrane damage. Membranes are somewhat permeable to water, so accommodation to osmotic stress is all about regulating the balance of salts and other solutes using active pumps. For example, plant cells usually maintain pretty high turgor pressure (often up to 30 psi) to pump up their strong cell walls, which helps them stay upright.
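
As a rough illustration of that 30 psi figure: assuming ideal van 't Hoff behavior (an assumption of mine, and only a ballpark), a surprisingly modest excess of osmolytes inside the cell is enough to sustain that much turgor.

```python
# How much of an osmolyte excess (inside vs outside) does ~30 psi of turgor imply,
# assuming ideal van 't Hoff behavior?  pi = c * R * T  ->  c = pi / (R * T)
R = 8.314                  # J/(mol*K)
T = 298.0                  # K
turgor_Pa = 30 * 6894.76   # 30 psi expressed in pascals, ~0.21 MPa

excess_osmolytes = turgor_Pa / (R * T)   # mol/m^3, numerically the same as mmol/L
print(f"~{excess_osmolytes:.0f} mM excess osmolytes")   # roughly 80-odd millimolar
```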

Since cells are filled with large, impermeable molecules like proteins, nucleic acids, and metabolites, the default setting is inward water migration, and thus some turgor pressure. But if excess salts build up outside, or a drought develops, the situation changes. A smart plant cell needs to know what is going on, so it can pump salts inwards (or adjust its other osmolyte balances) to restore the overall water balance. While osmosensing in other kinds of cells, like yeast, is understood to some extent, what does the sensing in plants has been somewhat mysterious. (Lots of temperature sensors are known, however.) Yet a recent article laid out an interesting mechanism, based on protein aggregation.

Some cells have direct sensors of membrane tension. Others have complex signaling systems, with GPCR proteins that sense osmotic stress. But this new plant mechanism is rather simple, if also trendy. DCP5 looks like an RNA-binding protein, one that participates in stress responses and RNA processing / storage. But in plants, it appears to have an additional power- rapid hyper-aggregation when crowded by loss of turgor pressure and cell volume. These aggregates are now understood to be a distinct phase of matter, a macromolecular condensate, somewhere between liquid and solid. And importantly, they segregate from their surroundings, as oil droplets do from water. The authors did not really discuss what got them interested in this, but DCP5 is a known stress-related protein, so once they labeled and visualized it, they must have been struck by its dynamic behavior.

"This condensation was highly dynamic; newly assembled condensates became apparent within 2 min of stress exposure, and the condensation extent is positively correlated with stress severity. ... In cells subjected to continuous hyperosmotic-isosmotic cycles, DCP5 repeatedly switched between condensed and dispersed states."

Titration of an external large molecule (polyethylene glycol of weight 8000 Daltons, or PEG), which draws water out of a plant cell. The green signal is DCP5, labeled with GFP, showing its dramatic condensation with water loss.

Well, great. So the researchers have found a protein in plants that dramatically aggregates under osmotic stress, thanks to an interesting, bistable fold that includes a disordered region. Does that make it a water-status sensor? And if so, how does it then regulate anything else?

"In test tube, recombinant DCP5 formed droplets under various artificial crowding conditions generated by polymeric or proteinic crowders. DCP5 droplets emerged even at very low concentrations of PEG8000 (0.01 to 1%), indicating an unusual crowding sensitivity of DCP5."

It turns out that DCP5, aside from binding with its own kind, also drags other proteins into its aggregates, some of which in turn bind to key mRNAs. The ultimate effect is to keep key proteins, such as transcription regulators, out of the nucleus, and to shield various mRNAs from translation. In this way, the aggregation system rapidly reprograms the cell to adapt to the new stress.

As one often says in biology and evolution ... whatever it takes! Whether the regulation is by phosphorylation, or proteolysis, or sequestration, or building an enormous brain, any and all roads to regulation are used somewhere to get to where we want to go, which is exquisite sensitivity and responsiveness to our environment, so that we can adapt to it ever faster- far faster than the evolutionary process does.