
Saturday, October 21, 2023

One Pump to Rule ... a Tiny Vesicle

Synaptic vesicles are powered by a single pump that has two speeds- on and off.

While some neural circuits are connected by direct electrical contact, via membrane pores, most use a synapse, where the electrical signal stops and is converted into secretion of a neurotransmitter molecule, which crosses to the next cell, where receptors pick it up and boot up a new electrical signal. A slow and primitive system, doubtless thanks to some locked-in features of our evolutionary history. But it works, thanks to a lot of improvements and optimization over the eons.

The neurotransmitters, of which there are many types, sit ready and waiting at the nerve terminals in synaptic vesicles, which are tiny membrane bags that are specialized to hold high concentrations of their designated transmitter, and to fuse rapidly with the (pre-) synaptic membrane of their nerve terminal, releasing their contents into the synaptic cleft between the two neurons when needed. While the vesicles are mostly made of membrane, it is the suite of proteins on their surfaces that provides all the key functions, such as transport of neurotransmitters, sensing of the activating nerve impulse (voltage), fusing with the plasma membrane, and later retrieval of the fused membrane patches/proteins and recycling into new synaptic vesicles.

Experimental scheme- synaptic vesicles are loaded with a pH-sensitive fluorescent dye that reports how well the V-ATPase (pink) is doing at pumping protons in, powered by ATP from the cytoplasm. The proton gradient is then used by the other transporters in the synaptic vesicle (brown) to load it with its neurotransmitter.

The neurotransmitters of whatever type are loaded into synaptic vesicles by proton antiporter pumps. That is, one or two protons are pumped out in exchange for a molecule of the transmitter being pumped in. They are all proton-powered. And there is one source of that power, an ATP-using proton pump called a V-type ATPase. These ATPases are deeply related to the F-type ATP synthase that does the opposite job, in mitochondria, making ATP from the proton gradient that mitochondria set up from our oxygen-dependent respiration / burning of food. Both are rotors, which spin around as they carefully let protons go by, while a separate domain of the protein- attached via stator and rotor segments- makes or breaks down ATP, depending on the direction of rotation. Both enzymes can go in either direction, as needed, to pump protons either in or out, and traverse the reaction ADP <=> ATP. It is just an evolutionary matter of duplication and specialization that the V-type and F-type enzymes have taken separate paths and turn up where they do.

Intriguingly, synaptic vesicles are each served by one V-type ATPase. One is enough. That means that one molecule has to flexibly respond to a variety of loads, from the initial transmitter loading, to occasional replenishment and lots of sitting around. A recent paper discussed the detailed function of the V-type ATPase, especially how it handles partial loads and resting states. For the vesicles spend most of their time full, waiting for the next nerve impulse to come along. The authors find that this ATPase has three states it switches between- pumping, resting, and leaking.

Averaging over many molecules/vesicles, the V-type ATPase pump operates as expected. Add ATP, and it acidifies its vesicle. The Y-axis is the fluorescent signal of proton accumulation in the vesicle. Then when a poison of the ATPase is added (bafilomycin), the gradient dissipates in a few minutes.

They isolate synaptic vesicles directly from rat brains and then fuse them with smaller experimental vesicles that contain a fluorescent tracer that is sensitive to pH- just the perfect way to monitor what is going on in each vesicle, given a powerful enough microscope. The main surprise was the stochastic nature of the performance of single pumps. Comparing the average of hundreds of vesicles (above) with a trace from a single vesicle (below) shows a huge difference. The single vesicle comes up to full acidity, but then falls back for long stretches of time. These vesicles are properly loaded and maintained on average, but individually, they are a mess, falling back to pH / chemical baseline with alarming frequency.


On the other hand, at the single molecule level, the pump is startlingly stochastic. Over several hours, it pumps its vesicle full of protons, then quits, then restarts several times.

The authors checked that the protons had no other way out that would look like this stochastic unloading event, and concluded that the loss of protons was monotonic, thus due to general leakage, not some other channel that occasionally opens to let out a flood of protons. But then they added an inhibitor that blocks the V-ATPase, which showed that particularly (and peculiarly) rapid events of proton leakage come from the V-ATPase, not general membrane leakage. They have a hard time explaining this, discounting various theories such as that it represents ATP synthesis (a backwards reaction, in the face of overwhelming ratios of ATP/ADP in their experiment), that the inactive mode of the pump can switch to a leakage mode, or that the pump naturally leaks a bit while it operates in the forward direction. It appears that only while the pump is on and churning through ATP can it occasionally fail catastrophically and leak out a flood of protons. But then it can go on as if nothing had happened and either keep pumping or take a rest break.

Regulation by ATP is relatively minor, with a flood of ATP helping keep the pump more active longer. But physiological concentrations tend to be stable, so not very influential for pumping rates. These are two separate individual pumps/vesicles shown, top and bottom. It is good to see the control- the first segment of time when no ATP was present and the pump could not run at all. But then look at the bottom middle trace- plenty of ATP, but nothing going on- very odd. Lastly, the sudden unloading seen in some of these traces (bottom right) is attributed to an extremely odd leakage state of the same V-ATPase pump. Not something you want to see, generally.

The main finding is that this pump has quite long dwell times (3 minutes or so) under optimal conditions, and switches with this time period between active pumping and an inactive resting state. And that the pumping dwell time is mostly regulated, not by the ambient ATP concentration, but by the proton gradient, which is expressed by some combination of the charge differential across the vesicle membrane and the relative proton concentration gradient (the chemical gradient). It is a bit like a furnace, which has only two speeds- on or off, though in this case the thermostat is pretty rough. They note that other researchers have found that synaptic vesicles seem to have quite variable amounts of transmitter, which must derive from the variability of this pump seen here. But averaged over the many vesicles fused during each neuronal firing, this probably isn't a big deal.
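To make the furnace analogy a bit more concrete, here is a minimal toy simulation of a single two-state pump that switches randomly between on and off with minute-scale dwell times, filling the vesicle while on and leaking slowly while off. All rates are made-up assumptions for illustration, and the random switching stands in for (rather than models) the gradient-sensitive regulation the authors infer- it is a sketch, not their analysis.

```python
import random

DWELL_MIN = 3.0   # assumed mean dwell time in each state, minutes
PUMP_RATE = 1.0   # assumed acidification rate while ON (arbitrary units/min)
LEAK_RATE = 0.2   # assumed passive leak rate while OFF (arbitrary units/min)
DT = 0.1          # simulation time step, minutes

def simulate(t_total=60.0, seed=1):
    """Return a (time, signal) trace mimicking a single-vesicle pH readout."""
    random.seed(seed)
    state_on = False
    signal = 0.0                    # proxy for the pH-sensitive fluorescence signal
    p_switch = DT / DWELL_MIN       # per-step chance of flipping between ON and OFF
    trace, t = [], 0.0
    while t < t_total:
        if random.random() < p_switch:
            state_on = not state_on
        signal += (PUMP_RATE if state_on else -LEAK_RATE) * DT
        signal = max(0.0, min(signal, 1.0))   # clamp between "empty" and "full"
        trace.append((t, signal))
        t += DT
    return trace

if __name__ == "__main__":
    for t, s in simulate()[::50]:   # print every 5 minutes
        print(f"t={t:5.1f} min  signal={s:.2f}")
```

Averaging many such traces gives a smooth acidification curve like the ensemble data above, while any single trace shows the stop-and-go behavior of one pump.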

The behavior of this pump is a bit weird, however, since most machines that we are familiar with show more gradual breakdowns under stress, straining and slowing down. But here, the pump just decides to shut down for long periods of time, generally when the vesicle is fully charged up, but sometimes when it is not. It is a reflection that we are near the quantum level here, dealing with molecules that are very large in some molecular sense, but still operating at the atomic scale, particularly at the key choke points of this kind of protein that surely involve subtle shifts of just a few atoms that impart this regulatory shift, from active to inactive. What is worse, the pump sometimes freaks out completely and, while in its on state, switches to a leaking state that lets out protons ten times faster than the passive leakage through the rest of the vesicle membrane. The authors naturally urge deeper structural studies of what might be going on!


Saturday, October 7, 2023

Empty Skepticism at the Discovery Institute

What makes a hypothesis scientific, vs a just-so story, or a religious fixation?

"Intelligent" design has fallen on hard times, after a series of court cases determined that it was, after all, a religious idea and could not be foisted on unsuspecting schoolchildren, at least in state schools and under state curricula. But the very fact of religious motivation leads to its persistence in the face of derision, evidence, and apathy. The Discovery Institute, (which, paranthetically, does not make any discoveries), remains the vanguard of intelligent design, promoting "skepticism", god, alternative evolutionary theories, and, due to the paucity of ways to attack evolution, tangential right-wingery such as anti-vaccine agitation. By far their most interesting author is Günter Bechly, who delves into the paleontological record to heap scorn on other paleontologists and thereby make room for the unmentioned alternative hypothesis ... which is god.

A recent post discussed the twists and turns of ichthyosaur evolution. Or should we say biological change through time, with unknown causes? Ichthyosaurs flourished from about 250 million years ago (mya) to 100 mya, with the last representatives dated to 90 mya. They were the reptile analogs of whales and dolphins, functioning as apex predators in the ocean. They were done in by various climate crises well prior to the cometary impact that ended the Cretaceous and the reign of dinosaurs in general.

Bechly raises two significant points. First is the uncertain origins of ichthyosaurs. As is typical with dramatic evolutionary transitions like that from land to water in whales, the timeline is compressed, since there are a lot of adaptations that are desirable for the new environment that might have been partially pre-figured, but get fleshed out extensively with the new ecological role and lifestyle. Selection is presumably intense and transitional fossils are hard to find. This was true for whales, though beautiful transitional fossils have been found more recently. And apparently this is true for the ichthyosaurs as well, where no transitional fossils have been found, yet. There is added drama stemming from the time of origin, which is right after the Permian extinction, perhaps the greatest known extinction event in the history of the biosphere. Radiations after significant extinction events tend to be rapid, with few transitional fossils, for the same reason of new niches opening and selection operating rapidly.

Ichthyosaur

Bechly and colleagues frequently make hay out of gaps in the fossil record, arguing that something (we decline to be more specific!) else needs to be invoked to explain such lack of evidence. It is a classic god of the gaps argument. But since the fossils are never out of sequence, and we are always looking at millions of years of time going by with even the slimmest layers of rock, this is hardly a compelling argument. One thing that we learned from Darwin's finches, and the whole argument around punctuated equilibrium, is that evolution is typically slow because selection is typically not directional but conservative. But when selection is directional, evolution by natural selection can be startlingly fast. This is an argument made very explicitly by Darwin through his lengthy discussions of domestic species, whose changes are, in geological terms, instant. 

But Bechly makes an additional interesting argument- that a specific hypothesis made about ichthyosaurs is a just-so story, a sort of hypothesis that evolutionary biologists are very prone to make. Quite a few fossils have been found of ichthyosaurs giving birth, and many of them show that the baby comes out not only live (not as an egg, as is usual with reptiles), but tail-first. Thus some scientists have made the argument that both are adaptations to aquatic birth, allowing the baby to be fully born before starting to breathe. Yet Bechly cites a more recent scientific review of the fossil record that observes that tail-first birth is far from universal, and does not follow any particular phylogenetic pattern, suggesting that it is far from necessary for aquatic birth, and thus is unlikely to be, to any significant extent, an adaptation.

Ha! Just another story of scientists making up fairy tales and passing them off as "science" and "evolutionary hypotheses", right?  

"Evolutionary biology again and again proves to be an enterprise in imaginative story-telling rather than hard science. But when intelligent design theorists question the Darwinist paradigm based on empirical data and a rational inference to the best explanation, they are accused of being science deniers. Which science?" ... "And we will not let Darwinists get away with a dishonest appeal to the progress of science when they simply rewrite their stories every time conflicting evidence can no longer be denied."

Well, that certainly is a damning indictment. Trial and sentencing to follow! But let's think a little more about what makes an explanation and a hypothesis, on the scientific, that is to say, empirical, level. Hypotheses are always speculative. That is the whole point. They try to connect observations with some rational or empirically supported underlying mechanism / process to account for (that is, explain) what is observed. Thus the idea that aquatic birth presents a problem for air-breathing animals represents a reasonable subject for a hypothesis. Whether headfirst or tailfirst, the baby needs to get to the surface post haste, as soon as its breathing reflex kicks in. The direction of birth doesn't seem to the uninitiated (and now, apparently, to experts with further data at hand) to make much difference. But thinking that it does is a reasonable hypothesis, based on obvious geometric arguments and biological assumptions: it is possible that the breathing reflex is tied to emergence of the head during birth, in which case coming out tailfirst might slightly delay the time between needing to breathe and being able to breathe.

This argument combines a lot of known factors- the geometry of birth, the necessity of breathing, the phenomenon of the breathing reflex initiating in all mammals very soon after birth, by mechanisms that doubtless are not entirely known, but at the same time clearly the subject of evolutionary tuning. And also the paleontological record. Good or bad, the hypothesis is based on empirical data. What characterizes science is that it follows a disciplined road from one empirically supported milestone to the next, using hypotheses about underlying mechanisms, whether visible or not, which abide by all the known/empirical mechanisms. Magic is only allowed if you know what is going on behind the curtain. Unknown mechanisms can be invoked, but then immediately become subjects of further investigation, not of protective adulation and blind worship.

In contrast, the intelligent design hypothesis, implicit here but clear enough, is singularly lacking in any data at all. It is not founded on anything other than the sentiment that what has clearly happened over the long course of the fossil record operates by unknown mechanisms, by god operating pervasively to carry out the entire program of biological evolution, not by natural selection (a visible and documented natural process) but by something else, which its proponents have never been able to demonstrate in the least degree, on short time scales or long. Faith does not, on its own, warrant novel empirical mechanisms, nor does skeptical disbelief warrant them. Nor does one poor, but properly founded, hypothesis that is later superseded by more careful analysis of the data impugn the process of science generally or the style of evolutionary thinking specifically.

Imagine, for example, if our justice system operated at this intellectual level. When investigating crimes, police could say that, if the causes were not immediately obvious, an unnamed intelligent designer was responsible, and leave it there. No cold cases, no presumption of usual natural causality, no dogged pursuit of "the truth" by telegenic detectives. Faith alone would furnish the knowledge that the author of all has (inscrutably) rendered "his" judgement. It would surely be a convenient out for an over-burdened and under-educated police force!

Evolution by natural selection requires a huge amount of extrapolation from what we know about short time scales and existing biology to the billions of years of life that preceded us. On the other hand, intelligent design requires extrapolation from nothing at all- from the incredibly persistent belief in god, religion, and the rest of the theological ball of wax, not one element of which has ever been pinned down to an empirical fact. Believers take the opposite view solely because religious propaganda has ceaselessly drilled the idea that god is real and "omnipotent" and all-good, and whatever else wonderful, as a matter of faith. With this kind of training, then yes, "intelligent" design makes all kinds of sense. Otherwise not. Charles Darwin's original hypothesis was so brilliant because it drew on known facts and mechanisms to account (with suitable imagination and extrapolation) for the heretofore mysterious history of biology, with its painfully slow yet inexorable evolution from one species to another, one epoch to another. Denying that one has that imagination is a statement about one's intelligence, no matter how it was designed.

  • Only god can give us virulent viruses.
  • The priest who knew it so well, long ago.
  • A wonderful Native American Film- Dance me outside.
  • With a wonderful soundtrack, including NDN Kars.
  • We need to come clean on Taiwan.
  • Appeasers, cranks, and fascist wannabes.
  • Vaccines for poor people are not profitable.
  • California is dumbing down math, and that will not help any demographic.

Saturday, September 30, 2023

Are we all the Same, or all Different?

Refining diversity.

There has been some confusion and convenient logic around diversity. Are we all the same? Conventional wisdom makes us legally the same, and the same in terms of rights, in an ever-expanding program of level playing fields- race, gender, gender preference, neurodiversity, etc. At the same time, conventional wisdom treasures diversity, inclusion, and difference. Educational conventional wisdom assumes all children are the same, and deserve the same investments and education, until some magic point when diversity flowers, and children pursue their individual dreams, applying to higher educational institutions, or not. But here again, selectiveness and merit are highly contested- should all ethnic groups be equally represented at universities, or are we diverse on that plane as well?

It is quite confusing, on the new political correctness program, to tell who is supposed to be the same and who different, and in what ways, and for what ends. Some acute social radar is called for to navigate this woke world, and one can sympathize, though not too much, with those who are sick of it and want to go back to simpler times of shameless, black-and-white competition.

The fundamental tension is that a society needs some degree of solidarity and cohesion to satisfy our social natures and to get anything complex done. At the same time, Darwinian and economic imperatives have us competing with each other at all levels- among nations, ethnicities, states, genders, families, work groups, individuals. We are wonderfully sensitive to infinitesimal differences, which form the soul of Darwinian selection. Woke efforts clearly try to separate differences that are essential and involuntary, (which should in principle be excluded from competition), from those that are not fixed, such as personal virtue and work ethic, thus forming the proper field of education and competition.

But that is awfully abstract. Reducing that vague principle to practice is highly fraught. Race, insofar as it can be defined at all, is clearly an essential trait. So race should not be a criterion for any competitive aspect of the society- job hunting, education, customer service. But what about "diversity" and what about affirmative action? Should the competition be weighted a little to make up for past wrongs? How about intelligence? Intelligence is heritable, but we can't call it essential, lest virtually every form of competition in our society be brought to a halt. Higher education and business, and the general business of life, are extremely competitive on the field of intelligence- who can con whom, who can come up with great ideas, write books, do the work, and manage others.

These impulses towards solidarity and competition define our fundamental political divides, with Republicans glorying in the unfairness of life, and the success of the rich. Democrats want everyone to get along, with care for the unfortunate and oppressed. Our social schizophrenia over identity and empathy is expressed in the crazy politics of today. And Republicans reflect contemporary identity politics as well, just in their twisted, white-centric way. We are coming apart socially, and losing key cooperative capacity, which puts our national project in jeopardy. We can grant that the narratives and archetypes that have glued the civic culture together have been fantasies- that everyone is equal, or that the founding fathers were geniuses who selflessly wrought the perfect union. But at the same time, the new mantras of diversity have dangerous aspects as well.


Each side, in archetypal terms, is right and each is an essential element in making society work. Neither side's utopia is either practical or desirable. The Democratic dream is for everyone to get plenty of public services and equal treatment at every possible nexus of life, with morally-informed regulation of every social and economic harm, and unions helping to run every workplace. In the end, there would be little room for economic activity at all- for the competition that undergirds innovation and productivity, and we would find ourselves poverty-stricken, which was what led other socialist/communist states to drastic solutions that were not socially progressive at all.

On the other hand is a capitalist utopia where the winners take not just hundreds of billions of dollars, but everything else, such as the freedom of workers to organize or resist, and political power as well. The US would turn into a moneyed class system, just like the old "nobility" of Europe, with serfs. It is the Nietzschean, Randian ideal of total competition, enabling winners to oppress everyone else in perpetuity, and, into the bargain, write themselves into the history books as gods.

These are not (and were not, historically) appetizing prospects, and we need the tension of mature and civil political debate between them to find a middle ground that is both fertile and humane. Nature is, as in so many other things, an excellent guide. Cooperation is a big theme in evolution, from the assembly of the eukaryotic cell from prokaryotic precursors, to its wondrous elaboration into multicellular bodies and later into societies such as our own and those of the insects. Cooperation is the way to great accomplishments. Yet competition is the baseline that is equally essential. Diversity, yes, but it is competition and selection among that diversity, together with cooperative enterprise, that turn the downward trajectory of entropy and decay (as dictated by physics and time) into flourishing progress.


  • Identity, essentialism, and postmodernism.
  • Family structure, ... or diversity?
  • Earth in the far future.
  • Electric or not, cars are still bad.
  • Our non-political and totally not corrupt supreme court.
  • The nightmare of building in California.

Saturday, September 9, 2023

Keeping Cellular Signals Straight

Cells often use the same signaling systems for different inputs and purposes. Scaffolds come to the rescue.

Eukaryotic cells are huge, at least compared with their progenitors, bacteria. Thanks to their mitochondria and other organizational apparatus, the typical eukaryotic cell has about 100,000 times the volume of a bacterium. These cells are virtual cities- partially organized by their membrane organelles, but there is much more going on, with tremendous complexity to manage. One issue that was puzzling over the decades was how signals were kept straight. Eukaryotic cells use a variety of signaling systems, prototypically starting with a receptor at the cell surface, linking to a kinase (or series of kinases) that then amplifies and broadcasts the signal inside the cell, ending up with the target phosphorylated proteins entering the nucleus and changing the transcription program of the cell.

While our genome does have roughly 500 kinases, and one to two thousand receptors, a few of them (especially some kinases and their partners, which form "intracellular signaling systems") tend to crop up frequently in different systems and cell types, like the MAP kinase cascade, associated with growth and stress responses, and the AKT kinase, associated with nutrient sensing and growth responses. Not only do many different receptors turn these cellular signaling hubs on, but their effects can often be different as well, even from unrelated signals hitting the same cell.

If all these proteins were diffusible all over the cell, such specificity of signaling would be impossible. But it turns out that they are usually tethered in particular ways, by organizational helpers called scaffold proteins. These scaffolds may localize the signaling to some small volume within the larger cell, such as a membrane "raft" domain. They may also bind multiple actors of the same signaling cascade, bringing several proteins (kinases and targets) together to make signaling both efficient and (sterically) insulated from outside interference. And, as a recent paper shows, they can also tweak their binding targets allosterically to insulate them from outside interference.

What is allostery vs stery? If one protein (A) binds another (B) such that a phosphorylation or other site is physically hidden from other proteins, such as a kinase (C) that would activate it, that site is said to be sterically hidden- that is, by geometry alone. On the other hand, if that site remains free and accessible, but the binding of A re-arranges protein B such that it no longer binds C very well, blocking the kinase event despite the site of phosphorylation being available, then A has allosterically regulated B. It has altered the shape of B in some subtle way that alters its behavior. While steric effects are dominant and occur everywhere in protein interactions and regulation, allostery comes up pretty frequently as well, proteins being very flexible gymnasts. 

GSK3 is part of insulin signaling. It is turned off by phosphorylation, which affects a large number of downstream functions, such as turning on glycogen synthase.

The current case turns on the kinase GSK3, which, according to Wikipedia... "has been identified as a protein kinase for over 100 different proteins in a variety of different pathways. ... GSK-3 has been the subject of much research since it has been implicated in ... diseases, including type 2 diabetes, Alzheimer's disease, inflammation, cancer, addiction and bipolar disorder." GSK3 was named for its kinase activity targeting glycogen synthase, which inactivates the synthase, thus shutting down production of glycogen, which is a way to store sugar for later use. Connected with this homeostatic role, the hormone insulin turns GSK3 off via phosphorylation, through a pathway downstream of the membrane-resident insulin receptor called the PI3 kinase / protein kinase B pathway. Insulin thus indirectly increases glycogen synthesis, mopping up excess blood sugar. The circuit reads: insulin --> kinases --| GSK3 --| glycogen synthase --> more glycogen.
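Read as pure on/off logic, the double inhibition in that circuit works out as below. This is just a toy restatement of the arrow diagram in Python, ignoring kinetics and partial phosphorylation, and the function name is mine, not anything from the paper.

```python
# Toy boolean reading of: insulin --> kinases --| GSK3 --| glycogen synthase --> more glycogen
# ("-->" activates, "--|" inhibits)

def glycogen_synthesis(insulin_present: bool) -> bool:
    upstream_kinases_active = insulin_present      # insulin turns the upstream kinases on
    gsk3_active = not upstream_kinases_active      # those kinases phosphorylate and inhibit GSK3
    synthase_active = not gsk3_active              # GSK3, when active, phosphorylates and inhibits the synthase
    return synthase_active                         # active synthase means glycogen gets made

print(glycogen_synthesis(True))    # insulin present  -> True  (glycogen synthesis on)
print(glycogen_synthesis(False))   # no insulin       -> False (GSK3 keeps the synthase off)
```

The two inhibitions cancel, which is why insulin ends up promoting glycogen synthesis.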

GSK3 also functions in this totally different pathway, downstream of WNT and Frizzled. Here, GSK3 phosphorylates beta-catenin and turns it off, most of the time. WNT (like insulin) turns GSK3 off, which allows beta-catenin to accumulate and do its gene regulation in the nucleus. Cross-talk between these pathways would be very inopportune, and is prevented by the various functions of Axin, a scaffold protein. 


Another well-studied role of GSK3 is in a developmental signal, called WNT, which promotes developmental decisions of cells during embryogenesis, wound repair, and cancer, as well as cell migration, proliferation, etc. GSK3 is central here for the phosphorylation of beta-catenin, which is a transcription regulator, among other things, and when active migrates to the nucleus to turn its target genes on. But when phosphorylated, beta-catenin is diverted to the proteasome and destroyed, instead. This is the usual state of affairs, with WNT inactive, GSK3 active, and beta-catenin getting constantly made and then immediately disposed of. This complex is called a "destruction" complex. But an incoming WNT signal, typically from neighboring cells carrying out some developmental program, alters the activity of a key scaffold in this pathway, Axin, which is destroyed and replaced by Dishevelled, turning GSK3 off.

How is GSK3 kept on all the time for the developmental purposes of the WNT pathway, while allowing cells to still be responsive to insulin and other signals that also use GSK3 for their intracellular transmission? The current authors found that the Axin scaffold has a special property of allosterically preventing the phosphorylation of its bound GSK3 by other upstream signaling systems. They even re-engineered Axin to an extremely minimal 26 amino acid portion that binds GSK3, and this still performed the inhibition, showing that the binding doesn't sterically block phosphorylation by insulin signals, but blocks allosterically. 

That is great, but what about the downstream connections? Keeping GSK3 on is great, but doesn't that make a mess of the other pathways it participates in? This is where scaffolds have a second job, which is to bring upstream and downstream components together, to keep the whole signal flow isolated. Axin also binds beta-catenin, the GSK3 substrate in WNT signaling, keeping everything segregated and straight. 

Scaffold proteins may not "do" anything, as enzymes or signaling proteins in their own right, but they have critical functions as "conveners" of specific, channeled communication pathways, and allow the re-use of powerful signaling modules, over evolutionary time, in new circuits and functions, even in the same cells.


  • The oceans need more help, less talk.
  • Is Trump your church?
  • Can Poland make it through?

Sunday, August 27, 2023

Better Red Than Dead

Some cyanobacteria strain for photosynthetic efficiency at the red end of the light spectrum.

The plant world is green around us- why green, and not some other color, like, say, black? That plants are green means that they are letting green light through (or out by reflection), giving up some energy. Chlorophyll absorbs both red light and blue light, but not green, though all are near the peak of solar output. Some accessory pigments within the light-gathering antenna complexes can extend the range of wavelengths absorbed, but clearly a fair amount of green light gets through. A recent theory suggests that this use of two separated bands of light is an optimal solution to stabilize power output. At any rate, it is not just the green light- the extra energy of the blue light is also thrown away as heat- its excitation is allowed to decay to the red level of excitation, within the antenna complex of chlorophyll molecules, since the only excited state used in photosynthesis is that at ~690 nm. This forms a uniform common denominator for all incoming light energy that then induces charge separation at the oxygen reaction center (stripping water of electrons and protons), and sends newly energized electrons out to quinone molecules and on into the biosynthetic apparatus.

The solar output, which plants have to work with.

Fine. But what if you live deeper in the water, or in the veins of a rock, or in a mossy, shady nook? What if all you have access to is deeper red light, like at 720 nm, with lower energy than the standard input? In that case, you might want to re-engineer your version of photosynthesis to get by with slightly lower-energy light, while getting the same end results of oxygen splitting and carbon fixation. A few cyanobacteria (the same bacterial lineage that pioneered chlorophyll and the standard photosynthesis we know so well) have done just that, and a recent paper discusses the tradeoffs involved, which are of two different types.

The chlorophylls with their respective absorption spectra and partial structures. Redder light is toward the right. Chlorophyll a is the one used most widely in plants and cyanobacteria. Chlorophyll b is also widely used in these organisms as an additional antenna pigment that extends the range of absorbed light. Chlorophylls d and f are red-shifted and used in the specialized species discussed here.

One of the species, Chroococcidiopsis thermalis, is able to switch states, from bright/white light absorption with a normal array of pigments, to a second state where it expresses chlorophylls d and f, which absorb light at the lower energy of 720 nm, in the far red. This "facultative" ability means that it can optimize the low-light state without much regard to efficiency or photo-damage protection, which it can address by switching back to the high energy wavelength pigment system. The other species is Acaryochloris marina, which has no bright light system, but only chlorophyll d. This bacterium lives inside the cells of bigger red algae, so it has a relatively stable, if shaded, environment to deal with.

What these and prior researchers found was that the ultimate quantum of energy used to split water to O2, and to send energized electrons off to photosystem I and carbon compound synthesis, is the same as in any other chlorophyll a-using system. The energetics of those parts of the system apparently cannot be changed. The shortfall needs to be made up in the front end, where there is a sharp drop between the energy absorbed- 1.82 electron volts (eV) from photons at 680 nm, but only 1.72 eV from far-red photons- and the energy needed at the next points in the electron transport chains (about 1.0 eV). This difference plays a large role in directing those electrons to where the plant wants them to go- down the gradient to the oxygen-evolving center, and to the quinones that ferry energized electrons to other synthetic centers. While it seems like waste, a smaller difference allows the energized electrons to go astray, forming chemical radicals and other products dangerous to the cell.
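Those photon energies are easy to check with E = hc/λ; this snippet is plain arithmetic, not anything from the paper, and it reproduces the 1.82 eV and 1.72 eV figures quoted above.

```python
# Photon energy in electron volts from wavelength in nanometers, E = hc/lambda.

H = 6.62607015e-34      # Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s
EV = 1.602176634e-19    # joules per electron volt

def photon_energy_ev(wavelength_nm: float) -> float:
    return H * C / (wavelength_nm * 1e-9) / EV

for nm in (680, 720):
    print(f"{nm} nm -> {photon_energy_ev(nm):.2f} eV")
# 680 nm -> 1.82 eV   (standard P680 excitation)
# 720 nm -> 1.72 eV   (far-red photons, ~0.1 eV less to work with)
```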

Summary diagram, described in text. Energy levels are shown for photon excitation of chlorophyll (Chl, left axis), and for energy transitions through the reaction center (Phe- pheophytin) and the quinones (Q) that conduct energized electrons out to the other photosynthetic center and biosynthesis. On top are shown the respective system types- normal chlorophyll a from white-light adapted C. thermalis, chlorophyll d in A. marina, and chlorophyll f in red-adapted C. thermalis.

What these researchers summarize in the end is that both of the red light-using cyanobacteria squeeze this middle zone of the power gradient, in different ways. There is an intermediate station on the trail from photon-induced electron excitation to the outgoing quinone (+ electron) and O2 that is the target of all the antenna chlorophylls- the photosynthetic reaction center. This typically contains chlorophyll a (called P680) and pheophytin, a chlorophyll-like molecule. It is at this chlorophyll a molecule that the key step takes place- the excitation energy (an electron bumped to a higher energy level) conducted in from the antenna of ~30 other chlorophylls pops out its excited electron, which flits over to the pheophytin, and thence to the carrier quinone molecules and photosystem I. Simultaneously, an electron comes in to replace it from the oxygen-evolving center, which receives alternate units of photon energy, also from the chlorophyll/pheophytin reaction center. The figure above describes these steps in energetic terms, from the original excited state, to the pheophytin (Phe-, loss of 0.16 eV), to the exiting quinone state (Qa-, loss of 0.385 eV). In the organisms discussed here, chlorophyll d replaces chlorophyll a at this center, and since its structure and absorbance are different, its energized electron is about 0.1 eV less energetic.

In A. marina (center in the diagram above), the energy gap between the pheophytin and the quinone is squeezed, losing about 0.06 eV. This has the effect of losing some of the downward "slope" on the energy landscape that prevents side reactions. Since A. marina has no choice but to use this lower energy system, it needs all the efficiency it can get, in terms of the transfer from chlorophyll to pheophytin. But it then sacrifices some driving force from the next step to the quinone. This has the ultimate effect of raising damage levels and side reactions when faced with more intense light. However, given its typically stable and symbiotic lifestyle, that is a reasonable tradeoff.

On the other hand, C. thermalis (right-most in the diagram above) uses its chlorophyll d/f system on an optional basis when the light is bad. So it can give up some efficiency (in driving pheophytin electron acceptance) for better damage control. It has dramatically squeezed the gap between chlorophyll and pheophytin, from 0.16 eV to 0.08 eV, while keeping the main pheophytin-to-quinone gap unchanged. This has the effect of keeping the pumping of electrons out to the quinones in good condition, with low side-effect damage, but restricts overall efficiency, slowing the rate of excitation transfer to pheophytin, which affects not only the quinone-mediated path of energy to photosystem I, but also the path to the oxygen evolving center. The authors mention that this cyanobacterium recovers some efficiency by making extra light-harvesting pigments that provide more inputs, under these low / far-red light conditions.

The methods used to study all this were mostly based on fluorescence, which emerges from the photosynthetic system when electrons fall back from their excited states. A variety of inhibitors have been developed to prevent electron transfer, such as to the quinones, which bottles up the system and causes increased fluorescence and thermoluminescence, whose wavelengths reveal the energy gaps causing them. Thus it is natural, though also impressive, that light provides such an incisive and precise tool to study this light-driven system. There has been much talk that these far red-adapted photosynthetic organisms validate the possibility of life around dim stars, including red dwarfs. But obviously these particular systems developed evolutionarily out of the dominant chlorophyll a-based system, so they wouldn't provide a direct path. There are other chlorophyll systems in bacteria, however, and systems that predate the use of oxygen as the electron source, so there are doubtless many ways to skin this cat.


  • Maybe humiliating Russia would not be such a bad thing.
  • Republicans might benefit from reading the Federalist Papers.
  • Fanny Willis schools Meadows on the Hatch act.
  • "The top 1% of households are responsible for more emissions (15-17%) than the lower earning half of American households put together (14% of national emissions)."

Sunday, July 30, 2023

To Sleep- Perchance to Inactivate OX2R

The perils of developing sleeping, or anti-sleeping, drugs.

Sleep- the elixir of rest and repose. While we know of many good things that happen during sleep- the consolidation of memories, the cardiovascular rest, the hormonal and immune resetting, the slow waves and glymphatic cleansing of the brain- we don't know yet why it is absolutely essential, and lethal if repeatedly denied. Civilized life tends to damage our sleep habits, given artificial light and the endless distractions we have devised, leading to chronic sleeplessness and a spiral of narcotic drug consumption. Some conditions and mutations, like narcolepsy, have offered clues about how sleep is regulated, which has led to new treatments, though to be honest, good sleep hygiene is by far the best remedy.

Genetic narcolepsy was found to be due to mutations in the second receptor of the hormone orexin (OX2R), or to auto-immune conditions that kill off a specialized set of neurons in the hypothalamus- a basal part of the brain that sits just over the brain stem. This region normally has ~50,000 neurons that secrete orexin (which also comes in two kinds, 1 and 2), and project to areas all over the brain, especially basal areas like the basal forebrain and amygdala, to regulate not just sleep but feeding, mood, reward, memory, and learning. Like any hormone receptor, the orexin receptors can be approached in two ways- by turning them on (agonist) or by turning them off (antagonist). Antagonist drugs were developed which turn off both orexin receptors, and thus promote sleep. The first was named suvorexant, using the "orex" and "ant" lexical elements to mark its functions, which is now standard for generic drug names.

This drug is moderately effective, and is a true sleep enhancer, promoting falling asleep, restful sleep, and length of sleep, unlike some other sleep aids. Suvorexant antagonizes both receptors, but the researchers knew that only the deletion of OX2R, not OX1R (in dogs, mice, and other animals), generates narcolepsy, so they developed a drug more specific to OX2R only. But the result was that it was less effective. It turned out that binding and turning off OX1R was helpful to sleep promotion, and there were no particularly bad side effects from binding both receptors, despite the wide-ranging activities they appear to have. So while the trial of Merck's MK-1064 was successful, it was not better than their existing two-receptor drug, so its development was shelved. And we learned something intriguing about this system. While all animals have some kind of orexin, only mammals have the second orexin family member and receptor, suggesting that some interesting, but not complete, bifurcation happened in the functions of this system in evolution.

What got me interested in this topic was a brief article from yet another drug company, Takeda, which was testing an agonist of the orexin receptors in an effort to treat narcolepsy. They created TAK-994, which binds to OX2R specifically, and showed a lot of promise in animal trials. It is an orally taken pill, in contrast to the existing treatment, danavorexton, which must be injected. In the human trial, it was remarkably effective, virtually eliminating cataplectic / narcoleptic episodes. But there was a problem- it caused enough liver toxicity that the trial was stopped and the drug shelved. Presumably, this company will try again, making variants of this compound that retain affinity and activity but not the toxicity.

This brings up an underappreciated peril in drug design- where drugs end up. Drugs don't just go into our systems, hopefully slipping through the incredibly difficult gauntlet of our digestive system. They all need to go somewhere after they have done their jobs as well. Some drugs are hydrophilic enough, and generally inert enough, that they partition into the urine by dilution and don't have any further metabolic events. Most, however, are recognized by our internal detoxification systems as foreign (that is, hydrophobic, but not recognizable as the fats/lipids that are usual nutrients), and are derivatized by liver enzymes and sent out in the bile.

Structure of TAK-994, which treats narcolepsy, but at the cost of liver dysfunction.

As you can see from the chemical structure above, TAK-994 is not a normal compound that might be encountered in the body, or as food. The amino sulfate is quite unusual, and the fluorines sprinkled about are totally unnatural. This would be a red flag substance, like the various PFAS materials we hear about in the news. The rings and fluorines create a relatively hydrophobic substance, which would need to be modified so that it can be routed out of the body. That is what a key enzyme of the liver, CYP3A4, does. It (and the many family members that have arisen over evolutionary time) oxidizes all manner of foreign hydrophobic compounds, using a heme cofactor to handle the oxygen. It can add -OH groups (hydroxylation), break open double bonds (epoxidation), and break open phenol ring structures (aromatic oxidation).

But then what? Evolution has countered most of the toxic substances we encounter in nature with appropriate enzymes and routes out of the body. But these novel compounds we are making with modern chemistry are something else altogether. Some drugs are turned on by this process, waiting till they get to the liver to attain their active form. Others, apparently such as this one, are made into toxic compounds (as yet unknown) by this process, such that the liver is damaged. That is why animal studies and safety trials are so important. This drug binds to its target receptor, and does what it is supposed to do, but that isn't enough to be a good drug.

 

Sunday, July 23, 2023

Many Ways There are to Read a Genome

New methods to unravel the code of transcriptional regulators.

When we deciphered the human genome, we came up with three billion letters of its linear code- nice and tidy. But that is not how it is read inside our cells. Sure, it is replicated linearly, but the DNA polymerases don't care about the sequence- they are not "reading" the book, they are merely copying machines trying to get it to the next generation with as few errors as possible. The book is read in an entirely different way, by a herd of proteins that recognize specific sequences of the DNA- the transcription regulators (also commonly called transcription factors [TF], in the classic scientific sense of some "factor" that one is looking for). These regulators- and there are, by one recent estimate, 1,639 of them encoded in the human genome- constitute an enormously complex network of proteins and RNAs that regulate each other, and regulate "downstream" genes that encode everything else in the cell. They are made in various proportions to specify each cell type, to conduct every step in development, and to respond to every eventuality that evolution has met and mastered over the eons.

Loops occur in the DNA between sites of regulator binding, in order to turn genes on (enhancer, E, and transcription regulator/factor, TF).

Once sufficient transcription regulators bind to a given gene, it assembles a transcription complex at its start site, including the RNA polymerase that then generates an RNA copy that can float off to be made into a protein (such as a transcription regulator), or perhaps function in its RNA form as part of the zoo of rRNA, tRNA, miRNA, piRNA, and many more that also help run the cell. Some regulators can repress transcription, and many cooperate with each other. There are also diverse regions of control for any given target gene in its nearby non-coding DNA- cassettes (called enhancers) that can be bound by different regulators and thus activated at different stages for different reasons.

These binding sites in the DNA that transcription regulators bind to are typically quite small. A classic regulator, SP1 (itself 785 amino acids long and bearing three consecutive DNA binding motifs coordinated by zinc ions), binds to a sequence resembling (G/T)GGGCGG(G/A)(G/A)(C/T). So only ten bases are specified at all, and four of those positions are degenerate. By chance, a genome of three billion bases will have such a sequence about 45,769 times. So this kind of binding is not very strictly specified, and such sites tend to appear and disappear frequently in evolution. That is one of the big secrets of evolution- while some changes are hard, others are easy, and there is constant variation and selection going on in the regulatory regions of genes, refining and defining where / when they are expressed.
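That expected-occurrence figure can be reproduced with a back-of-the-envelope calculation, assuming equal base frequencies and scanning a single strand of a three-billion-base genome. This is a rough sketch, not necessarily the exact method behind the number quoted above.

```python
# Expected chance occurrences of the SP1-like motif (G/T)GGGCGG(G/A)(G/A)(C/T)
# in a 3-billion-base genome, assuming all four bases are equally likely.

GENOME_SIZE = 3_000_000_000
motif = ["GT", "G", "G", "G", "C", "G", "G", "GA", "GA", "CT"]  # allowed bases per position

p_match = 1.0
for allowed in motif:
    p_match *= len(allowed) / 4.0       # chance a random base satisfies this position

expected_hits = GENOME_SIZE * p_match
print(f"match probability per position: {p_match:.2e}")   # ~1.5e-05
print(f"expected hits: {expected_hits:,.0f}")              # ~45,800, in line with the figure above
```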

Anyhow, researchers naturally have the question- what is the regulatory landscape of a given gene under some conditions of interest, or of an entire genome? What regulators bind, and which ones are most important? Can we understand, given our technical means, what is going on in a cell from our knowledge of transcription regulators? Can we read the genome like the cell itself does? Well the answer to that is, obviously no and not yet. But there are some remarkable technical capabilities. For example, for any given regulator, scientists can determine where it binds all over the genome in any given cell, by chemical crosslinking methods. The prediction of binding sites for all known regulators has been a long-standing hobby as well, though given the sparseness of this code and the lability of the proteins/sites, one that gives only statistical, which is to say approximate, results. Also, scientists can determine across whole genomes where genes are "open" and active, vs where they are closed. Chromatin (DNA bound with histones in the nucleus) tends to be closed up on repressed and inactive genes, while transcription regulators start their work by opening chromatin to make it accessible to other regulators, on active genes.

This last method offers the prospect of truly global analysis, and was the focus of a recent paper. The idea was to merge a detailed library of predicted binding sites for all known regulators all over the genome with experimental mapping of open chromatin regions in a particular cell or tissue of interest. And then combine all that with existing knowledge about what each of the target genes near the predicted binding sites does. The researchers clustered the putative regulators binding across all open regions by this functional gene annotation to come up with statistically over-represented transcription regulators and functions. This is part of a movement across bioinformatics to fold in more sources of data to improve predictions when individual methods each produce sketchy, unsatisfying results.

In this case, mapping open chromatin by itself is not very helpful, but becomes much more helpful when combined with assessments of which genes these open regions are close to, and what those genes do. This kind of analysis can quickly determine whether you are looking at an immune cell or a neuron, as the open chromatin is a snapshot of all the active genes at a particular moment. In this recent work, the analysis was extended to say that if some regulator is consistently bound near genes participating in some key cellular function, then we can surmise that that regulator may be causal for that cell type, or at least part of the program specific to that cell. The point for these researchers is that this multi-source analysis performs better at finding cell-type specific, and function-specific, regulators than the more common approach of just adding up the prevalence of regulators occupying open chromatin all over a given genome, regardless of the local gene functions. That kind of approach tends to yield common regulators, rather than cell-type specific ones.
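As a sketch of the general kind of over-representation statistic such an approach relies on (not the WhichTF implementation itself, and with purely hypothetical numbers), a hypergeometric test asks whether a regulator's predicted sites fall near genes of a given functional set more often than chance would predict:

```python
# Generic over-representation test: of n_set open regions assigned to a functional
# gene set, k carry a predicted site for some regulator; genome-wide, K of N open
# regions carry that site. Is k surprisingly high?

from scipy.stats import hypergeom

def enrichment_pvalue(k_hits_in_set, n_set, K_hits_total, N_total):
    # One-sided P(X >= k) for X ~ Hypergeom(N_total, K_hits_total, n_set)
    return hypergeom.sf(k_hits_in_set - 1, N_total, K_hits_total, n_set)

# Hypothetical illustration: 40 of 200 regions near "immune response" genes have the
# site, versus 2,000 of 50,000 open regions overall (expected ~8 by chance).
p = enrichment_pvalue(k_hits_in_set=40, n_set=200, K_hits_total=2000, N_total=50000)
print(f"one-sided enrichment p-value: {p:.2e}")
```

Repeating such a test for every regulator and every functional gene set, with appropriate multiple-testing correction, is the flavor of analysis that yields the "over-represented regulators and functions" described above.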

To validate, they do rather half-hearted comparisons with other pre-existing techniques, without blinding, and with validation of only their own results. So it is hardly a fair comparison. They look at the condition systemic lupus (SLE), and find different predictions coming from their current technique (called WhichTF) vs one prior method (MEME-ChIP).  MEME-ChIP just finds predicted regulator binding sites for genomic regions (i.e. open chromatin regions) given by the experimenter, and will do a statistical analysis for prevalence, regardless of the functions of either the regulator or the genes it binds to. So you get absolute prevalence of each regulator in open (active) regions vs the genome as a whole. 

Different regulators are identified from the same data by different statistical methods. But both sets are relevant.


What to make of these results? The MEME-ChIP method finds regulators like SP1, SP2, SP4, and ZFX/Y. SP1 et al. are very common regulators, but that doesn't mean they are unimportant, or not involved in disease processes. SP1 has been observed as likely to be involved in autoimmune encephalitis in mice, a model of multiple sclerosis, and naturally not so far from lupus in pathology. ZFX is also a prominent regulator in the progenitor cells of the immune system. So while these authors think little of the competing methods, those methods seem to do a very good job of identifying significant regulators, as do their own methods. 

There is another problem with the authors' WhichTF method, which is that gene annotation is in its infancy. Users are unlikely to find new functions using existing annotations. Many genes have no known function yet, and new functions are being found all the time for those already assigned functions. So if one's goal is classification of a cell or of transcription regulators according to existing schemes, this method is fine. But if one's research goal is to find new cell types, or new processes, this method will channel you into existing ones instead.

This kind of statistical refinement is unlikely to give us what we seek in any case- a strong predictive model of how the human genome is read and activated by the herd of gene regulators. For that, we will need new methods for specific interaction detection, with a better appreciation for complexes between different regulators (which will be afforded by the new AI-driven structural techniques), and more appreciation for the many other operators on chromatin, like the various histone modifying enzymes that generate another whole code of locks and keys that do the detailed regulation of chromatin accessibility. Reading the genome is likely to be a somewhat stochastic process, but we have not yet arrived at the right level of detail, or the right statistics, to do it justice.


  • Unconscious messaging and control. How the dark side operates.
  • Solzhenitsyn on evil.
  • Come watch a little Russian TV.
  • "Ruthless beekeeping practices"
  • The medical literature is a disaster.

Saturday, May 27, 2023

Where Does Oxygen Come From?

Boring into the photosynthetic reaction center of plants, where O2 is synthesized.

Oxygen might be important to us, but it is really just a waste product. Photosynthetic bacteria found that the crucial organic molecules of life that they were making out of CO2 and storing in the form of reduced compounds (like fats and sugars) had to get those reducing units (i.e. electrons) from somewhere. And water stepped up as a likely candidate, with its abundance and simplicity. After you take four electrons away from two water molecules, you are left with four protons and one molecule of oxygen, i.e. O2. The protons are useful to fuel the proton-motive force system across the photosynthetic membrane, making ATP. But what to do with the oxygen? It just bubbles away, but can also be used later in metabolism to burn up those high-energy molecules again, if you have evolved aerobic metabolism.
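Written out explicitly, the bookkeeping described above is the water-splitting half-reaction:

```latex
2\,\mathrm{H_2O} \;\longrightarrow\; \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^-
```

The four protons feed the proton-motive force and ATP synthesis, while the four electrons go on to refill the holes left in the light-activated reaction center.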

On the early earth, reductants like reduced forms of iron and sulfur were pretty common, so they were the original sources of electrons for all metabolism. Indeed, most theories of the origin of life place it in dynamic rocky redox environments like hydrothermal vents that had such conducive chemistry. But these compounds are not quite common enough for universal photosynthesis. For example, a photosynthetic bacterium floating at the top of the ocean would like to continue basking in the sun and metabolizing, even if the water around it is relatively clear of reduced iron, perhaps because of competition from its colleagues. What to do? The cyanobacteria came up with an amazing solution- split water!

A general overview of plant and cyanobacterial photosystems, comprising the first (PSII), where the first light quantum hits and water is split, an intervening electron transport chain where energy is harvested, and the second (PSI), where a second light quantum hits, more energy is harvested, and the electron ends up added to NADP. From the original water molecules, protons are used to power the membrane proton-motive force and ATP synthesis, while the electrons are used to reduce CO2 and create organic chemicals.

A schematic of the P680 center of photosystem II. Green chlorophylls are at the center, with magnesium atoms (yellow). Light induces electron movement as denoted by the red arrows, out of the chlorophyll center and onwards to other cytochrome molecules. Note that the electrons originate at the bottom, out of the oxygen evolving complex, or OEC (purple), and are transferred via an aromatic tyrosine (TyrZ) side chain coordinating with a nearby histidine (H189) protein side chain.

This is not very easy, however, since oxygen is highly, even notoriously "electronegative". That is, it likes and keeps its electrons. It takes a super-oxidant to strip those electrons off. Cyanobacteria came up with what is now called photosystem II (that is, it was discovered after photosystem I), which collects light through a large quantum antenna of chlorophyll molecules, ending up at a special pair of chlorophyll molecules called P680. These collect the photon, and in response bump an electron up in energy and out to an electron chain that courses through the rest of the photosynthetic system, including photosystem I. At this point, P680 is hungry for an electron, and indeed has the extreme oxidation potential needed to take electrons from oxygen. And one is conducted in from the oxygen evolving complex (OEC), sitting nearby.

A schematic illustrating both the evolutionary convergence that put both photosystems (types I and II) into one organism (cyanobacteria, which later became plant chloroplasts), and the energy levels acquired by the main actors in the photosynthetic process, quoted in electron volts. At the very bottom (center) is a brief downward slide as water is split by the pulling force of the super-oxidation state of light-activated P680. After the electrons are light-excited, they drop down in orderly fashion through a series of electron chain transits to various cytochromes, quinones, ferredoxins, and other carriers that generate either protons or chemical reducing power as they go along. Note how the depth of the water-splitting oxidation state is unique among photosynthetic systems.

A recent paper resolves the long-standing problem of how exactly water is oxidized by cyanobacteria and plants at the OEC, at the very last step before oxygen release. This center is a very strained cubic metal complex of one calcium and four manganese atoms, coordinated by oxygen atoms. The overall process is that two water molecules come in, four protons and four electrons are stripped off, and the remaining oxygens combine to form O2. This is, again, part of the grand process of metabolism, whose point is to add those electrons and protons to CO2, making the organic molecules of life, generally characterized as (-CH2-), such as fats, sugars, etc., which can be burned later back into CO2. Metals are common throughout organic chemistry as catalysts, because they have the wonderful property of de-localizing electrons and allowing multiple oxidation states (the number of extra or missing electrons), unlike the more sparse and tightly-held states of the smaller elements. So they are used in many redox cofactors and enzymes to facilitate electron movement, such as in chlorophyll itself.
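For completeness, the complementary reduction on the acceptor side, which consumes those four electrons and protons to make carbon at the carbohydrate level of reduction, can be summarized as follows (a standard textbook balance, not a result of this paper):

```latex
% Carbon fixation at the carbohydrate (CH2O) level of reduction:
\mathrm{CO_2} + 4\,\mathrm{H^+} + 4\,e^- \;\longrightarrow\; (\mathrm{CH_2O}) + \mathrm{H_2O}
```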


The authors provide a schematic of the manganese-calcium OEC reaction center. The transferring tyrosine is at top, calcium is in fuchsia/violet, the manganese atoms are in purple, and the oxygens are in red. Arrows point to the oxygens destined to bond to each other and "evolve" away as O2. Note how one of these (O6) is only singly-coordinated and is sort of awkwardly wedged into the cube. Note also how the bond lengths to calcium are all longer than those to manganese, further straining the cube. These strains help to encourage activation and expulsion of the target oxygens.

Here, in the oxygen evolving complex, the manganese atoms are coordinated all around with oxygens, which presents the question- which of these oxygens are destined to become O2, and how does the process happen? These researchers didn't use complicated femtosecond X-ray systems or synchrotrons (though they draw on the structural work of those who did), but room-temperature FTIR, which is infrared spectroscopy highly sensitive to organic chemical dynamics. Spinach leaf chloroplasts were put through an hour of dark adaptation (which sets the OEC cycle to state S1), then hit with flashes of laser light to advance the position of the oxygen evolving cycle, since each flash (5 nanoseconds) induces one electron ejection by P680, and one electron transfer out of the OEC. Thus the experimenters could control the progression of the whole cycle, one step at a time, and then take extremely close FTIR measurements of the complexes as they do their thing in response to each single electron ejection. Some of the processes they observed were very fast (20 nanoseconds), but others were pretty slow, up to 1.5 milliseconds for the S4 state to eject the final O2 and reset to the S0 state with new water molecules. They then supplement their spectroscopy with the structural work from others and with computer dynamics simulations of the core process to come up with a full mechanism.
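As a toy illustration (not part of the paper's methods), here is a minimal Python sketch of the flash-advanced S-state cycle described above, assuming perfectly saturating flashes with no misses or double hits. Starting from dark-adapted S1, it reproduces the classic period-four pattern in which O2 is released on the third flash and every fourth flash thereafter:

```python
# Minimal sketch of the Kok/S-state cycle: dark-adapted centers start in S1,
# each flash extracts one electron and advances the state by one, and O2 is
# released when a center reaches S4 and resets to S0 with fresh waters.

def run_flashes(n_flashes, start_state=1):
    """Advance one OEC through n_flashes; return the flash numbers that release O2."""
    state = start_state
    o2_flashes = []
    for flash in range(1, n_flashes + 1):
        state += 1                  # one electron extracted per flash
        if state == 4:              # S4 ejects O2 and resets to S0
            o2_flashes.append(flash)
            state = 0
    return o2_flashes

print(run_flashes(12))   # -> [3, 7, 11]: the period-four oxygen oscillation
```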


A schematic of the steps of oxygen evolution out of the manganese core complex, from states S0 to S4. Note the highly diverse times that elapse at the various steps, noted in nanoseconds, microseconds, or milliseconds. This is discussed further in the text.


Other workers have provided structural perspectives on this question, showing that the cubic metal structure is a bit weirder than expected. An extra oxygen (numbered as #6) wedges its way into the cube, making the already strained structure (which accommodates a calcium and a dangling extra manganese atom) highly stressed. This is a complicated story, so several figures are provided here to give various perspectives. The sequence of events is that first (S0), two waters enter the reaction center after the prior O2 molecule has left. Water has a mix of acid (H+) and base (OH-) ionic forms, so it is easy to bring in the hydroxyl form instead of complete water, with the matching protons quickly entering the proton pool for ATP production. Then another proton quickly leaves as well, so the waters have now become two oxygens, one hydrogen, and four electrons (S0). Two of the coordinated manganese atoms go from their prior +4, +4 oxidation state to +3 and +2, acting as electron buffers.

The first two electrons are pulled out rapidly, via the nearby tyrosine ring, and off to the P680 center (ending at S2, with Mn 3+ and Mn 4+). But the next steps are much slower, extricating the last two electrons from the oxygens and inducing them to bond with each other. With state S3 and one more electron removed, both manganese atoms are back to the 4+ state. In the last step, one last proton leaves and one last electron is extracted over to the tyrosine oxygen, leaving oxygen O6 so bereft as to be in a radical state, which allows it to bow over to oxygen O5 and bond with it, making O2. The metal complex has nicely buffered the oxidation states to allow these extractions to go much more easily and in a more coordinated fashion than can happen in free solution.

The authors provide a set of snapshots of their infrared spectroscopy-supported simulations (done with chemical and quantum fidelity) of the final steps, where the oxygens, in the bottom panel, bond together at center. Note how the atomic positions and hydrogen attachments also change subtly as the sequence progresses. Here the manganese atoms are salmon, oxygen red, calcium yellow, hydrogen white, and a chloride ion is green.

This closely optimized and efficient reaction system is not just a wonder of biology and of earth history, but an object lesson in chemical technology, since photolysis of water is a very relevant dream for a sustainable energy future- to efficiently provide hydrogen as a fuel. Currently, using solar power to run water electrolyzers is not very efficient (roughly 20% for the solar cells times 70% for electrolysis, or 14% overall). Work is ongoing to design direct light-to-hydrogen photolysis, but so far it requires high heat and noxious chemicals. Life has all this worked out at the nano scale already, however, so there must be hope for better methods.
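To make that efficiency arithmetic explicit (using the rough figures quoted above, which are ballpark assumptions, not measurements), the overall solar-to-hydrogen yield is simply the product of the stage efficiencies:

```python
# Back-of-the-envelope check of the solar + electrolysis efficiency quoted above.
solar_to_electric = 0.20   # assumed photovoltaic efficiency
electrolysis      = 0.70   # assumed electrolyzer efficiency
overall = solar_to_electric * electrolysis
print(f"Overall solar-to-hydrogen efficiency: {overall:.0%}")   # -> 14%
```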


  • The US carried off an amazing economic success during the pandemic, keeping everything afloat as 22 million jobs were lost. This was well worth a bit of inflation on the back end.
  • Death
  • Have we at long last hit peak gasoline?
  • The housing crisis and local control.
  • The South has always been the problem.
  • The next real estate meltdown.

Saturday, May 20, 2023

On the Spectrum

Autism, broader autism phenotype, temperament, and families. It turns out that everyone is on the spectrum.

The advent of genomic sequencing and the hunt for disease-causing mutations has been notably unhelpful for most mental diseases. Possible or proven disease-causing mutations pile up, but they do little to illuminate the biology of what is going on, and even less towards treatment. Autism is a prime example, with hundreds of genes now identified as carrying occasional variants with causal roles. The strongest of these variants affect synapse formation among neurons, and a second class affects long-term regulation of transcription, such as turning genes durably on or off during developmental transitions. Very well- that all makes a great deal of sense, but what have we gained?

Clinically, we have gained very little. What is affected are neural developmental processes that can't be undone, or switched off in later life with a drug. So while some degree of understanding slowly emerges from these studies, translating that to treatment remains a distant dream. One aspect of the genetics of autism, however, is highly informative, which is the sheer number of low-effect and common mutations. Autism can be thought of as coming in two types, genetically- those due to a high effect, typically spontaneous or rare mutation, and those due to a confluence of common variants. The former tends to be severe and singular- an affected child in a family that is otherwise unaffected. The latter might be thought of as familial, where traits that have appeared (mildly) elsewhere in the family have been concentrated in one child, to a degree that it is now diagnosable.

This pattern has given rise to the very interesting concept of the "Broader Autism Phenotype", or BAP. This stems from the observation that in families of autistic children, it is more often the case that ... "the parents, grandparents, and collaterals are persons strongly preoccupied with abstractions of a scientific, literary, or artistic nature, and limited in genuine interest in people." Thus there is not just a wide spectrum of autism proper, based on the particular confluence of genetic and other factors that lead to a diagnosis and its severity, but also, outside the medical spectrum, quite another spectrum of traits or temperaments which tend toward autism and comprise various eccentricities, but have not, at least to date, been medicalized.


The common nature of these variants leads to another question- why do they persist in the population? It is hard to believe that such a variety and number of variations are exclusively deleterious, especially when the BAP seems to have, well, rather positive aspects. No, I would suggest that an alternative way to describe the BAP is as "an enhanced ability to focus" and to develop interests in salient topics. Ever meet people who are technically useless, but warm-hearted? They are way off on the non-autistic part of the spectrum, while the more technically inclined, the fixers of the world and scholars of obscure topics, are more towards the "ability to focus" part of the spectrum. Only when such variants are unusually concentrated by the genetic lottery do children appear with frank autistic characteristics, totally unable to deal with social interactions, and given to obsessive focus and intense sensitivities.

Thus autism looks like a more general lens on human temperament and evolution, being the tip of a very interesting iceberg. As societies, we need the politicians, backslappers, networkers, and con men, but we also need, indeed increasingly as our societies and technologies have developed over the centuries, people with the ability and desire to deal with reality- with technical and obscure issues- without social inflection, but with highly focused attention. Militaries are a prime example, fusing the critical needs of managing and motivating people with a modern technical base of vast scope, reliant on an army of specialists devoted to making all the machinery work. Why does there have to be this tradeoff? Why can't everyone be James Bond, both technically adept and socially debonair? That isn't really clear, at least to me, but one might speculate that in the first place, dealing with people takes a great deal of specialized intelligence, and there may not be room for everything in one brain. Secondly, the enhanced ability to focus on technical or artistic topics may actively require, as is implicit in doing science and as was exemplified by Mr. Spock, an intentional disregard of social niceties and motivations, if one is to fully explore the logic of some other, non-human, world.