Showing posts with label evolution.

Saturday, October 7, 2023

Empty Skepticism at the Discovery Institute

What makes a hypothesis scientific, vs a just-so story, or a religious fixation?

"Intelligent" design has fallen on hard times, after a series of court cases determined that it was, after all, a religious idea and could not be foisted on unsuspecting schoolchildren, at least in state schools and under state curricula. But the very fact of religious motivation leads to its persistence in the face of derision, evidence, and apathy. The Discovery Institute, (which, paranthetically, does not make any discoveries), remains the vanguard of intelligent design, promoting "skepticism", god, alternative evolutionary theories, and, due to the paucity of ways to attack evolution, tangential right-wingery such as anti-vaccine agitation. By far their most interesting author is Günter Bechly, who delves into the paleontological record to heap scorn on other paleontologists and thereby make room for the unmentioned alternative hypothesis ... which is god.

A recent post discussed the twists and turns of ichthyosaur evolution. Or should we say biological change through time, with unknown causes? Ichthyosaurs flourished from about 250 million years ago (mya) to 100 mya, with the last representatives dated to 90 mya. They were the reptile analogs of whales and dolphins, functioning as apex predators in the ocean. They were done in by various climate crises well prior to the cometary impact that ended the Cretaceous and the reign of dinosaurs in general.

Bechly raises two significant points. First is the uncertain origins of ichthyosaurs. As is typical with dramatic evolutionary transitions like that from land to water in whales, the timeline is compressed, since there are a lot of adaptations that are desirable for the new environment that might have been partially pre-figured, but get fleshed out extensively with the new ecological role and lifestyle. Selection is presumably intense and transitional fossils are hard to find. This was true for whales, though beautiful transitional fossils have been found more recently. And apparently this is true for the ichthyosaurs as well, where none have been found yet. There is added drama stemming from the time of origin, which is right after the Permian extinction, perhaps the greatest known extinction event in the history of the biosphere. Radiations after significant extinction events tend to be rapid, with few transitional fossils, for the same reason of new niches opening and selection operating rapidly.

Ichthyosaur

Bechly and colleagues frequently make hay out of gaps in the fossil record, arguing that something else (we decline to be more specific!) needs to be invoked to explain such lack of evidence. It is a classic god-of-the-gaps argument. But since the fossils are never out of sequence, and we are always looking at millions of years going by in even the slimmest layers of rock, this is hardly a compelling argument. One thing that we learned from Darwin's finches, and from the whole argument around punctuated equilibrium, is that evolution is typically slow because selection is typically not directional but conservative. But when selection is directional, evolution by natural selection can be startlingly fast. This is an argument made very explicitly by Darwin through his lengthy discussions of domestic species, whose changes are, in geological terms, instant.

But Bechly makes an additional interesting argument- that a specific hypothesis made about ichthyosaurs is a just-so story, a sort of hypothesis that evolutionary biologists are very prone to make. Quite a few fossils have been found of ichthyosaurs giving birth, and many of them show the baby coming out not only live (not as an egg, as is usual with reptiles), but tail-first. Thus some scientists have made the argument that both are adaptations to aquatic birth, allowing the baby to be fully born before starting to breathe. Yet Bechly cites a more recent scientific review of the fossil record that observes that tail-first birth is far from universal, and does not follow any particular phylogenetic pattern, suggesting that it is far from necessary for aquatic birth, and thus is unlikely to be, to any significant extent, an adaptation.

Ha! Just another story of scientists making up fairy tales and passing them off as "science" and "evolutionary hypotheses", right?  

"Evolutionary biology again and again proves to be an enterprise in imaginative story-telling rather than hard science. But when intelligent design theorists question the Darwinist paradigm based on empirical data and a rational inference to the best explanation, they are accused of being science deniers. Which science?" ... "And we will not let Darwinists get away with a dishonest appeal to the progress of science when they simply rewrite their stories every time conflicting evidence can no longer be denied."

Well, that certainly is a damning indictment. Trial and sentencing to follow! But let's think a little more about what makes an explanation and a hypothesis, on the scientific, that is to say, empirical, level. Hypotheses are always speculative. That is the whole point. They try to connect observations with some rational or empirically supported underlying mechanism / process to account for (that is, explain) what is observed. Thus the idea that aquatic birth presents a problem for mammals who have to breathe represents a reasonable subject for a hypothesis. Whether headfirst or tailfirst, the baby needs to get to the surface post haste, as soon as its breathing reflex kicks in. While the direction of birth doesn't seem, to the uninitiated (and now, apparently, to experts with further data at hand), to make much difference, thinking that it does was a reasonable hypothesis, based on obvious geometric arguments and biological assumptions: it is possible that the breathing reflex is tied to emergence of the head during birth, in which case coming out tailfirst might slightly delay the interval between needing to breathe and being able to breathe.

This argument combines a lot of known factors- the geometry of birth, the necessity of breathing, the phenomenon of the breathing reflex initiating in all mammals very soon after birth, by mechanisms that doubtless are not entirely known, but at the same time clearly the subject of evolutionary tuning. And also the paleontological record. Good or bad, the hypothesis is based on empirical data. What characterizes science is that it follows a disciplined road from one empirically supported milestone to the next, using hypotheses about underlying mechanisms, whether visible or not, which abide by all the known/empirical mechanisms. Magic is only allowed if you know what is going on behind the curtain. Unknown mechanisms can be invoked, but then immediately become subjects of further investigation, not of protective adulation and blind worship.

In contrast, the intelligent design hypothesis, implicit here but clear enough, is singularly lacking in any data at all. It is not founded on anything other than the sentiment that what has clearly happened over the long course of the fossil record operates by unknown mechanisms, by god operating pervasively to carry out the entire program of biological evolution, not by natural selection (a visible and documented natural process) but by something else, which its proponents have never been able to demonstrate in the least degree, on short time scales or long. Faith does not, on its own, warrant novel empirical mechanisms, nor does skeptical disbelief. Nor does one poor, but properly founded, hypothesis that is later superseded by more careful analysis of the data impugn the process of science generally or the style of evolutionary thinking specifically.

Imagine, for example, if our justice system operated at this intellectual level. When investigating crimes, police could say that, if the causes were not immediately obvious, an unnamed intelligent designer was responsible, and leave it there. No cold cases, no presumption of usual natural causality, no dogged pursuit of "the truth" by telegenic detectives. Faith alone would furnish the knowledge that the author of all has (inscrutably) rendered "his" judgement. It would surely be a convenient out for an over-burdened and under-educated police force!

Evolution by natural selection requires a huge amount of extrapolation from what we know about short time scales and existing biology to the billions of years of life that preceded us. On the other hand, intelligent design requires extrapolation from nothing at all- from the incredibly persistent belief in god, religion, and the rest of the theological ball of wax, not one element of which has ever been pinned down to an empirical fact. Believers take the opposite view solely because religious propaganda has ceaselessly drilled the idea that god is real and "omnipotent" and all-good, and whatever else wonderful, as a matter of faith. With this kind of training, then yes, "intelligent" design makes all kinds of sense. Otherwise not. Charles Darwin's original hypothesis was so brilliant because it drew on known facts and mechanisms to account (with suitable imagination and extrapolation) for the heretofore mysterious history of biology, with its painfully slow yet inexorable evolution from one species to another, one epoch to another. Denying that one has that imagination is a statement about one's intelligence, no matter how it was designed.

  • Only god can give us virulent viruses.
  • The priest who knew it so well, long ago.
  • A wonderful Native American Film- Dance me outside.
  • With a wonderful soundtrack, including NDN Kars.
  • We need to come clean on Taiwan.
  • Appeasers, cranks, and fascist wannabes.
  • Vaccines for poor people are not profitable.
  • California is dumbing down math, and that will not help any demographic.

Saturday, September 30, 2023

Are we all the Same, or all Different?

Refining diversity.

There has been some confusion and convenient logic around diversity. Are we all the same? Conventional wisdom makes us legally the same, and the same in terms of rights, in an ever-expanding program of level playing fields- race, gender, gender preference, neurodiversity, etc. At the same time, conventional wisdom treasures diversity, inclusion, and difference. Educational conventional wisdom assumes all children are the same, and deserve the same investments and education, until some magic point when diversity flowers, and children pursue their individual dreams, applying to higher educational institutions, or not. But here again, selectiveness and merit are highly contested- should all ethnic groups be equally represented at universities, or are we diverse on that plane as well?

It is quite confusing, under the new program of political correctness, to tell who is supposed to be the same and who different, and in what ways, and for what ends. Some acute social radar is called for to navigate this woke world, and one can sympathize, though not too much, with those who are sick of it and want to go back to simpler times of shameless competition, black and white.

The fundamental tension is that a society needs some degree of solidarity and cohesion to satisfy our social natures and to get anything complex done. At the same time, Darwinian and economic imperatives have us competing with each other at all levels- among nations, ethnicities, states, genders, families, work groups, individuals. We are wonderfully sensitive to infinitesimal differences, which form the soul of Darwinian selection. Woke efforts clearly try to separate differences that are essential and involuntary, (which should in principle be excluded from competition), from those that are not fixed, such as personal virtue and work ethic, thus forming the proper field of education and competition.

But that is awfully abstract. Reducing that vague principle to practice is highly fraught. Race, insofar as it can be defined at all, is clearly an essential trait. So race should not be a criterion for any competitive aspect of the society- job hunting, education, customer service. But what about "diversity", and what about affirmative action? Should the competition be weighted a little to make up for past wrongs? How about intelligence? Intelligence is heritable, but we can't call it essential, lest virtually every form of competition in our society be brought to a halt. Higher education and business, and the general business of life, are extremely competitive on the field of intelligence- who can con whom, who can come up with great ideas, write books, do the work, and manage others.

These impulses towards solidarity and competition define our fundamental political divides, with Republicans glorying in the unfairness of life and the success of the rich, while Democrats want everyone to get along, with care for the unfortunate and oppressed. Our social schizophrenia over identity and empathy is expressed in the crazy politics of today. And Republicans reflect contemporary identity politics as well, just in their twisted, white-centric way. We are coming apart socially, and losing key cooperative capacity, which puts our national project in jeopardy. We can grant that the narratives and archetypes that have glued the civic culture together have been fantasies- that everyone is equal, or that the founding fathers were geniuses who selflessly wrought the perfect union. But at the same time, the new mantras of diversity have dangerous aspects as well.


Each side, in archetypal terms, is right and each is an essential element in making society work. Neither side's utopia is either practical or desirable. The Democratic dream is for everyone to get plenty of public services and equal treatment at every possible nexus of life, with morally-informed regulation of every social and economic harm, and unions helping to run every workplace. In the end, there would be little room for economic activity at all- for the competition that undergirds innovation and productivity, and we would find ourselves poverty-stricken, which was what led other socialist/communist states to drastic solutions that were not socially progressive at all.

On the other hand is a capitalist utopia where the winners take not just hundreds of billions of dollars, but everything else, such as the freedom of workers to organize or resist, and political power as well. The US would turn into a moneyed class system, just like the old "nobility" of Europe, with serfs. It is the Nietzschean, Randian ideal of total competition, enabling winners to oppress everyone else in perpetuity, and, into the bargain, write themselves into the history books as gods.

These are not (and were not, historically) appetizing prospects, and we need the tension of mature and civil political debate between them to find a middle ground that is both fertile and humane. Nature is, as in so many other things, an excellent guide. Cooperation is a big theme in evolution, from the assembly of the eukaryotic cell from prokaryotic precursors, to its wondrous elaboration into multicellular bodies and later into societies such as our own and those of the insects. Cooperation is the way to great accomplishments. Yet competition is the baseline that is equally essential. Diversity, yes, but it is competition and selection among that diversity, together with cooperative enterprise, that turn the downward trajectory of entropy and decay (as dictated by physics and time) into flourishing progress.


  • Identity, essentialism, and postmodernism.
  • Family structure, ... or diversity?
  • Earth in the far future.
  • Electric or not, cars are still bad.
  • Our non-political and totally not corrupt supreme court.
  • The nightmare of building in California.

Saturday, September 9, 2023

Keeping Cellular Signals Straight

Cells often use the same signaling systems for different inputs and purposes. Scaffolds come to the rescue.

Eukaryotic cells are huge, at least compared with their progenitors, bacteria. Thanks to their mitochondria and other organizational apparatus, the typical eukaryotic cell has about 100,000 times the volume of a bacterium. These cells are virtual cities- partially organized by their membrane organelles, but there is much more going on, with tremendous complexity to manage. One issue that was puzzling over the decades was how signals were kept straight. Eukaryotic cells use a variety of signaling systems, prototypically starting with a receptor at the cell surface, linking to a kinase (or series of kinases) that then amplifies and broadcasts the signal inside the cell, ending up with the target phosphorylated proteins entering the nucleus and changing the transcription program of the cell.

While our genome does have roughly 500 kinases, and one to two thousand receptors, a few of them (especially some kinases and their partners, which form "intracellular signaling systems") tend to crop up frequently in different systems and cell types, like the MAP kinase cascade, associated with growth and stress responses, and the AKT kinase, associated with nutrient sensing and growth responses. Not only do many different receptors turn these cellular signaling hubs on, but their effects can often be different as well, even from unrelated signals hitting the same cell.

If all these proteins were diffusible all over the cell, such specificity of signaling would be impossible. But it turns out that they are usually tethered in particular ways, by organizational helpers called scaffold proteins. These scaffolds may localize the signaling to some small volume within the larger cell, such as a membrane "raft" domain. They may also bind multiple actors of the same signaling cascade, bringing several proteins (kinases and targets) together to make signaling both efficient and (sterically) insulated from outside interference. And, as a recent paper shows, they can also tweak their binding targets allosterically to insulate them from outside interference.

What is allostery vs stery? If one protein (A) binds another (B) such that a phosphorylation or other site is physically hidden from other proteins, such as a kinase (C) that would activate it, that site is said to be sterically hidden- that is, by geometry alone. On the other hand, if that site remains free and accessible, but the binding of A re-arranges protein B such that it no longer binds C very well, blocking the kinase event despite the site of phosphorylation being available, then A has allosterically regulated B. It has altered the shape of B in some subtle way that alters its behavior. While steric effects are dominant and occur everywhere in protein interactions and regulation, allostery comes up pretty frequently as well, proteins being very flexible gymnasts. 

GSK3 is part of insulin signaling. It is turned off by phosphorylation, which affects a large number of downstream functions, such as turning on glycogen synthase.

The current case turns on the kinase GSK3, which, according to Wikipedia... "has been identified as a protein kinase for over 100 different proteins in a variety of different pathways. ... GSK-3 has been the subject of much research since it has been implicated in ... diseases, including type 2 diabetes, Alzheimer's disease, inflammation, cancer, addiction and bipolar disorder." GSK3 was named for its kinase activity targeting glycogen synthase, which inactivates the synthase, thus shutting down production of glycogen, which is a way to store sugar for later use. Connected with this homeostatic role, the hormone insulin turns GSK3 off via phosphorylation, through a pathway downstream of the membrane-resident insulin receptor called the PI3 kinase / protein kinase B pathway. Insulin thus indirectly increases glycogen synthesis, mopping up excess blood sugar. The circuit reads: insulin --> kinases --| GSK3 --| glycogen synthase --> more glycogen.
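As a toy illustration of that wiring (my own sketch, not from the paper), one can propagate the signs through the chain in a few lines of Python; two inhibitory links in a row net out to activation, which is why insulin ends up promoting glycogen synthesis:

```python
# Toy model of the circuit quoted above:
# insulin --> kinases --| GSK3 --| glycogen synthase --> more glycogen
# "-->" activates, "--|" inhibits; the names are just labels for this sketch.

def glycogen_synthase_active(insulin_present: bool) -> bool:
    kinases_active = insulin_present      # insulin --> PI3 kinase / protein kinase B
    gsk3_active = not kinases_active      # kinases --| GSK3 (inhibitory phosphorylation)
    return not gsk3_active                # GSK3 --| glycogen synthase

assert glycogen_synthase_active(True)         # insulin present: synthase on, glycogen made
assert not glycogen_synthase_active(False)    # no insulin: GSK3 keeps the synthase off
```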

GSK3 also functions in this totally different pathway, downstream of WNT and Frizzled. Here, GSK3 phosphorylates beta-catenin and turns it off, most of the time. WNT (like insulin) turns GSK3 off, which allows beta-catenin to accumulate and do its gene regulation in the nucleus. Cross-talk between these pathways would be very inopportune, and is prevented by the various functions of Axin, a scaffold protein. 


Another well-studied role of GSK3 is in a developmental signal, called WNT, which promotes developmental decisions of cells during embryogenesis, wound repair, cancer, cell migration, proliferation, etc. GSK3 is central here for the phosphorylation of beta-catenin, which is a transcription regulator, among other things, and when active migrates to the nucleus to turn its target genes on. But when phosphorylated, beta-catenin is diverted to the proteasome and destroyed, instead. This is the usual state of affairs, with WNT inactive, GSK3 active, and beta-catenin getting constantly made and then immediately disposed of. This complex is called a "destruction" complex. But an incoming WNT signal, typically from neighboring cells carrying out some developmental program, alters the activity of a key scaffold in this pathway, Axin, which is destroyed and replaced by Dishevelled, turning GSK3 off.

How is GSK3 kept on all the time for the developmental purposes of the WNT pathway, while allowing cells to still be responsive to insulin and other signals that also use GSK3 for their intracellular transmission? The current authors found that the Axin scaffold has a special property of allosterically preventing the phosphorylation of its bound GSK3 by other upstream signaling systems. They even re-engineered Axin to an extremely minimal 26 amino acid portion that binds GSK3, and this still performed the inhibition, showing that the binding doesn't sterically block phosphorylation by insulin signals, but blocks allosterically. 

That is all well and good, but what about the downstream connections? Keeping GSK3 on solves one problem, but doesn't that make a mess of the other pathways it participates in? This is where scaffolds have a second job, which is to bring upstream and downstream components together, to keep the whole signal flow isolated. Axin also binds beta-catenin, the GSK3 substrate in WNT signaling, keeping everything segregated and straight.

Scaffold proteins may not "do" anything, as enzymes or signaling proteins in their own right, but they have critical functions as "conveners" of specific, channeled communication pathways, and allow the re-use of powerful signaling modules, over evolutionary time, in new circuits and functions, even in the same cells.


  • The oceans need more help, less talk.
  • Is Trump your church?
  • Can Poland make it through?

Sunday, August 27, 2023

Better Red Than Dead

Some cyanobacteria strain for photosynthetic efficiency at the red end of the light spectrum.

The plant world is green around us- why green, and not some other color, like, say, black? That plants are green means that they are letting green light through (or out by reflection), giving up some energy. Chlorophyll absorbs both red light and blue light, but not green, though all are near the peak of solar output. Some accessory pigments within the light-gathering antenna complexes can extend the range of wavelengths absorbed, but clearly a fair amount of green light gets through. A recent theory suggests that this use of two separated bands of light is an optimal solution to stabilize power output. At any rate, it is not just the green light- the extra energy of the blue light is also thrown away, as heat: its excitation is allowed to decay to the red level of excitation within the antenna complex of chlorophyll molecules, since the only excited state used in photosynthesis is that at ~690 nm. This forms a uniform common denominator for all incoming light energy that then induces charge separation at the oxygen reaction center (stripping water of electrons and protons) and sends newly energized electrons out to quinone molecules and on into the biosynthetic apparatus.

The solar output, which plants have to work with.

Fine. But what if you live deeper in the water, or in the veins of a rock, or in a mossy, shady nook? What if all you have access to is deeper red light, like at 720 nm, with lower energy than the standard input? In that case, you might want to re-engineer your version of photosynthesis to get by with slightly lower-energy light, while getting the same end results of oxygen splitting and carbon fixation. A few cyanobacteria (the same bacterial lineage that pioneered chlorophyll and the standard photosynthesis we know so well) have done just that, and a recent paper discusses the tradeoffs involved, which are of two different types.

The chlorophylls, with their respective absorption spectra and partial structures. Redder light is toward the right. Chlorophyll a is the one used most widely in plants and cyanobacteria. Chlorophyll b is also widely used in these organisms as an additional antenna pigment that extends the range of absorbed light. Chlorophylls d and f are red-shifted and used in the specialized species discussed here.

One of the species, Chroococcidiopsis thermalis, is able to switch states, from bright/white light absorption with a normal array of pigments, to a second state where it expresses chlorophylls d and f, which absorb light at the lower-energy 720 nm, in the far red. This "facultative" ability means that it can optimize the low-light state without much regard to efficiency or photo-damage protection, which it can address by switching back to the high-energy wavelength pigment system. The other species is Acaryochloris marina, which has no bright light system, but only chlorophyll d. This bacterium lives inside the cells of bigger red algae, so has a relatively stable, if shaded, environment to deal with.

What these and prior researchers found was that the ultimate quantum energy used to split water to O2, and to send energized electrons off to photosystem I and carbon compound synthesis, is the same as in any other chlorophyll a-using system. The energetics of those parts of the system apparently cannot be changed. The shortfall needs to be made up in the front end, where there is a sharp drop between the energy absorbed- 1.82 electron volts (eV) from photons at 680 nm, but only 1.72 eV from far-red photons- and the energy needed at the next points in the electron transport chains (about 1.0 eV). This difference plays a large role in directing those electrons to where the plant wants them to go- down the gradient to the oxygen-evolving center, and to the quinones that ferry energized electrons to other synthetic centers. While it seems like waste, a smaller difference would allow the energized electrons to go astray, forming chemical radicals and other products dangerous to the cell.
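Those photon energies follow directly from the wavelengths; a quick back-of-envelope check (my own arithmetic, not from the paper) using E = hc/λ, with hc ≈ 1240 eV·nm:

```python
# Photon energy from wavelength: E (eV) = hc / lambda (nm), with hc ~= 1239.84 eV*nm.
HC_EV_NM = 1239.84

def photon_energy_ev(wavelength_nm: float) -> float:
    return HC_EV_NM / wavelength_nm

print(f"680 nm: {photon_energy_ev(680):.2f} eV")  # ~1.82 eV, standard chlorophyll a center
print(f"720 nm: {photon_energy_ev(720):.2f} eV")  # ~1.72 eV, far-red chlorophyll d/f light
```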

Summary diagram, described in the text. Energy levels are shown for photon excitation of chlorophyll (Chl, left axis), and for energy transitions through the reaction center (Phe = pheophytin) and the quinones (Q) that conduct energized electrons out to the other photosynthetic center and biosynthesis. On top are shown the respective system types- normal chlorophyll a from white-light adapted C. thermalis, chlorophyll d in A. marina, and chlorophyll f in red-adapted C. thermalis.

What these researchers summarize in the end is that both of the red light-using cyanobacteria squeeze this middle zone of the power gradient, in different ways. There is an intermediate event on the trail from photon-induced electron excitation to the outgoing quinone (+ electron) and O2 that is the target of all the antenna chlorophylls- the photosynthetic reaction center. This typically has chlorophyll a (called P680) and pheophytin, a chlorophyll-like molecule. It is at this chlorophyll a molecule that the key step takes place- the excitation energy (an electron bumped to a higher energy level) conducted in from the antenna of ~30 other chlorophylls pops out its excited electron, which flits over to the pheophytin, and thence to the carrier quinone molecules and photosystem I. Simultaneously, an electron comes in to replace it from the oxygen-evolving center, which receives alternate units of photon energy, also from the chlorophyll/pheophytin reaction center. The figure above describes these steps in energetic terms, from the original excited state, to the pheophytin (Phe-, loss of 0.16 eV), to the exiting quinone state (Qa-, loss of 0.385 eV). In the organisms discussed here, chlorophyll d replaces a at this center, and since its structure and absorbance are different, its energized electron is about 0.1 eV less energetic.

In A. marina (center in the diagram above), the energy gap between the pheophytin and the quinone is squeezed, losing about 0.06 eV. This has the effect of losing some of the downward "slope" on the energy landscape that prevents side reactions. Since A. marina has no choice but to use this lower-energy system, it needs all the efficiency it can get in terms of the transfer from chlorophyll to pheophytin. But it then sacrifices some driving force from the next step to the quinone. This has the ultimate effect of raising damage levels and side reactions when faced with more intense light. However, given its typically stable and symbiotic lifestyle, that is a reasonable tradeoff.

On the other hand, C. thermalis (right-most in the diagram above) uses its chlorophyll d/f system on an optional basis when the light is bad. So it can give up some efficiency (in driving pheophytin electron acceptance) for better damage control. It has dramatically squeezed the gap between chlorophyll and pheophytin, from 0.16 eV to 0.08 eV, while keeping the main pheophytin-to-quinone gap unchanged. This has the effect of keeping the pumping of electrons out to the quinones in good condition, with low side-effect damage, but restricts overall efficiency, slowing the rate of excitation transfer to pheophytin, which affects not only the quinone-mediated path of energy to photosystem I, but also the path to the oxygen evolving center. The authors mention that this cyanobacterium recovers some efficiency by making extra light-harvesting pigments that provide more inputs, under these low / far-red light conditions.
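Putting the numbers quoted above side by side (a back-of-envelope tabulation of my own; the assumption that A. marina keeps the standard 0.16 eV chlorophyll-to-pheophytin gap is implied by the text rather than stated explicitly):

```python
# Energy gaps (eV) at the photosystem II reaction center, from the figures quoted in the text.
# Columns: excitation energy of the reaction-center chlorophyll, Chl->pheophytin drop,
# pheophytin->quinone drop.
systems = {
    "Chl a, white-light C. thermalis": (1.82, 0.16, 0.385),
    "Chl d, A. marina":                (1.72, 0.16, 0.385 - 0.06),  # squeezes the Phe->Q gap
    "Chl f, far-red C. thermalis":     (1.72, 0.08, 0.385),         # squeezes the Chl->Phe gap
}

for name, (excitation, chl_phe_gap, phe_q_gap) in systems.items():
    left_at_quinone = excitation - chl_phe_gap - phe_q_gap
    print(f"{name}: drops {chl_phe_gap + phe_q_gap:.3f} eV, ~{left_at_quinone:.2f} eV left at Qa")
```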

The methods used to study all this were mostly based on fluorescence, which emerges from the photosynthetic system when electrons fall back from their excited states. A variety of inhibitors have been developed to prevent electron transfer, such as to the quinones, which bottles up the system and causes increased fluorescence and thermoluminescence, whose wavelengths reveal the energy gaps causing them. Thus it is natural, though also impressive, that light provides such an incisive and precise tool to study this light-driven system. There has been much talk that these far red-adapted photosynthetic organisms validate the possibility of life around dim stars, including red dwarfs. But obviously these particular systems developed evolutionarily out of the dominant chlorophyll a-based system, so they wouldn't provide a direct path. There are other chlorophyll systems in bacteria, however, and systems that predate the use of oxygen as the electron source, so there are doubtless many ways to skin this cat.


  • Maybe humiliating Russia would not be such a bad thing.
  • Republicans might benefit from reading the Federalist Papers.
  • Fanny Willis schools Meadows on the Hatch act.
  • "The top 1% of households are responsible for more emissions (15-17%) than the lower earning half of American households put together (14% of national emissions)."

Sunday, July 30, 2023

To Sleep- Perchance to Inactivate OX2R

The perils of developing sleeping, or anti-sleeping, drugs.

Sleep- the elixir of rest and repose. While we know of many good things that happen during sleep- the consolidation of memories, the cardiovascular rest, the hormonal and immune resetting, the slow waves and glymphatic cleansing of the brain- we don't know yet why it is absolutely essential, and lethal if repeatedly denied. Civilized life tends to damage our sleep habits, given artificial light and the endless distractions we have devised, leading to chronic sleeplessness and a spiral of narcotic drug consumption. Some conditions and mutations, like narcolepsy, have offered clues about how sleep is regulated, which has led to new treatments, though to be honest, good sleep hygiene is by far the best remedy.

Genetic narcolepsy was found to be due to mutations in the second receptor of the hormone orexin (OX2R), or to auto-immune conditions that kill off a specialized set of neurons in the hypothalamus- a basal part of the brain that sits just over the brain stem. This region normally has ~50,000 neurons that secrete orexin (which comes in two kinds as well, 1 and 2), and project to areas all over the brain, especially basal areas like the basal forebrain and amygdala, to regulate not just sleep but feeding, mood, reward, memory, and learning. Like any hormone receptor, the orexin receptors can be approached in two ways- by turning them on (agonist) or by turning them off (antagonist). Antagonist drugs were developed which turn off both orexin receptors, and thus promote sleep. The first was named suvorexant, using the "orex" and "ant" lexical elements to mark its functions, as is now standard for generic drug names.

This drug is moderately effective, and is a true sleep enhancer, promoting falling asleep, restful sleep, and length of sleep, unlike some other sleep aids. Suvorexant antagonizes both receptors, but the researchers knew that only the deletion of OX2R, not OX1R (in dogs, mice, and other animals), generates narcolepsy, so they developed a drug more specific to OX2R only. But the result was that it was less effective. It turned out that binding and turning off OX1R was helpful to sleep promotion, and there were no particularly bad side effects from binding both receptors, despite the wide-ranging activities they appear to have. While the trial of Merck's MK-1064 was successful, it was not better than their existing two-receptor drug, so its development was shelved. And we learned something intriguing about this system. While all animals have some kind of orexin, only mammals have the second orexin family member and receptor, suggesting that some interesting, but not complete, bifurcation happened in the functions of this system in evolution.

What got me interested in this topic was a brief article from yet another drug company, Takeda, which was testing an agonist of the orexin receptors in an effort to treat narcolepsy. They created TAK-994, which binds OX2R specifically, and it showed a lot of promise in animal trials. It is an orally taken pill, in contrast to the existing treatment, danavorexton, which must be injected. In the human trial, it was remarkably effective, virtually eliminating cataplectic / narcoleptic episodes. But there was a problem- it caused enough liver toxicity that the trial was stopped and the drug shelved. Presumably, this company will try again, making variants of this compound that retain affinity and activity but not the toxicity.

This brings up an underappreciated peril in drug design- where drugs end up. Drugs don't just go into our systems, hopefully slipping through the incredibly difficult gauntlet of our digestive system. But they all need to go somewhere after they have done their jobs, as well. Some drugs are hydrophilic enough, and generally inert enough, that they partition into the urine by dilution and don't have any further metabolic events. Most, however, are recognized by our internal detoxification systems as foreign, (that is, hydrophobic, but not recognizable as fats/lipids that are usual nutrients), and are derivatized by liver enzymes and sent out in the bile. 

Structure of TAK-994, which treats narcolepsy, but at the cost of liver dysfunction.

As you can see from the chemical structure above, TAK-994 is not a normal compound that might be encountered in the body, or as food. The amino sulfate is quite unusual, and the fluorines sprinkled about are totally unnatural. This would be a red-flag substance, like the various PFAS materials we hear about in the news. The rings and fluorines create a relatively hydrophobic substance, which would need to be modified so that it can be routed out of the body. That is what a key enzyme of the liver, CYP3A4, does. It (and many family members that have arisen over evolutionary time) oxidizes all manner of foreign hydrophobic compounds, using a heme cofactor to handle the oxygen. It can add hydroxyl groups (hydroxylation), oxidize double bonds to epoxides (epoxidation), and oxidize aromatic rings (aromatic oxidation).

But then what? Evolution has matched most of the toxic substances we encounter in nature with appropriate enzymes and routes out of the body. But the novel compounds we are making with modern chemistry are something else altogether. Some drugs are turned on by this process, waiting till they get to the liver to attain their active form. Others, apparently such as this one, are made into toxic compounds (as yet unknown) by this process, such that the liver is damaged. That is why animal studies and safety trials are so important. This drug binds to its target receptor, and does what it is supposed to do, but that isn't enough to be a good drug.

 

Sunday, July 23, 2023

Many Ways There are to Read a Genome

New methods to unravel the code of transcriptional regulators.

When we deciphered the human genome, we came up with three billion letters of its linear code- nice and tidy. But that is not how it is read inside our cells. Sure, it is replicated linearly, but the DNA polymerases don't care about the sequence- they are not "reading" the book, they are merely copying machines trying to get it to the next generation with as few errors as possible. The book is read in an entirely different way, by a herd of proteins that recognize specific sequences of the DNA- the transcription regulators (also commonly called transcription factors [TF], in the classic scientific sense of some "factor" that one is looking for). These regulators- and there are, by one recent estimate, 1,639 of them encoded in the human genome- constitute an enormously complex network of proteins and RNAs that regulate each other, and regulate "downstream" genes that encode everything else in the cell. They are made in various proportions to specify each cell type, to conduct every step in development, and to respond to every eventuality that evolution has met and mastered over the eons.

Loops occur in the DNA between sites of regulator binding, in order to turn genes on (enhancer, E, and transcription regulator/factor, TF).

Once sufficient transcription regulators bind to a given gene, it assembles a transcription complex at its start site, including the RNA polymerase that then generates an RNA copy that can float off to be made into a protein (such as a transcription regulator), or perhaps function in its RNA form as part of a zoo of rRNA, tRNA, miRNA, piRNA, and many more that also help run the cell. Some regulators can repress transcription, and many cooperate with each other. There are also diverse regions of control for any given target gene in its nearby non-coding DNA- cassettes (called enhancers) that can be bound by different regulators and thus activated at different stages for different reasons.

The sites in the DNA that transcription regulators bind to are typically quite small. A classic regulator, SP1 (itself 785 amino acids long and bearing three consecutive DNA binding motifs, each coordinated by a zinc ion), binds to a sequence resembling (G/T)GGGCGG(G/A)(G/A)(C/T). So only ten bases are specified at all, and four of those positions are degenerate. By chance, a genome of three billion bases will have such a sequence about 45,769 times. So this kind of binding is not very strictly specified, and such sites tend to appear and disappear frequently in evolution. That is one of the big secrets of evolution- while some changes are hard, others are easy, and there is constant variation and selection going on in the regulatory regions of genes, refining and defining where / when they are expressed.
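That figure is easy to sanity-check (my own arithmetic): six positions are fully specified (probability 1/4 each on random sequence) and four are two-fold degenerate (probability 1/2 each), so:

```python
# Expected occurrences of a site like (G/T)GGGCGG(G/A)(G/A)(C/T) in random sequence:
# 6 fixed positions at probability 1/4, 4 degenerate positions at probability 1/2.
p_site = (1 / 4) ** 6 * (1 / 2) ** 4      # ~1.5e-5 chance per genomic position
genome_size = 3_000_000_000               # ~3 billion bases
print(round(genome_size * p_site))        # ~45,776, the same ballpark as the number quoted above
```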

Anyhow, researchers naturally have the question- what is the regulatory landscape of a given gene under some conditions of interest, or of an entire genome? What regulators bind, and which ones are most important? Can we understand, given our technical means, what is going on in a cell from our knowledge of transcription regulators? Can we read the genome like the cell itself does? Well the answer to that is, obviously no and not yet. But there are some remarkable technical capabilities. For example, for any given regulator, scientists can determine where it binds all over the genome in any given cell, by chemical crosslinking methods. The prediction of binding sites for all known regulators has been a long-standing hobby as well, though given the sparseness of this code and the lability of the proteins/sites, one that gives only statistical, which is to say approximate, results. Also, scientists can determine across whole genomes where genes are "open" and active, vs where they are closed. Chromatin (DNA bound with histones in the nucleus) tends to be closed up on repressed and inactive genes, while transcription regulators start their work by opening chromatin to make it accessible to other regulators, on active genes.

This last method offers the prospect of truly global analysis, and was the focus of a recent paper. The idea was to merge a detailed library of predicted binding sites for all known regulators all over the genome with experimental mapping of open chromatin regions in a particular cell or tissue of interest, and then to combine all that with existing knowledge about what each of the target genes near the predicted binding sites does. The researchers clustered the putative regulators binding across all open regions by this functional gene annotation to come up with statistically over-represented transcription regulators and functions. This is part of a movement across bioinformatics to fold in more sources of data to improve predictions when individual methods each produce sketchy, unsatisfying results.
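A minimal sketch of the general idea (my own toy version, not the WhichTF code, and with made-up numbers): for each regulator, ask whether its predicted sites in open chromatin sit near genes of a given functional category more often than chance would predict, for instance with a hypergeometric test.

```python
# Hypothetical enrichment test for one transcription factor against one functional category.
from scipy.stats import hypergeom

def tf_enrichment_pvalue(n_open_regions, n_near_category_genes, n_with_tf_site, n_overlap):
    # P(overlap >= observed) if the TF's sites were scattered at random among open regions.
    return hypergeom.sf(n_overlap - 1, n_open_regions, n_near_category_genes, n_with_tf_site)

# Illustrative numbers only: 50,000 open regions, 2,000 near genes of the category,
# 5,000 with a predicted site for this TF, 400 in the intersection (chance expectation ~200).
print(tf_enrichment_pvalue(50_000, 2_000, 5_000, 400))
```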

In this case, mapping open chromatin by itself is not very helpful, but becomes much more helpful when combined with assessments of which genes these open regions are close to, and what those genes do. This kind of analysis can quickly determine whether you are looking at an immune cell or a neuron, as the open chromatin is a snapshot of all the active genes at a particular moment. In this recent work, the analysis was extended to say that if some regulator is consistently bound near genes participating in some key cellular function, then we can surmise that that regulator may be causal for that cell type, or at least part of the program specific to that cell. The point for these researchers is that this multi-source analysis performs better at finding cell-type-specific and function-specific regulators than the more common approach of just adding up the prevalence of regulators occupying open chromatin all over a given genome, regardless of the local gene functions. That kind of approach tends to yield common regulators, rather than cell-type specific ones.

To validate, they do rather half-hearted comparisons with other pre-existing techniques, without blinding, and with validation of only their own results. So it is hardly a fair comparison. They look at the condition systemic lupus (SLE), and find different predictions coming from their current technique (called WhichTF) vs one prior method (MEME-ChIP).  MEME-ChIP just finds predicted regulator binding sites for genomic regions (i.e. open chromatin regions) given by the experimenter, and will do a statistical analysis for prevalence, regardless of the functions of either the regulator or the genes it binds to. So you get absolute prevalence of each regulator in open (active) regions vs the genome as a whole. 

Different regulators are identified from the same data by different statistical methods. But both sets are relevant.


What to make of these results? The MEME-ChIP method finds regulators like SP1, SP2, SP4, and ZFX/Y. SP1 et al. are very common regulators, but that doesn't mean they are unimportant, or not involved in disease processes. SP1 has been observed as likely to be involved in autoimmune encephalitis in mice, a model of multiple sclerosis, and naturally not so far from lupus in pathology. ZFX is also a prominent regulator in the progenitor cells of the immune system. So while these authors think little of the competing methods, those methods seem to do a very good job of identifying significant regulators, as do their own methods. 

There is another problem with the authors' WhichTF method, which is that gene annotation is in its infancy. Users are unlikely to find new functions using existing annotations. Many genes have no known function yet, and new functions are being found all the time for those already assigned functions. So if one's goal is classification of a cell or of transcription regulators according to existing schemes, this method is fine. But if one's research goal is to find new cell types, or new processes, this method will channel you into existing ones instead.

This kind of statistical refinement is unlikely to give us what we seek in any case- a strong predictive model of how the human genome is read and activated by the herd of gene regulators. For that, we will need new methods for detecting specific interactions, with a better appreciation of complexes between different regulators (which will be afforded by the new AI-driven structural techniques), and more appreciation of the many other operators on chromatin, like the various histone-modifying enzymes that generate another whole code of locks and keys doing the detailed regulation of chromatin accessibility. Reading the genome is likely to be a somewhat stochastic process, but we have not yet arrived at the right level of detail, or the right statistics, to do it justice.


  • Unconscious messaging and control. How the dark side operates.
  • Solzhenitsyn on evil.
  • Come watch a little Russian TV.
  • "Ruthless beekeeping practices"
  • The medical literature is a disaster.

Saturday, May 27, 2023

Where Does Oxygen Come From?

Boring into the photosynthetic reaction center of plants, where O2 is synthesized.

Oxygen might be important to us, but it is really just a waste product. Photosynthetic bacteria found that the crucial organic molecules of life that they were making out of CO2 and storing in the form of reduced compounds (like fats and sugars) had to get their reducing units (i.e. electrons) from somewhere. And water stepped up as a likely candidate, with its abundance and simplicity. After you take four electrons away from two water molecules, you are left with four protons and one molecule of oxygen, i.e. O2. The protons are useful to fuel the proton-motive force system across the photosynthetic membrane, making ATP. But what to do with the oxygen? It just bubbles away, but can also be used later in metabolism to burn up those high-energy molecules again, if you have evolved aerobic metabolism.
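Written out as the overall half-reaction (standard chemistry, just to make the bookkeeping explicit):

```latex
2\,\mathrm{H_2O} \;\longrightarrow\; \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^-
```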

On the early earth, reductants like reduced forms of iron and sulfur were pretty common, so they were the original sources of electrons for all metabolism. Indeed, most theories of the origin of life place it in dynamic rocky redox environments like hydrothermal vents that had such conducive chemistry. But these compounds are not quite common enough for universal photosynthesis. For example, a photosynthetic bacterium floating at the top of the ocean would like to continue basking in the sun and metabolizing, even if the water around it is relatively clear of reduced iron, perhaps because of competition from its colleagues. What to do? The cyanobacteria came up with an amazing solution- split water!

A general overview of plant and cyanobacterial photosystems, comprising the first (PSII), where the first light quantum hits and water is split, an intervening electron transport chain where energy is harvested, and the second (PSI), where a second light quantum hits, more energy is harvested, and the electron ends up added to NADP. From the original water molecules, protons are used to power the membrane proton-motive force and ATP synthesis, while the electrons are used to reduce CO2 and create organic chemicals.

A schematic of the P680 center of photosystem II. Green chlorophylls are at the center, with magnesium atoms (yellow). Light induces electron movement as denoted by the red arrows, out of the chlorophyll center and onwards to other cytochrome molecules. Note that the electrons originate at the bottom out of the oxygen evolving complex, or OEC, (purple), and are transferred via an aromatic tyrosine (TyrZ) side chain, coordinating with a nearby histidine (H189) protein side chain.

This is not very easy, however, since oxygen is highly, even notoriously "electronegative". That is, it likes and keeps its electrons. It takes a super-oxidant to strip those electrons off. Cyanobacteria came up with what is now called photosystem II (that is, it was discovered after photosystem I), which collects light through a large quantum antenna of chlorophyll molecules, ending up at a special pairing of chlorophyll molecules called P680. These collect the photon, and in response bump an electron up in energy and out to an electron chain that courses through the rest of the photosynthetic system, including photosystem I. At this point, P680 is hungry for an electron, indeed has the extreme oxidation potential needed to take electrons from oxygen. And one is conducted in from the oxygen evolving center (OEC), sitting nearby.

A schematic illustrating both the evolutionary convergence that put both photosystems (types I and II) into one organism (cyanobacteria, which later became plant chloroplasts), and the energy levels acquired by the main actors in the photosynthetic process, quoted in electron volts. At the very bottom (center) is a brief downward slide as water is split by the pulling force of the super-oxidation state of light-activated P680. After the electrons are light-excited, they drop down in orderly fashion through a series of electron chain transits to various cytochromes, quinones, ferredoxins, and other carriers that generate either protons or chemical reducing power as they go along. Note how the depth of the water-splitting oxidation state is unique among photosynthetic systems.

A recent paper resolves the long-standing problem of how exactly water is oxidized by cyanobacteria and plants at the OEC, at the very last step before oxygen release. This center is a very strained cubic metal complex of one calcium and four manganese atoms, coordinated by oxygen atoms. The overall process is that two water molecules come in, four protons and four electrons are stripped off, and the remaining oxygens combine to form O2. This is, again, part of the grand process of metabolism, whose point is to add those electrons and protons to CO2, making the organic molecules of life, generally characterized as (-CH2-), such as fats, sugars, etc., which can be burned later back into CO2. Metals are common throughout organic chemistry as catalysts, because they have a wonderful property of de-localizing electrons and allowing multiple oxidation states (the number of extra or missing electrons), unlike the more sparse and tightly-held states of the smaller elements. So they are used in many redox cofactors and enzymes to facilitate electron movement, such as in chlorophyll itself.


The authors provide a schematic of the manganese-calcium OEC reaction center. The transferring tyrosine is at top, calcium is in fuchsia/violet, the manganese atoms are in purple, and the oxygens are in red. Arrows point to the oxygens destined to bond to each other and "evolve" away as O2. Note how one of these (O6) is only singly coordinated and is sort of awkwardly wedged into the cube. Note also how the bond lengths to calcium are all longer than those to manganese, further straining the cube. These strains help to encourage activation and expulsion of the target oxygens.

Here, in the oxygen evolving center, the manganese atoms are coordinated all around with oxygens, which presents the question- which ones are the ones? Which are destined to become O2, and how does the process happen? These researchers didn't use complicated femtosecond X-ray systems or cyclotrons (though they draw on the structural work of those who did), but room-temperature FTIR, which is infrared spectroscopy highly sensitive to organic chemical dynamics. Spinach leaf chloroplasts were put through an hour of dark adaptation (which sets the OEC cycle to state S1), then hit with flashes of laser light to advance the position of the oxygen evolving cycle, since each flash (5 nanoseconds) induces one electron ejection by P680, and one electron transfer out of the OEC. Thus the experimenters could control the progression of the whole cycle, one step at a time, and then take extremely close FTIR measurements of the complexes as they do their thing in response to each single electron ejection. Some of the processes they observed were very fast (20 nanoseconds), but others were pretty slow, up to 1.5 milliseconds for the S4 state to eject the final O2 and reset to the S0 state with new water molecules. They then supplemented their spectroscopy with the structural work from others and with computer dynamics simulations of the core process to come up with a full mechanism.
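The flash protocol implies the classic period-four pattern: starting from dark-adapted S1, each flash advances the cycle by one S-state, and O2 comes out on the S4-to-S0 reset, i.e. on the third flash and every fourth one thereafter. A toy sketch (my own, just to make the counting concrete):

```python
# Toy model of the S-state cycle under single-turnover flashes, as described in the text:
# dark-adapted centers start in S1, each flash removes one electron and advances one state,
# and O2 is released when S4 resets to S0 with fresh water molecules.

def o2_release_flashes(n_flashes: int, start_state: int = 1) -> list:
    releases, state = [], start_state
    for flash in range(1, n_flashes + 1):
        state += 1            # one electron extracted per flash
        if state == 4:        # S4 ejects O2 and resets
            releases.append(flash)
            state = 0
    return releases

print(o2_release_flashes(12))  # [3, 7, 11]: O2 on the 3rd flash, then every 4th
```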


A schematic of the steps of oxygen evolution out of the manganese core complex, from states S0 to S4. Note the highly diverse times that elapse at the various steps, noted in nano-, micro-, or milliseconds. This is discussed further in the text.


Other workers have provided structural perspectives on this question, showing that the cubic metal structure is a bit weirder than expected. An extra oxygen (numbered as #6) wedges its way into the cube, making the already strained structure (which accommodates a calcium and a dangling extra manganese atom) highly stressed. This is a complicated story, so several figures are provided here to give various perspectives. The sequence of events is that first (S0), two waters enter the reaction center after the prior O2 molecule has left. Water has a mix of acid (H+) and base (OH-) ionic forms, so it is easy to bring in the hydroxyl form instead of complete water, with the matching protons quickly entering the proton pool for ATP production. Then another proton quickly leaves as well, so the waters have now become two oxygens, one hydrogen, and four electrons (S0). Two of the coordinated manganese atoms go from their prior +4, +4 oxidation state to +3 and +2, acting as electron buffers.

The first two electrons are pulled out rapidly, via the nearby tyrosine ring, and off to the P680 center (ending at S2, with Mn 3+ and Mn 4+). But the next steps are much slower, extricating the last two electrons from the oxygens and inducing them to bond to each other. With state S3 and one more electron removed, both manganese atoms are back to the 4+ state. In the last step, one last proton leaves and one last electron is extracted over to the tyrosine oxygen, and oxygen 6 is so bereft as to be in a radical state, which allows it to bow over to oxygen 5 and bond with it, making O2. The metal complex has nicely buffered the oxidation states, allowing these extractions to go much more easily and in a more coordinated fashion than could happen in free solution.

The authors provide a set of snapshots of their infrared spectroscopy-supported simulations (done with chemical and quantum fidelity) of the final steps, where the oxygens, in the bottom panel, bond together at center. Note how the atomic positions and hydrogen attachments also change subtly as the sequence progresses. Here the manganese atoms are salmon, oxygen red, calcium yellow, hydrogen white, and a chloride ion is green.

This closely optimized and efficient reaction system is not just a wonder of biology and of earth history, but an object lesson in chemical technology, since photolysis of water is a very relevant dream for a sustainable energy future- a way to efficiently provide hydrogen as a fuel. Currently, using solar power to run water electrolyzers is not very efficient (20% for solar × 70% for electrolysis = 14% overall). Work is ongoing to design direct light-to-hydrogen water splitting, but so far it requires high heat and noxious chemicals. Life has all this worked out at the nano scale already, however, so there must be hope for better methods.


  • The US carried off an amazing economic success during the pandemic, keeping everything afloat as 22 million jobs were lost. This was well worth a bit of inflation on the back end.
  • Death
  • Have we at long last hit peak gasoline?
  • The housing crisis and local control.
  • The South has always been the problem.
  • The next real estate meltdown.

Saturday, May 20, 2023

On the Spectrum

Autism, broader autism phenotype, temperament, and families. It turns out that everyone is on the spectrum.

The advent of genomic sequencing and the hunt for disease-causing mutations have been notably unhelpful for most mental diseases. Possible or proven disease-causing mutations pile up, but they do little to illuminate the biology of what is going on, and even less toward treatment. Autism is a prime example, with hundreds of genes now identified as carrying occasional variants with causal roles. The strongest of these variants affect synapse formation among neurons, and a second class affects long-term regulation of transcription, such as turning genes durably on or off during developmental transitions. Very well- that all makes a great deal of sense, but what have we gained?

Clinically, we have gained very little. What is affected are neural developmental processes that can't be undone, or switched off in later life with a drug. So while some degree of understanding slowly emerges from these studies, translating it into treatment remains a distant dream. One aspect of the genetics of autism, however, is highly informative: the sheer number of low-effect and common variants. Genetically, autism can be thought of as coming in two types- cases due to a single high-effect, typically spontaneous or rare, mutation, and cases due to a confluence of common variants. The former tend to be severe and singular- an affected child in a family that is otherwise unaffected. The latter might be thought of as familial, where traits that have appeared (mildly) elsewhere in the family have been concentrated in one child, to the degree that it is now diagnosable.

This pattern has given rise to the very interesting concept of the "Broader Autism Phenotype", or BAP. This stems from the observation that, in families of autistic children, it is more often the case that ... "the parents, grandparents, and collaterals are persons strongly preoccupied with abstractions of a scientific, literary, or artistic nature, and limited in genuine interest in people." Thus there is not just a wide spectrum of autism proper, based on the particular confluence of genetic and other factors that lead to a diagnosis and its severity, but also, outside the medical spectrum, quite another spectrum of traits or temperaments that tend toward autism and comprise various eccentricities, but have not, at least to date, been medicalized.


The common nature of these variants leads to another question- why do they persist in the population? It is hard to believe that such a variety and number of variants are exclusively deleterious, especially when the BAP seems to have, well, rather positive aspects. No, I would suggest that an alternative way to describe BAP is "an enhanced ability to focus" and to develop interests in salient topics. Ever meet people who are technically useless, but warm-hearted? They are way off on the non-autistic part of the spectrum, while the more technically inclined- the fixers of the world and scholars of obscure topics- are more towards the "ability to focus" part of the spectrum. Only when such variants are unusually concentrated by the genetic lottery do children appear with frank autistic characteristics, totally unable to deal with social interactions, and given to obsessive focus and intense sensitivities.
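To make the "genetic lottery" idea concrete, here is a toy liability-threshold simulation. Every number in it (variant count, effect sizes, carrier frequency, threshold) is invented for illustration; it is a cartoon of the logic, not a model of real autism genetics.

```python
# Toy liability-threshold sketch: many common, small-effect variants sum to a
# score; a rare high-effect mutation adds one big jump. All numbers invented.
import random

N_COMMON = 200          # common variants, each carried with 50% probability
COMMON_EFFECT = 0.05    # tiny contribution per variant carried
RARE_EFFECT = 2.0       # a single rare, high-effect (often de novo) mutation
THRESHOLD = 5.8         # liability above this -> diagnosable phenotype

def liability(rare_mutation=False):
    score = sum(COMMON_EFFECT for _ in range(N_COMMON) if random.random() < 0.5)
    if rare_mutation:
        score += RARE_EFFECT
    return score

random.seed(1)
scores = [liability() for _ in range(100_000)]
familial = sum(s > THRESHOLD for s in scores)
print(f"Cases from common variants alone: {familial} per 100,000")
print(f"Rare mutation on an average background: {liability(True):.1f} "
      f"(threshold {THRESHOLD})")
# The familial route requires an unlucky concentration of common variants;
# the rare-mutation route clears the threshold even on an ordinary background.
```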

Thus autism looks like a more general lens on human temperament and evolution, the tip of a very interesting iceberg. As societies, we need the politicians, backslappers, networkers, and con men, but we also need- indeed increasingly, as our societies and technologies have developed over the centuries- people with the ability and desire to deal with reality, with technical and obscure issues, without social inflection but with highly focused attention. Militaries are a prime example, fusing the critical need to manage and motivate people with a modern technical base of vast scope, reliant on an army of specialists devoted to making all the machinery work. Why does there have to be this tradeoff? Why can't everyone be James Bond, both technically adept and socially debonair? That isn't really clear, at least to me, but one might speculate that, in the first place, dealing with people takes a great deal of specialized intelligence, and there may not be room for everything in one brain. Secondly, the enhanced ability to focus on technical or artistic topics may actively require, as is implicit in doing science and as was exemplified by Mr. Spock, an intentional disregard of social niceties and motivations, if one is to fully explore the logic of some other, non-human, world.


Saturday, May 6, 2023

The Development of Metamorphosis

Adulting as a fly involves a lot of re-organization.

Humans undergo a slight metamorphosis during adolescence. Imagine undergoing pupation like insects do and coming out with a totally new body, with wings! Well, Kafka imagined it, and it wasn't very pleasant. But insects do it all the time, and have been doing it for hundreds of millions of years, taking to the air and dominating the biosphere. What goes on during metamorphosis, how complete is its refashioning of the body, and how did it evolve? A recent paper (review) considered in detail how the brains of insects change during metamorphosis, finding a curious blend of birth, destruction, and reprogramming among their neurons.

Time is on the Y axis, and the emergence of later, more advanced types of insects is on the X axis. This shows the progressive elaboration of non-metamorphosing (ametabolous), partially metamorphosing (hemimetabolous), and fully metamorphosing (holometabolous) forms. Dragonflies are only partially metamorphosing in this scheme, though their adult forms are often highly different from their larval (nymph) form.


Insects evolved from crustaceans, and took to land as small silverfish-like creatures with exoskeletons roughly 450 million years ago. Over the next 100 million years, they developed metamorphosis as a way to preserve the benefits of their original lifestyle- early development in moist locations- while conquering the air and covering distance as adults. The earliest insect types are termed ametabolous, meaning that they have no metamorphosis at all, developing straight from egg to an adult-style form. These go through several molts to accommodate growth, but don't redesign their bodies. Next came hemimetabolous development, exemplified by grasshoppers and cockroaches, and also by dragonflies, which significantly refashion themselves during the last molt, gaining wings. In the nymph stage, those wings are carried around as small patches of flat embryonic tissue, which then suddenly grow out at the last molt. Dragonflies are an extreme case, and most hemimetabolous insects don't undergo such dramatic change. Last came holometabolous development, which involves pupation and a total redesign of the body- one that can go from a caterpillar to a butterfly.

The benefit of having wings is pretty clear- it allows huge increases in range for feeding and mating. Dragonflies are premier flying predators. But for a larva wallowing in fruit juice or leaf sap, or living underwater as dragonfly nymphs do, wings and long legs would be a hindrance. This conundrum led to the innovation of metamorphosis, built on the already somewhat dramatic practice of periodically molting off the exoskeleton. If one can grow a whole new skeleton, why not put wings on it, or legs? And metamorphosis has been tremendously successful, used by over 98% of insect species.

The adult insect tissues do not come from nowhere- they are set up as arrested embryonic tissues called imaginal discs. These are small patches of tissue that sit in the larva at specific positions. During pupation, while much of the rest of the body refashions itself, the imaginal discs rapidly develop into future tissues like wings, legs, genitalia, antennas, and new mouth parts. These discs have a fascinating internal structure that prefigures the future organ: the leg disc, for instance, is concentrically arranged, with the most distal future parts (the "toes") at its center. Transplanting a disc from one insect to another, or from one place to another, doesn't change its trajectory- it will still become a leg wherever it is put. So it is apparent that the larval stage is an intermediate stage of organismal development, in which a set of adult features are primed but put on hold, while a simpler and much more primitive larval body plan is executed to accommodate early growth and a niche in tight, moist, hidden places.

The new paper focuses on the brain, which larvae need as well as adults. So the question is- how does one brain develop from the other? Is the larval brain thrown away? The answer is no- the brain is not thrown away at all, but undergoes its own quite dramatic metamorphosis. The adult brain is substantially bigger, so many neurons are added. A few neurons are also killed off. But most of the larval neurons are reprogrammed: trimmed back and regrown out to new regions to take on new functions.

In this figure, the neurons are named as mushroom body output neuron (MBON), dopaminergic neuron (DAN, also called MBIN, for mushroom body input neuron), mushroom body extrinsic neuron to calyx (MBE-CA), and mushroom body protocerebral posterior lateral 1 (PPL1). MBON-c1 is totally reprogrammed to serve new locations in the adult, MBON-d1 changes its projections substantially, as do the (teal) incoming neurons, and MBON-12 was not operational in the larval stage at all.

The mushroom body, the brain area these authors focus on, is situated below the antennas and mediates smell reception, learning, and memory. Fly biologists regard it as analogous to our cortex- the most flexible area of the brain. Larvae don't have antennas, so their smell/taste reception is a lot more primitive. The mushroom body in Drosophila has about a hundred neurons at first, and continuously adds neurons over larval life, with a big push during pupation, ending up with ~2200 neurons in adults. Obviously these have to wire into the antennas as they develop, for instance.

The authors find that, for instance, no direct connections between input and output neurons of the mushroom body (MBIN and MBON, respectively) survive from larval to adult stages. Thus there can be no simple memories of this kind preserved between these life stages. While there are some signs of memory retention for a few things in flies, for the most part the slate is wiped clean. 

"These MBONs [making feedback connections] are more highly interconnected in their adult configuration compared to their larval one: their adult configuration shows 13 connections (31% of possible connections), while their larval configuration has only 7 (17%). Importantly, only three of these connections (7%) are present in both larva and adult. This percentage is similar to the 5% predicted if the two stages were wired up independently at their respective frequencies."


Interestingly, no neuron changed its type- that is, which neurotransmitter it uses to communicate. So, while pruning and rewiring were pervasive, the cells did not fundamentally change their stripes. All this is driven by the hormonal system (juvenile hormone, which blocks adult development, and ecdysone, which drives molting and, in the absence of juvenile hormone, pupation), which in turn sets off a program of transcription factors directing the genes needed for development. While a great deal is known about neuronal pathfinding and development, this paper doesn't comment on those downstream events- how, for instance, selected neurons are pruned, turned around, and induced to branch out in totally new directions. That will be the topic of future work.
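The hormonal logic in that parenthesis can be caricatured in a few lines- a cartoon only, since real control works through hormone titers and timing, not booleans, and the function here is my own naming, not anything from the paper:

```python
# Cartoon of the hormonal switch described above: ecdysone triggers a molt,
# and juvenile hormone decides whether the molt stays larval or proceeds to
# pupation and the adult program. (Real control involves titers and timing.)
def molt_outcome(ecdysone_pulse: bool, juvenile_hormone_present: bool) -> str:
    if not ecdysone_pulse:
        return "no molt"
    if juvenile_hormone_present:
        return "larval molt: grow, but stay juvenile"
    return "metamorphic molt: pupation and the adult program"

for jh in (True, False):
    print(f"ecdysone pulse, JH {'present' if jh else 'absent'}: "
          f"{molt_outcome(True, jh)}")
```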


  • Corrupt business practices. Why is this lawful?
  • Why such easy bankruptcy for corporations, but not for poor countries?
  • Watch the world's mesmerizing shipping.
  • Oh, you want that? Let me jack up the price for you.
  • What transgender is like.
  • "China has arguably been the biggest beneficiary of the U.S. security system in Asia, which ensured the regional stability that made possible the income-boosting flows of trade and investment that propelled the country’s economic miracle. Today, however, General Secretary of the Chinese Communist Party Xi Jinping claims that China’s model of modernization is an alternative to “Westernization,” not a prime example of its benefits."

Saturday, April 8, 2023

Molecules That See

Being trans is OK: retinal and the first event of vision.

Our vision is incredible. If I were not looking right now and experiencing it myself, it would be unbelievable that a biological system made up of motley molecules could accomplish the speed, acuity, and color that our visual system provides. It was certainly a sticking point for creationists, who found (and perhaps still find) it incredible that nature alone can explain it, not to mention its genesis out of the mists of evolutionary time. But science has been plugging away, filling in the details of the pathway, which so far appear to arise by natural means. Where consciousness fits in has yet to be figured out, but everything else is increasingly well accounted for.

It all starts in the eye, which has a curiously backward sheet of tissue at the back- the retina. Its nerves and blood vessels are on the surface, and only after light gets through those does it hit the photoreceptor cells at the rear. These photoreceptor cells come in two types: rods (not color sensitive) and cones (sensitive to red, green, or blue). The photoreceptor cells have a highly polarized and complicated structure, with the photosensitive pigments bottom-most, in a dense stack of membranes. Above these is a segment where the mitochondria reside, providing power, as vision needs a lot of energy. Above that is the nucleus of the cell (the brains of the operation), and top-most is the synaptic output to the rest of the nervous system- to those nerves that network on the outside of the retina.

A single photoreceptor cell, with the outer segment at the very back of the retina, and other elements in front.

Facing the photoreceptor membranes at the bottom of the retina is the retinal pigment epithelium, which is black with melanin. This is where light finally stops, and it also has very important functions in supporting the photoreceptor cells: buffering their ionic, metabolic, and immune environment, and phagocytosing and digesting photoreceptor membranes as they get photo-oxidized, damaged, and sloughed off. Finally, inside the photoreceptor cells are the pigment membranes, which harbor the photo-sensitive protein rhodopsin, which in turn hosts the sensing pigment, retinal. Retinal is a vitamin A-derived long-chain molecule that is bound inside rhodopsin or within other opsins, which confer slightly shifted color sensitivities.

These opsins transform the tickle that retinal receives from a photon into a conformational change that they, as GPCRs (G-protein coupled receptors), transmit to a G protein called transducin. For each photon coming in, about 50 transducin molecules are activated. Each activated transducin alpha subunit then induces its target, a cGMP phosphodiesterase, to consume about 1000 cGMP molecules. The local drop in cGMP concentration closes the cGMP-gated cation channels in the photoreceptor cell membrane, which starts the electrical signal that travels out to the synapse and the nervous system. This amplification cascade, along with the high density of retinal/opsin molecules packed into the photoreceptor membranes, provides the exquisite sensitivity that allows single photons to be detected.
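Multiplying out those round numbers gives a feel for the gain of the cascade (these are the order-of-magnitude figures from the paragraph above, not precise measurements):

```python
# Rough gain of the phototransduction cascade, using the round numbers above.
transducins_per_photon = 50    # G proteins activated per photo-isomerized rhodopsin
cgmp_per_transducin = 1000     # cGMP molecules hydrolyzed downstream of each one
print(f"~{transducins_per_photon * cgmp_per_transducin:,} cGMP consumed per photon")
# -> ~50,000, which is what lets a single photon register as an electrical change.
```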

Retinal, used in all photoreceptor cell types. Light causes the cis-form to kick over to the trans form, which is more stable.

The central position of retinal has long been understood, as has the key transition that a photon induces, from cis-retinal to all-trans retinal. Cis-retinal has a kink in the middle, where a double bond in the center of the fatty chain forms a "C" instead of a "W", swinging around the 3-carbon end of the chain. All-trans retinal is the default state, while the cis-structure is the "cocked" state- stable, but susceptible to triggering by light. Interestingly, retinal cannot be reset to the cis-state while still in the opsin protein. It has to be extracted and sent off to a series of at least three different enzymes to be re-cocked. It is alarming, really, to consider the complexity of all this.

A recent paper (review) provided the first look at what actually happens to retinal at the moment of activation. This is, understandably, a very fast process, and femtosecond X-ray analysis had to be brought in to look at it. Not only that, but as described above, once retinal flips from the dark to the light-activated state, it never reverses by itself. So every molecule or crystal used in the analysis can only be used once- no second looks are possible. The authors used a spray-crystallography system in which protein crystals suspended in liquid were shot into a super-fine, fast X-ray beam, just after passing an optical laser that activated the retinal. Computers are now powerful enough that the diffraction patterns from these passing crystals, thrown off in all directions, can be usefully collected. In the past, crystals were painstakingly positioned on goniometers at the center of large detectors, and other issues predominated, such as how to keep the crystals cold for chemical stability. The question here was what happens in the femto- and picoseconds after optical light absorption by retinal, ensconced in its (temporary) rhodopsin protein home.

Soon after activation, at one picosecond, retinal has squirmed around, altering many contacts with its protein. The cis (dark) conformation is shown in red, while the just-activated form is in yellow. The PSB site (the protonated Schiff base) at the far end of the fatty chain (right) is secured against the rhodopsin host, as is the retinal ring (left side), leaving the middle of the molecule to convey most of the shape change, a bit like a bicycle pedal.

And what happens? As expected, the retinal molecule twists from cis to trans, causing the protein contacts to shift. The retinal shift happens by 200 femtoseconds, and the knock-on effects through the protein are finished by 100 picoseconds. It all makes a nanosecond seem impossibly long! As imaged above, the shape shift of retinal changes a series of contacts it has with the rhodopsin protein, inducing it to change shape as well. The two ends of the retinal molecule seem to be relatively tacked down, leaving the middle, where the shape change happens, to do most of the work. 

"One picosecond after light activation, rhodopsin has reached the red-shifted Batho-Rh intermediate. Already by this early stage of activation, the twisted retinal is freed from many of its interactions with the binding pocket while structural perturbations radiate away as a transient anisotropic breathing motion that is almost entirely decayed by 100 ps. Other subtle and transient structural rearrangements within the protein arise in important regions for GPCR activation and bear similarities to those observed by TR-SFX during photoactivation of seven-TM helix retinal-binding proteins from bacteria and archaea."

All this speed is naturally lost in the later phases, which take many milliseconds to send signals to the brain, discern movement and shape, identify objects in the scene, and do all the other processing needed before consciousness can make any sense of it. But it is nice to know how elegant and uniform the opening scene in this drama is.


  • Down with lead.
  • Medicare advantage, cont.
  • Ukraine, cont.
  • What the heck is going on in Wisconsin?
  • Graph of the week- world power needs from solar, modeled to 2050. We are only scratching the surface so far.