Showing posts with label article review. Show all posts

Saturday, September 16, 2023

Why do we Put up with the Specter of Unemployment?

A post by Paul Krugman got me thinking... why do we allow unemployment at all?

Paul Krugman has been ruminating a lot about inflation- why it went up, and why it is back down. One insight is that, assuming that interest rate increases are supposed to work by raising unemployment, they have not evidently accomplished anything, yet inflation is back down and has been at 2-3% for the last year, on a month to month basis. Perhaps the rate increases operated through other channels, like dampening the real estate market or general confidence. Or perhaps the rate increases had as much inflationary as deflationary effect, which is to say none at all, and our current state is simply due to the working-out of all the supply chain disruptions, emergency federal spending, and opportunistic profit-making brought on by the pandemic. As was, incidentally, supposed by the MMT school from the start.

But why make workers the focus of inflation policy in the first place? And why use unemployment as the index of inflation-fighting effectiveness? Why have unemployment at all? Unemployment is a central and classic feature of capitalism, certainly not of our natural state. Chimpanzees never experience unemployment- there is always something to do. But when it comes to capitalism, once workers have bought into the labor-rental scheme, they are dependent on the specific employer for pay, and on the employer class as a whole for the existence of a labor market. While employers like nothing better than to "discipline" workers with the prospect of sleeping on the streets, we can do better.

The Phillips curve relates unemployment to inflation, anchored by the non-accelerating inflation rate of unemployment, or NAIRU- a somewhat mythical and protean concept that used to be taken as a "law" of economics, holding that low unemployment drives higher inflation, via a hotter labor market and wage increases. Even the Fed doesn't take this seriously any more.

Capitalists manage to pay themselves pretty well, to the point that our whole economy and social life (and politics) have been deranged by whole new classes of super-rich and their lackeys. So an allergy to income is not generally the problem- merely parting with money to pay others fairly. It is clear from the recent minimum wage increases that paying the lower end more has very little inflationary effect- it is peanuts on top of peanuts. But it is immensely meaningful for those on the receiving end.

Similarly, the provision of a job guarantee, (as previously posted), thereby eliminating involuntary unemployment, would help workers on the low end of the scale with much greater security. The government would be the employer of last resort, at a decent wage, offering a wide range of work, from street cleaning and park maintenance to non-profit collaborations and technical operations appropriate to whatever skills are on offer. Looking around the country, there is no end of work that needs doing. The Great Depression, which gave us so much innovative legislation, also gave us a model of public works and public jobs programs- something well worth learning from and using on a permanent basis.

Such a job guarantee would automatically provide a floor for the minimum wage, (and also a floor for work conditions, hours, and benefits), and replace most unemployment insurance and other benefit programs. If a person didn't want the jobs offered, they could take a lower basic welfare-type income. But the work would not be designed to be either onerous or easy- the point is to get some useful work done for society, and to absorb the bottom level of the labor market as needed when the private market fluctuates. It is an insurance system, just as we have for health, for property hazards, and, as embodied in the Federal Reserve, for the banks and US capitalism writ large. Such a guarantee of work is, I think, far superior to the current unemployment insurance system, which is grudgingly funded by employers and pays people to not work, an arrangement that is morally perverse and heavily abused. Private employers would naturally be able to bid workers off the system with higher pay.

Such a system would have little effect on the Fed's interest rate policies, (assuming they are effective in the first place), since unemployment is really just an index of economic activity, not the point of interest rate increases. Economic slowing would be reflected in higher numbers of people thrown into the job guarantee, and presumably getting lower (but still decent) pay. (A pay scale that would, incidentally, be more anti-cyclical than current policies.) The slowing would also be reflected in a myriad of other slowdowns that would contribute, if needed, to inflation control.

The irony is that welfare reform of the last few decades focused relentlessly on "work requirements", and of the decades prior to that on "job training". The latter was a boondoggle, and the former forced the poor into appalling, coercive, and low-paying jobs- the very bottom end of the capitalist system. Which was the design, no doubt. I can imagine that capitalists would yell "communism" about a program where governments give decent work with decent income and benefits to anyone willing to work. Well, if that is communism, we need more of it.

Unemployment is currently used as a potent weapon- both by capitalists, given its dire consequences, and paradoxically by unions as well, which walk off the job and strike as a way to pressure employers who may find it inconvenient to hire scabs in a short time. A job guarantee could transform such conflict by taking the most dire consequences off the table. Everyone could maintain their livelihoods, and negotiations could proceed with less drama and coercion. And that is what our society should be about, promoting freedom and civility by removing forms of unjust and pernicious coercion, whether political, criminal, military, or economic.


Saturday, September 9, 2023

Keeping Cellular Signals Straight

Cells often use the same signaling systems for different inputs and purposes. Scaffolds come to the rescue.

Eukaryotic cells are huge, at least compared with their progenitors, bacteria. Thanks to their mitochondria and other organizational apparatus, the typical eukaryotic cell has about 100,000 times the volume of a bacterium. These cells are virtual cities- partially organized by their membrane organelles, but there is much more going on, with tremendous complexity to manage. One issue that was puzzled over for decades was how signals were kept straight. Eukaryotic cells use a variety of signaling systems, prototypically starting with a receptor at the cell surface, linking to a kinase (or series of kinases) that then amplifies and broadcasts the signal inside the cell, ending up with the target phosphorylated proteins entering the nucleus and changing the transcription program of the cell.

While our genome does have roughly 500 kinases, and one to two thousand receptors, a few of them (especially some kinases and their partners, which form "intracellular signaling systems") tend to crop up frequently in different systems and cell types, like the MAP kinase cascade, associated with growth and stress responses, and the AKT kinase, associated with nutrient sensing and growth responses. Not only do many different receptors turn these cellular signaling hubs on, but their effects can often be different as well, even from unrelated signals hitting the same cell.

If all these proteins were diffusible all over the cell, such specificity of signaling would be impossible. But it turns out that they are usually tethered in particular ways, by organizational helpers called scaffold proteins. These scaffolds may localize the signaling to some small volume within the larger cell, such as a membrane "raft" domain. They may also bind multiple actors of the same signaling cascade, bringing several proteins (kinases and targets) together to make signaling both efficient and (sterically) insulated from outside interference. And, as a recent paper shows, they can also tweak their binding targets allosterically to insulate them from outside interference.

What is allostery, versus steric hindrance? If one protein (A) binds another (B) such that a phosphorylation or other site is physically hidden from other proteins, such as a kinase (C) that would activate it, that site is said to be sterically hidden- that is, by geometry alone. On the other hand, if that site remains free and accessible, but the binding of A re-arranges protein B such that it no longer binds C very well, blocking the kinase event despite the site of phosphorylation being available, then A has allosterically regulated B. It has altered the shape of B in some subtle way that alters its behavior. While steric effects are dominant and occur everywhere in protein interactions and regulation, allostery comes up pretty frequently as well, proteins being very flexible gymnasts.

GSK3 is part of insulin signaling. It is turned off by phosphorylation, which affects a large number of downstream functions, such as turning on glycogen synthase.

The current case turns on the kinase GSK3, which, according to Wikipedia... "has been identified as a protein kinase for over 100 different proteins in a variety of different pathways. ... GSK-3 has been the subject of much research since it has been implicated in ... diseases, including type 2 diabetes, Alzheimer's disease, inflammation, cancer, addiction and bipolar disorder." GSK3 was named for its kinase activity targeting glycogen synthase, which inactivates the synthase, thus shutting down production of glycogen, which is a way to store sugar for later use. Connected with this homeostatic role, the hormone insulin turns GSK3 off via phosphorylation, through a pathway downstream of the membrane-resident insulin receptor called the PI3 kinase / protein kinase B pathway. Insulin thus indirectly increases glycogen synthesis, mopping up excess blood sugar. The circuit reads: insulin --> kinases --| GSK3 --| glycogen synthase --> more glycogen.
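The double-negative logic of that circuit can be captured in a toy boolean sketch (purely illustrative- the function and variable names are mine, not from any real modeling tool):

```python
def glycogen_synthesis_on(insulin_present: bool) -> bool:
    """Toy model of: insulin --> kinases --| GSK3 --| glycogen synthase."""
    kinases_active = insulin_present      # insulin activates the PI3K / PKB kinases
    gsk3_active = not kinases_active      # the kinases phosphorylate and inhibit GSK3
    synthase_active = not gsk3_active     # active GSK3 phosphorylates and inhibits the synthase
    return synthase_active                # active synthase --> more glycogen

# The two inhibitions cancel: insulin ultimately promotes glycogen synthesis.
print(glycogen_synthesis_on(True))   # True
print(glycogen_synthesis_on(False))  # False
```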

GSK3 also functions in this totally different pathway, downstream of WNT and Frizzled. Here, GSK3 phosphorylates beta-catenin and turns it off, most of the time. WNT (like insulin) turns GSK3 off, which allows beta-catenin to accumulate and do its gene regulation in the nucleus. Cross-talk between these pathways would be very inopportune, and is prevented by the various functions of Axin, a scaffold protein. 


Another well-studied role of GSK3 is in a developmental signal, called WNT, which promotes developmental decisions of cells during embryogenesis, wound repair, and cancer, affecting cell migration, proliferation, etc. GSK3 is central here for the phosphorylation of beta-catenin, which is a transcription regulator, among other things, and when active migrates to the nucleus to turn its target genes on. But when phosphorylated, beta-catenin is diverted to the proteasome and destroyed, instead. This is the usual state of affairs, with WNT inactive, GSK3 active, and beta-catenin getting constantly made and then immediately disposed of. The assembly that carries this out is called a "destruction" complex. But an incoming WNT signal, typically from neighboring cells carrying out some developmental program, alters the activity of a key scaffold in this pathway, Axin, which is destroyed and replaced by Dishevelled, which turns GSK3 off.

How is GSK3 kept on all the time for the developmental purposes of the WNT pathway, while allowing cells to still be responsive to insulin and other signals that also use GSK3 for their intracellular transmission? The current authors found that the Axin scaffold has a special property of allosterically preventing the phosphorylation of its bound GSK3 by other upstream signaling systems. They even pared Axin down to an extremely minimal 26 amino acid portion that binds GSK3, and this still performed the inhibition, showing that the binding doesn't sterically block phosphorylation by insulin signals, but blocks it allosterically.

That is great, but what about the downstream connections? Keeping GSK3 on is great, but doesn't that make a mess of the other pathways it participates in? This is where scaffolds have a second job, which is to bring upstream and downstream components together, to keep the whole signal flow isolated. Axin also binds beta-catenin, the GSK3 substrate in WNT signaling, keeping everything segregated and straight. 

Scaffold proteins may not "do" anything, as enzymes or signaling proteins in their own right, but they have critical functions as "conveners" of specific, channeled communication pathways, and allow the re-use of powerful signaling modules, over evolutionary time, in new circuits and functions, even in the same cells.


  • The oceans need more help, less talk.
  • Is Trump your church?
  • Can Poland make it through?

Sunday, August 27, 2023

Better Red Than Dead

Some cyanobacteria strain for photosynthetic efficiency at the red end of the light spectrum.

The plant world is green around us- why green, and not some other color, like, say, black? That plants are green means that they are letting green light through (or out by reflection), giving up some energy. Chlorophyll absorbs both red light and blue light, but not green, though all are near the peak of solar output. Some accessory pigments within the light-gathering antenna complexes can extend the range of wavelengths absorbed, but clearly a fair amount of green light gets through. A recent theory suggests that this use of two separated bands of light is an optimal solution to stabilize power output. At any rate, it is not just the green light- the extra energy of the blue light is also thrown away as heat- its excitation is allowed to decay to the red level of excitation, within the antenna complex of chlorophyll molecules, since the only excited state used in photosynthesis is that at ~690 nm. This forms a uniform common denominator for all incoming light energy that then induces charge separation at the oxygen reaction center, (stripping water of electrons and protons), and sends newly energized electrons out to quinone molecules and on into the biosynthetic apparatus.

The solar output, which plants have to work with.

Fine. But what if you live deeper in the water, or in the veins of a rock, or in a mossy, shady nook? What if all you have access to is deeper red light, like at 720 nm, with lower energy than the standard input? In that case, you might want to re-engineer your version of photosynthesis to get by with slightly lower-energy light, while getting the same end results of oxygen splitting and carbon fixation. A few cyanobacteria (the same bacterial lineage that pioneered chlorophyll and the standard photosynthesis we know so well) have done just that, and a recent paper discusses the tradeoffs involved, which are of two different types.

The chlorophylls with respective absorption spectra and partial structures. Redder light is toward the right. Chlorophyll a is the one used most widely in plants and cyanobacteria. Chlorophyll b is also widely used in these organisms as an additional antenna pigment that extends the range of absorbed light. Chlorophylls d and f are red-shifted and used in the specialized species discussed here.

One of the species, Chroococcidiopsis thermalis, is able to switch states, from bright/white light absorption with a normal array of pigments, to a second state where it expresses chlorophylls d and f, which absorb light at the lower energy 720 nm, in the far red. This "facultative" ability means that it can optimize the low-light state without much regard to efficiency or photo-damage protection, which it can address by switching back to the high energy wavelength pigment system. The other species is Acaryochloris marina, which has no bright light system, but only chlorophyll d. This bacterium lives inside the cells of bigger red algae, so has a relatively stable, if shaded, environment to deal with.

What these and prior researchers found was that the ultimate quantum of energy used to split water to O2, and to send energized electrons off to photosystem I and carbon compound synthesis, is the same as in any other chlorophyll a-using system. The energetics of those parts of the system apparently cannot be changed. The shortfall needs to be made up in the front end, where there is a sharp drop in energy between that absorbed- 1.82 electron volts (eV) from photons at 680 nm, but only 1.72 eV from far-red photons- and that needed at the next points in the electron transport chains (about 1.0 eV). This difference plays a large role in directing those electrons to where the plant wants them to go- down the gradient to the oxygen-evolving center, and to the quinones that ferry energized electrons to other synthetic centers. While this large drop seems like waste, a smaller difference would allow the energized electrons to go astray, forming chemical radicals and other products dangerous to the cell.
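The photon energies quoted above follow directly from E = hc/λ, with hc ≈ 1239.84 eV·nm (a standard physical constant); a quick check in code:

```python
# Photon energy (eV) from wavelength (nm): E = hc / lambda, hc ≈ 1239.84 eV·nm.
HC_EV_NM = 1239.84

def photon_ev(wavelength_nm: float) -> float:
    return HC_EV_NM / wavelength_nm

print(f"680 nm photon: {photon_ev(680):.2f} eV")  # ~1.82 eV, standard chlorophyll a
print(f"720 nm photon: {photon_ev(720):.2f} eV")  # ~1.72 eV, far-red light
```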

Summary diagram, described in the text. Energy levels are shown for photon excitation of chlorophyll (Chl, left axis), and for energy transitions through the reaction center (Phe- pheophytin) and the quinones (Q) that conduct energized electrons out to the other photosynthetic center and biosynthesis. On top are shown the respective system types- normal chlorophyll a from white-light adapted C. thermalis, chlorophyll d in A. marina, and chlorophyll f in red-adapted C. thermalis.

What these researchers summarize in the end is that both of the red light-using cyanobacteria squeeze this middle zone of the power gradient, in different ways. There is an intermediate event in the trail from photon-induced electron excitation to the outgoing quinone (+ electron) and O2 that is the target of all the antenna chlorophylls- the photosynthetic reaction center. This typically has chlorophyll a (called P680) and pheophytin, a chlorophyll-like molecule. It is at this chlorophyll a molecule that the key step takes place- the excitation energy (an electron bumped to a higher energy level) conducted in from the antenna of ~30 other chlorophylls pops out its excited electron, which flits over to the pheophytin, and thence to the carrier quinone molecules and photosystem I. Simultaneously, an electron comes in to replace it from the oxygen-evolving center, which receives alternate units of photon energy, also from the chlorophyll/pheophytin reaction center. The figure above describes these steps in energetic terms, from the original excited state, to the pheophytin (Phe-, loss of 0.16 eV) to the exiting quinone state (Qa-, loss of 0.385 eV). In the organisms discussed here, chlorophyll d replaces a at this center, and since its structure and absorbance are different, its energized electron is about 0.1 eV less energetic.

In A. marina, (center in the diagram above), the energy gap between the pheophytin and the quinone is squeezed, losing about 0.06 eV. This has the effect of losing some of the downward "slope" on the energy landscape that prevents side reactions. Since A. marina has no choice but to use this lower energy system, it needs all the efficiency it can get, in terms of the transfer from chlorophyll to pheophytin. But it then sacrifices some driving force from the next step to the quinone. This has the ultimate effect of raising damage levels and side reactions when faced with more intense light. However, given its typically stable and symbiotic life style, that is a reasonable tradeoff.

On the other hand, C. thermalis (right-most in the diagram above) uses its chlorophyll d/f system on an optional basis when the light is bad. So it can give up some efficiency (in driving pheophytin electron acceptance) for better damage control. It has dramatically squeezed the gap between chlorophyll and pheophytin, from 0.16 eV to 0.08 eV, while keeping the main pheophytin-to-quinone gap unchanged. This has the effect of keeping the pumping of electrons out to the quinones in good condition, with low side-effect damage, but restricts overall efficiency, slowing the rate of excitation transfer to pheophytin, which affects not only the quinone-mediated path of energy to photosystem I, but also the path to the oxygen evolving center. The authors mention that this cyanobacterium recovers some efficiency by making extra light-harvesting pigments that provide more inputs, under these low / far-red light conditions.
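Putting the numbers from the preceding paragraphs together, the three energy ladders can be compared in a short sketch. The 0.325 eV value for A. marina is derived here as 0.385 - 0.06, and the table layout is my own framing of the paper's figures:

```python
# Energy ladders (eV) for the three reaction-center variants described above.
systems = {
    "chl a (white-light C. thermalis)": {"photon": 1.82, "chl_to_phe": 0.16, "phe_to_q": 0.385},
    "chl d (A. marina)":                {"photon": 1.72, "chl_to_phe": 0.16, "phe_to_q": 0.325},
    "chl f (far-red C. thermalis)":     {"photon": 1.72, "chl_to_phe": 0.08, "phe_to_q": 0.385},
}
remaining = {}
for name, s in systems.items():
    # Energy left on the electron after the two reaction-center drops
    remaining[name] = round(s["photon"] - s["chl_to_phe"] - s["phe_to_q"], 3)
    print(f"{name}: ~{remaining[name]} eV at the quinone")
```

The far-red systems end up within a few hundredths of an eV of the standard system, which is the point: the squeeze is taken out of different gaps in each species.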

The methods used to study all this were mostly based on fluorescence, which emerges from the photosynthetic system when electrons fall back from their excited states. A variety of inhibitors have been developed to prevent electron transfer, such as to the quinones, which bottles up the system and causes increased fluorescence and thermoluminescence, whose wavelengths reveal the energy gaps causing them. Thus it is natural, though also impressive, that light provides such an incisive and precise tool to study this light-driven system. There has been much talk that these far red-adapted photosynthetic organisms validate the possibility of life around dim stars, including red dwarfs. But obviously these particular systems developed evolutionarily out of the dominant chlorophyll a-based system, so wouldn't provide a direct path. There are other chlorophyll systems in bacteria, however, and systems that predate the use of oxygen as the electron source, so there are doubtless many ways to skin this cat.


  • Maybe humiliating Russia would not be such a bad thing.
  • Republicans might benefit from reading the Federalist Papers.
  • Fanny Willis schools Meadows on the Hatch act.
  • "The top 1% of households are responsible for more emissions (15-17%) than the lower earning half of American households put together (14% of national emissions)."

Sunday, July 30, 2023

To Sleep- Perchance to Inactivate OX2R

The perils of developing sleeping, or anti-sleeping, drugs.

Sleep- the elixir of rest and repose. While we know of many good things that happen during sleep- the consolidation of memories, the cardiovascular rest, the hormonal and immune resetting, the slow waves and glymphatic cleansing of the brain- we don't know yet why it is absolutely essential, and lethal if repeatedly denied. Civilized life tends to damage our sleep habits, given artificial light and the endless distractions we have devised, leading to chronic sleeplessness and a spiral of narcotic drug consumption. Some conditions and mutations, like narcolepsy, have offered clues about how sleep is regulated, which has led to new treatments, though to be honest, good sleep hygiene is by far the best remedy.

Genetic narcolepsy was found to be due to mutations in the second receptor of the hormone orexin (OX2R), or to auto-immune conditions that kill off a specialized set of neurons in the hypothalamus- a basal part of the brain that sits just over the brain stem. This region normally has ~50,000 neurons that secrete orexin (which comes in two kinds as well, 1 and 2), and project to areas all over the brain, especially basal areas like the basal forebrain and amygdala, to regulate not just sleep but feeding, mood, reward, memory, and learning. Like any hormone receptor, the orexin receptors can be approached in two ways- by turning them on (agonist) or by turning them off (antagonist). Antagonist drugs were developed which turn off both orexin receptors, and thus promote sleep. The first was named suvorexant, using the "orex" and "ant" lexical elements to mark its functions, which is now standard for generic drug names.

This drug is moderately effective, and is a true sleep enhancer, promoting falling asleep, restful sleep, and length of sleep, unlike some other sleep aids. Suvorexant antagonizes both receptors, but the researchers knew that only the deletion of OX2R, not OX1R, (in dogs, mice, and other animals), generates narcolepsy, so they developed a drug more specific to OX2R only. But the result was that it was less effective. It turned out that binding and turning off OX1R was helpful to sleep promotion, and there were no particularly bad side effects from binding both receptors, despite the wide ranging activities they appear to have. So while the trial of Merck's MK-1064 was successful, it was not better than their existing two-receptor drug, and its development was shelved. And we learned something intriguing about this system. While all animals have some kind of orexin, only mammals have the second orexin family member and receptor, suggesting that some interesting, but not complete, bifurcation happened in the functions of this system in evolution.

What got me interested in this topic was a brief article from yet another drug company, Takeda, which was testing an agonist of the orexin receptors in an effort to treat narcolepsy. They created TAK-994, which binds to OX2R specifically and showed a lot of promise in animal trials. It is an orally taken pill, in contrast to the existing treatment, danavorexton, which must be injected. In the human trial, it was remarkably effective, virtually eliminating cataleptic / narcoleptic episodes. But there was a problem- it caused enough liver toxicity that the trial was stopped and the drug shelved. Presumably, this company will try again, making variants of this compound that retain affinity and activity but not the toxicity.

This brings up an underappreciated peril in drug design- where drugs end up. Drugs don't just go into our systems, hopefully slipping through the incredibly difficult gauntlet of our digestive system. But they all need to go somewhere after they have done their jobs, as well. Some drugs are hydrophilic enough, and generally inert enough, that they partition into the urine by dilution and don't have any further metabolic events. Most, however, are recognized by our internal detoxification systems as foreign, (that is, hydrophobic, but not recognizable as fats/lipids that are usual nutrients), and are derivatized by liver enzymes and sent out in the bile. 

Structure of TAK-994, which treats narcolepsy, but at the cost of liver dysfunction.

As you can see from the chemical structure above, TAK-994 is not a normal compound that might be encountered in the body, or as food. The amino sulfate is quite unusual, and the fluorines sprinkled about are totally unnatural. This would be a red flag substance, like the various PFAS materials we hear about in the news. The rings and fluorines create a relatively hydrophobic substance, which would need to be modified so that it can be routed out of the body. That is what a key enzyme of the liver, CYP3A4, does. It (and many family members that have arisen over evolutionary time) oxidizes all manner of foreign hydrophobic compounds, using a heme cofactor to handle the oxygen. It can add OH groups (hydroxylation), break open double bonds (epoxidation), and break open phenol ring structures (aromatic oxidation).

But then what? Evolution has met most of the toxic substances we encounter in nature with appropriate enzymes and routes out of the body. But the novel compounds we are making with modern chemistry are something else altogether. Some drugs are turned on by this process, waiting till they get to the liver to attain their active form. Others, apparently such as this one, are made into toxic compounds (as yet unknown) by this process, such that the liver is damaged. That is why animal studies and safety trials are so important. This drug binds to its target receptor and does what it is supposed to do, but that isn't enough to be a good drug.

 

Sunday, July 23, 2023

Many Ways There are to Read a Genome

New methods to unravel the code of transcriptional regulators.

When we deciphered the human genome, we came up with three billion letters of its linear code- nice and tidy. But that is not how it is read inside our cells. Sure, it is replicated linearly, but the DNA polymerases don't care about the sequence- they are not "reading" the book, they are merely copying machines trying to get it to the next generation with as few errors as possible. The book is read in an entirely different way, by a herd of proteins that recognize specific sequences of the DNA- the transcription regulators (also commonly called transcription factors [TF], in the classic scientific sense of some "factor" that one is looking for). These regulators- and there are, by one recent estimate, 1,639 of them encoded in the human genome- constitute an enormously complex network of proteins and RNAs that regulate each other, and regulate "downstream" genes that encode everything else in the cell. They are made in various proportions to specify each cell type, to conduct every step in development, and to respond to every eventuality that evolution has met and mastered over the eons.

Loops occur in the DNA between sites of regulator binding, in order to turn genes on (enhancer, E, and transcription regulator/factor, TF).

Once sufficient transcription regulators bind to a given gene, it assembles a transcription complex at its start site, including the RNA polymerase that then generates an RNA copy that can float off to be made into a protein, (such as a transcription regulator), or perhaps function in its RNA form as part of a zoo of rRNA, tRNA, miRNA, piRNA, and many more that also help run the cell. Some regulators can repress transcription, and many cooperate with each other. There are also diverse regions of control for any given target gene in its nearby non-coding DNA- cassettes (called enhancers) that can be bound by different regulators and thus activated at different stages for different reasons.

These binding sites in the DNA are typically quite small. A classic regulator, SP1, (itself 785 amino acids long and bearing three consecutive DNA binding motifs, each coordinated by a zinc ion), binds to a sequence resembling (G/T)GGGCGG(G/A)(G/A)(C/T). So only ten bases are specified at all, and four of those positions are degenerate. By chance, a genome of three billion bases will contain such a sequence roughly 46,000 times. So this kind of binding is not very strictly specified, and such sites tend to appear and disappear frequently in evolution. That is one of the big secrets of evolution- while some changes are hard, others are easy, and there is constant variation and selection going on in the regulatory regions of genes, refining and defining where / when they are expressed.
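That back-of-envelope count is easy to reproduce: the consensus fixes six positions exactly (probability 1/4 each, assuming equal base frequencies) and allows two bases at each of the other four (probability 1/2 each). A sketch:

```python
# Expected chance occurrences (per strand) of the SP1-like consensus
# (G/T)GGGCGG(G/A)(G/A)(C/T) in a random 3-billion-base genome,
# assuming equal base frequencies.
allowed_bases = [2, 1, 1, 1, 1, 1, 1, 2, 2, 2]  # bases permitted at each of the 10 positions
p_site = 1.0
for n in allowed_bases:
    p_site *= n / 4.0                            # probability of a match at this position
expected = 3e9 * p_site
print(f"match probability per site: {p_site:.2e}")   # 1/65,536 ≈ 1.53e-05
print(f"expected hits: {expected:,.0f}")             # ~45,776
```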

Anyhow, researchers naturally have the question- what is the regulatory landscape of a given gene under some conditions of interest, or of an entire genome? What regulators bind, and which ones are most important? Can we understand, given our technical means, what is going on in a cell from our knowledge of transcription regulators? Can we read the genome like the cell itself does? Well the answer to that is, obviously no and not yet. But there are some remarkable technical capabilities. For example, for any given regulator, scientists can determine where it binds all over the genome in any given cell, by chemical crosslinking methods. The prediction of binding sites for all known regulators has been a long-standing hobby as well, though given the sparseness of this code and the lability of the proteins/sites, one that gives only statistical, which is to say approximate, results. Also, scientists can determine across whole genomes where genes are "open" and active, vs where they are closed. Chromatin (DNA bound with histones in the nucleus) tends to be closed up on repressed and inactive genes, while transcription regulators start their work by opening chromatin to make it accessible to other regulators, on active genes.

This last method offers the prospect of truly global analysis, and was the focus of a recent paper. The idea was to merge a detailed library of predicted binding sites for all known regulators all over the genome with experimental mapping of open chromatin regions in a particular cell or tissue of interest. And then combine all that with existing knowledge about what each of the target genes near the predicted binding sites do. The researchers clustered the putative regulators binding across all open regions by this functional gene annotation to come up with statistically over-represented transcription regulators and functions. This is part of a movement across bioinformatics to fold in more sources of data to improve predictions when individual methods each produce sketchy, unsatisfying results.

In this case, mapping open chromatin by itself is not very helpful, but becomes much more informative when combined with assessments of which genes these open regions are close to, and what those genes do. This kind of analysis can quickly determine whether you are looking at an immune cell or a neuron, as the open chromatin is a snapshot of all the active genes at a particular moment. In this recent work, the analysis was extended to say that if some regulator is consistently bound near genes participating in some key cellular function, then we can surmise that that regulator may be causal for that cell type, or at least part of the program specific to that cell. The point for these researchers is that this multi-source analysis performs better at finding cell-type-specific and function-specific regulators than the more common approach of just adding up the prevalence of regulators occupying open chromatin all over a given genome, regardless of the local gene functions. That kind of approach tends to yield common regulators, rather than cell-type specific ones.
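As a toy illustration of this kind of multi-source scoring (all gene, regulator, and annotation names below are hypothetical stand-ins, not data from the paper), one can tally the functional annotations of the genes near a regulator's predicted binding sites in open chromatin, and see how a cell-type-specific regulator concentrates in one function while a common one spreads across many:

```python
from collections import defaultdict

# Hypothetical inputs: genes near each regulator's predicted sites that
# fall inside experimentally mapped open chromatin, plus gene annotations.
predicted_targets = {          # regulator -> nearby genes in open regions
    "TBX21": {"IFNG", "IL12RB2", "CXCR3"},
    "SP1":   {"IFNG", "ACTB", "GAPDH", "CXCR3"},
}
gene_functions = {             # gene -> functional annotation (e.g. a GO term)
    "IFNG": "T-cell activation", "IL12RB2": "T-cell activation",
    "CXCR3": "T-cell activation", "ACTB": "cytoskeleton",
    "GAPDH": "glycolysis",
}

def function_profile(regulator):
    """Count how a regulator's nearby target genes distribute over functions."""
    counts = defaultdict(int)
    for gene in predicted_targets[regulator]:
        counts[gene_functions[gene]] += 1
    return dict(counts)

# A specific regulator concentrates in one function...
print(function_profile("TBX21"))
# ...while a common housekeeping regulator spreads across several.
print(function_profile("SP1"))
```

A real analysis would of course weight these counts statistically against genome-wide background; this sketch only shows the shape of the bookkeeping.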

To validate, they do rather half-hearted comparisons with other pre-existing techniques, without blinding, and with validation of only their own results. So it is hardly a fair comparison. They look at systemic lupus erythematosus (SLE), and find different predictions coming from their current technique (called WhichTF) vs one prior method (MEME-ChIP). MEME-ChIP just finds predicted regulator binding sites in genomic regions (i.e. open chromatin regions) given by the experimenter, and does a statistical analysis of their prevalence, regardless of the functions of either the regulator or the genes it binds near. So you get the absolute prevalence of each regulator in open (active) regions vs the genome as a whole.
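The prevalence statistics behind a MEME-ChIP-style analysis amount to asking whether a motif appears in the supplied regions more often than chance would predict. A minimal sketch of that test, using a one-sided hypergeometric tail with made-up numbers (this is the generic enrichment calculation, not MEME-ChIP's exact internals):

```python
from math import comb

def hypergeom_pval(k, n, K, N):
    """One-sided enrichment p-value P(X >= k): n regions are sampled from a
    genome of N regions, K of which contain the motif; k of the sample do."""
    return sum(
        comb(K, i) * comb(N - K, n - i) for i in range(k, min(n, K) + 1)
    ) / comb(N, n)

# Toy numbers: 40 of 200 open regions carry the motif, vs 500 of 10,000
# regions genome-wide (expected by chance: about 10 of 200).
p = hypergeom_pval(k=40, n=200, K=500, N=10000)
print(f"enrichment p-value: {p:.2e}")
```

Note what this test does not see: the functions of the genes near the regions, which is exactly the information WhichTF folds in.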

Different regulators are identified from the same data by different statistical methods. But both sets are relevant.


What to make of these results? The MEME-ChIP method finds regulators like SP1, SP2, SP4, and ZFX/Y. SP1 et al. are very common regulators, but that doesn't mean they are unimportant, or not involved in disease processes. SP1 has been observed as likely to be involved in autoimmune encephalitis in mice, a model of multiple sclerosis, and naturally not so far from lupus in pathology. ZFX is also a prominent regulator in the progenitor cells of the immune system. So while these authors think little of the competing methods, those methods seem to do a very good job of identifying significant regulators, as do their own methods. 

There is another problem with the authors' WhichTF method, which is that gene annotation is in its infancy. Users are unlikely to find new functions using existing annotations. Many genes have no known function yet, and new functions are being found all the time for those already assigned functions. So if one's goal is classification of a cell or of transcription regulators according to existing schemes, this method is fine. But if one's research goal is to find new cell types, or new processes, this method will channel you into existing ones instead.

This kind of statistical refinement is unlikely to give us what we seek in any case- a strong predictive model of how the human genome is read and activated by the herd of gene regulators. For that, we will need new methods for specific interaction detection, with a better appreciation for complexes between different regulators (which will be afforded by the new AI-driven structural techniques), and more appreciation for the many other operators on chromatin, like the various histone modifying enzymes that generate another whole code of locks and keys that do the detailed regulation of chromatin accessibility. Reading the genome is likely to be a somewhat stochastic process, but we have not yet arrived at the right level of detail, or the right statistics, to do it justice.


  • Unconscious messaging and control. How the dark side operates.
  • Solzhenitsyn on evil.
  • Come watch a little Russian TV.
  • "Ruthless beekeeping practices"
  • The medical literature is a disaster.

Saturday, June 10, 2023

A Hard Road to a Cancer Drug

The long and winding story of the oncogene KRAS and its new drug, sotorasib.

After half a century of the "War on Cancer", new treatments are finally straggling into the clinic. It has been an extremely hard and frustrating road to study cancer, let alone treat it. We have learned amazing things, but mostly we have learned how convoluted a few billion years of evolution can make things. The regulatory landscape within our cells is undoubtedly the equal of any recalcitrant bureaucracy, full of redundant offices, multiple veto points, and stakeholders with obscure agendas. I recently watched a seminar in the field, which discussed one of the major genes mutated in cancer and what it has taken to develop a treatment against it. 

Cancer is caused by DNA mutations, and several different types need to occur in succession. There are driver mutations, which are the first step in the loss of normal cellular control. But additional mutations have to happen for such cells to progress through regulatory blocks, like escape from local environmental controls on cell type and cell division, past surveillance by the immune system, and past the reluctance of differentiated cells to migrate away from their resident organ. By the end, cancer cells typically have huge numbers of mutations, having incurred mutations in their DNA repair machinery in an adaptive effort to evade all these different controls.

While this means that many different targets exist that can treat some cancers, it also means that any single cancer requires a precisely tailored treatment, specific to its mutated genes. And that resistance is virtually inevitable given the highly mutable nature of these cells. 

One of the most common genes to be mutated to drive cancer (in roughly 20% of all cases) is KRAS, part of the RAS family of NRAS, KRAS, and HRAS. These were originally discovered through viruses that cause cancer in rats. These viruses (such as Kirsten rat sarcoma virus) carry a copy of a rat gene, which they overproduce and use to overcome normal proliferation controls during infection. The viral gene was called an oncogene, and the original rat (or human) version was called a proto-oncogene, named KRAS. The RAS proteins occupy a central part of the signaling path that external events and stresses turn on to activate cell growth and proliferation, called the MAP kinase cascade. For instance, epidermal growth factor comes along in the blood, binds to a receptor on the outside of a cell, and turns on RAS, then RAF, MEK, MAPK, and finally transcription regulators that turn on genes in the nucleus, resulting in new proteins being expressed. "Turning on" means different things at each step in this cascade. The transcription regulators typically get phosphorylated by their upstream kinases like MAPK, which tags them for physical transport into the nucleus, where they can then activate genes. MAPK is turned on by being itself phosphorylated by MEK, and MEK is phosphorylated by RAF. RAF is turned on by binding to RAS, whose binding activity in turn is regulated by the state of a nucleotide (GTP) bound by RAS. When binding GTP, RAS is on, but when binding GDP, it is off.

A schematic of the RAS pathway, whereby extracellular growth signals are interpreted and amplified inside our cells, resulting in new gene expression as well as other more immediate effects. The cell surface receptor, activated by its ligand, activates associated SOS which activates RAS to the active (GTP) state. This leads to a kinase cascade through RAF, MEK, and MAPK and finally to gene regulators like MYC.

This whole system seems rather ornate, but it accomplishes one important thing, which is amplification. One turned-on RAF molecule or MEK molecule can turn on / phosphorylate many targets, so this cascade, though it appears linear in a diagram, is actually a chain reaction of sorts, amplifying as it goes along. And what governs the state of RAS and its bound GTP? The state of the EGFR receptor, of course. When KRAS is activated, the resident GDP leaves, and GTP comes to take its place. RAS is a weak GTPase enzyme itself, slowly converting itself from the active back to the inactive state with GDP.
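The amplification argument can be made concrete with a back-of-the-envelope calculation: if each active kinase phosphorylates some number of substrate molecules before being switched off, a three-tier cascade multiplies the signal by that number cubed. A toy sketch (the turnover of 100 substrates per kinase is purely illustrative, not a measured value):

```python
# Each activated kinase phosphorylates many substrate molecules, so the
# RAF -> MEK -> MAPK chain multiplies the signal at every tier.
def cascade_output(input_molecules, substrates_per_kinase, tiers=3):
    active = input_molecules
    for _ in range(tiers):
        active *= substrates_per_kinase
    return active

# One active RAF, ~100 substrates per kinase, three tiers:
print(cascade_output(1, 100))  # 1,000,000 active MAPK molecules
```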

Given all this, one would think that RAS, and KRAS in particular, might be "druggable", by sticking some well-designed molecule into the GTP/GDP binding pocket and freezing it in an inactive state. But the sad fact of the matter is that the affinity KRAS has for GTP is incredibly high- so high it is hard to measure, with a binding constant of about 20 pM. That is, half the KRAS-bound GTP comes off only when the ambient concentration of GTP is down to an infinitesimal 0.02 nanomolar. This means that nothing else is likely to be designed that can displace GTP or GDP from the KRAS protein, which means that in traditional terms, it is "undruggable". What is the biological logic of this? Well, it turns out that the RAS enzymes are managed by yet other proteins, which have the specific roles of prying GDP off (GTP exchange factor, or GEF) and of activating the GTPase activity of RAS to convert GTP to GDP (GTPase activating protein, or GAP). It is the GEF protein that is stimulated by receptors like EGFR to induce RAS activity.
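The quoted binding constant translates directly into occupancy via the standard single-site binding equation, fraction bound = [L] / ([L] + Kd). A quick calculation (the cellular GTP concentration used here is an assumed round number in the commonly cited 0.1-1 mM range) shows why competitive displacement is hopeless:

```python
def fraction_bound(ligand_conc, kd):
    """Equilibrium occupancy of a single binding site: [L] / ([L] + Kd)."""
    return ligand_conc / (ligand_conc + kd)

KD_GTP = 20e-12     # ~20 pM, as quoted above
CELL_GTP = 500e-6   # assumed cellular GTP, ~0.5 mM

# At cellular GTP levels, KRAS is essentially always occupied.
print(f"occupancy: {fraction_bound(CELL_GTP, KD_GTP):.10f}")

# Sanity check: at [GTP] = Kd, occupancy is exactly 50% -- the definition of Kd.
print(fraction_bound(KD_GTP, KD_GTP))  # 0.5
```

With the ligand some seven orders of magnitude above Kd, any competing drug would need an absurdly higher affinity to win the pocket, which is the quantitative content of "undruggable" here.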

So we have to be cleverer in finding ways to attack this protein. Incidentally, most of the oncogenic mutations of KRAS are at the twelfth residue, glycine, which occupies a key part of the GAP binding site. As glycine is the smallest amino acid, any other amino acid here is bulkier, and blocks GAP binding, which means that KRAS with any of these mutations can not be turned off. It just keeps on signaling and signaling, driving the cell to think it needs to grow all the time. This property of gain of function and the ability of any mutation to fit the bill is why this particular defect in KRAS is such a common cancer-driving mutation. It accounts for ~90% of pancreatic cancers, for instance. 

The seminar went on a long tangent, which occupied the field (of those looking for ways to inhibit KRAS with drugs) for roughly a decade. RAS proteins are not intrinsically membrane proteins, but they are covalently modified with a farnesyl fatty tail, which keeps them stuck in the cell's plasma membrane. Indeed, if this modification is prevented, RAS proteins don't work. So great- how to prevent that? Several groups developed inhibitors of the farnesyl transferase enzyme that carries out this modification. The inhibitors worked great, since the farnesyl transferase has a nice big pocket for its large substrate to bind, and doesn't bind it too tightly. But they didn't inhibit the RAS proteins, because there was a backup system: geranylgeranyl transferase steps into the breach and attaches an even bigger fatty tail to RAS proteins. Arghhh!

While some are working on inhibiting both enzymes, the presenter, Kevan Shokat of UCSF, went in another direction. As a chemist, he figured that for the fraction of KRAS mutants whose position 12 glycine has become a cysteine, some very specific chemistry (that is, easy methods of cross-linking) can be brought to bear. Given the nature of the genetic code, the fraction of mutations that convert glycine to cysteine is small: only six amino acids are within a one-base change of GGT, the glycine codon at position 12, and cysteine is just one of them. So at best, this approach is going to have a modest impact. Nevertheless, there was little choice, so they forged ahead with a complicated chemical scheme to make a small molecule that could chemically crosslink to that cysteine, with selectivity determined by a modest shape fit to the surface of the KRAS protein near this GEF binding site.
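The codon arithmetic can be checked directly: enumerating all single-base neighbors of GGT, the glycine codon at KRAS position 12, shows exactly which amino acid substitutions are reachable by a single point mutation (a minimal sketch using only the slice of the standard codon table these ten codons need):

```python
# Minimal codon table covering GGT and its nine single-base neighbors.
CODON = {
    "GGT": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
    "TGT": "Cys", "CGT": "Arg", "AGT": "Ser",
    "GTT": "Val", "GCT": "Ala", "GAT": "Asp",
}

def single_base_neighbors(codon):
    """Yield every codon differing from the input at exactly one position."""
    for i, base in enumerate(codon):
        for b in "ACGT":
            if b != base:
                yield codon[:i] + b + codon[i + 1:]

reachable = {CODON[c] for c in single_base_neighbors("GGT")} - {"Gly"}
print(sorted(reachable))
# -> ['Ala', 'Arg', 'Asp', 'Cys', 'Ser', 'Val']
```

These six correspond to the familiar clinical mutants G12A, G12R, G12D, G12C, G12S, and G12V, of which only G12C offers the cysteine chemistry exploited here.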

A structural model of KRAS, with its extremely tightly-bound substrate GDP in orange. The drug sotorasib is below in teal, bound in another pocket, with a tail extending upwards to the (mutant) cysteine 12, which is not differentiated by color, but sits over a magnesium ion (green) being coordinated by GDP. The main job of sotorasib is to interfere with the binding of the guanine exchange factor (GEF) which happens on the surface to its left, and would reset KRAS to an active state.

This approach worked surprisingly well, as the KRAS protein obligingly offered a cryptic nook that the chemists took advantage of to make this hybrid compound, now called the drug sotorasib. This is an FDA-approved treatment for cancers which are specifically driven by this particular KRAS mutation of position 12 from glycine to cysteine. That research group is currently trying to extend their method to other mutant forms, with modest success.

So let's take a step back. This new treatment requires, obviously, the patient's tumor to be sequenced to figure out its molecular nature. That is pretty standard these days. And then, only a small fraction of patients will get the good news that this drug may help them. Lung cancers are the principal candidates currently (of which about 15% have this mutation), while only about 1-2% of other cancers have this mutation. This drug has some toxicity- while it is a magic bullet, its magic is far from perfect (which is odd given the exquisite selectivity it has for the mutated form of KRAS, which should only exist in cancer tissues). And lastly, it gives, on average, under six months of reprieve from cancer progression, compared to four and a half months with a more generic drug. As mentioned above, tumors at this stage are riven with other mutations and evolve resistance to this treatment with appalling relentlessness.

While it is great to have developed a new class of drugs like this one against a very recalcitrant target, and done so on a highly rational basis driven by our growing molecular knowledge of cancer biology, this result seems like a bit of a let-down. And note also that this achievement required decades of publicly funded research, and doubtless a billion dollars or more of corporate investment to get to this point. Costs are about twenty-five thousand dollars per patient, and overall sales are maybe two hundred million dollars per year, expected to increase steadily.

Does this all make sense? I am not sure, but perhaps the important part is that things can not get worse. The patent on this drug will eventually expire and its costs will come down. And the research community will keep looking for other, better ways to attack hard targets like KRAS, and will someday succeed.


Saturday, May 27, 2023

Where Does Oxygen Come From?

Boring into the photosynthetic reaction center of plants, where O2 is synthesized.

Oxygen might be important to us, but it is really just a waste product. Photosynthetic bacteria found that the crucial organic molecules of life that they were making out of CO2 and storing in the form of reduced compounds (like fats and sugars) had to get their reducing units (i.e. electrons) from somewhere. And water stepped up as a likely candidate, with its abundance and simplicity. After you take four electrons away from two water molecules, you are left with four protons and one molecule of oxygen, O2. The protons are useful to fuel the proton-motive force system across the photosynthetic membrane, making ATP. But what to do with the oxygen? It just bubbles away, but can also be used later in metabolism to burn up those high-energy molecules again, if you have evolved aerobic metabolism.
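The bookkeeping described above is just the standard water-oxidation half-reaction:

```latex
\[
2\,\mathrm{H_2O} \;\longrightarrow\; \mathrm{O_2} \,+\, 4\,\mathrm{H^+} \,+\, 4\,e^-
\]
```

Four electrons per O2 is the number to keep in mind below: it is why the oxygen-evolving cycle has to accumulate four oxidation steps before a single O2 can be released.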

On the early earth, reductants like reduced forms of iron and sulfur were pretty common, so they were the original sources of electrons for all metabolism. Indeed, most theories of the origin of life place it in dynamic rocky redox environments like hydrothermal vents that had such conducive chemistry. But these compounds are not quite common enough for universal photosynthesis. For example, a photosynthetic bacterium floating at the top of the ocean would like to continue basking in the sun and metabolizing, even if the water around it is relatively clear of reduced iron, perhaps because of competition from its colleagues. What to do? The cyanobacteria came up with an amazing solution- split water!

A general overview of plant and cyanobacterial photosystems, comprising the first (PSII), where the first light quantum hits and oxygen is split, an intervening electron transport chain where energy is harvested, and the second (PS1), where a second light quantum hits, more energy is harvested, and the electron ends up added to NADP. From the original water molecules, protons are used to power the membrane proton-motive force and ATP synthesis, while the electrons are used to reduce CO2 and create organic chemicals.

A schematic of the P680 center of photosystem II. Green chlorophylls are at the center, with magnesium atoms (yellow). Light induces electron movement as denoted by the red arrows, out of the chlorophyll center and onwards to other cytochrome molecules. Note that the electrons originate at the bottom out of the oxygen evolving complex, or OEC, (purple), and are transferred via an aromatic tyrosine (TyrZ) side chain, coordinating with a nearby histidine (H189) protein side chain.

This is not very easy, however, since oxygen is highly, even notoriously "electronegative". That is, it likes and keeps its electrons. It takes a super-oxidant to strip those electrons off. Cyanobacteria came up with what is now called photosystem II (that is, it was discovered after photosystem I), which collects light through a large quantum antenna of chlorophyll molecules, ending up at a special pairing of chlorophyll molecules called P680. These collect the photon, and in response bump an electron up in energy and out to an electron chain that courses through the rest of the photosynthetic system, including photosystem I. At this point, P680 is hungry for an electron, indeed has the extreme oxidation potential needed to take electrons from oxygen. And one is conducted in from the oxygen evolving center (OEC), sitting nearby.

A schematic illustrating both the evolutionary convergence that put both photosystems (types I and II) into one organism (cyanobacteria, which later become plant chloroplasts), and the energy levels acquired by the main actors in the photosynthetic process, quoted in electron volts. At the very bottom (center) is a brief downward slide as oxygen is split by the pulling force of the super-oxidation state of light-activated P680. After the electrons are light-excited, they drop down in orderly fashion through a series of electron chain transits to various cytochromes, quinones, ferredoxins, and other carriers that generate either protons or chemical reducing power as they go along. Note how the depth of the oxygen-splitting oxidation state is unique among photosynthetic systems.

A recent paper resolves the long-standing problem of how exactly oxygen is oxidized by cyanobacteria and plants at the OEC, at the very last step before oxygen release. This center is a very strained cubic metal complex of one calcium and four manganese atoms, coordinated by oxygen atoms. The overall process is that two water molecules come in, four protons and four electrons are stripped off, and the remaining oxygens combine to form O2. This is, again, part of the grand process of metabolism, whose point is to add those electrons and protons to CO2, making the organic molecules of life, generally characterized as (-CH2-), such as fats, sugars, etc. Which can be burned later back into CO2. Metals are common throughout organic chemistry as catalysts, because they have a wonderful property of de-localizing electrons and allowing multiple oxidation states, (number of extra or missing electrons), unlike the more sparse and tightly-held states of the smaller elements. So they are used in many redox cofactors and enzymes to facilitate electron movement, such as in chlorophyll itself.


The authors provide a schematic of the manganese-calcium OEC reaction center. The transferring tyrosine is at top, calcium is in fuchsia/violet, the manganese atoms are in purple, and the oxygens are in red. Arrows point to the oxygens destined to bond to each other and "evolve" away as O2. Note how one of these (O6) is only singly-coordinated and is sort of awkwardly wedged into the cube. Note also how the bond lengths to calcium are all longer than those to manganese, further straining the cube. These strains help to encourage activation and expulsion of the target oxygens.

Here, in the oxygen evolving center, the manganese atoms are coordinated all around with oxygens, which presents the question- which ones are the ones? Which are destined to become O2, and how does the process happen? These researchers didn't use complicated femtosecond X-ray systems or synchrotrons (though they draw on the structural work of those who did), but room-temperature FTIR, which is infrared spectroscopy highly sensitive to organic chemical dynamics. Spinach leaf chloroplasts were put through an hour of dark adaptation (which sets the OEC cycle to state S1), then hit with flashes of laser light to advance the position of the oxygen evolving cycle, since each flash (5 nanoseconds) induces one electron ejection by P680, and one electron transfer out of the OEC. Thus the experimenters could control the progression of the whole cycle, one step at a time, and then take extremely close FTIR measurements of the complexes as they do their thing in response to each single electron ejection. Some of the processes they observed were very fast (20 nanoseconds), but others were pretty slow, up to 1.5 milliseconds for the S4 state to eject the final O2 and reset to the S0 state with new water molecules. They then supplement their spectroscopy with the structural work from others and with computer dynamics simulations of the core process to come up with a full mechanism.
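The flash protocol can be caricatured in a few lines of code. In this minimal model (assuming every flash advances the center exactly one state, and ignoring the "misses" that damp the real period-four oscillations), dark adaptation starts the center at S1, so O2 appears on the third flash and every fourth flash thereafter:

```python
# Toy model of the S-state cycle of the oxygen evolving center (OEC):
# each saturating flash advances S by one; reaching S4 releases O2 and
# resets the center to S0 with fresh water molecules.
def run_flashes(n_flashes, start_state=1):  # dark adaptation sets S1
    state, o2_flashes = start_state, []
    for flash in range(1, n_flashes + 1):
        state += 1
        if state == 4:          # S4 ejects O2 and resets
            o2_flashes.append(flash)
            state = 0
    return o2_flashes

print(run_flashes(12))  # O2 released on flashes 3, 7, 11
```

This period-four pattern with a maximum on the third flash is the classic signature of the four-electron cycle, and is what lets single-flash experiments like these isolate each S-state transition for spectroscopy.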


A schematic of the steps of oxygen evolution out of the manganese core complex, from states S0 to S4. Note the highly diverse times that elapse at the various steps, noted in nano, micro, or milli seconds. This is discussed further in the text.


Other workers have provided structural perspectives on this question, showing that the cubic metal structure is a bit weirder than expected. An extra oxygen (numbered as #6) wedges its way into the cube, making the already strained structure (which accommodates a calcium and a dangling extra manganese atom) highly stressed. This is a complicated story, so several figures are provided here to give various perspectives. The sequence of events is that first (S0), two waters enter the reaction center after the prior O2 molecule has left. Water has a mix of acid (H+) and base (OH-) ionic forms, so it is easy to bring in the hydroxyl form instead of complete water, with the matching protons quickly entering the proton pool for ATP production. Then another proton quickly leaves as well, so the waters have now become two oxygens, one hydrogen, and four electrons (S0). Two of the coordinated manganese atoms go from their prior +4, +4 oxidation state to +3 and +2, acting as electron buffers.

The first two electrons are pulled out rapidly, via the nearby tyrosine ring, and off to the P680 center (ending at S2, with Mn 3+ and Mn 4+). But the next steps are much slower, extricating the last two electrons from the oxygens and inducing them to bond with each other. With state S3 and one more electron removed, both manganese atoms are back to the 4+ state. In the last step, one last proton leaves and one last electron is extracted over to the tyrosine oxygen, and oxygen 6 is so bereft as to be in a radical state, which allows it to bow over to oxygen 5 and bond with it, making O2. The metal complex has nicely buffered the oxidation states to allow these extractions to go much more easily and in a more coordinated fashion than can happen in free solution.

The authors provide a set of snapshots of their infrared spectroscopy-supported simulations (done with chemical and quantum fidelity) of the final steps, where the oxygens, in the bottom panel, bond together at center. Note how the atomic positions and hydrogen attachments also change subtly as the sequence progresses. Here the manganese atoms are salmon, oxygen red, calcium yellow, hydrogen white, and a chloride ion is green.

This closely optimized and efficient reaction system is not just a wonder of biology and of earth history, but an object lesson in chemical technology, since photolysis of water is a very relevant dream for a sustainable energy future- to efficiently provide hydrogen as a fuel. Currently, using solar power to run water electrolyzers is not very efficient (20% for solar, and 70% for electrolysis = 14% overall). Work is ongoing to design direct light-to-hydrogen photolysis, but so far it requires high heat and noxious chemicals. Life has all this worked out at the nano scale already, however, so there must be hope for better methods.


  • The US carried off an amazing economic success during the pandemic, keeping everything afloat as 22 million jobs were lost. This was well worth a bit of inflation on the back end.
  • Death
  • Have we at long last hit peak gasoline?
  • The housing crisis and local control.
  • The South has always been the problem.
  • The next real estate meltdown.

Saturday, May 20, 2023

On the Spectrum

Autism, broader autism phenotype, temperament, and families. It turns out that everyone is on the spectrum.

The advent of genomic sequencing and the hunt for disease-causing mutations has been notably unhelpful for most mental diseases. Possible or proven disease-causing mutations pile up, but they do little to illuminate the biology of what is going on, and even less towards treatment. Autism is a prime example, with hundreds of genes now identified as carrying occasional variants with causal roles. The strongest of these variants affect synapse formation among neurons, and a second class affects long-term regulation of transcription, such as turning genes durably on or off during developmental transitions. Very well- that all makes a great deal of sense, but what have we gained?

Clinically, we have gained very little. What is affected are neural developmental processes that can't be undone, or switched off in later life with a drug. So while some degree of understanding slowly emerges from these studies, translating that to treatment remains a distant dream. One aspect of the genetics of autism, however, is highly informative, which is the sheer number of low-effect and common mutations. Autism can be thought of as coming in two types, genetically- those due to a high effect, typically spontaneous or rare mutation, and those due to a confluence of common variants. The former tends to be severe and singular- an affected child in a family that is otherwise unaffected. The latter might be thought of as familial, where traits that have appeared (mildly) elsewhere in the family have been concentrated in one child, to a degree that it is now diagnosable.

This pattern has given rise to the very interesting concept of the "Broader Autism Phenotype", or BAP. This stems from the observation that families of autistic children show higher rates of a temperament where ... "the parents, grandparents, and collaterals are persons strongly preoccupied with abstractions of a scientific, literary, or artistic nature, and limited in genuine interest in people." Thus there is not just a wide spectrum of autism proper, based on the particular confluence of genetic and other factors that lead to a diagnosis and its severity, but there is also, outside of the medical spectrum, quite another spectrum of traits or temperaments which tend toward autism and comprise various eccentricities, but have not, at least to date, been medicalized.


The common nature of these variants leads to another question- why are they persistent in the population? It is hard to believe that such a variety and number of variations are exclusively deleterious, especially when the BAP seems to have, well, rather positive aspects. No, I would suggest that an alternative way to describe BAP is "an enhanced ability to focus", and develop interests in salient topics. Ever meet people who are technically useless, but warm-hearted? They are way off on the non-autistic part of the spectrum, while the more technically inclined, the fixers of the world and scholars of obscure topics, are more towards the "ability to focus" part of the spectrum. Only when such variants are unusually concentrated by the genetic lottery do children appear with frank autistic characteristics, totally unable to deal with social interactions, and given to obsessive focus and intense sensitivities.

Thus autism looks like a more general lens on human temperament and evolution, being the tip of a very interesting iceberg. As societies, we need the politicians, backslappers, networkers, and con men, but we also need, increasingly as our societies and technologies have developed over the centuries, people with the ability and desire to deal with reality- with technical and obscure issues- without social inflection, but with highly focused attention. Militaries are a prime example, fusing critical needs of managing and motivating people with a modern technical base of vast scope, reliant on an army of specialists devoted to making all the machinery work. Why does there have to be this tradeoff? Why can't everyone be James Bond, both technically adept and socially debonair? That isn't really clear, at least to me, but one might speculate that in the first place, dealing with people takes a great deal of specialized intelligence, and there may not be room for everything in one brain. Secondly, the enhanced ability to focus on technical or artistic topics may actively require, as is implicit in doing science and as was exemplified by Mr. Spock, an intentional disregard of social niceties and motivations, if one is to fully explore the logic of some other, non-human, world.


Saturday, May 6, 2023

The Development of Metamorphosis

Adulting as a fly involves a lot of re-organization.

Humans undergo a slight metamorphosis during adolescence. Imagine undergoing pupation like insects do and coming out with a totally new body, with wings! Well, Kafka did, and it wasn't very pleasant. But insects do it all the time, and have been doing it for hundreds of millions of years, taking to the air and dominating the biosphere. What goes on during metamorphosis, how complete is its refashioning of the body, and how did it evolve? A recent paper (review) considered in detail how the brains of insects change during metamorphosis, finding a curious blend of birth, destruction, and reprogramming among their neurons.

Time is on the Y axis, and the emergence of later, more advanced types of insects is on the X axis. This shows the progressive elaboration of non-metamorphosis (ametabolous), partially metamorphosing (hemimetabolous), and fully metamorphosing (holometabolous) forms. Dragonflies are only partially metamorphosing in this scheme, though their adult forms are often highly different from their larval (nymph) form.


Insects evolved from crustaceans, and took to land as small silverfish-like creatures with exoskeletons, roughly 450 million years ago. Over the next 100 million years, they developed the process of metamorphosis as a way to preserve the benefits of their original lifestyle for early development, in moist locations, while conquering the air and distance as adults. Early insect types are termed ametabolous, meaning that they have no metamorphosis at all, developing straight from eggs to an adult-style form. These go through several molts to accommodate growth, but don't redesign their bodies. Next came hemimetabolous development, which is exemplified by grasshoppers and cockroaches, as well as dragonflies, which significantly refashion themselves during the last molt, gaining wings. In the nymph stage, those wings are carried around as small patches of flat embryonic tissue, which then suddenly grow out at the last molt. Dragonflies are extreme, and most hemimetabolous insects don't undergo such dramatic change. Last came holometabolous development, which involves pupation and a total redesign of the body that can go from a caterpillar to a butterfly.

The benefit of having wings is pretty clear- it allows huge increases in range for feeding and mating. Dragonflies are premier flying predators. But for a larva wallowing in fruit juice or leaf sap, or underwater, as dragonfly nymphs do, wings and long legs would be a hindrance. This conundrum led to the innovation of metamorphosis, built on the already somewhat dramatic practice of periodically molting off the exoskeleton. If one can grow a whole new skeleton, why not put wings on it, or longer legs? And metamorphosis has been tremendously successful- it is used by over 98% of insect species.

The adult insect tissues do not come from nowhere- they are set up as arrested embryonic tissues called imaginal discs. These are small patches that exist in the larva at specific positions. During pupation, while much of the rest of the body refashions itself, the imaginal discs rapidly develop into future tissues like wings, legs, genitalia, antennas, and new mouth parts. These discs have a fascinating internal structure that prefigures the future organ. The leg disc, for example, is concentrically arranged, with the most distal future parts (the "toes") at its center. Transplanting a disc from one insect to another, or from one place to another, doesn't change its trajectory- it will still become a leg wherever it is put. So it is apparent that the larval stage is an intermediate stage of organismal development, in which a suite of adult features is primed but put on hold, while a simpler and much more primitive larval body plan is executed to accommodate early growth and a niche in tight, moist, hidden places.

The new paper focuses on the brain, which larvae need as well as adults. So the question is- how does the adult brain develop from the larval one? Is the larval brain thrown away? The answer is no- the brain is not thrown away at all, but undergoes its own quite dramatic metamorphosis. The adult brain is substantially bigger, so many neurons are added. A few neurons are also killed off. But most of the larval neurons are reprogrammed- trimmed back and regrown out to new regions to take on new functions.

In this figure, the neurons are named as mushroom body output neuron (MBON), dopaminergic neuron (DAN, also MBIN for mushroom body input neuron), mushroom body extrinsic neuron to calyx (MBE-CA), and protocerebral posterior lateral 1 (PPL1). MBON-c1 is totally reprogrammed to serve new locations in the adult, MBON-d1 changes its projections substantially, as do the (teal) input neurons, and MBON-12 was not operational in the larval stage at all.

The mushroom body, the brain area these authors focus on, receives smell inputs from the antennas and mediates olfactory learning and memory. Fly biologists regard it as analogous to our cortex- the most flexible area of the brain. Larvae don't have antennas, so their smell/taste reception is a lot more primitive. The mushroom body in Drosophila starts with about a hundred neurons, continuously adds neurons over larval life, with a big push during pupation, and ends up with ~2,200 neurons in the adult. This growing network has to wire into the antennas, for instance, as they develop.

The authors find that, for instance, no direct connections between input and output neurons of the mushroom body (MBIN and MBON, respectively) survive from larval to adult stages. Thus there can be no simple memories of this kind preserved between these life stages. While there are some signs of memory retention for a few things in flies, for the most part the slate is wiped clean. 

"These MBONs [making feedback connections] are more highly interconnected in their adult configuration compared to their larval one: their adult configuration shows 13 connections (31% of possible connections), while their larval configuration has only 7 (17%). Importantly, only three of these connections (7%) are present in both larva and adult. This percentage is similar to the 5% predicted if the two stages were wired up independently at their respective frequencies."
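The independence prediction quoted above is simple arithmetic worth spelling out: if the two stages wired up independently, the expected fraction of shared connections would be the product of the two stages' connection rates. A minimal sketch, assuming (my inference from the quoted percentages) that the feedback network comprises 7 MBONs, giving 7 × 6 = 42 possible directed connections:

```python
# Back-of-the-envelope check of the paper's independence prediction.
# Assumption (mine): 7 feedback MBONs -> 7*6 = 42 possible directed
# connections, consistent with the quoted 13/42 ~ 31% and 7/42 ~ 17%.
possible = 7 * 6           # directed pairs among 7 neurons
adult, larval = 13, 7      # observed connections at each stage
shared = 3                 # connections present in both larva and adult

p_adult = adult / possible             # ~31% of possible connections
p_larval = larval / possible           # ~17% of possible connections
p_expected = p_adult * p_larval        # expected overlap if wired independently

print(f"observed shared: {shared / possible:.0%}")   # 7%
print(f"expected if independent: {p_expected:.0%}")  # 5%
```

The observed 7% overlap is close to the ~5% expected by chance, which is the basis for the authors' conclusion that larval and adult wiring are essentially independent.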


Interestingly, no neuron changed its type- that is, which neurotransmitter it uses to communicate. So, while pruning and rewiring were pervasive, the cells did not fundamentally change their stripes. All this is driven by the hormonal system: juvenile hormone blocks adult development, while ecdysone drives molting- and, in the absence of juvenile hormone, pupation. These hormones in turn drive a program of transcription factors that direct the genes needed for development. While a great deal is known about neuronal pathfinding and development, this paper doesn't comment on those downstream events- how selected neurons are pruned, turned around, and induced to branch out in totally new directions, for instance. That will be the topic of future work.


  • Corrupt business practices. Why is this lawful?
  • Why such easy bankruptcy for corporations, but not for poor countries?
  • Watch the world's mesmerizing shipping.
  • Oh, you want that? Let me jack up the price for you.
  • What transgender is like.
  • "China has arguably been the biggest beneficiary of the U.S. security system in Asia, which ensured the regional stability that made possible the income-boosting flows of trade and investment that propelled the country’s economic miracle. Today, however, General Secretary of the Chinese Communist Party Xi Jinping claims that China’s model of modernization is an alternative to “Westernization,” not a prime example of its benefits."