
Saturday, April 1, 2023

Consciousness and the Secret Life of Plants

Could plants be conscious? What are the limits of consciousness and pain? 

Scientific American recently reviewed a book titled "Planta Sapiens". The title gives it all away, and the review was quite positive, with statements like: 

"Our senses can not grasp the rich communicative world of plants. We therefore lack language to describe the 'intelligence' of a root tip in conversation with the microbial life of the soil or the 'cognition' that emerges when chemical whispers ripple through a lacework of leaf cells."

This is provocative indeed! What if plants really do have a secret life and suffer pain with our every bite and swing of the scythe? What of our vaunted morals and ethics then?

I am afraid that I take a skeptical view of this kind of thing, so let's go through some of the aspects of consciousness, and ask how widespread it really is. One traditional view, from the ur-scientific types like Descartes, is that only humans have consciousness, and all other creatures have at best a mechanism, unfeeling and mechanical, that may look like consciousness but isn't. This view, continued in a sense by B. F. Skinner in the 20th century, is a statement from ignorance. We can not fully communicate with animals, so we can not really participate in what looks like their consciousness, so let's just ignore it. This position has the added dividend of supporting our unethical treatment of animals, which was an enormous convenience, and it remains the core position of capitalism generally regarding farm animals (though its view of humans is hardly more generous).

Well, this view is totally untenable, from our experience of animals, our ability to indeed communicate with them to various degrees, to see them dreaming, not to mention from an evolutionary standpoint. Our consciousness did not arise from nothing, after all. So I think we can agree that mammals can all be included in the community of conscious fellow-beings on the planet. It is clear that the range of conscious pre-occupations can vary tremendously, but whenever we have looked at the workings of memory, attention, vision, and other components assumed to be part of or contributors to conscious awareness, they all exist in mammals, at least. 

But what about other animals like insects, jellyfish, or bacteria? Here we will need a deeper look at the principles in play. As far as we understand it, consciousness is an activity that binds various senses and models of the world into an experience. It should be distinguished from responsiveness to stimuli. A thermostat is responsive. A bacterium is responsive. That does not constitute consciousness. Bacteria are highly responsive to chemical gradients in their environment, to food sources, to the pheromones of fellow bacteria. They appear to have some amount of sensibility and will. But we can not say that they have experience in the sense of a conscious experience, even if they integrate a lot of stimuli into a holistic and sensitive approach to their environment. 


The same is true of our own cells, naturally. They also are highly responsive on an individual basis, working hard to figure out what the bloodstream is bringing them in terms of food, immune signals, pathogens, etc. Could each of our cells be conscious? I would doubt it, because their responsiveness is mechanistic, rather than being an independent as well as integrated model of their world. Similarly, if we are under anaesthesia and a surgeon cuts off a leg, is that leg conscious? It has countless nerve cells, and sensory apparatus, but it does not represent anything about its world. Rather, it is built to send all these signals to a modeling system elsewhere, i.e. our brain, which is where consciousness happens, and where (conscious) pain happens as well.

So I think the bottom line is that consciousness is rather widely shared as a property of brains, thus of organisms with brains, which were devised over evolutionary time to provide the kind of integrated experience that a neural net can not supply. Jellyfish, for instance, have neural nets that feel pain, respond to food and mates, and swim exquisitely. They are highly responsive, but, I would argue, not conscious. On the other hand, insects have brains and would count as conscious, even though their level of consciousness might be very primitive. Honey bees map out their world, navigate about, select the delicacies they want from plants, and go home to a highly organized hive. They also remember experiences and learn from them.

This all makes it highly unlikely that consciousness is present in quantum phenomena, in rocks, in bacteria, or in plants. They just do not have the machinery it takes to feel something as an integrated and meaningful experience. Where exactly the line is between highly responsive and conscious is probably not sharply defined. There are brains that are exceedingly small, and neural nets that are very rich. But it is also clear that it doesn't take consciousness to experience pain or try to avoid it, (which plants, bacteria, and jellyfish all do). Where is the limit of ethical care, if our criterion shifts from consciousness to pain? Wasn't our amputated leg in pain after the operation above, and didn't we callously ignore its feelings? 

I would suggest that the limit remains that of consciousness, not that of responsiveness to pain. Pain is not problematic because of a reflex reaction. The doctor can tap our knee as often as he wants, perhaps causing pain to our tendon, but not to our consciousness. Pain is problematic because of suffering, which is a conscious construct built around memory, expectations, and models of how things "should" be. While one can easily see that a plant might have certain positive (light, air, water) and negative (herbivores, fungi) stimuli that shape its intrinsic responses to the environment, these are all reflexive, not reflective, and so do not appear (to an admittedly biased observer) to constitute suffering that rises to ethical consideration.

Saturday, March 11, 2023

An Origin Story for Spider Venom

Phylogenetic analysis shows that the major component of spider venom derives from one ancient ancestor.

One reason why biologists are so fully committed to the Darwinian account of natural selection and evolution is that it keeps explaining and organizing what we see. Despite the almost incredible diversity and complexity of life, every close look keeps confirming what Darwin sensed and outlined so long ago. In the modern era, biology has gone through the "Modern Synthesis", bringing genetics, molecular biology, and evolutionary theory into alignment with mutually supporting data and theories. For example, it was Linus Pauling and colleagues (after they lost the race to determine the structure of DNA) who proposed that the composition of proteins (hemoglobin, in their case) could be used to estimate evolutionary relationships, both among those molecules, and among their host species.

Naturally, these methods have become vastly more powerful, to the point that most phylogenetic analyses of the relationship between species (including the definition of what species are, vs subspecies, hybrids, etc.) are led these days by DNA analysis, which provides the richest possible trove of differentiating characters- a vast spectrum from universally conserved to highly (and forensically) varying. And, naturally, it also constitutes a record of the mutational steps that make up the evolutionary process. The correlation of such analyses with other traditionally used diagnostic characters, and with the paleontological record, is a huge area of productive science, which leads, again and again, to new revelations about life's history.
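To give a concrete flavor of the distance-based end of such analyses, here is a minimal, hypothetical Python sketch (using scipy); the sequences are invented toys, and real studies use far richer substitution models and tree-search methods than simple average-linkage clustering.

```python
# Toy sketch of distance-based phylogenetics: invented, pre-aligned sequences,
# a simple p-distance, and UPGMA (average-linkage) clustering as a stand-in
# for the much more sophisticated models used in real studies.
from itertools import combinations
from scipy.cluster.hierarchy import linkage, dendrogram

seqs = {
    "species_A": "ATGGCCATTGTACTGGGC",
    "species_B": "ATGGCCATTGTACTGAGC",
    "species_C": "ATGGTCATCGTACTGAGC",
    "species_D": "ATGTTCATCGTGCTGAGC",
}

def p_distance(a, b):
    """Fraction of aligned sites that differ between two sequences."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

names = list(seqs)
# Condensed (upper-triangle) distance matrix, in the order combinations() yields pairs
dists = [p_distance(seqs[a], seqs[b]) for a, b in combinations(names, 2)]

tree = linkage(dists, method="average")                      # UPGMA-style clustering
print(dendrogram(tree, labels=names, no_plot=True)["ivl"])   # leaf order of the tree
```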


One sample structure of a DRP- the disulfide-rich protein that makes up most of spider venom. The disulfide bond (between two cysteines) is shown in red. There is usually another disulfide helping to hold the two halves of the molecule together as well. The rest of the molecule is (evolutionarily, and structurally) free to change shape and character, in order to carry out its ion channel-blocking or other toxic function on neurons.

One small example was published recently, in a study of spider venoms. Spiders arose, by current estimates, about 375 million years ago, and comprise the second most prevalent form of animal life, after their cousins, the insects. They generally have a hunting lifestyle, using venom to immobilize their prey after capture and before digestion. These venoms are highly complex brews that can have over a hundred distinct molecules, including potassium, acids, tissue- and membrane-digesting enzymes, nucleosides, pore-forming peptides, and neurotoxins. At over three-fourths of the venom, the protein-based neurotoxins are the most interesting and best studied of the venom components, and a spider typically deploys dozens of types in its venom. They are also called cysteine-rich peptides or disulfide-rich peptides (DRPs) due to their composition. The fact that spiders tend to each have a large variety of these DRPs in their collection argues that a lot of gene duplication and diversification has occurred.

A general phylogenetic tree of spiders (left). On the right are the signal peptides of a variety of venoms from some of these species. The identity of many of these signal sequences, which are not present in the final active protein, is a sign that these venom genes were recently duplicated.

So where do they come from? Sequences of the peptides themselves are of limited assistance, being small (averaging ~60 amino acids) and under extensive selection to diversify. But they are processed from larger proteins (pro-proteins) and genes that show better conservation, providing the present authors with more material for their evolutionary studies. The figure above, for example, shows, on the far right, the signal peptides from families of these DRP genes from single species. Signal peptides are the small leading section of a translated protein that directs it to be secreted rather than being kept inside the cell. Once the protein has been delivered to the right place, this signal is clipped off, and thus it is not part of the mature venom protein. These signal peptides tend to be far more conserved than the mature venom protein, despite the fact that they have little to do- just send the protein to the right place, which can be accomplished by all sorts of sequences. But this is a sign that the venoms are under positive evolutionary pressure- to be more effective, to extend the range of possible victims, and to overcome whatever resistance the victims might evolve against them.

Indeed, these authors show specifically that strong positive selection is at work, which is one more insight that molecular data can provide: first, by comparing the rates of change at protein-coding positions that are neutral under the genetic code (synonymous) versus those that alter the protein sequence (non-synonymous); and second, by comparing the pattern and tempo of evolution of venom sequences with the mass of neutral sequences of the species.
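The first of those tests is usually summarized as a dN/dS (or ω) ratio. Here is a toy Python sketch with invented counts, just to show the logic; real analyses estimate these quantities with codon models (e.g. in packages like PAML), not by simple division.

```python
# dN/dS (omega) in miniature: compare the substitution rate at sites where a
# change alters the protein (non-synonymous) with the rate at silent
# (synonymous) sites. All counts below are invented for illustration.
def omega(nonsyn_subs, nonsyn_sites, syn_subs, syn_sites):
    """Crude dN/dS: >1 suggests positive selection, <1 purifying selection."""
    dN = nonsyn_subs / nonsyn_sites
    dS = syn_subs / syn_sites
    return dN / dS

print(omega(30, 400, 5, 150))   # ~2.25 -> diversifying, as reported for the mature peptide
print(omega(3, 400, 10, 150))   # ~0.11 -> purifying, as expected for constrained regions
```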

"Given their significant sequence divergence since their deep-rooted evolutionary origin, the entire protein-coding gene, including the signal and propeptide regions, has accumulated significant differences. Consistent with this hypothesis, the majority of positively selected sites (~96%) identified in spider venom DRP toxins (all sites in Araneomorphae, and all but two sites in Mygalomorphae) were restricted to the mature peptide region, whereas the signal and propeptide regions harboured a minor proportion of these sites (1% and 3%, respectively)."

 

Phylogenetic tree (left), connecting up venom genes from across the spider phylogeny. On right, some of the venom sequences are shown just by their cysteine (C) locations, which form the basic structural scaffold of these proteins (top figure).


The more general phylogenetic analysis from all their sequences tells these authors that all the venom DRP genes, from all spider species, came from one origin. One easy way to see this is in the image above on the right, where just the cysteine scaffold of these proteins from around the phylogeny is lined up, showing that this scaffold is very highly conserved, regardless of the rest of the sequence. This finding (which confirms prior work) is surprising, since venoms of other animals, like snakes, tend to incorporate a motley bunch of active enzymes and components, sourced from a variety of ancestral sources. So to see spiders sticking so tenaciously to this fundamental structure and template for the major component of their venom is impressive- clearly it is a very effective molecule. The authors point out that cone snails, another notorious venom-maker, originated much more recently (about 45 million years ago), and show the same pattern of using one ancestral form to evolve a diversified blizzard of venom components, which have been of significant interest to medical science.
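As a small illustration of what "lining up just the cysteine scaffold" means, here is a hypothetical Python sketch; the two sequences are invented stand-ins with matching cysteine spacing, not real spider venom DRPs.

```python
# Mask everything except the cysteines that form the conserved disulfide
# scaffold; the rest of the sequence is free to diverge. Sequences are invented.
def cys_scaffold(seq):
    """Keep cysteine (C) positions, blank out the rapidly evolving remainder."""
    return "".join(c if c == "C" else "-" for c in seq)

toxin_1 = "GCAEKGVRCDDNDCCSGLECWKRSPWCK"
toxin_2 = "GCLPRGEICTQEDCCARLSCWNKTGWCK"

print(cys_scaffold(toxin_1))   # -C------C----CC----C------C-
print(cys_scaffold(toxin_2))   # -C------C----CC----C------C-
# Two otherwise dissimilar sequences share the same C spacing- the kind of
# pattern that argues for a single ancestral DRP fold.
```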


  • Example: a spider swings a bolas to snare a moth.

Saturday, February 11, 2023

A Gene is Born

Yes, genes do develop out of nothing.

The "intelligent" design movement has long made a fetish of information. As science has found, life relies on encoded information for its genetic inheritance and the reliable expression of its physical manifestations. The ID proposition is, quite simply, that all this information could not have developed out of a mindless process, but only through "design" by a conscious being. Evidently, Darwinian natural selection still sticks on some people's craw. Michael Behe even developed a pseudo-mathematical theory about how, yes, genes could be copied mindlessly, but new genes could never be conjured out of nothing, due to ... information.

My understanding of information science equates information to loss of entropy, and expresses a minimal cost of the energy needed to create, compute, or transmit information- that is, the Shannon limits. A quite different concept comes from physics, in the form of information conservation in places like black holes. This form of information is really the implicit information of the wave functions and states of physical matter, not anything encoded or transmitted in the sense of biology or communication. Physical state information may be indestructible (and un-create-able) on this principle, but coded information is an entirely different matter.
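For reference, the textbook quantities behind this paragraph (standard formulas, stated here as background rather than as anything specific to the ID argument) are Shannon's entropy of a coded source and Landauer's bound on the energy cost of erasing one bit:

```latex
% Shannon entropy of a source emitting symbols with probabilities p_i:
H \;=\; -\sum_i p_i \log_2 p_i \qquad \text{(bits per symbol)}

% Landauer bound: minimum energy dissipated to erase one bit at temperature T:
E_{\min} \;\ge\; k_B T \ln 2
```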

In a parody of scientific discussion, intelligent design proponents are hosted by the once-respectable Hoover Institution for a discussion about, well, god.

So the fecundity that life shows in creating new genes out of existing genes, (duplications), and even making whole-chromosome or whole-genome duplications, has long been a problem for creationists. Energetically, it is easy to explain as a mere side-effect of having plenty of energy to work with, combined with error-prone methods of replication. But creationistically, god must come into play somewhere, right? Perhaps it comes into play in the creation of really new genes, like those that arise from nothing, such as at the origin of life?

A recent paper discussed genes in humans that have over our recent evolutionary history arisen from essentially nothing. It drew on prior work in yeast that elegantly laid out a spectrum or life cycle of genes, from birth to death. It turns out that there is an active literature on the birth of genes, which shows that, just like duplication processes, it is entirely natural for genes to develop out of humble, junky precursors. And no information theory needs to be wheeled in to show that this is possible.

Yeast provides the tools to study novel genes in some detail, with rich genetics and lots of sequenced relatives, near and far. Here is portrayed a general life cycle of a gene, from birth out of non-gene DNA sequences (left) into the key step of translation, and on to a subject of normal natural selection ("Exposed") for some function. But if that function decays or is replaced, the gene may also die, by mutation, becoming a pseudogene, and eventually just some more genomic junk.

The death of genes is quite well understood. The databases are full of "pseudogenes" that are very similar to active genes, but are disabled for some reason, such as a truncation somewhere or loss of reading frame due to a point mutation or splicing mutation. Their annotation status is dynamic, as they are sometimes later found to be active after all, under obscure conditions or to some low level. Our genomes are also full of transposons and retroviruses that have died in this fashion, by mutation.

Duplications are also well-understood, some of which have over evolutionary time given rise to huge families of related proteins, such as kinases, odorant receptors, or zinc-finger transcription factors. But the hunt for genes that have developed out of non-gene materials is a relatively new area, due to its technical difficulty. Genome annotators were originally content to pay attention to genes that coded for a hundred amino acids or more, and ignore everything else. That became untenable when a huge variety of non-coding RNAs came on the scene. Also, occasional cases of very small genes that encoded proteins came up from work that found them by their functional effects.

As genome annotation progressed, it became apparent that, while a huge proportion of genes are conserved between species, (or members of families of related proteins), other genes had no relatives at all, and would never provide information by this highly convenient route of computer analysis. They are orphans, and must have either been so heavily mutated since divergence that their relationships have become unrecognizable, or have arisen recently (that is, since their evolutionary divergence from related species that are used for sequence comparison) from novel sources that provide no clue about their function. Finer analysis of ever more closely related species is often informative in these cases.

The recent paper on human novel genes makes the finer point that splicing and export from the nucleus constitute the major threshold between junk genes and "real" genes. Once an RNA gets out of the nucleus, any reading frame it may have will be translated and exposed to selection. So the acquisition of splicing signals is a key step, in their argument, to get a randomly expressed bit of RNA over the threshold.

A recent paper provided a remarkable example of novel gene origination. It uncovered a series of 74 human genes that are not shared with macaque, (which they took as their reference), have a clear path of origin from non-coding precursors, and some of which have significant biological effects on human development. They point to a gradual process whereby promiscuous transcription from the genome gave rise by chance to RNAs that acquired splice sites, which piped them into the nuclear export machinery and out to the cytoplasm. Once there, they could be translated, over whatever small coding region they might possess, after which selection could operate on their small protein products. A few appear to have gained enough function to encourage expansion of the coding region, resulting in growth of the gene and entrenchment as part of the developmental program.
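As a toy sketch of that last step, here is roughly what "translated, over whatever small coding region they might possess" amounts to; the codon table is abbreviated and the sequence and length cutoff are invented for illustration.

```python
# Once a transcript is spliced and exported, ribosomes will translate whatever
# open reading frame (ORF) it happens to carry. Toy ORF scan with an
# abbreviated codon table and an invented sequence.
CODON = {
    "ATG": "M", "GCC": "A", "CTG": "L", "AAA": "K", "TTT": "F",
    "TGG": "W", "TAA": "*", "TAG": "*", "TGA": "*",
}

def first_orf(rna, min_codons=3):
    """Return the first ATG-initiated ORF (>= min_codons) that reaches a stop codon."""
    seq = rna.upper().replace("U", "T")
    for start in range(len(seq) - 2):
        if seq[start:start + 3] != "ATG":
            continue
        protein = []
        for i in range(start, len(seq) - 2, 3):
            aa = CODON.get(seq[i:i + 3], "X")   # X = codon missing from toy table
            if aa == "*":
                if len(protein) >= min_codons:
                    return "".join(protein)
                break                            # too short: keep scanning downstream
            protein.append(aa)
    return None

print(first_orf("GGAAUGGCCCUGAAAUUUUGGUAAGGA"))   # -> MALKFW
```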

Brain "organoids" grown from genetically manipulated human stem cells. On left is the control, in middle is where ENSG00000205704 was deleted, and on the right is where ENSG00000205704 is over-expressed. The result is very striking, as an evolutionarily momentous effect of a tiny and novel gene.

One gene, "ENSG00000205704", is shown as an example. Where in macaque the genomic region corresponding to this gene encodes at best a non-coding RNA that is not exported from the nucleus, in humans it encodes a spliced and exported mRNA that encodes a protein of 107 amino acids. In humans it is also highly expressed in the brain, and when the researchers deleted it in embryonic stem cells and used those cells to grow "organoids", or clumps of brain-like tissue, the growth was significantly reduced by the knockout, and increased by the over-expression of this gene. What this gene does is completely unknown. Its sequence, not being related to anything else in human or other species, gives no clue. But it is a classic example of a gene that arose from nothing to have what looks like a significant effect on human evolution. Does that somehow violate physics or math? Nothing could be farther from the truth.

  • Will nuclear power get there?
  • What the heck happened to Amazon shopping?

Saturday, December 31, 2022

Hand-Waving to God

A decade on, the Discovery Institute is still cranking out skepticism, diversion, and obfuscation.

A post a couple of weeks ago mentioned that the Discovery Institute offered a knowledgeable critique of the lineages of the Ediacaran fauna. They have raised their scientific game significantly, and so I wanted to review what they are doing these days, focusing on two of their most recent papers. The Discovery Institute has a lineage of its own, from creationism. It has adapted to the derision that entailed, by retreating to "intelligent design", which is creationism without naming the creators, nailing down the schedule of creation, or providing any detail of how and from where creation operates. Their review of the Ediacaran fauna raised some highly skeptical points about whether these organisms were animals or not. Particularly, they suggested that cholesterol is not really restricted to animals, so the chemical traces of cholesterol that were so clearly found in the Dickinsonia fossil layers might not really mean that these were animals- they might also be unusual protists of gigantic size, or odd plant forms, etc. While the critique is not unreasonable, it does not alter the balance of the evidence which does indeed point to an animal affinity. These fauna are so primitive and distant that it is fair to say that we can not be sure, and particularly we can not be sure that they had any direct ancestral relationship to any later organisms of the ensuing Cambrian period, when recognizable animals emerged.

Fair enough. But what of their larger point? The Discovery Institute is trying to make a point, I believe, about the suddenness of early Cambrian animal evolution, and thus its implausibility under conventional evolutionary theory. But we are traversing tens of millions of years through these intervals, which is a long time, even in evolutionary terms. Secondly, the Ediacaran period, though now represented by several exquisite fossil beds, spanned a hundred million years and is still far from completely characterized paleontologically, even supposing that early true animals would have fossilized, rather than being infinitesimal and very soft-bodied. So the Cambrian biota could easily have had predecessors in the Ediacaran that have or have not yet been observed- it is as yet not easy to say. But what we can not claim is the negative, that no predecessors existed before some time X- say the 540 MYA point at the base of the Cambrian. So the implication the Discovery Institute is trying to draw has very little merit, particularly since everything that they themselves cite about the molecular and paleontological sequence is so clearly progressive and in proper time order, in complete accord with the overall theory of evolution.

For we should always keep in mind that an intelligent designer has a free hand, and can make all of life in a day (or in six, if absolutely needed). The fact that this designer works in the shadows of slightly altered mutation rates, or in a few million years rather than twenty million, and never puts fossils out of sequence in the sedimentary record, is an acknowledgement that this designer is a bit dull, and bears a strong resemblance to evolution by natural selection. To put it in psychological terms, the institute is in the "negotiation" stage of grief- over the death of god.

Saturday, December 17, 2022

The Pillow Creatures That Time Forgot

Did the Ediacaran fauna lead to anything else, or was it a dead end?

While to a molecular biologist, the evolution of the eukaryotic cell is probably the greatest watershed event after the advent of life itself, most others would probably go with the rise of animals and plants, after about three billion years of exclusively microbial life. This event is commonly located at the base of the Cambrian, (i.e. the Cambrian explosion), which is where the fossils that Darwin and his contemporaries were familiar with began, about 540 million years ago. Darwin was puzzled by this sudden start of the fossil record, from apparently nothing, and presciently held (as he did in the case of the apparent age of the sun) that the data were faulty, and that the ancient character of life on earth would leave other traces much farther back in time.

That has indeed proved to be the case. There are signs of microbial life going back over three billion years, and whole geologies in the subsequent time dependent on its activity, such as the banded iron formations prevalent around two billion years ago that testify to the slow oxygenation of the oceans by photosynthesizing microbes. And signs of animal life prior to the Cambrian, going back roughly 600 million years, have also turned up after much deeper investigation of the fossil record. This immediately pre-Cambrian period is labeled the Ediacaran, for one of its fossil-bearing sites in Australia. A recent paper looked over this whole period to ask whether the evolution of proto-animals during this time was a steady process, or punctuated by mass extinction event(s). They conclude that, despite the patchy record, there is enough to say that there was a steady (if extremely slow) march of ecological diversification and specialization through this time, until the evolution of true animals in the Cambrian literally ate up all the Ediacaran fauna.

Fossil impression of Dickinsonia, with trailing impressions that some think might be a trail from movement. Or perhaps just friends in the neighborhood.
 
For the difference between the Ediacaran fauna and that of the Cambrian is stark. The Ediacaran fauna is beautiful, but simple. There are no backbones, no sensory organs. No mouth, no limbs, no head. In developmental terms, they seem to have had only two embryological cell layers, rather than our three, which makes all the difference in terms of complexity. How they ate remains a mystery, but they are assumed to have simply osmosed nutrients from their environment, thanks to their apparently flat forms. A bit like sponges today. As they were the most complex animals at the time, (and some were large, up to 2 meters long), they may have had an easy time of it, simply plopping themselves on top of rich microbial mats, oozing with biofilms and other nutrients.

The paper provides a schematic view of the ecology at single locations, and also of longer-term evolution, from a sequence of views (i.e. fossils) obtained from different locations around the world of roughly ten million year intervals through the Ediacaran. One noticeable trend is the increasing development or prevalence of taller fern-like forms that stick up into the water over time, versus the flatter bottom-dwelling forms. This may reflect some degree of competition, perhaps after the bottom microbial mats have been over-"grazed". A second trend is towards slightly more complexity at the end of the period, with one very small form (form C (a) in the image below) even marked by shell remains, though what its animal inhabitant looked like is unknown. 

Schematic representation of putative animals observed during the Ediacaran epoch, from early, (A, ~570 MYA, Avalon assemblage), middle, (B, ~554 MYA, White Sea and other assemblages), and late (C, ~545 MYA, Nama assemblage). The A panel is also differentiated by successional forms from early to mature ecosystems, while the C panel is differentiated by ocean depth, from shallow to deep. The persistence of these forms is quite impressive overall, as is their common simplicity. But lurking somewhere among them are the makings of far more complicated animals.

Very few of these organisms have been linked to actual animals of later epochs, so virtually all of them seem to have been superseded by the wholly different Cambrian fauna- much of which itself remains perplexing. One remarkable study used mass-spec chemical analysis on some Dickinsonia fossils from the late Ediacaran to determine that they bore specific traces of cholesterol, marking them as probable animals, rather than overgrown protists or seaweed. But beyond that, there is little that can be said. (Note a very critical and informed review of all this from the Discovery Institute, of all places.) Their preservation is often remarkable, considering the age involved, and they clearly form the sole fauna known from pre-Cambrian times.

But the core question of how the Cambrian (and later) animals came to be remains uncertain, at least as far as the fossil record is concerned. One relevant observation is that there is no sign of burrowing through the sediments of the Ediacaran epoch. So the appearance of more complex animals, while it surely had some kind of precedent deep in the Ediacaran, or even before, did not make itself felt in any macroscopic way then. It is evident that once the triploblastic developmental paradigm arose, out of the various geologic upheavals that occurred at the bases of both the Ediacaran and the Cambrian, its new design including mouths, eyes, spines, bones, plates, limbs, guts, and all the rest that we are now so very familiar with, utterly over-ran everything that had gone before.

Some more fine fossils from Canada, ~ 580 MYA.


  • A video tour of some of the Avalon fauna.
  • An excellent BBC podcast on the Ediacaran.
  • We need to measure the economy differently.
  • Deep dive on the costs of foreign debt.
  • Now I know why chemotherapy is so horrible.
  • Waste on an epic scale.
  • The problem was not the raids, but the terrible intelligence... by our intelligence agency.

Saturday, December 3, 2022

Senescent, Cancerous Cannibals

Tumor cells not only escape normal cell proliferation controls, but some of them eat nearby cells.

Our cells live in an uneasy truce. Cooperation is prized and specialization into different cell types, tissues, and organs is pervasive. But deep down, each cell wants to grow and survive, prompting many mechanisms of control, such as cell suicide (apoptosis) and immunological surveillance (macrophages, killer T-cells). Cancer is the ultimate betrayal, not only showing disregard for the ruling order, but in its worst forms killing the whole organism in a pointless drive for growth.

A fascinating control mechanism that has come to prominence recently is cellular senescence. In petri dishes, cells can only be goosed along for a few dozen cycles of division until they give out, and become senescent. Which is to say, they cease replicating but remain alive. It was first thought that this was another mechanism to keep cancer under control, restricting replication to "stem" cells and their recent progeny. But a lot of confusing and interesting observations indicate that the deeper meaning of senescence lies in development, where it appears to function as an alternate form of cell suicide, delayed so that tissues are less disrupted. 

Apoptosis is used very widely during development to reshape tissues, and senescence is used extensively as well in these programs. Senescent cells are far from quiescent, however. They have high metabolic activity and are particularly notorious for secreting a witches' brew of inflammatory cytokines and other proteins- the senescence-associated secretory phenotype, or SASP. In the normal course of events, this attracts immune system cells which initiate repair and clearance operations that remove the senescent cells and make sure the tissue remains on track to fulfill its program. These SASP products can turn nearby cells to senescence as well, and form an inflammatory micro-environment that, if resolved rapidly, is harmless, but if persistent, can lead to bad, even cancerous local outcomes.

The significance of senescent cells has been highlighted in aging, where they are found to be immensely influential. To quote the wiki site:

"Transplantation of only a few (1 per 10,000) senescent cells into lean middle-aged mice was shown to be sufficient to induce frailty, early onset of aging-associated diseases, and premature death."

The logic behind all this seems to be another curse of aging: while we are young, senescent cells are cleared with very high efficiency. But as the immune system ages, a very small proportion of senescent cells are missed; these, evolutionarily speaking an afterthought, gradually accumulate with age and powerfully push the aging process along. We are, after all, anxious to reduce chronic inflammation, for example. A quest for "senolytic" therapies to clear senescent cells is becoming a big theme in academia and the drug industry and may eventually have very significant benefits.

Another odd property of senescent cells is that their program, and the effects they have on nearby cells, resemble to some partial degree those of stem cells. That is, the prevention of cell death is a common property, as is the evasion of certain controls that enforce differentiation. This brings us to tumor cells, which frequently enter senescence under stress, like that of chemotherapy. This fate is highly ambivalent. It would have been better for such cells to die outright, of course. Most senescent tumor cells stay in senescence, which is bad enough for their SASP effects in the local environment. But a few tumor cells emerge from senescence, (whether due to further mutations or other sporadic properties is as yet unknown), and they do so with more stem-like character that makes them more proliferative and malignant.

A recent paper offered a new wrinkle on this situation, finding that senescent tumor cells have a novel property- that of eating neighboring cells. As mentioned above, senescent cells have high metabolic demands, as do tumor cells, so finding enough food is always an issue. But in the normal body, only very few cells are empowered to eat other cells- i.e. those of the immune system. To find other cells doing this is highly unusual, interesting, and disturbing. It is one more item in the list of bad things that happen when senescence and cancer combine forces.

A senescent tumor cell (green) phagocytoses and digests a normal cell (red).


  • Shockingly, some people are decent.
  • Tangling with the medical system carries large risks.
  • Is stem cell therapy a thing?
  • Keep cats indoors.

Saturday, November 5, 2022

LPS: Bacterial Shield and Weapon

Some special properties of the super-antigen lipopolysaccharide.

Bacteria don't have it easy in their tiny Brownian world. While we have evolved large size and cleverness, they have evolved miracles of chemistry and miniaturization. One of the key classifications of bacteria is between Gram positive and negative, which refers to the Gram stain. This chemical stains the peptidoglycan layer of all bacteria, which is their "cell wall", wrapped around the cell membrane and providing structural support against osmotic pressure. For many bacteria, a heavy layer of peptidoglycan is all they have on the outside, and they stain strongly with the Gram stain. But other bacteria, like the paradigmatic E. coli, stain weakly, because they have a thin layer of peptidoglycan, outside of which is another membrane, the outer membrane (OM, whereas the inner membrane is abbreviated IM).

Structure of the core of LPS, not showing the further "poly" saccharide tails that would go upwards, hitched to the red sugars. At bottom are the lipid tails that form a strong membrane barrier. These, plus the blue sugar core, form the lipid-A structure that is highly antigenic.

This outer membrane doesn't do much osmotic regulation or active nutrient trafficking, but it does face the outside world, and for that, Gram-negative bacteria have developed a peculiar membrane component called lipopolysaccharide, or LPS for short. The outer membrane is asymmetric, with normal phospholipids used for the inner leaflet, and LPS used for the outside leaflet. Maintaining such asymmetry is not easy, requiring special "flippases" that know which side is which and continually send the right lipid type to its correct side. LPS is totally different from other membrane lipids, using a two-sugar core to hang six lipid tails (a structure called lipid-A), which is then decorated with chains of additional sugars (the polysaccharide part) going off in the other direction, towards the outside world.

The long, strange trip that LPS takes to its destination. Synthesis starts on the inner leaflet of the inner membrane, at the cytoplasm of the bacterial cell. The lipid-A core is then flipped over to the outer leaflet, where extra sugar units are added, sometimes in great profusion. Then a train of proteins (LptA, B, C, D, E, F, G) extracts the enormous LPS molecule out of the inner membrane, ferries it through the periplasm, through the peptidoglycan layer, and through to the outer leaflet of the outer membrane.

A recent paper provided the structural explanation behind one transporter, LptDE, from Neisseria gonorrhoeae. This is the protein that receives LPS after its synthesis inside the cell and its prior transport through the inner membrane and inter-membrane space (including the peptidoglycan layer), and places LPS on the outer leaflet of the outer membrane. It is an enormous barrel, with a dynamic crack in its side where LPS can squeeze out, to the right location. It is a structure that explains neatly how directionality can be imposed on this transport, which is driven by ATP hydrolysis (by LptB) at the inner membrane, which loads a sequence of transporters sending LPS outward.

Some structures of LptD (teal or red), and LPS (teal, lower) with LptE (yellow), an accessory protein that loads LPS into LptD. This structure is on the outer leaflet of the outer membrane, and releases LPS (bottom center) through its "lateral gate" into the right position to join other LPS molecules on the outer leaflet.

LPS shields Gram-negative bacteria from outside attack, particularly from antibiotics and antimicrobial peptides. These are molecules made by all sorts of organisms, from other bacteria to ourselves. The peptides typically insert themselves into bacterial membranes, assemble into pores, and kill the cell. LPS is resistant to this kind of attack, due to its different structure from normal phospholipids that have only two lipid tails each. Additionally, the large, charged sugar decorations outside fend off large hydrophobic compounds. LPS can be (and is) altered in many additional ways by chemical modifications, changes to the sugar decorations, extra lipid attachments, etc. to fend off newly evolved attacks. Thus LPS is the result of a slow motion arms race, and differs in its detailed composition between different species of bacteria. One way that LPS can be further modified is with extra hydrophobic groups such as lipids, to allow the bacteria to clump together into biofilms. These are increasingly understood as a key mode of pathogenesis that allow bacteria to both physically stick around in very dangerous places (such as catheters), and also form a further protective shield against attack, such as by antibiotics or whatever else their host throws at them.

In any case, the lipid-A core has been staunchly retained through evolution and forms a super-antigen that organisms such as ourselves have evolved to sense at incredibly low levels. We encode a small protein, called LY96 (or MD-2), that binds the lipid-A portion of LPS very specifically at infinitesimal concentrations, complexes with cell surface receptor TLR4, and sets off alarm bells through the immune system. Indeed, this chemical was originally called "endotoxin", because cholera bacteria, even after being killed, caused an enormous and toxic immune response- a response that was later, through painstaking purification and testing, isolated to the lipid-A molecule.

LPS (in red) as it is bound and recognized by human protein MD-2 (LY96) and its complex partner TLR4. TLR4 is one of our key immune system alarm bells, detecting LPS at picomolar levels. 

LPS is the kind of antigen that is usually great to detect with high sensitivity- we don't even notice that our immune system has found, moved in, and resolved all sorts of minor infections. But if bacteria gain a foothold in the body and pump out a lot of this antigen, the result can be overwhelmingly different- cytokine storm, septic shock, and death. Rats and mice, for instance, have a fraction of our sensitivity to LPS, sparing them from systemic breakdown from common exposures brought on by their rather more gritty lifestyles.


  • Econometrics gets some critique.
  • Clickbait is always bad information, but that is the business model.
  • Monster bug wars.
  • Customer-blaming troll due to lose a great deal of money.

Saturday, October 15, 2022

From Geo-Logic to Bio-Logic

Why did ATP become the central energy currency and all-around utility molecule, at the origin of life?

The exploration of the solar system and astronomical objects beyond has been one of the greatest achievements of humanity, and of the US in particular. We should be proud of expanding humanity's knowledge using robotic spacecraft and space-based telescopes that have visited every planet and seen incredibly far out in space, and back in time. But one thing we have not found is life. The Earth is unique, and it is unlikely that we will ever find life elsewhere within traveling distance. While life may conceivably have landed on Earth from elsewhere, it is more probable that it originated here. Early Earth had as conducive conditions as anywhere we know of to create the life that we see all around us: carbon-based, water-based, precious, organic life.

Figuring out how that happened has been a side-show in the course of molecular biology, whose funding is mostly premised on medical rationales, and of chemistry, whose funding is mostly industrial. But our research enterprise thankfully has a little room for basic research and fundamental questions, of which this is one of the most frustrating and esoteric, if philosophically meaningful. The field has coalesced in recent decades around the idea that oceanic hydrothermal vents provided some of the likeliest conditions for the origin of life, due to the various freebies they offer.

Early Earth, as today, had very active geology that generated a stream of hydrogen and other reduced compounds coming out of hydrothermal vents, among other places. There was no free oxygen, and conditions were generally reducing. Oxygen was bound up in rocks, water, and CO2. The geology is so reducing that water itself was, and still is, routinely reduced on its trip through the mantle by processes such as serpentinization.

The essential problem is how to jump the enormous gap from the logic of geology and chemistry over to the logic of biology. It is not a question of raw energy- the earth has plenty of energetic processes, from volcanoes and tectonics to incoming solar energy. The question is how a natural process that has resolutely chemical logic, running down the usual chemical and physical gradients from lower to higher entropy, could have generated the kind of replicating and coding molecular system where biological logic starts. A paper from 2007 gives a broad and scrupulous overview of the field, featuring detailed arguments supporting the RNA world as the probable destination (from chemical origins) where biological logic really began.

To rehearse very briefly, RNA has, and still retains in life today, both coding capacity and catalytic capacity, unifying in one molecule the most essential elements of life. So RNA is thought to have been the first molecule with truly biological ... logic, being replaced later with DNA for some of its more sedentary roles. But there is no way to get to even very short RNA molecules without some kind of metabolic support. There has to be an organic soup of energy and small organic molecules- some kind of pre-biological metabolism- to give this RNA something to do and chemical substituents to replicate itself out of. And that is the role of the hydrothermal vent system, which seems like a supportive environment. For the trick in biology is that not everything is coded explicitly. Brains are not planned out in the DNA down to their crenelations, and membranes are not given size and weight blueprints. Biology relies heavily on natural chemistry and other unbiological physical processes to channel its development and ongoing activity. The coding for all this, which seems so vast with our 3 Gb genome, is actually rather sparse, specifying some processes in exquisite detail, (large proteins, after billions of years of jury-rigging, agglomeration, and optimization), while leaving a tremendous amount still implicit in the natural physical course of events.

A rough sketch of the chemical forces and gradients at a vent. CO2 is reduced into various simple organic compounds at the rock interfaces, through the power of the incoming hydrogen rich (electron-rich) chemicals. Vents like this can persist for thousands of years.

So the origin of life does not have to build the plane from raw aluminum, as it were. It just has to explain how a piece of paper got crumpled in a peculiar way that allowed it to fly, after which evolution could take care of the rest of the optimization and elaboration. Less metaphorically, if a supportive chemical environment could spontaneously (in geo-chemical terms) produce an ongoing stream of reduced organic molecules like ATP and acyl groups and TCA cycle intermediates out of the ambient CO2, water, and other key elements common in rocks, then the leap to life is a lot less daunting. And hydrothermal vents do just that- they conduct a warm and consistent stream of chemically reduced (i.e. extra electrons) and chemical-rich fluid out of the sea floor, while gathering up the ambient CO2 (which was highly concentrated on the early Earth) and making it into a zoo of organic chemicals. They also host the iron and other minerals useful in catalytic conversions, which remain at the heart of key metabolic enzymes to this day. And they also contain bubble-like structures that could have confined and segregated all this activity in pre-cellular forms. In this way, they are thought to be the most probable locations where many of the ingredients of life were being generated for free, making the step over to biological logic much less daunting than was once thought.

The rTCA cycle, portrayed in the reverse from our oxidative version, as a cycle of compounds that spontaneously generate out of simple ingredients, due to their step-wise reduction and energy content values. The fact that the output (top) can be easily cleaved into the inputs provides a "metabolic" cycle that could exist in a reducing geological setting, without life or complicated enzymes.

The TCA cycle, for instance, is absolutely at the core of metabolism, a flow of small molecules that disassemble (or assemble, if run in reverse) small carbon compounds in stepwise fashion, eventually arriving back at the starting constituents, with only outputs (inputs) of hydrogen reduction power, CO2, and ATP. In our cells, we use it to oxidize (metabolize) organic compounds to extract energy. Its various stations also supply the inputs to innumerable other biosynthetic processes. But other organisms, admittedly rare in today's world, use it in the forward direction to create organic compounds from CO2, where it is called the reductive or reverse TCA cycle (rTCA). An article from 2004 discusses how this latter cycle and set of compounds very likely predates any biological coding capacity, and represents an intrinsically natural flow of carbon reduction that would have been seen in a pre-biotic hydrothermal vent setting.

What sparked my immediate interest in all this was a recent paper that described experiments focused on showing why ATP, of all the other bases and related chemicals, became such a central part of life's metabolism, including as a modern accessory to the TCA cycle. ATP is the major energy currency in cells, giving the extra push to thousands of enzymes, forming the cores of additional central metabolic cofactors like NAD (nicotinamide adenine dinucleotide) and acetyl-CoA (the A is for adenine), and participating as one of the bases of DNA and RNA in our genetic core processes.

Of all nucleoside diphosphates, ADP is most easily converted to ATP in the very simple conditions of added acyl phosphate and Fe3+ in water, at ambient temperatures or warmer. Note that the trace for ITP shows the same absorbance before and after the reaction. The others show no reaction either. Panel F shows a time course of the ADP reaction, in hours. The X axis refers to time of chromatography of the sample, not of the reaction.

Why ATP, and not the other bases, or other chemicals? Well, bases appear as early products out of pre-biotic reaction mixtures, so while somewhat complicated, they are a natural part of the milieu. The current work compares how phosphorylation of all the possible di-phosphate bases works, (that is, adenosine, cytidine, guanosine, inosine, and uridine diphosphates), using the plausible prebiotic ingredients ferric ion (Fe3+) and acetyl phosphate. They found, surprisingly, that only ADP can be productively converted to ATP in this setting, and the reaction was pretty insensitive to pH, other ions, etc. This was apparently due to the special Fe3+-coordinating capability of ADP, whose adenine ring nitrogen and neighboring amino group allow easy electron transfer to the incoming phosphate group. Iron remains common as an enzymatic cofactor today, and it is obviously highly plausible in this free form as a critical catalyst in a pre-biotic setting. Likewise, acetyl phosphate could hardly be simpler, occurs naturally under prebiotic conditions, and remains an important element of bacterial metabolism (and transiently one of eukaryotic metabolism) today.

Ferric iron and ADP make a unique mechanistic pairing that enables easy phosphorylation at the third position, making ATP out of ADP and acyl phosphate. At step b, the incoming acyl phosphate is coordinated by the amino group, while the iron is coordinated by the adenine nitrogen and the two existing phosphates.

The point of this paper was simply to reveal why ATP, of all the possible bases and related chemicals, gained its dominant position of core chemical and currency. It is rare in origin-of-life research to gain a definitive insight like this, amid the masses of speculation and modeling, however plausible. So this is a significant step ahead for the field, while it continues to refine its ideas of how this amazing transition took place. Whether it can demonstrate the spontaneous rTCA cycle in a reasonable experimental setting is perhaps the next significant question.


  • How China extorts celebrities, even Taiwanese celebrities, to toe the line.
  • Stay away from medicare advantage, unless you are very healthy, and will stay that way.
  • What to expect in retirement.

Saturday, October 1, 2022

For the Love of Money

The social magic of wealth ... and Trump's travel down the wealth / status escalator.

I have been reading the archly sarcastic "The Theory of the Leisure Class", by Thorstein Veblen. It introduced the concept of "conspicuous consumption" by way of arguing that social class is marked by work, specifically by the total lack of work that occupies the upper, or leisure class, and more and more mundane forms of work as one sinks down the social scale. This is a natural consequence of what he calls our predatory lifestyle, which, at least in times of yore, reserved to men, especially those of the upper class, the heroic roles of hunter and warrior, contrasted with the roles of women, who were assigned all non-heroic forms of work, i.e. drudgery. This developed over time into a pervasive horror of menial work and a scramble to evince whatever evidence one can of being above it, such as wearing clean, uncomfortable and fashionable clothes, doing useless things like charity drives, golf, and bridge. And having one's wife do the same, to show how financially successful one is.

Veblen changed our culture even as he satirized and skewered it, launching a million disgruntled teenage rebellions, cynical movies, songs, and other analyses. But his rules can not be broken. Hollywood still showcases the rich, and Silicon Valley, for all its putative nerdiness, is just another venue for social signaling by way of useless toys, displays of leisure (at work, no less, with the omnipresent foosball and other games), and ever more subtle fashion statements.

Conversely, the poor are disparaged, if not hated. We step over homeless people, holding our noses. The Dalit of India are perhaps the clearest expression of this instinct. But our whole economic system is structured in this way, paying the hardest and most menial jobs the worst, while paying some of the most socially destructive professions, like corporate law, the best, and placing them by attire, titles, and other means high on the social hierarchy.

As Reagan said, nothing succeeds like success. We are fascinated, indeed mesmerized, by wealth. It seems perfectly reasonable to give wealthy areas of town better public services. It seems perfectly reasonable to have wealthy people own all our sports teams, run all our companies, and run for most political offices. We are after all Darwinian through and through. But what if a person's wealth comes from their parents? Does the status still rub off? Should it? Or what if it came from criminal activities? Russia is run by a cabal of oligarchs, more or less- is their status high or low?

All this used to make more sense, in small groups where reputations were built over a lifetime of toil in support of the family, group, and tribe. Worth was assessed by personal interaction, not by the proxy of money. And this status was difficult to bequeath to others. The fairy tale generally has the prince proving himself through arduous tasks, to validate the genetic and social inheritance that the rest of the world may or may not be aware of. 

But with the advent of money, and even more so with the advent of inherited nobility and kingship, status became transferable, inheritable, and generally untethered from the values it supposedly exemplifies. Indeed, in our society it is well-known that wealth correlates with a decline in ethical and social values. Who exemplifies this most clearly? Obviously our former president, whose entire public persona is based on wealth. It was evidently inherited, and he parlayed it into publicity, notoriety, scandal, and then the presidency. He was adulated, first by tabloids and TV, which loved brashness (and wealth), then by Republican voters, who appear to love cruelty, meanness, low taste and intellect, ... and wealth.

But now the tide is slowly turning, as Trump's many perfidies and illegal practices catch up with him. It is leaking out, despite every effort of half the media, that he may not be as wealthy as he fraudulently portrayed. And with that, the artificial status conferred by being "a successful businessman" is deflating, and his national profile is withering. One might say that he is taking a downward ride on the escalator of social status that is, in our society, conferred largely by wealth.

All that is shiny ... mines coal.

Being aware of this social instinct is naturally the first step to addressing it. A century ago and more, the communists and socialists provided a thoroughgoing critique of the plutocratic class as being not worthy of social adulation, as the Carnegies and Horatio Algers of the world would have it. But once in power, the ensuing communist governments covered themselves in the ignominy of personality cults that facilitated (and still do in some cases) even worse political tyrannies and economic disasters. 

The succeeding model of "managed capitalism" is not quite as catastrophic and has rehabilitated the rich in their societies, but one wouldn't want to live there either. So we have to make do with the liberal state and its frustratingly modest regulatory powers, aiming to make the wealthy do virtuous things instead of destructive things. Bitcoin is but one example of a waste of societal (and ecological) resources, which engenders social adulation of the riches to be mined, but should instead be regulated out of existence. Taking back the media is a critical step. We need to reel back the legal equation of money with speech and political power that has spread corruption, and tirelessly tooted its own ideology of status and celebrity through wealth.


Saturday, September 17, 2022

Death at the Starting Line- Aneuploidy and Selfish Centromeres

Mammalian reproduction is unusually wasteful, due to some interesting processes and tradeoffs.

Now that we have settled the facts that life begins at conception and abortion is murder, a minor question arises. There is a lot of murder going on in early embryogenesis, and who is responsible? Probably god. Roughly two-thirds of embryos that form are aneuploid (carrying an extra chromosome or lacking one) and die, usually very soon. Those that continue to later stages of pregnancy cause a high rate of miscarriages, about 15% of pregnancies. A recent paper points out that these rates are unusual compared with most eukaryotes. Mammals are virtually alone in exhibiting such high wastefulness, and the author proposes an interesting explanation for it.
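
To put those two figures together, here is a back-of-the-envelope sketch in Python (my own arithmetic using the rough rates quoted above, not numbers from the paper), treating the two losses as sequential filters:

# Rough arithmetic only: what fraction of conceptions survive both filters,
# using the approximate rates quoted above?

aneuploid_fraction = 2 / 3     # embryos with an extra or missing chromosome, lost very early
miscarriage_rate = 0.15        # later miscarriages, as a share of continuing pregnancies

survive_early = 1 - aneuploid_fraction
survive_both = survive_early * (1 - miscarriage_rate)

print(f"survive early aneuploid loss: {survive_early:.0%}")   # ~33%
print(f"survive to term (roughly):    {survive_both:.0%}")    # ~28%

Under these assumed rates, well under a third of conceptions make it through; the waste is built in at the start.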

First, some perspective on aneuploidy. Germ cells go through a two-stage process of meiosis in which their DNA is divided twice: first by homolog pairs (that is, the chromosome sets inherited from each parent, with some crossing-over providing random recombination), and second by the sister chromatids of each chromosome. In more primitive organisms (like yeast) this is an efficient, symmetrical, and not-at-all wasteful process. Any loss of genetic material would be abhorrent, as the cells put every molecule of their being into the four resulting spores, each of which is viable.

A standard diagram of meiosis. Note that the microtubules (yellow) engage in a gradual and competitive process of capturing centromeres of each chromosome to arrive at the final state of regular alignment, which can then be followed by even division of the genetic material and the cell.


In animals, on the other hand, meiosis of egg cells is asymmetric, yielding one ovum / egg and three polar bodies, which in some species have roles in assisting development but are ultimately discarded. This asymmetric division sets up a competition between chromosomes to get into the egg rather than into a polar body. One would think that chromosomes don't have much say in the matter, but actually, cell division is a very delicate process that can be gamed by "strong" centromeres.

Centromeres are the central structures on chromosomes that form attachments to the microtubules of the mitotic spindle. This attachment process is highly dynamic and even competitive: microtubules from each side of the dividing cell (i.e. from each microtubule organizing center) test out attachment sites on the centromeres, and tension is ultimately the mark of a properly oriented chromosome pair, held steady at the midline of the cell. Well, in oocytes this does not happen at the midline, but lopsidedly, towards one pole, given that one of the product cells is going to be much larger than the others. 

In oocytes, cell division is highly asymmetric with a winner-take-all result. This opens the door to a mortal competition among chromosomes to detect which side is which and to get on the winning side. 

One of the mysteries of biology is why centromeres are highly degenerate, and also speedily evolving, structures. They are made up of huge regions of monotonously repeated DNA, which have been especially difficult to sequence accurately. Well, this competition to get into the next generation can go some way toward explaining this structure, and also why it changes rapidly (on evolutionary time scales), as centromeric repeats expand to capture more microtubules and get into the egg, while other parts of the machinery evolve to dampen this unsociable behavior and keep everyone in line. It is a veritable arms race. 
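
To see how even a modest bias at this asymmetric division can snowball over generations, here is a toy simulation in Python (my own illustrative sketch with made-up numbers, not a model from the paper): a "strong" centromere variant reaches the egg with probability k > 0.5 in heterozygous females, while transmission through sperm stays fair.

# Toy model of centromere drive (illustrative only; parameter values are assumptions).
# In heterozygous females the "strong" centromere variant reaches the egg with
# probability k > 0.5; segregation in males is fair.

def next_freq(p, k=0.55):
    """Allele frequency of the strong variant after one generation of random mating."""
    p_eggs = p * p + 2 * p * (1 - p) * k   # strong allele among eggs (drive acts in heterozygotes)
    p_sperm = p                            # fair segregation in sperm
    return (p_eggs + p_sperm) / 2

p = 0.01                                   # start as a rare variant
for gen in range(301):
    if gen % 50 == 0:
        print(f"generation {gen:3d}: strong-centromere frequency = {p:.3f}")
    p = next_freq(p)

Under these assumptions an initially rare variant spreads toward fixation despite doing nothing for the organism, which is the essence of the arms race, and of the counter-measures that keep ordinary chromosomes honest.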

But the funny thing is that only mammals show a particularly wasteful form of this behavior, in the form of frequent aneuploidy. The competition is so brazen that some centromeres force their way into the egg when another copy is already there, generating at best a syndrome like Down (for chromosome 21), and for any other chromosome, near-certain death. This seems rather self-defeating. Or does it?

The latest paper observes that mammals devote a great deal of care to their offspring after conception, unlike fish, amphibians, and even birds, which put most of their effort into producing the very large egg and relatively less (though still significant amounts) into care of the young. This huge post-conception investment means that causing a miscarriage or earlier termination is not a total loss at all for the rudely trisomic extra chromosome. No, it allows resource recovery in the form of another attempt at pregnancy, typically quite soon thereafter, at which point the pushy chromosome gets another chance to form a proper egg. It is a classic case of extortion at the molecular scale. 
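
A rough way to see why this only pays when a failed conception is cheap: compare the driving centromere's expected share of surviving offspring when losses are promptly replaced (the mammalian pattern of post-conception investment) versus when they are not (a fully provisioned egg that simply dies). The Python sketch below is my own toy accounting with assumed numbers, not the paper's math.

# Toy accounting of "extortion" by a driving centromere (assumed numbers, for illustration).
# k: probability the driver ends up in the egg (fair would be 0.5)
# a: probability the aggressive drive ruins the meiosis (lethal aneuploid embryo);
#    for simplicity, assume these losses hit driver- and non-driver-bearing eggs alike.

def driver_share(k, a, losses_replaced):
    """Expected share of the mother's surviving offspring carrying the driver."""
    if losses_replaced:
        # Mammal-like: a dead embryo frees maternal resources for another attempt,
        # so only viable embryos count and the driver keeps its full edge.
        return k
    # Fish/bird-like: the provisioned egg is the investment; a dead embryo is
    # simply lost, so the driver pays the full price of its aggression.
    return k * (1 - a)

k, a = 0.65, 0.3
print("fair allele baseline:        0.50")
print(f"driver, losses replaced:     {driver_share(k, a, True):.2f}")
print(f"driver, losses not replaced: {driver_share(k, a, False):.2f}")

With these made-up numbers the driver gets a 0.65 share when losses are replaced, but only about 0.45 when they are not, so it beats a fair allele only in animals whose reproductive investment comes mostly after conception.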



Saturday, September 10, 2022

Sex in the Brain

The cognitive effects of gonadotropin-releasing hormone.

If you watch the lesser broadcast TV channels, there are many ads for testosterone- elixir of youth, drive, manliness, blaring sales pitches, etc. Is it any good? Curiously, taking testosterone can cause a lot of sexual dysfunction, due to feedback loops that carefully tune its concentration. So generally no, it isn't much good. But that is not to say that it isn't a powerful hormone. A cascade of other events and hormones leads to the production of testosterone, and a recent paper (review) discussed the cognitive effects of one of its upstream inducers, gonadotropin-releasing hormone, or GnRH. 

The story starts on the male Y chromosome, which carries the gene SRY. This is a transcription activator that (working with and through a blizzard of other regulators and developmental processes) is ultimately responsible for switching the primitive gonad to the testicular fate, from its default, which is female / ovarian. This newly hatched testis contains Sertoli cells, which secrete anti-Mullerian hormone (AMH, a gene that is activated directly by SRY), which in the embryo drives the regression of female characteristics. At the same time, testosterone from testicular Leydig cells drives development of male physiology. The initial Y-driven setup of testosterone is quickly superseded by hormones of the gonadotropin family, one form of which is provided by the placenta. Gonadotropins continue to be essential through development and life to maintain sexual differentiation. The placental source declines by the third trimester, by which time the pituitary has formed and takes over gonadotropin secretion. It secretes two gonadotropin family members, follicle-stimulating hormone (FSH) and luteinizing hormone (LH), which each, despite their names, have key roles in male as well as female reproductive development and function. After birth, testosterone levels decline and everything is quiescent until puberty, when the hormonal axis driven by the pituitary reactivates.

Some of the molecular/genetic circuitry leading to very early sex differentiation. Note the leading role of SRY in driving male development. Later, ongoing maintenance of this differentiation depends on the gonadotropin hormones.

This pituitary secretion is in turn stimulated by gonadotropin releasing hormone (GnRH), which is the subject of the current story. GnRH is produced by neurons that, in embryogenesis, originate in the nasal / olfactory epithelium and migrate to the hypothalamus, close enough to the pituitary to secrete directly into its blood supply. This circuit is what revs up in puberty and continues in fine-tuned fashion throughout life to maintain normal (or declining) sex functions, getting feedback from the final sex hormones like estrogen and testosterone in general circulation. The interesting point that the current paper brings up is that GnRH is not just generated by neurons pointing at the pituitary. There is a whole other set of neurons in the hypothalamus that also secrete GnRH, but which project (and secrete GnRH) into the cortex and hippocampus- higher regions of the brain. What are these neurons, and this hormone, doing there?

The researchers note that people with Down Syndrome characteristically have both cognitive and sexual deficits resembling incomplete development (among many other issues), the latter of which reflect a lack of GnRH, suggesting a possible connection. Puberty is a time of heightened cognitive development, and they guessed that this is perhaps what is missing in Down Syndrome. Down Syndrome also typically leads to early-onset Alzheimer's disease, which is likewise characterized by a lack of GnRH, as is menopause, and perhaps other conditions. After a series of mouse studies, the researchers supplemented seven men with Down Syndrome with extra GnRH via miniature pumps to their brains, aimed at target areas of this hormone in the cortex. It is noteworthy that GnRH secretion is highly pulsatile, with a roughly two-hour period, which they found to be essential for a positive effect. 

Results from the small-scale intervention with GnRH injection. Subjects with Down Syndrome had higher cortical connectivity (left) and could draw from a 3-D model marginally more accurately.

The result (also seen in mouse models of Down Syndrome and of Alzheimer's Disease) was that the infusion significantly raised cognitive function over the ensuing months. It is an amazing and intriguing result, indicating that GnRH drives significant development and supports ongoing higher function in the brain, which is quite surprising for a hormone thought to be confined to sexual functions. Whether it can improve cognitive functions in fully developed adults lacking impeding developmental syndromes remains to be seen. Such a finding would be quite unlikely, though, since the GnRH circuit is presumably part of the normal program that establishes the full adult potential of each person, which evolution has strained to refine to the highest possible level. It is not likely to be a magic controller that can be dialed beyond "max" to create super-cognition.

Why does this occur in Down Syndrome? The authors devote a good bit of the paper to an interesting further series of experiments focusing on regulatory micro-RNAs, several of which are encoded in the genomic regions duplicated in Down Syndrome. MicroRNAs are typically repressors, acting post-transcriptionally to silence their target genes, which helps explain how this whole circuitry of normal development, now including key brain functions, is under-activated in those with Down Syndrome.
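
As a rough illustration of the dosage logic, here is a toy calculation in Python (my own sketch; the saturable-repression form and the numbers are assumptions, not from the paper): a third copy of a repressive microRNA gene means roughly 1.5 times the repressor, which drags down expression of its targets.

# Toy dosage calculation (assumed model): more copies of a repressive microRNA
# means more repressor, hence lower output from its target genes.
# Simple saturable repression: output = 1 / (1 + repressor / K)

def target_output(mirna_gene_copies, k_half=1.0):
    """Relative expression of a target gene, for a given microRNA gene dosage (2 = normal)."""
    repressor = mirna_gene_copies / 2.0     # repressor level, normalized to the disomic case
    return 1.0 / (1.0 + repressor / k_half)

normal = target_output(2)      # two copies of the microRNA locus
trisomic = target_output(3)    # three copies, as in Down Syndrome

print(f"target expression, 2 copies: {normal:.2f}")
print(f"target expression, 3 copies: {trisomic:.2f} ({trisomic / normal:.0%} of normal)")

A drop of this size per target is modest, but applied across a whole switch of interacting repressors and activators, as in the circuit described in the quote below, it is easy to see how the GnRH program could end up chronically under-activated.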

The authors offer a subset of the regulatory circuitry, focusing on micro-RNA repressors, several of which are encoded in the trisomic chromosome regions.

"HPG [hypothalamus / pituitary / gonadal hormone] axis activation through GnRH expression at minipuberty (P12; [the phase of testosterone expression in early postnatal mouse life critical for sexual development]) is regulated by a complex switch consisting of several microRNAs, in particular miR-155 and the miR-200 family, as well as their target transcriptional repressor-activator genes, in particular Zeb1 and Cebpb. Human chromosome 21 and murine chromosome 16 code for at least five of these microRNAs (miR-99a, let-7c, miR-125b-2, miR-802, and miR-155), of which all except miR-802 are selectively enriched in GnRH neurons in WT mice around minipuberty" - main paper

So, testosterone (or estrogen, for that matter) isn't likely to unlock better cognition, but a hormone a couple of steps upstream just might- GnRH. And it acts not through the bloodstream, but by direct delivery to key areas of the brain, both during development and on an ongoing basis through adulthood. Biology, as a product of evolution, comprises systems that are highly integrated, not to say jury-rigged, which makes biology as a science difficult, being the quest to separate all the variables and delineate what each component and process is doing.