
Sunday, July 30, 2023

To Sleep- Perchance to Inactivate OX2R

The perils of developing sleeping, or anti-sleeping, drugs.

Sleep- the elixir of rest and repose. While we know of many good things that happen during sleep- the consolidation of memories, the cardiovascular rest, the hormonal and immune resetting, the slow waves and glymphatic cleansing of the brain- we don't know yet why it is absolutely essential, and lethal if repeatedly denied. Civilized life tends to damage our sleep habits, given artificial light and the endless distractions we have devised, leading to chronic sleeplessness and a spiral of narcotic drug consumption. Some conditions and mutations, like narcolepsy, have offered clues about how sleep is regulated, which has led to new treatments, though to be honest, good sleep hygiene is by far the best remedy.

Genetic narcolepsy was found to be due to mutations in the second receptor for the hormone orexin (OX2R), or to auto-immune conditions that kill off a specialized set of neurons in the hypothalamus- a basal part of the brain that sits just over the brain stem. This region normally has ~50,000 neurons that secrete orexin (which itself comes in two kinds, 1 and 2) and project to areas all over the brain, especially basal areas like the basal forebrain and amygdala, to regulate not just sleep but feeding, mood, reward, memory, and learning. Like any hormone receptor, the orexin receptors can be approached in two ways- by turning them on (agonist) or by turning them off (antagonist). Antagonist drugs were developed that turn off both orexin receptors, and thus promote sleep. The first was named suvorexant, using the "orex" and "ant" lexical elements to mark its target and function, a naming practice that is now standard for generic drug names.

This drug is moderately effective, and is a true sleep enhancer, promoting falling asleep, restful sleep, and length of sleep, unlike some other sleep aids. Suvorexant antagonizes both receptors, but the researchers knew that only the deletion of OX2R, not OX1R (in dogs, mice, and other animals), generates narcolepsy, so they developed a drug specific to OX2R alone. The result was that it was less effective. It turned out that binding and turning off OX1R was helpful to sleep promotion, and there were no particularly bad side effects from binding both receptors, despite the wide-ranging activities they appear to have. So while the trial of Merck's MK-1064 was successful, it was not better than their existing two-receptor drug, and its development was shelved. We did learn something intriguing about this system, though. While all animals have some kind of orexin, only mammals have the second orexin family member and receptor, suggesting that some interesting, but not complete, bifurcation happened in the functions of this system over evolution.

What got me interested in this topic was a brief article from yet another drug company, Takeda, which was testing an agonist of the orexin receptors in an effort to treat narcolepsy. They created TAK-994, which binds to OX2R specifically and showed a lot of promise in animal trials. It is an orally taken pill, in contrast to the existing treatment, danavorexton, which must be injected. In the human trial, it was remarkably effective, virtually eliminating cataplectic / narcoleptic episodes. But there was a problem- it caused enough liver toxicity that the trial was stopped and the drug shelved. Presumably, this company will try again, making variants of this compound that retain the affinity and activity but not the toxicity.

This brings up an underappreciated peril in drug design- where drugs end up. Drugs don't just go into our systems, hopefully slipping through the incredibly difficult gauntlet of our digestive system. They all need to go somewhere after they have done their jobs as well. Some drugs are hydrophilic enough, and generally inert enough, that they partition into the urine by dilution and don't undergo any further metabolic events. Most, however, are recognized by our internal detoxification systems as foreign (that is, hydrophobic, but not recognizable as the fats/lipids that are usual nutrients), and are derivatized by liver enzymes and sent out in the bile.

Structure of TAK-994, which treats narcolepsy, but at the cost of liver dysfunction.

As you can see from the chemical structure above, TAK-994 is not a normal compound that might be encountered in the body, or as food. The amino sulfate is quite unusual, and the fluorines sprinkled about are totally unnatural. This would be a red flag substance, like the various PFAS materials we hear about in the news. The rings and fluorines create a relatively hydrophobic substance, which would need to be modified so that it can be routed out of the body. That is what a key liver enzyme, CYP3A4, does. It (and the many family members that have arisen over evolutionary time) oxidizes all manner of foreign hydrophobic compounds, using a heme cofactor to handle the oxygen. It can add -OH groups (hydroxylation), break open double bonds (epoxidation), and oxidize aromatic rings (aromatic oxidation).

But then what? Evolution has matched most of the toxic substances we encounter in nature with appropriate enzymes and routes out of the body. But the novel compounds we are making with modern chemistry are something else altogether. Some drugs are turned on by this process, waiting until they get to the liver to attain their active form. Others, apparently including this one, are made into toxic compounds (as yet unidentified) by this process, such that the liver is damaged. That is why animal studies and safety trials are so important. This drug binds its target receptor and does what it is supposed to do, but that isn't enough to make a good drug.

 

Saturday, May 20, 2023

On the Spectrum

Autism, broader autism phenotype, temperament, and families. It turns out that everyone is on the spectrum.

The advent of genomic sequencing and the hunt for disease-causing mutations has been notably unhelpful for most mental diseases. Possible or proven disease-causing mutations pile up, but they do little to illuminate the biology of what is going on, and even less towards treatment. Autism is a prime example, with hundreds of genes now identified as carrying occasional variants with causal roles. The strongest of these variants affect synapse formation among neurons, and a second class affects long-term regulation of transcription, such as turning genes durably on or off during developmental transitions. Very well- that all makes a great deal of sense, but what have we gained?

Clinically, we have gained very little. What is affected are neural developmental processes that can't be undone, or switched off in later life with a drug. So while some degree of understanding slowly emerges from these studies, translating that to treatment remains a distant dream. One aspect of the genetics of autism, however, is highly informative, which is the sheer number of low-effect and common mutations. Autism can be thought of as coming in two types, genetically- those due to a high effect, typically spontaneous or rare mutation, and those due to a confluence of common variants. The former tends to be severe and singular- an affected child in a family that is otherwise unaffected. The latter might be thought of as familial, where traits that have appeared (mildly) elsewhere in the family have been concentrated in one child, to a degree that it is now diagnosable.

This pattern has given rise to the very interesting concept of the "Broader Autism Phenotype", or BAP. This stems from the observation that, in families of autistic children, it is more often the case that ... "the parents, grandparents, and collaterals are persons strongly preoccupied with abstractions of a scientific, literary, or artistic nature, and limited in genuine interest in people." Thus there is not just a wide spectrum of autism proper, based on the particular confluence of genetic and other factors that lead to a diagnosis and its severity; there is also, outside of the medical spectrum, quite another spectrum of traits or temperaments which tend toward autism and comprise various eccentricities, but have not, at least to date, been medicalized.


The common nature of these variants leads to another question- why do they persist in the population? It is hard to believe that such a variety and number of variants are exclusively deleterious, especially when the BAP seems to have, well, rather positive aspects. No, I would suggest that an alternative way to describe BAP is "an enhanced ability to focus" and to develop interests in salient topics. Ever meet people who are technically useless, but warm-hearted? They are way off on the non-autistic part of the spectrum, while the more technically inclined, the fixers of the world and the scholars of obscure topics, are more towards the "ability to focus" part of the spectrum. Only when such variants are unusually concentrated by the genetic lottery do children appear with frank autistic characteristics, totally unable to deal with social interactions, and given to obsessive focus and intense sensitivities.

Thus autism looks like a more general lens on human temperament and evolution, being the tip of a very interesting iceberg. As societies, we need the politicians, backslappers, networkers, and con men, but we also need, increasingly so as our societies and technologies have developed over the centuries, people with the ability and desire to deal with reality- with technical and obscure issues- without social inflection, but with highly focused attention. Militaries are a prime example, fusing the critical needs of managing and motivating people with a modern technical base of vast scope, reliant on an army of specialists devoted to making all the machinery work. Why does there have to be this tradeoff? Why can't everyone be James Bond, both technically adept and socially debonair? That isn't really clear, at least to me, but one might speculate that, in the first place, dealing with people takes a great deal of specialized intelligence, and there may not be room for everything in one brain. Secondly, the enhanced ability to focus on technical or artistic topics may actively require, as is implicit in doing science and as was exemplified by Mr. Spock, an intentional disregard of social niceties and motivations, if one is to fully explore the logic of some other, non-human, world.


Saturday, May 6, 2023

The Development of Metamorphosis

Adulting as a fly involves a lot of re-organization.

Humans undergo a slight metamorphosis during adolescence. Imagine undergoing pupation like insects do and coming out with a totally new body, with wings! Well, Kafka did, and it wasn't very pleasant. But insects do it all the time, and have been doing it for hundreds of millions of years, taking to the air and dominating the biosphere. What goes on during metamorphosis, how complete is its refashioning of the body, and how did it evolve? A recent paper (review) considered in detail how the brains of insects change during metamorphosis, finding a curious blend of birth, destruction, and reprogramming among their neurons.

Time is on the Y axis, and the emergence of later, more advanced types of insects is on the X axis. This shows the progressive elaboration of non-metamorphosis (ametabolous), partially metamorphosing (hemimetabolous), and fully metamorphosing (holometabolous) forms. Dragonflies are only partially metamorphosing in this scheme, though their adult forms are often highly different from their larval (nymph) form.


Insects evolved from crustaceans, and took to land as small silverfish-like creatures with exoskeletons roughly 450 million years ago. Over the next 100 million years, they developed the process of metamorphosis as a way to preserve the benefits of their original lifestyle for early development, in moist locations, while conquering the air and distance as adults. Early insect types are termed ametabolous, meaning that they have no metamorphosis at all, developing straight from eggs to an adult-style form. These go through several molts to accommodate growth, but don't redesign their bodies. Next came hemimetabolous development, exemplified by grasshoppers and cockroaches, and also by dragonflies, which significantly refashion themselves during the last molt, gaining wings. In the nymph stage, those wings are carried around as small patches of flat embryonic tissue, which then suddenly grow out at the last molt. Dragonflies are extreme, though, and most hemimetabolous insects don't undergo such dramatic change. Last came holometabolous development, which involves pupation and a total redesign of the body that can go from a caterpillar to a butterfly.

The benefit of having wings is pretty clear- it allows huge increases in range for feeding and mating. Dragonflies are premier flying predators. But for a larva wallowing in fruit juice or leaf sap, or living underwater as dragonfly nymphs do, wings and long legs would be a hindrance. This conundrum led to the innovation of metamorphosis, based on the already somewhat dramatic practice of periodically molting off the exoskeleton. If one can grow a whole new skeleton, why not put wings on it, or legs? And metamorphosis has been tremendously successful, used by over 98% of insect species.

The adult insect tissues do not come from nowhere- they are set up as arrested embryonic tissues called imaginal discs. These are small patches that exist in the larva at specific positions. During pupation, while much of the rest of the body refashions itself, imaginal discs rapidly develop into future tissues like wings, legs, genitalia, antennas, and new mouth parts. These discs have a fascinating internal structure that prefigures the future organ. The leg disc is concentrically arranged with the more distant future parts (toes) at its center. Transplanting a disc from one insect to another or one place to another doesn't change its trajectory- it will still become a leg wherever it is put. So it is apparent that the larval stage is an intermediate stage of organismal development, where a bunch of adult features are primed but put on hold, while a simpler and much more primitive larval body plan is executed to accommodate its role in early growth and its niche in tight, moist, hidden places.

The new paper focuses on the brain, which larvae need as well as adults. So the question is- how does the one brain develop from the other? Is the larval brain thrown away? The answer is no, the brain is not thrown away at all, but undergoes its own quite dramatic metamorphosis. The adult brain is substantially bigger, so many neurons are added. A few neurons are also killed off. But most of the larval neurons are reprogrammed, trimmed back and regrown out to new regions to perform new functions.

In this figure, the neurons are named as mushroom body output neuron (MBON), dopaminergic neuron (DAN, also MBIN for incoming mushroom body neuron), mushroom body extrinsic neuron to calyx (MBE-CA), and mushroom body protocerebral posterior lateral 1 (PPL1). MBON-d1 changes its projections substantially, as do the (teal) incoming neurons, and MBON-12 was not operational in the larval stage at all. Note how MBON-c1 is totally reprogrammed to serve new locations in the adult.

The mushroom body, which is the brain area these authors focus on, is situated below the antennas and mediates smell reception, learning, and memory. Fly biologists regard it as analogous to our cortex- the most flexible area of the brain. Larvae don't have antennas, so their smell/taste reception is a lot more primitive. The mushroom body in Drosophila has about a hundred neurons at first, and continuously adds neurons over larval life, with a big push during pupation, ending up with ~2200 neurons in adults. Obviously this has to wire into the antennas as they develop, for instance.

The authors find that, for instance, no direct connections between input and output neurons of the mushroom body (MBIN and MBON, respectively) survive from larval to adult stages. Thus there can be no simple memories of this kind preserved between these life stages. While there are some signs of memory retention for a few things in flies, for the most part the slate is wiped clean. 

"These MBONs [making feedback connections] are more highly interconnected in their adult configuration compared to their larval one: their adult configuration shows 13 connections (31% of possible connections), while their larval configuration has only 7 (17%). Importantly, only three of these connections (7%) are present in both larva and adult. This percentage is similar to the 5% predicted if the two stages were wired up independently at their respective frequencies."


Interestingly, no neuron changed its type- that is, which neurotransmitter it uses to communicate. So, while pruning and rewiring were pervasive, the cells did not fundamentally change their stripes. All this is driven by the hormonal system (juvenile hormone, which blocks adult development, and ecdysone, which drives molting and, in the absence of juvenile hormone, pupation), which in turn drives a program of transcription factors that direct the genes needed for development. While a great deal is known about neuronal pathfinding and development, this paper doesn't comment on those downstream events- how it is that selected neurons are pruned, turned around, and induced to branch out in totally new directions, for instance. That will be the topic of future work.


  • Corrupt business practices. Why is this lawful?
  • Why such easy bankruptcy for corporations, but not for poor countries?
  • Watch the world's mesmerizing shipping.
  • Oh, you want that? Let me jack up the price for you.
  • What transgender is like.
  • "China has arguably been the biggest beneficiary of the U.S. security system in Asia, which ensured the regional stability that made possible the income-boosting flows of trade and investment that propelled the country’s economic miracle. Today, however, General Secretary of the Chinese Communist Party Xi Jinping claims that China’s model of modernization is an alternative to “Westernization,” not a prime example of its benefits."

Saturday, April 1, 2023

Consciousness and the Secret Life of Plants

Could plants be conscious? What are the limits of consciousness and pain? 

Scientific American recently reviewed a book titled "Planta Sapiens". The title gives it all away, and the review was quite positive, with statements like: 

"Our senses can not grasp the rich communicative world of plants. We therefore lack language to describe the 'intelligence' of a root tip in conversation with the microbial life of the soil or the 'cognition' that emerges when chemical whispers ripple through a lacework of leaf cells."

This is provocative indeed! What if plants really do have a secret life and suffer pain with our every bite and swing of the scythe? What of our vaunted morals and ethics then?

I am afraid that I take a skeptical view of this kind of thing, so let's go through some of the aspects of consciousness, and ask how widespread it really is. One traditional view, from ur-scientific types like Descartes, is that only humans have consciousness, and all other creatures have at best a mechanism, unfeeling and mechanical, that may look like consciousness but isn't. This view, continued in a sense by B. F. Skinner in the 20th century, is a statement from ignorance: we can not fully communicate with animals, so we can not really participate in what looks like their consciousness, so let's just ignore it. This position has the added dividend of supporting our unethical treatment of animals, which was an enormous convenience, and remains the core position of capitalism generally, regarding farm animals (though its view of humans is hardly more generous).

Well, this view is totally untenable, from our experience of animals, our ability to indeed communicate with them to various degrees, to see them dreaming, not to mention from an evolutionary standpoint. Our consciousness did not arise from nothing, after all. So I think we can agree that mammals can all be included in the community of conscious fellow-beings on the planet. It is clear that the range of conscious pre-occupations can vary tremendously, but whenever we have looked at the workings of memory, attention, vision, and other components assumed to be part of or contributors to conscious awareness, they all exist in mammals, at least. 

But what about other organisms, like insects, jellyfish, or bacteria? Here we will need a deeper look at the principles in play. As far as we understand it, consciousness is an activity that binds various senses and models of the world into an experience. It should be distinguished from responsiveness to stimuli. A thermostat is responsive. A bacterium is responsive. That does not constitute consciousness. Bacteria are highly responsive to chemical gradients in their environment, to food sources, to the pheromones of fellow bacteria. They appear to have some amount of sensibility and will. But we can not say that they have experience in the sense of a conscious experience, even if they integrate a lot of stimuli into a holistic and sensitive approach to their environment.


The same is true of our own cells, naturally. They also are highly responsive on an individual basis, working hard to figure out what the bloodstream is bringing them in terms of food, immune signals, pathogens, etc. Could each of our cells be conscious? I would doubt it, because their responsiveness is mechanistic, rather than being an independent as well as integrated model of their world. Similarly, if we are under anaesthesia and a surgeon cuts off a leg, is that leg conscious? It has countless nerve cells, and sensory apparatus, but it does not represent anything about its world. It rather is built to send all these signals to a modeling system elsewhere, i.e. our brain, which is where consciousness happens, and where (conscious) pain happens as well.

So I think the bottom line is that consciousness is rather widely shared as a property of brains, thus of organisms with brains, which were devised over evolutionary time to provide the kind of integrated experience that a neural net can not supply. Jellyfish, for instance, have neural nets that feel pain, respond to food and mates, and swim exquisitely. They are highly responsive, but, I would argue, not conscious. On the other hand, insects have brains and would count as conscious, even though their level of consciousness might be very primitive. Honey bees map out their world, navigate about, select the delicacies they want from plants, and go home to a highly organized hive. They also remember experiences and learn from them.

This all makes it highly unlikely that consciousness is present in quantum phenomena, in rocks, in bacteria, or in plants. They just do not have the machinery it takes to feel something as an integrated and meaningful experience. Where exactly the line is between highly responsive and conscious is probably not sharply defined. There are brains that are exceedingly small, and neural nets that are very rich. But it is also clear that it doesn't take consciousness to experience pain or try to avoid it (which plants, bacteria, and jellyfish all do). Where is the limit of ethical care, if our criterion shifts from consciousness to pain? Wasn't our amputated leg in pain after the operation above, and didn't we callously ignore its feelings?

I would suggest that the limit remains that of consciousness, not that of responsiveness to pain. Pain is not problematic because of a reflex reaction. The doctor can tap our knee as often as he wants, perhaps causing pain to our tendon, but not to our consciousness. Pain is problematic because of suffering, which is a conscious construct built around memory, expectations, and models of how things "should" be. While one can easily see that a plant might have certain positive (light, air, water) and negative (herbivores, fungi) stimuli that shape its intrinsic responses to the environment, these are all reflexive, not reflective, and so do not appear (to an admittedly biased observer) to constitute suffering that rises to ethical consideration.

Saturday, February 11, 2023

A Gene is Born

Yes, genes do develop out of nothing.

The "intelligent" design movement has long made a fetish of information. As science has found, life relies on encoded information for its genetic inheritance and the reliable expression of its physical manifestations. The ID proposition is, quite simply, that all this information could not have developed out of a mindless process, but only through "design" by a conscious being. Evidently, Darwinian natural selection still sticks on some people's craw. Michael Behe even developed a pseudo-mathematical theory about how, yes, genes could be copied mindlessly, but new genes could never be conjured out of nothing, due to ... information.

My understanding of information science equates information to loss of entropy, and expresses a minimal cost of the energy needed to create, compute or transmit information- that is, the Shannon limits. A quite different concept comes from physics, in the form of information conservation in places like black holes. This form of information is really the implicit information of the wave functions and states of physical matter, not anything encoded or transmitted in the sense of biology or communication. Physical state information may be indestructible (and un-create-able) on this principle, but coded information is an entirely different matter.
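For reference, the textbook quantities behind those limits (standard formulas, not anything taken from the gene papers discussed below) are Shannon's entropy, which measures coded information in bits, and Landauer's bound, which sets the minimum energy cost of erasing a bit at temperature T:

    H(X) = -\sum_i p_i \log_2 p_i        % Shannon entropy: average information per symbol, in bits
    E_{\min} = k_B T \ln 2               % Landauer bound: minimum energy to erase one bit at temperature T

Neither formula forbids new coded information from arising; they quantify how much information a source carries and what erasing it must cost in energy.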

In a parody of scientific discussion, intelligent design proponents are hosted by the once-respectable Hoover Institution for a discussion about, well, god.

So the fecundity that life shows in creating new genes out of existing genes (duplications), and even making whole-chromosome or whole-genome duplications, has long been a problem for creationists. Energetically, it is easy to explain as a mere side-effect of having plenty of energy to work with, combined with error-prone methods of replication. But creationistically, god must come into play somewhere, right? Perhaps it comes into play in the creation of really new genes, like those that arise from nothing, such as at the origin of life?

A recent paper discussed genes in humans that have over our recent evolutionary history arisen from essentially nothing. It drew on prior work in yeast that elegantly laid out a spectrum or life cycle of genes, from birth to death. It turns out that there is an active literature on the birth of genes, which shows that, just like duplication processes, it is entirely natural for genes to develop out of humble, junky precursors. And no information theory needs to be wheeled in to show that this is possible.

Yeast provides the tools to study novel genes in some detail, with rich genetics and lots of sequenced relatives, near and far. Here is portrayed a general life cycle of a gene, from birth out of non-gene DNA sequences (left), through the key step of translation, and on to becoming a subject of normal natural selection ("Exposed") for some function. But if that function decays or is replaced, the gene may also die, by mutation, becoming a pseudogene, and eventually just some more genomic junk.

The death of genes is quite well understood. The databases are full of "pseudogenes" that are very similar to active genes, but are disabled for some reason, such as a truncation somewhere or loss of reading frame due to a point mutation or splicing mutation. Their annotation status is dynamic, as they are sometimes later found to be active after all, under obscure conditions or to some low level. Our genomes are also full of transposons and retroviruses that have died in this fashion, by mutation.

Duplications are also well-understood, some of which have over evolutionary time given rise to huge families of related proteins, such as kinases, odorant receptors, or zinc-finger transcription factors. But the hunt for genes that have developed out of non-gene materials is a relatively new area, due to its technical difficulty. Genome annotators were originally content to pay attention to genes that coded for a hundred amino acids or more, and ignore everything else. That became untenable when a huge variety of non-coding RNAs came on the scene. Also, occasional cases of very small genes that encoded proteins came up from work that found them by their functional effects.

As genome annotation progressed, it became apparent that, while a huge proportion of genes are conserved between species, (or members of families of related proteins), other genes had no relatives at all, and would never provide information by this highly convenient route of computer analysis. They are orphans, and must have either been so heavily mutated since divergence that their relationships have become unrecognizable, or have arisen recently (that is, since their evolutionary divergence from related species that are used for sequence comparison) from novel sources that provide no clue about their function. Finer analysis of ever more closely related species is often informative in these cases.

The recent paper on human novel genes makes the finer point that splicing and export from the nucleus constitute the major threshold between junk genes and "real" genes. Once an RNA gets out of the nucleus, any reading frame it may have will be translated and exposed to selection. So the acquisition of splicing signals is a key step, in their argument, to get a randomly expressed bit of RNA over the threshold.

A recent paper provided a remarkable example of novel gene origination. It uncovered a series of 74 human genes that are not shared with macaque (which they took as their reference), have a clear path of origin from non-coding precursors, and in some cases have significant biological effects on human development. They point to a gradual process whereby promiscuous transcription from the genome gave rise by chance to RNAs that acquired splice sites, which piped them into the nuclear export machinery and out to the cytoplasm. Once there, they could be translated, over whatever small coding region they might possess, after which selection could operate on their small protein products. A few appear to have gained enough function to encourage expansion of the coding region, resulting in growth of the gene and entrenchment as part of the developmental program.

Brain "organoids" grown from genetically manipulated human stem cells. On left is the control, in middle is where ENSG00000205704 was deleted, and on the right is where ENSG00000205704 is over-expressed. The result is very striking, as an evolutionarily momentous effect of a tiny and novel gene.

One gene, "ENSG00000205704" is shown as an example. Where in macaque, the genomic region corresponding to this gene encodes at best a non-coding RNA that is not exported from the nucleus, in humans it encodes a spliced and exported mRNA that encodes a protein of 107 amino acids. In humans it is also highly expressed in the brain, and when the researchers deleted it in embryonic stem cells and used those cells to grow "organoids", or clumps of brain-like tissue, the growth was significantly reduced by the knockout, and increased by the over-expression of this gene. What this gene does is completely unknown. Its sequence, not being related to anything else in human or other species, gives no clue. But it is a classic example of gene that arose from nothing to have what looks like a significant effect on human evolution. Does that somehow violate physics or math? Nothing could be farther from the truth.

  • Will nuclear power get there?
  • What the heck happened to Amazon shopping?

Saturday, December 24, 2022

Brain Waves: Gaining Coherence

Current thinking about communication in the brain: the Communication Through Coherence framework.

Eyes are windows to the soul. They are visible outposts of the brain that convey outwards what we are thinking, as they gather in the riches of our visible surroundings. One of their less appreciated characteristics is that they flit from place to place as we observe a scene, never resting in one spot. These movements are called saccades, and they represent an involuntary redirection of attention all over a visual scene we are studying, in order to gather high-resolution impressions from places of interest. Saccades happen at a variety of rates, centered around 0.1 second. And just as the raster scanning of a TV or monitor can tell us something about how it or its signal works, the eye saccade is thought, by the theory presented below, to reflect a theta rhythm in the brain that is responsible for resetting attention- here, in the visual system.

That theory is Communication Through Coherence (CTC), which appears to be the dominant theory of how neural oscillations (aka brain waves) function. (This post is part of what seems like a yearly series of updates on the progress neuroscience is making in deciphering what brain waves do, and how the brain works generally.) The paper laying out the framework appeared in 2014, but it expressed ideas that had been floating around for a long time, and it has since been taken up by numerous other groups that provide empirical and modeling support. A recent paper (titled "Phase-locking patterns underlying effective communication in exact firing rate models of neural networks") offers full-throated support from a computer modeling perspective, for instance. But I would like to go back and explore the details of the theory itself.

The communication part of the theory concerns how thoughts get communicated within the brain. Communication and processing are simultaneous in the brain, since it is physically arranged to connect processing chains (such as visual processing) together as cells that communicate consecutively, for example creating increasingly abstract representations during sensory processing. While the anatomy of the brain is pretty well set in a static way, it is the dynamic communication among cells and regions of the brain that generates our unconscious and conscious mental lives. Not all parts can be talking at the same time- that would be chaos. So there must be some way to limit mental activity to manageable levels of communication. That is where coherence comes in. The theory (and a great deal of observation) posits that gamma waves in the brain, which run from about 30 Hz all the way up to 200 Hz, link together neurons and larger assemblages / regions into transient co-firing coalitions that send thoughts from one place to another, precisely and rapidly, insulated from the noise of other inputs. This is best studied in the visual system, which has a reasonably well-understood and regimented processing hierarchy that progresses from V1 through V4 levels of increasing visual field size and abstraction, and out to cortical areas of cognition.

The basis of brain waves is that neural firing is rapid, and is followed by a refractory period where the neuron is resistant to another input, for a few milliseconds. Then it can fire again, and will do so if there are enough inputs to its dendrites. There are also inhibitory cells all over the neural system, damping things down so that the system is tuned not to run to epileptic extremes of universal activation. So if one set of cells entrains the next set of cells in a rhythmic firing pattern, those cells tend to stay entrained for a while, and then get reset by way of slower oscillations, such as the theta rhythm, which runs at about 4-8 Hz. Those entrained cells are, during their refractory periods, also resistant to inputs that are not synchronized, essentially blocking out noise. In this way trains of signals can selectively travel up from lower processing levels to higher ones, over large distances and over multiple cell connections in the brain.
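As a toy illustration of that gating idea (my own sketch with made-up numbers, not a model taken from the CTC paper): treat the receiving population as excitable only during a brief window of each gamma cycle, and ask what fraction of an input spike train arrives inside that window. Inputs coherent with the receiver's rhythm and arriving at the right phase get through; the same inputs at the wrong phase, or at an uncorrelated frequency, are largely blocked.

    import numpy as np

    # Toy sketch of Communication Through Coherence gating (illustrative only).
    # The receiver is modeled as excitable during the first quarter of each gamma
    # cycle; an input spike is "transmitted" only if it lands inside that window.
    gamma_hz = 60.0      # receiver's gamma rhythm
    window = 0.25        # fraction of each cycle in which the receiver is excitable
    t_end = 2.0          # seconds of simulated input

    def fraction_transmitted(input_hz, offset_s):
        """Fraction of input spikes that arrive inside the receiver's excitable window."""
        spikes = np.arange(0.0, t_end, 1.0 / input_hz) + offset_s
        phase = (spikes * gamma_hz) % 1.0    # phase of each spike within the receiver's cycle
        return np.mean(phase < window)

    print("coherent, well-timed input :", fraction_transmitted(60.0, 0.000))  # ~1.0, all get through
    print("coherent, mistimed input   :", fraction_transmitted(60.0, 0.008))  # 0.0, blocked
    print("incoherent 47 Hz input     :", fraction_transmitted(47.0, 0.000))  # ~0.25, mostly blocked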

An interesting part of the theory is that frequency is very important. There is a big difference between slower and faster entraining gamma rhythms. Ones that run slower than the going rate do not get traction and die out, while those that run faster hit the optimal post-refractory excitable state of the receiving cells, and tend to gain traction in entraining them downstream. This sets up a hierarchy where increasing salience, whether established through intrinsic inputs, or through top-down attention, can be encoded in higher, stronger gamma frequencies, winning this race to entrain downstream cells. This explains to some degree why EEG patterns of the brain are so busy and chaotic at the gamma wave level. There are always competing processes going on, with coalitions forming and reforming in various frequencies of this wave, chasing their tails as they compete for salience.

There are often bidirectional processes in the brain, where downstream units talk back to upstream ones. While these were originally imagined to be bidirectionally entrained in the same gamma rhythm, the CTC theory now recognizes that the distance / lag in signaling would make this impossible, and separates them as distinct streams, observing that the cellular targets of backwards streams are typically not identical to those generating the forward streams. So a one-cycle offset, with a few intermediate cells, would account for this type of interaction, still in gamma rhythm.

Lastly, attention remains an important focus of this theory, so to speak. How are inputs chosen, if not by their intrinsic salience, such as flashes in a visual scene? How does a top-down, intentional search of a visual scene, or a desire to remember an event, work? CTC posits that two other wave patterns are operative. First is the theta rhythm of about 4-8 Hz, which is slow enough to encompass many gamma cycles and offer a reset to the system, overpowering other waves with its inhibitory phase. The idea is that salience needs to be re-established freshly each theta cycle (such as in eye saccades), with maybe a dozen gamma cycles within each theta that can grow and entrain necessary higher level processing. Note how this agrees with our internal sense of thoughts flowing and flitting about, with our attention rapidly darting from one thing to the next.

"The experimental evidence presented and the considerations discussed so far suggest that top-down attentional influences are mediated by beta-band synchronization, that the selective communication of the attended stimulus is implemented by gamma-band synchronization, and that gamma is rhythmically reset by a 4 Hz theta rhythm."

Attention itself, as a large-scale backward-flowing process, is hypothesized to operate in the alpha/beta bands of oscillations, about 8-30 Hz. It reaches backward over connections distinct from the forward ones (indeed, through distinct anatomical layers of the cortex), into lower areas of processing, such as locations in the visual scene, or colors sought after, or a position on a page of text. This slower rhythm could entrain selected lower-level regions, setting some to have in-phase and stronger gamma rhythms versus other areas not activated in this way. Why the theta and the alpha/beta rhythms have dramatically different properties is not dwelt on by this paper. One can speculate that each can entrain other areas of the brain, but the theta rhythm is long and strong enough to squelch ongoing gamma rhythms and start many off at the same time in a new competitive race, while the alpha/beta rhythms are brief enough, and perhaps weak and focused enough, to start off new gamma rhythms in selected regions that quickly form winning coalitions heading upstream.

Experiments on the nature of attention. The stimulus shown to a subject (probably a monkey) is in A. In E, the monkey was trained to attend to the same spots as in A, even though both were visible. V1 refers to the lowest level of the visual processing area of the brain, which shows activity when stimulated (B, F) whether or not attention is paid to the stimulus. On the other hand, V4 is a much higher level in the visual processing system, subject to control by attention. There (C, G), the gamma rhythm shows clearly that only one stimulus is being fielded.

The paper discussing this hypothesis cites a great deal of supporting empirical work, and much more has accumulated in the ensuing eight years. While plenty of loose ends remain and we can not yet visualize this mechanism in real time (though faster MRI is on the horizon), this seems the leading hypothesis that both explains the significance and prevalence of neural oscillations, and goes some distance to explaining mental processing in general, including abstraction, binding, and attention. Progress has not been made by great theoretical leaps by any one person or institution, but rather by the slow accumulation of research that is extremely difficult to do, but of such great interest that there are people dedicated enough to do it (with or without the willing cooperation of countless poor animals) and agencies willing to fund it.


  • Local media is a different world now.
  • Florida may not be a viable place to live.
  • Google is god.

Saturday, September 10, 2022

Sex in the Brain

The cognitive effects of gonadotropin-releasing hormone.

If you watch the lesser broadcast TV channels, there are many ads for testosterone- elixir of youth, drive, manliness, blaring sales pitches, etc. Is it any good? Curiously, taking testosterone can cause a lot of sexual dysfunction, due to feedback loops that carefully tune its concentration. So generally no, it isn't much good. But that is not to say that it isn't a powerful hormone. A cascade of other events and hormones leads to the production of testosterone, and a recent paper (review) discussed the cognitive effects of one of its upstream inducers, gonadotropin-releasing hormone, or GnRH.

The story starts on the male Y chromosome, which carries the gene SRY. This is a transcription activator that (working with and through a blizzard of other regulators and developmental processes) is ultimately responsible for switching the primitive gonad to the testicular fate, from its default, which is female / ovarian. This newly hatched testis contains Sertoli cells, which secrete anti-Mullerian hormone (AMH, a gene that is activated by SRY directly), which in the embryo drives the regression of female characteristics. At the same time, testosterone from testicular Leydig cells drives development of male physiology. The initial Y-driven setup of testosterone is quickly superseded by hormones of the gonadotropin family, one form of which is provided by the placenta. Gonadotropins continue to be essential through development and life to maintain sexual differentiation. This placental source declines by the third trimester, by which time the pituitary has formed and takes over gonadotropin secretion. It secretes two gonadotropin family members, follicle stimulating hormone (FSH) and luteinizing hormone (LH), which each, despite their names, have key roles in male as well as female reproductive development and function. After birth, testosterone levels decline and everything is quiescent until puberty, when the hormonal axis driven by the pituitary reactivates.

Some of the molecular/genetic circuitry leading to very early sex differentiation. Note the leading role of SRY in driving male development. Later, ongoing maintenance of this differentiation depends on the gonadotropin hormones.

This pituitary secretion is in turn stimulated by gonadotropin releasing hormone (GnRH), which is the subject of the current story. GnRH is produced by neurons that, in embryogenesis, originate in the nasal / olfactory epithelium and migrate to the hypothalamus, close enough to the pituitary to secrete directly into its blood supply. This circuit is what revs up in puberty and continues in fine-tuned fashion throughout life to maintain normal (or declining) sex functions, getting feedback from the final sex hormones like estrogen and testosterone in general circulation. The interesting point that the current paper brings up is that GnRH is not just generated by neurons pointing at the pituitary. There is a whole other set of neurons in the hypothalamus that also secrete GnRH, but which project (and secrete GnRH) into the cortex and hippocampus- higher regions of the brain. What are these neurons, and this hormone, doing there?

The researchers note that people with Down Syndrome characteristically have both cognitive and sexual deficits suggesting incomplete development (among many other issues), the latter of which resemble or reflect a lack of GnRH, suggesting a possible connection. Puberty is a time of heightened cognitive development, and they guessed that this is perhaps what is missing in Down Syndrome. Down Syndrome typically ends in early-onset Alzheimer's disease, which is also characterized by lack of GnRH, as is menopause, and perhaps other conditions. After going through a bunch of mouse studies, the researchers supplemented seven men affected by Down Syndrome with extra GnRH via miniature pumps to their brains, aimed at target areas of this hormone in the cortex. It is noteworthy that GnRH secretion is highly pulsatile, with a roughly 2 hour period, which they found to be essential for a positive effect.

Results from the small-scale intervention with GnRH injection. Subjects with Down Syndrome had higher cortical connectivity (left) and could draw from a 3-D model marginally more accurately.

The result (also seen in mouse models of Down Syndrome and of Alzheimer's disease) was that the infusion significantly raised cognitive function over the ensuing months. It is an amazing and intriguing result, indicating that GnRH drives significant development and supports ongoing higher function in the brain, which is quite surprising for a hormone thought to be confined to sexual functions. Whether it can improve cognitive function in fully developed adults without such developmental syndromes remains to be seen. Such a finding would be quite unlikely, though, since the GnRH circuit is presumably part of the normal program that establishes the full adult potential of each person, which evolution has strained to refine to the highest possible level. It is not likely to be a magic controller that can be dialed beyond "max" to create super-cognition.

Why does this occur in Down Syndrome? The authors devote a good bit of the paper to an interesting further series of experiments, focusing on regulatory micro-RNAs, several of which are encoded in genomic regions duplicated in Down Syndrome. MicroRNAs are typically regulators that repress gene expression, explaining how this whole circuitry of normal development, now including key brain functions, is under-activated in those with Down Syndrome.

The authors offer a subset of the regulatory circuitry, focusing on micro-RNA repressors, several of which are encoded in the trisomic chromosome regions.

"HPG [hypothalamus / pituitary / gonadal hormone] axis activation through GnRH expression at minipuberty (P12; [the phase of testoserone expression in late mouse gestation critical for sexual development]) is regulated by a complex switch consisting of several microRNAs, in particular miR-155 and the miR-200 family, as well as their target transcriptional repressor-activator genes, in particular Zeb1 and Cebpb. Human chromosome 21 and murine chromosome 16 code for at least five of these microRNAs (miR-99a, let-7c, miR-125b-2, miR-802, and miR-155), of which all except miR-802 are selectively enriched in GnRH neurons in WT mice around minipuberty" - main paper

So, testosterone (or estrogen, for that matter) isn't likely to unlock better cognition, but a hormone a couple of steps upstream just might- GnRH. And it does so not through the bloodstream, but through direct injection into key areas of the brain both during development, and also on an ongoing basis through adulthood. Biology as a product of evolution comprises systems that are highly integrated, not to say jury-rigged, which makes biology as a science difficult, being the quest to separate all the variables and delineate what each component and process is doing.


Saturday, June 11, 2022

God Save the Queen

Or is it the other way around? Deities and Royalties in the archetypes.

It has been entertaining, and a little moving, to see the recent celebration put on by Britain for its queen. A love fest for a "ruler" who is nearing the end of her service- a job that has been clearly difficult, often thankless, and a bit murky. A job that has evolved interestingly over the last millennium. What used to be a truly powerful rule is now a Disney-fied sop to tradition and the enduring archetypes of social hierarchy.


For we still need social hierarchy, don't we? Communists, socialists, and anarchists have fought for centuries against it, but social hierarchy is difficult to get away from. For one thing, at least half the population has a conservative temperament that demands it. For another, hierarchies are instinctive and pervasive throughout nature as ways to organize societies, keep everyone on their toes, and to bias reproduction to the fittest members. The enlightenment brought us a new vision of human society, one based on some level of equality, with a negotiated and franchise-based meritocracy, rather than one based on nature, tooth, and claw. But we have always been skittish about true democracy. Maximalist democracies like the Occupy movement never get anywhere, because too many people have veto power, and leadership is lacking. Leadership is premised naturally on hierarchy.

Hierarchy is also highly archetypal and instinctive. Maybe these are archetypes we want to fight against, but we have them anyhow. The communists were classic cases of replacing one (presumably corrupt and antiquated) social hierarchy with another which turned out to be even more anxiously vain and vicious, for all its doublespeak about serving the masses. Just looking at higher-ranking individuals is always a pleasant and rewarding experience. That is why movies are made about the high ranking and the glamorous, more than the downtrodden. And why following the royals remains fascinating.

But that is not all! The Queen is also head of the Anglican Church, another institution that has fallen from its glory days of power. It has also suffered defections and loss of faith, amid centuries-long assaults from the enlightenment. The deity itself has gone through a long transition, from classic patriarchal king in the Old Testament (who killed all humanity once over for its sins), to mystic cypher in the New Testament (who demanded the death of itself in order to save the shockingly persistent sinners of humanity from its own retribution), to deistic non-entity at the height of the enlightenment, to what appears to be the current state of utter oblivion. One of the deity's major functions was to explain the nature of the world in all its wonder and weirdness, which is now quite unnecessary. We must blame ourselves for climate change, not a higher power.

While social hierarchy remains at the core of humanity, the need for deities is less clear. As a super-king, god has always functioned as the ultimate pinnacle of the social and political system, sponsoring all the priests, cardinals, kings, pastors, and the like down the line. But if it remains stubbornly hidden from view, has lost its most significant rationales, and only peeps out from tall tales of scripture, that does not make for a functional regent at all. While the British monarchy pursues its somewhat comical, awkward performance of unmerited superintendence of state, church, and social affairs, the artist formerly known as God has vanished into nothing at all.


Sunday, January 16, 2022

Choices, Choices

Hippocampal maps happen in many modes and dimensions. How do they relate to conscious navigation?

How do our brains work? A question that was once mystical is now tantalizingly concrete. Neurobiology is, thanks to the sacrifices of countless rats, mice, and undergraduate research subjects, slowly bringing to light mechanisms by which thoughts flit about the brain. The parallel processing of vision, progressively through the layers of the visual cortex, was one milestone. Another has been work in the hippocampus, which is essential for memory formation as well as mapping and navigation. Several kinds of cells have been found there (or in associated brain areas) which fire when the animal is in a certain place, or crosses a subjective navigational grid boundary, or points its head in a certain direction. 

A recent paper reviewed new findings about how such navigation signals are bound together and interact with the prefrontal cortex during decision making. One is that locations are encoded in a peculiar way, within the brain wave known as the theta oscillation. These waves run at about 4 to 12 cycles per second, and as an animal moves or thinks, place cells corresponding to locations behind the animal fire at the trough of the cycle, while those for locations progressively closer, and then in front of the animal, fire correspondingly higher on the wave. So the conscious path that the animal is contemplating is replayed on a sort of continuous loop in highly time-compressed fashion. And this happens not only while the animal is on the path, but at other times as well, if it is dreaming about its day, or is resting and thinking about its future options.
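A schematic way to picture that phase code (hypothetical numbers, purely illustrative, not taken from the paper): map positions along the contemplated path, from behind the animal to ahead of it, onto firing phases within a single theta cycle, so the whole stretch of path is swept once per cycle in compressed time.

    import numpy as np

    # Illustrative sketch of theta-phase coding of position (made-up numbers).
    # Place cells for locations behind the animal fire near the trough (early phase),
    # and cells for locations progressively ahead fire at progressively later phases,
    # so the path is replayed, time-compressed, within every ~125 ms theta cycle.
    theta_hz = 8.0
    cycle_ms = 1000.0 / theta_hz                       # one theta cycle at 8 Hz, 125 ms

    positions_cm = np.array([-20, -10, 0, 10, 20])     # path positions relative to the animal
    phase = np.interp(positions_cm, [-20, 20], [0.0, 1.0])  # behind -> early, ahead -> late
    fire_times_ms = phase * cycle_ms

    for x, t in zip(positions_cm, fire_times_ms):
        print(f"place cell for {int(x):+3d} cm fires ~{t:5.1f} ms into each theta cycle")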

"For hippocampal place cells to code for both past and future trajectories while the animal navigates through an environment, the hippocampus needs to integrate multiple sensory inputs and self-generated cues by the animal’s movement for both retrospective and prospective coding."


These researchers describe a new piece of the story, that alternate theta cycles can encode different paths. That is, as the wave repeats, the first cycle may encode one future path out of a T-maze, while the next may encode the other path out of the same maze, alternating back and forth. It is evident that the animal is trying to decide what to do, and its hippocampus (with associated regions) is helpfully providing mappings of the options. Not only that, but the connecting brain areas heading towards the prefrontal cortex (the nucleus reuniens, entorhinal cortex, and parahippocampal gyrus) separate these path representations into different cell streams (still on the theta oscillation) and progressively filter one out. Ultimately, the prefrontal cortex represents only one path ... the one that the rat actually chooses to go down. The regions are connected in both directions, so there is clearly top-down as well as bottom-up processing going on. The conclusion is that, in general, the hippocampus and allied areas provide relatively unbiased mapping services, while the cortex does the decision making about where to go.

    "This alternation between left and right begins as early as 25 cm prior to the choice point and will continue until the animal makes its turn"


A rat considers its options. Theta waves are portrayed, as they appear in different anatomical locations in the brain. Hippocampal place cells, on the bottom right, give a mapping of the relevant path repeatedly encoded across single theta wave cycles. One path is encoded in one cycle, the other in the next. Further anatomical locations (heading left) separate the maps into different channels / cells, from which the prefrontal cortex finally selects only the one it intends to actually use.

The hippocampus is not just for visual navigation, however. It is now known to map many other senses in spatial terms, like sounds and smells. It also maps the flow of time in cognitive space, such as in memories, quite apart from spatial mapping. It seems to be a general facility for creating cognitive maps of the world, given whatever the animal has experienced and is interested in, at any scale, and in many modalities. The theta wave embedding gives a structure that is highly compressed, and repeated, so that it is available to higher processing levels for review, re-enactment, dreaming, and modification for future planning.

Thus using the trusty maze test on rats and mice, neuroscientists are slowly, and very painfully, getting to the point of deciphering how certain kinds of thoughts happen in the brain- where they are assembled, how their components combine, and how they relate to behavior. How they divide between conscious and unconscious processes naturally awaits more insight into what this dividing line really consists of.


  • Biochar!
  • More about the coup attempt.
  • Yes, there was electoral fraud.
  • The more you know about fossil fuels, the worse it gets.
  • Graph of the week. Our local sanitation district finds over a thousand omicron genomes per milliliter of intake, which seems astonishing.




Saturday, October 30, 2021

Genetics and Non-Genetics of Temperament

Some fish are shy, some honeybees are outgoing. What makes individuals out of a uniform genetic background?

Do flies have personalities? Apparently so. Drosophila have a long and storied history as perhaps the greatest model organism for genetic research. They have brains, intricate development, complex bodies and behaviors, but also rapid generation time, relatively easy handling, and mass rearing. A new paper describes a quest to define their personalities- behavioral traits that vary despite a uniform genetic background. Personality is a trait that may be genetically influenced, but may just as well have environmental or sporadic causes (that is, not determined by outside factors). Importantly, this kind of trait tends to recur in a population, indicating that while it may not be determined, it follows certain canalized pathways in development, which might themselves be amenable to genetic investigation. Human personality studies have a long history, with various systems trying to make sense of the typical forms and range of variation.

A recent paper did a massive screen of uniformly inbred flies for personality variations. Computerization and automation have revolutionized animal screening, as they have so many other fields, so flies can now be individually put through a battery of tests with minimal human effort, looking at their responses to light, maze choices, spontaneous activity, circadian preferences, sensitivity to odors, and so on. These tests were compiled for hundreds of genetically identical flies from birth to death, followed by sequencing of their mRNA expression to see which genes were active. Another batch of more diverse wild-type flies was tested as well, to gauge what variable genetic influences might be afoot.

Firstly, the differences they observed in these flies were stable over time. They represent true "types" of behavior, despite the lack of genetic input. Secondly, the variation occupies a limited landscape: flies more active in one test tend to be more active in other tests as well. So the variations in behavior seem to flow from deep-seated categorical types that follow typical patterns within fly development. Which tests should yield correlated scores, and which are more orthogonal, is a little hard to figure out and a matter of subjective taste, so these conclusions- that widespread correlations among disparate behaviors reflect personality types- are based largely on these researchers knowing their flies on a pretty intimate basis.

A matrix of videos of flies just strolling along, captured by these researchers. Not all flies walk the same way.

For example, they emphasize correlations where they would not have expected them- between, say, light sensitivity and overall activity- and non-correlations where they would have expected correlation- say, between activity measured in maze walking and in free activity. The main observation is that there was a lot of variation among these identical-twin flies. So, just as genetically identical human twins can have different personalities, sensitivities, and outlooks, so can flies. 

Is there anything one can say about this genetically? The behavioral variations were themselves not genetically based, but rather due to alternate paths taken down developmental pathways, via either sporadic or experience-based differences. The flies were raised in the same homes, so to speak, but as we know from humans, however similar things may seem on the outside, the individual subjective experience can be very different. At any rate, the developmental pathways leading to the variations are themselves genetically determined, so this exercise was really about learning how those pathways work, and what range of variation they support or allow.

This analysis of course boils down to how informative the behavioral traits the researchers tested really are. And obviously, they were not very informative- how does one connect a propensity to turn left when going down a maze with some developmental process? These researchers threw a bunch of statistics at their data, including the gene expression analysis performed on the sacrificed flies after their mortal trials were over. For instance, among known molecular pathways, metabolic pathway gene expression correlated with activity assays of behavior- not a big surprise. Expression of photo-transduction related genes also correlated with response to light. The biggest correlation was between oxidative phosphorylation gene expression (i.e. mitochondrial activity) and their various activity measurements, which were, after all, the essence of all their assays. In humans, some people are just high-energy, which informs everything they do.
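This kind of correlation is simple to compute, once one has a per-fly behavioral score and a per-fly summary of a pathway's expression. Here is a minimal sketch with invented data (the gene counts, effect size, and scores are all made up), just to show the shape of the calculation.

    import numpy as np

    rng = np.random.default_rng(1)
    n_flies = 100

    # hypothetical per-fly overall activity scores
    activity = rng.normal(size=n_flies)

    # hypothetical expression of 20 genes in one pathway, made to partly track
    # activity (as oxidative phosphorylation genes apparently did), plus noise
    n_pathway_genes = 20
    expression = 0.6 * activity[:, None] + rng.normal(size=(n_flies, n_pathway_genes))

    # summarize the pathway as the mean expression of its genes, then correlate
    pathway_score = expression.mean(axis=1)
    r = np.corrcoef(activity, pathway_score)[0, 1]
    print(f"correlation between pathway expression and activity: r = {r:.2f}")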

"We found that in all cases, behavioral variation has high dimensionality, that is, many independent axes of variation."

In the end, they conclude that, yes, flies of identical genetic background grow up to have distinct behavioral profiles, or one can say, personalities. Many of these behavioral profiles or traits are independent of each other, indicating several, or even numerous, axes of development where such differences can arise. The researchers estimate 27 dimensions of trait variability, in fact, just from this smattering of tests. But others vary together, forming a sort of personality type, though the choice of assays was obviously very influential in these cross-correlations. These results give a very rough start to the project of figuring out where animal development is less than fully determined, and can thus give rise to the non-genetic variation that provides rich fodder for environmental and social adaptation / specialization. While genes are not directly responsible for this variation, they are responsible for the available range, and thus set the parameters of possible adaptation.
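How one gets from a flies-by-assays table to a statement like "27 dimensions of trait variability" can be sketched with a toy principal component analysis. The data below are simulated (a few latent axes built in by construction), so the point is only the shape of the method: z-score the assays, look at the spectrum, and count how many components are needed to account for most of the variance.

    import numpy as np

    rng = np.random.default_rng(0)
    n_flies, n_assays = 200, 15

    # simulate a few latent "personality" axes (say, overall activity, light
    # response, odor sensitivity) plus assay-specific noise
    latent = rng.normal(size=(n_flies, 3))
    loadings = rng.normal(size=(3, n_assays))
    behavior = latent @ loadings + 0.5 * rng.normal(size=(n_flies, n_assays))

    # z-score each assay, then examine the singular value spectrum (i.e. PCA)
    z = (behavior - behavior.mean(axis=0)) / behavior.std(axis=0)
    eigvals = np.linalg.svd(z, compute_uv=False) ** 2
    var_explained = eigvals / eigvals.sum()

    # crude dimensionality estimate: components needed to reach 90% of the variance
    n_dims = int(np.searchsorted(np.cumsum(var_explained), 0.90) + 1)
    print("variance explained per component:", np.round(var_explained, 2))
    print("estimated number of behavioral axes:", n_dims)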

It is sadly typical that these researchers disposed of about a third of their flies at the outset of the study for being insufficiently active. While they are surely correct that these flies would have remained less active through the rest of the assays, thus giving less data to the automated tests, they did not ask themselves why some flies might choose to think before they leap- so to speak. Were they genetically defective? The flies were identical to within a handful of single nucleotide variations. If inbreeding were a problem, all the flies would have been equally affected. So it is likely that one of the most significant personality traits was summarily excluded out of raw institutionalized bias against the more introverted fly, conveniently veiled by claims of technical limitations. Hey hey, ho ho!

  • Yes, they have a brain.
  • Technical talk on SARS COV2 evolution, which has been, obviously, rapid and devastating.
  • And a story about its endemic fate as a regular cold virus among us.
  • Manchin isn't a slouch in the corruption department either.
  • We need a lot more electricity.
  • The price of fish.
  • If you think facebook is bad here, it is worse in other countries.
  • I was thinking about oculus. But now, maybe not.
  • A little bit of wonderfulness from the Muppets.

Saturday, October 9, 2021

Alzheimer's: Wnt or Lose

A molecular exploration of the causes of Alzheimer's disease.

What causes Alzheimer's disease remains a bit of a mystery, as there is no simple and single molecular explanation, as there is with, say, Huntington's disease, which is caused by a single gene defect. There is one leading candidate, however, which is the amyloid protein, one of the accumulated molecular signatures of the disease in post-mortem brains. Some genetic forms of Alzheimer's start with defects in the gene that encodes this protein, APP (amyloid precursor protein). And the protease processing system that cleaves the toxic amyloid beta protein out of the much larger original APP protein is also closely involved with Alzheimer's risk. So while there are many other genetic risk factors and possible causes relating to APP and other systems, this seems to be the dominant causal element in Alzheimer's disease.

The naming of this protein is rather backwards, focusing on the pathological roles of defective forms rather than on what the normal protein does. But we don't really know what that normal function is yet, so we have had little choice. A recent paper described one new function for the normal APP protein, which is as a receptor for a family of proteins called WNT (for wingless integration site, an obscure derivation combining findings from fly and mouse genetics). APP had long been known to interact with WNT functions, and a reduction of WNT signaling is one of the pathologic (and possibly pathogenic) hallmarks of Alzheimer's, but this seems to be the first time it has been tabbed as a direct receptor for WNT.

What is WNT? These proteins track back to the dawn of multicelled animals, where they first appear to orchestrate the migration and communication of cells of the blastopore. This is the invagination that performs the transition (gastrulation) from an egg-derived ball of cells to the sheets of what will become the endoderm and mesoderm on the inside, and the ectoderm on the outside. The endoderm becomes the gut and respiratory organs, the mesoderm becomes the skeleton, muscles, blood, heart, and connective tissue, and the ectoderm becomes the skin and nervous system. WNT proteins are ligands expressed by one set of cells, and their receptors (Frizzled and a few other proteins) are expressed on other cells nearby, which need to relate to them for some developmental, migration, identification, or other purpose. One other system, the NOTCH cell surface receptors and their respective ligands, has a similar evolutionary history and likewise functions as a core developmental cell-cell signaling and identification system. 

Rough structure of the APP protein. The membrane spanning portion is in teal at the bottom, showing also some key secretase protease cleavage sites, which liberate alpha and beta portions of the protein. The internal segment is at the very bottom, and functions, when cleaved from the rest of the protein, as a nuclear transcription activator. Above are various extracellular domains, including one for "ligand binding", which is thought by at least one research group to bind WNT. The dimerization domain can bind other APP proteins on other cells, as well as heparin, another binding partner and a common component of the extracellular environment.

Fast forward a billion years, and WNT family members are deeply involved in many decisions during animal development and afterwards, particularly in the brain, controlling nerve cell branching and synapse formation in adults. WNT, NOTCH, and APP are each ligand+receptor systems, where a ligand from one cell, or in soluble form, binds to a receptor on the surface of another cell, which "receives" the signal and can do a multitude of things in response. The usual receptors for WNT are the family of Frizzled proteins plus a bunch of other helper proteins, the ligands for the NOTCH receptors are the Delta and Jagged proteins, and the APP protein is itself a receptor whose ligand has till now been unclear, though it can homodimerize, detecting APP on other cells. APP is a large protein, and one of its responses to signals is to be cleaved in several ways. Its short cell-interior tail can be cleaved (by gamma secretase), upon which that piece travels to the nucleus and, with other proteins, acts as a transcription regulator, activating, among other genes, its own gene, APP. Another possible cleavage is done by alpha secretase, causing the release of soluble APP alpha (sAPPα), which has pro-survival activities for neurons and protects them against excessive activity (excito-toxicity). Lastly, beta secretase can cleave APP to release the toxic amyloid beta (Aβ), which in tiny amounts is also neuro-protective, but in larger amounts is highly toxic to neurons, starting the spiral of death which characterizes the hollowing out of the brain in Alzheimer's disease.

The cleavages by alpha secretase and beta secretase are mutually exclusive- the cleavage sites and products overlap, so cleavage by one prevents cleavage by the other, or destroys its product. And WNT signaling plays an important role in which route is chosen. WNT signals by two methods, called canonical and non-canonical, depending on which receptor and which ligand meet. Canonical signaling is neuro-protective, opposed to Alzheimer's development, and leads to alpha secretase cleavage. Non-canonical signaling tends to the opposite, leading to internalization of APP from the surface and to beta secretase cleavage, which needs the acidic conditions found in the internal endosomes where APP ends up. So the balance of WNT "tone" is critical, and is part of the miscellaneous other risk factors that make up the background for Alzheimer's disease. Additionally, cleavage by gamma secretase is needed following cleavage by beta secretase in order to make the final forms of amyloid beta. The catalytic subunit of gamma secretase is encoded by PSEN1 (presenilin-1), mutations in which are the leading genetic cause of familial Alzheimer's disease. Yet these mutations have no clear relation with the activity of the resulting gamma secretase or the accumulation of particular APP cleavage forms, so this area of causality research remains open and active.

But getting back to the WNT story, if APP is itself a WNT receptor, then that reinforces the centrality of WNT signaling in this syndrome. Indeed, attempts to treat Alzheimer's by reducing the toxic amyloid (Aβ) buildup in various ways have not been successful, so researchers have been looking for causal factors antecedent to that stage. One clue is that a key WNT inhibitor, DKK (for Dickkopf, a German coinage- the field has had some prominent German practitioners), has experimentally been an effective therapy in mice carrying a model form of Alzheimer's. DKK is an inhibitor of the canonical WNT pathway (via LRP6, a co-receptor of Frizzled), shunting signaling towards the non-canonical route. This balance, or "tone", of WNT signaling seems to have broad effects in promoting neurite outgrowth and synapse formation, or the reverse. Once this balance is lost, amyloid beta induces the production of more DKK, which starts a non-virtuous feedback cycle that may form the core of Alzheimer's pathology. This cycle could be started by numerous genetic defects and influenced by other environmental risk factors, leading to the confusing nature of the syndrome (no pun intended!). And of course the cycle starts long before symptoms are apparent, and even longer before autopsy can verify what happened, so getting to the bottom of this story has been hugely frustrating.
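The dynamics of such a feedback loop are easy to caricature in code. The following toy (all numbers invented, and not anyone's actual model) just shows the qualitative point: when the mutual reinforcement between amyloid beta and DKK is weak, a small insult fades away; when it is strong, the same insult feeds on itself.

    # Cartoon of a mutual-reinforcement loop between amyloid beta (Abeta) and DKK.
    # Parameters are invented; only the qualitative behavior matters here.

    def run(loop_gain, steps=20):
        abeta, dkk = 1.0, 0.0            # arbitrary units; start with a small Abeta insult
        for _ in range(steps):
            dkk = 0.5 * dkk + loop_gain * abeta    # Abeta induces DKK expression
            abeta = 0.5 * abeta + loop_gain * dkk  # DKK shifts cleavage toward more Abeta
        return abeta

    for gain in (0.2, 0.6):   # weak vs strong reinforcement (intact vs lost WNT tone)
        print(f"loop gain {gain}: amyloid beta after 20 steps = {run(gain):.3g}")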


  • Even Forbes is covering these molecular details these days.
  • A new low for the US- as a sleazy tax haven.
  • No hypocrisy at the Bible museum!
  • Senator from coal is now in control.
  • Facebook has merely learned from the colleagues at FOX- the Sith network.
  • But does add its own wrinkles.
  • Bill Mitchell on the Australian central bank accounts.