Showing posts with label cell biology. Show all posts

Saturday, May 6, 2023

The Development of Metamorphosis

Adulting as a fly involves a lot of re-organization.

Humans undergo a slight metamorphosis during adolescence. Imagine undergoing pupation like insects do and coming out with a totally new body, with wings! Well, Kafka did, and it wasn't very pleasant. But insects do it all the time, and have been doing it for hundreds of millions of years, taking to the air and dominating the biosphere. What goes on during metamorphosis, how complete is its refashioning of the body, and how did it evolve? A recent paper (review) considered in detail how the brains of insects change during metamorphosis, finding a curious blend of birth, destruction, and reprogramming among their neurons.

Time is on the Y axis, and the emergence of later, more advanced types of insects is on the X axis. This shows the progressive elaboration of non-metamorphosing (ametabolous), partially metamorphosing (hemimetabolous), and fully metamorphosing (holometabolous) forms. Dragonflies are only partially metamorphosing in this scheme, though their adult forms are often highly different from their larval (nymph) form.


Insects evolved from crustaceans, and took to land as small silverfish-like creatures with exoskeletons, roughly 450 million years ago. Over the next 100 million years, they developed the process of metamorphosis as a way to preserve the benefits of their original lifestyle for early development, in moist locations, while conquering the air and distance as adults. Early insect types are termed ametabolous, meaning that they have no metamorphosis at all, developing straight from eggs to an adult-style form. These go through several molts to accommodate growth, but don't redesign their bodies. Next came hemimetabolous development, which is exemplified by grasshoppers and cockroaches, as well as dragonflies, which significantly refashion themselves during the last molt, gaining wings. In the nymph stage, those wings are carried around as small patches of flat embryonic tissue, and then suddenly grow out at the last molt. Dragonflies are extreme, and most hemimetabolous insects don't undergo such dramatic change. Last came holometabolous development, which involves pupation and a total redesign of the body that can go from a caterpillar to a butterfly.

The benefit of having wings is pretty clear- it allows huge increases in range for feeding and mating. Dragonflies are premier flying predators. But to a larva wallowing in fruit juice or leaf sap, or underwater as dragonfly nymphs are, wings and long legs would be a hindrance. This conundrum led to the innovation of metamorphosis, based on the already somewhat dramatic practice of periodically molting off the exoskeleton. If one can grow a whole new skeleton, why not put wings on it, or legs? And metamorphosis has been tremendously successful, used by over 98% of insect species.

The adult insect tissues do not come from nowhere- they are set up as arrested embryonic tissues called imaginal discs. These are small patches that exist in the larva at specific positions. During pupation, while much of the rest of the body refashions itself, imaginal discs rapidly develop into future tissues like wings, legs, genitalia, antennas, and new mouth parts. These discs have a fascinating internal structure that prefigures the future organ. The leg disc is concentrically arranged with the more distant future parts (toes) at its center. Transplanting a disc from one insect to another or one place to another doesn't change its trajectory- it will still become a leg wherever it is put. So it is apparent that the larval stage is an intermediate stage of organismal development, where a bunch of adult features are primed but put on hold, while a simpler and much more primitive larval body plan is executed to accommodate its role in early growth and its niche in tight, moist, hidden places.

The new paper focuses on the brain, which larvae need as well as adults. So the question is- how does the one brain develop from the other? Is the larval brain thrown away? The answer is that no, the brain is not thrown away at all, but undergoes its own quite dramatic metamorphosis. The adult brain is substantially bigger, so many neurons are added. A few neurons are also killed off. But most of the larval neurons are reprogrammed, trimmed back and regrown out to new regions to do new functions.

In this figure, the neurons are named as mushroom body outgoing neuron (MBON) or dopaminergic neuron (DAN, also MBIN for incoming mushroom body neuron), mushroom body extrinsic neuron to calyx (MBE-CA), and mushroom body protocerebral posterior lateral 1 (PPL1). MBON-c1 is totally reprogrammed to serve new locations in the adult, MBON-d1 changes its projections substantially, as do the (teal) incoming neurons, and MBON-12 was not operational in the larval stage at all.

The mushroom body, which is the brain area these authors focus on, is situated below the antennas and mediates smell reception, learning, and memory. Fly biologists regard it as analogous to our cortex- the most flexible area of the brain. Larvae don't have antennas, so their smell/taste reception is a lot more primitive. The mushroom body in Drosophila has about a hundred neurons at first, and continuously adds neurons over larval life, with a big push during pupation, ending up with ~2200 neurons in adults. Obviously this has to wire into the antennas as they develop, for instance.

The authors find that, for instance, no direct connections between input and output neurons of the mushroom body (MBIN and MBON, respectively) survive from larval to adult stages. Thus there can be no simple memories of this kind preserved between these life stages. While there are some signs of memory retention for a few things in flies, for the most part the slate is wiped clean. 

"These MBONs [making feedback connections] are more highly interconnected in their adult configuration compared to their larval one: their adult configuration shows 13 connections (31% of possible connections), while their larval configuration has only 7 (17%). Importantly, only three of these connections (7%) are present in both larva and adult. This percentage is similar to the 5% predicted if the two stages were wired up independently at their respective frequencies."


Interestingly, no neuron changed its type- that is, which neurotransmitter it uses to communicate. So, while pruning and rewiring was pervasive, the cells did not fundamentally change their stripes. All this is driven by the hormonal system (juvenile hormone, which blocks adult development, and ecdysone, which drives molting and, in the absence of juvenile hormone, pupation), which in turn drives a program of transcription factors that direct the genes needed for development. While a great deal is known about neuronal pathfinding and development, this paper doesn't comment on those downstream events- how it is that selected neurons are pruned, turned around, and induced to branch out in totally new directions, for instance. That will be the topic of future work.


  • Corrupt business practices. Why is this lawful?
  • Why such easy bankruptcy for corporations, but not for poor countries?
  • Watch the world's mesmerizing shipping.
  • Oh, you want that? Let me jack up the price for you.
  • What transgender is like.
  • "China has arguably been the biggest beneficiary of the U.S. security system in Asia, which ensured the regional stability that made possible the income-boosting flows of trade and investment that propelled the country’s economic miracle. Today, however, General Secretary of the Chinese Communist Party Xi Jinping claims that China’s model of modernization is an alternative to “Westernization,” not a prime example of its benefits."

Saturday, April 8, 2023

Molecules That See

Being trans is OK: retinal and the first event of vision.

Our vision is incredible. If I were not looking right now and experiencing it myself, it would be unbelievable that a biological system made up of motley molecules could accomplish the speed, acuity, and color that our visual system provides. It was certainly a sticking point for creationists, who found (and perhaps still find) it incredible that nature alone can explain it, not to mention its genesis out of the mists of evolutionary time. But science has been plugging away, filling in the details of the pathway, which so far appear to arise by natural means. Where consciousness fits in has yet to be figured out, but everything else is increasingly well accounted for.

It all starts in the eye, which has a curiously backward sheet of tissue at the back- the retina. Its nerves and blood vessels are on the surface, and after light gets through those, it hits the photoreceptor cells at the rear. These photoreceptor cells come in two types, rods (non-color sensitive) and cones (sensitive to either red, green, or blue). The photoreceptor cells have a highly polarized and complicated structure, where photosensitive pigments are bottom-most in a dense stack of membranes. Above these is a segment where the mitochondria reside, which provide power, as vision needs a lot of energy. Above these is the nucleus of the cell (the brains of the operation) and top-most is the synaptic output to the rest of the nervous system- to those nerves that network on the outside of the retina. 

A single photoreceptor cell, with the outer segment at the very back of the retina, and other elements in front.

Facing the photoreceptor membranes at the bottom of the retina is the retinal pigment epithelium, which is black with melanin. This is finally where light stops, and it also has very important functions in supporting the photoreceptor cells by buffering their ionic, metabolic, and immune environment, and phagocytosing and digesting photoreceptor membranes as they get photo-oxidized, damaged, and sloughed off. Finally, inside the photoreceptor cells are the pigment membranes, which harbor the photo-sensitive protein rhodopsin, which in turn hosts the sensing pigment, retinal. Retinal is a vitamin A-derived long-chain molecule that is bound inside rhodopsin or within other opsins, which confer slightly shifted color sensitivities.

These opsins transform the tickle that retinal receives from a photon into a conformational change that they, as GPCRs (G-protein coupled receptors), transmit to their G-protein, transducin. For each photon coming in, about 50 transducin molecules are activated. Each activated transducin alpha subunit then induces its target, cGMP phosphodiesterase, to consume about 1000 cGMP molecules. The local drop in cGMP concentration then closes the cGMP-gated cation channels in the photoreceptor cell membrane, which starts the electrical impulse that travels out to the synapse and nervous system. This amplification series, along with the high density of the retinal/opsin molecules packed into the photoreceptor membranes, provides the exquisite sensitivity that allows single photons to be detected by the system.
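The arithmetic of this cascade is worth spelling out, using only the order-of-magnitude figures quoted above:

```python
# Rough amplification arithmetic for the first steps of phototransduction,
# using the two order-of-magnitude figures quoted in the text.

transducin_per_photon = 50   # activated transducins per absorbed photon
cgmp_per_transducin = 1000   # cGMP consumed per activated phosphodiesterase

cgmp_per_photon = transducin_per_photon * cgmp_per_transducin
print(f"~{cgmp_per_photon:,} cGMP molecules consumed per photon")  # ~50,000
```

A single photon thus swings the concentration of tens of thousands of messenger molecules, which is why the downstream channels can respond reliably to one photon's worth of light.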

Retinal, used in all photoreceptor cell types. Light causes the cis-form to kick over to the trans form, which is more stable.

The central position of retinal has long been understood, as has the key transition that a photon induces, from cis-retinal to all-trans retinal. Cis-retinal has a kink in the middle, where its double bond in the center of the fatty chain forms a "C" instead of a "W", swinging around the 3-carbon end of the chain. All-trans retinal is a sort of default state, while the cis-structure is the "cocked" state- stable but susceptible to triggering by light. Interestingly, retinal can not be reset to the cis-state while still in the opsin protein. It has to be extracted, sent off to a series of at least three different enzymes to be re-cocked. It is alarming, really, to consider the complexity of all this.

A recent paper (review) provided the first look at what actually happens to retinal at the moment of activation. This is, understandably, a very fast process, and femtosecond x-ray analysis needed to be brought in to look at it. Not only that, but as described above, once retinal flips from the dark to the light-activated state, it never reverses by itself. So every molecule or crystal used in the analysis can only be used once- no second looks are possible. The authors used a spray-crystallography system where protein crystals suspended in liquid were shot into a super-fine and fast X-ray beam, just after passing by an optical laser that activated the retinal. Computers are now helpful enough that the diffractions from these passing crystals, thrown off in all directions, can be usefully collected. In the past, crystals were painstakingly positioned on goniometers at the center of large detectors, and other issues predominated, such as how to keep such crystals cold for chemical stability. The question here was what happens in the femto- and pico-seconds after optical light absorption by retinal, ensconced in its (temporary) rhodopsin protein home.

Soon after activation, at one picosecond, retinal has squirmed around, altering many contacts with its protein. The cis (dark) conformation is shown in red, while the just-activated (all-trans) form is in yellow. The PSB site on the far end of the fatty chain (right) is secured against the rhodopsin host, as is the retinal ring (left side), leaving the middle of the molecule to convey most of the shape change, a bit like a bicycle pedal.

And what happens? As expected, the retinal molecule twists from cis to trans, causing the protein contacts to shift. The retinal shift happens by 200 femtoseconds, and the knock-on effects through the protein are finished by 100 picoseconds. It all makes a nanosecond seem impossibly long! As imaged above, the shape shift of retinal changes a series of contacts it has with the rhodopsin protein, inducing it to change shape as well. The two ends of the retinal molecule seem to be relatively tacked down, leaving the middle, where the shape change happens, to do most of the work. 

"One picosecond after light activation, rhodopsin has reached the red-shifted Batho-Rh intermediate. Already by this early stage of activation, the twisted retinal is freed from many of its interactions with the binding pocket while structural perturbations radiate away as a transient anisotropic breathing motion that is almost entirely decayed by 100 ps. Other subtle and transient structural rearrangements within the protein arise in important regions for GPCR activation and bear similarities to those observed by TR-SFX during photoactivation of seven-TM helix retinal-binding proteins from bacteria and archaea."

All this speed is naturally lost in the later phases, which take many milliseconds to send signals to the brain, discern movement and shape, to identify objects in the scene, and do all the other processing needed before consciousness can make any sense of it. But it is nice to know how elegant and uniform the opening scene in this drama is.


  • Down with lead.
  • Medicare advantage, cont.
  • Ukraine, cont.
  • What the heck is going on in Wisconsin?
  • Graph of the week- world power needs from solar, modeled to 2050. We are only scratching the surface so far.



Saturday, December 31, 2022

Hand-Waving to God

A decade on, the Discovery Institute is still cranking out skepticism, diversion, and obfuscation.

A post a couple of weeks ago mentioned that the Discovery Institute offered a knowledgeable critique of the lineages of the Ediacaran fauna. They have raised their scientific game significantly, and so I wanted to review what they are doing these days, focusing on two of their most recent papers. The Discovery Institute has a lineage of its own, from creationism. It has adapted to the derision that ensued by retreating to "intelligent design", which is creationism without naming the creators, nailing down the schedule of creation, or providing any detail of how and from where creation operates. Their review of the Ediacaran fauna raised some highly skeptical points about whether these organisms were animals or not. Particularly, they suggested that cholesterol is not really restricted to animals, so the chemical traces of cholesterol that were so clearly found in the Dickinsonia fossil layers might not really mean that these were animals- they might also be unusual protists of gigantic size, or odd plant forms, etc. While the critique is not unreasonable, it does not alter the balance of the evidence, which does indeed point to an animal affinity. These fauna are so primitive and distant that it is fair to say that we can not be sure, and particularly we can not be sure that they had any direct ancestral relationship to any later organisms of the ensuing Cambrian period, when recognizable animals emerged.

Fair enough. But what of their larger point? The Discovery Institute is trying to make the point, I believe, about the suddenness of early Cambrian evolution of animals, and thus its implausibility under conventional evolutionary theory. But we are traversing tens of millions of years through these intervals, which is a long time, even in evolutionary terms. Secondly, the Ediacaran period, though now represented by several exquisite fossil beds, spanned a hundred million years and is still far from completely characterized paleontologically, even supposing that early true animals would have fossilized, rather than being infinitesimal and very soft-bodied. So the Cambrian biota could easily have predecessors in the Ediacaran that may or may not yet have been observed- it is as yet not easy to say. But what we can not claim is the negative, that no predecessors existed before some time X- say the 540 MYA point at the base of the Cambrian. So the implication that the Discovery Institute is attempting to suggest has very little merit, particularly since everything that they themselves cite about the molecular and paleontological sequence is so clearly progressive and in proper time sequence, in complete accord with the overall theory of evolution.

For we should always keep in mind that an intelligent designer has a free hand, and can make all of life in a day (or in six, if absolutely needed). The fact that this designer works in the shadows of slightly altered mutation rates, or in a few million years rather than twenty million, and never puts fossils out of sequence in the sedimentary record, is an acknowledgement that this designer is a bit dull, and bears a strong resemblance to evolution by natural selection. To put it in psychological terms, the institute is in the "negotiation" stage of grief- over the death of god.

Saturday, December 24, 2022

Brain Waves: Gaining Coherence

Current thinking about communication in the brain: the Communication Through Coherence framework.

Eyes are windows to the soul. They are visible outposts of the brain that convey outwards what we are thinking, as they gather in the riches of our visible surroundings. One of their less appreciated characteristics is that they flit from place to place as we observe a scene, never resting in one spot. These movements are called saccades, and they represent an involuntary redirection of attention all over a visual scene that we are studying, in order to gather high resolution impressions from places of interest. Saccades happen at a variety of intervals, centered around 0.1 second. And just as the raster scanning of a TV or monitor can tell us something about how it or its signal works, the eye saccade is thought, by the theory presented below, to reflect a theta rhythm in the brain that is responsible for resetting attention- here, in the visual system.

That theory is Communication Through Coherence (CTC), which appears to be the dominant theory of how neural oscillations (aka brain waves) function. (This post is part of what seems like a yearly series of updates on the progress in neuroscience in deciphering what brain waves do, and how the brain works generally.) This paper appeared in 2014, but it expressed ideas that were floating around for a long time, and has since been taken up by numerous other groups that provide empirical and modeling support. A recent paper (titled "Phase-locking patterns underlying effective communication in exact firing rate models of neural networks") offers full-throated support from a computer modeling perspective, for instance. But I would like to go back and explore the details of the theory itself.

The communication part of the theory is how thoughts get communicated within the brain. Communication and processing are simultaneous in the brain, since it is physically arranged to connect processing chains (such as visual processing) together as cells that communicate consecutively, for example creating increasingly abstract representations during sensory processing. While the anatomy of the brain is pretty well set in a static way, it is the dynamic communication among cells and regions of the brain that generates our unconscious and conscious mental lives. Not all parts can be talking at the same time- that would be chaos. So there must be some way to control mental activity to manageable levels of communication. That is where coherence comes in. The theory (and a great deal of observation) posits that gamma waves in the brain, which run from about 30 Hz upwards all the way to 200 Hz, link together neurons and larger assemblages / regions into transient co-firing coalitions that send thoughts from one place to another, precisely and rapidly, insulated from the noise of other inputs. This is best studied in the visual system which has a reasonably well-understood and regimented processing system that progresses from V1 through V4 levels of increasing visual field size and abstraction, and out to cortical areas of cognition.

The basis of brain waves is that neural firing is rapid, and is followed by a refractory period where the neuron is resistant to another input, for a few milliseconds. Then it can fire again, and will do so if there are enough inputs to its dendrites. There are also inhibitory cells all over the neural system, dampening down the system so that it is tuned to not run to epileptic extremes of universal activation. So if one set of cells entrains the next set of cells in a rhythmic firing pattern, those cells tend to stay entrained for a while, and then get reset by way of slower oscillations, such as the theta rhythm, which runs at about 4-8 Hz. Those entrained cells are, at their refractory periods, also resistant to inputs that are not synchronized, essentially blocking out noise. In this way trains of signals can selectively travel up from lower processing levels to higher ones, over large distances and over multiple cell connections in the brain.

An interesting part of the theory is that frequency is very important. There is a big difference between slower and faster entraining gamma rhythms. Ones that run slower than the going rate do not get traction and die out, while those that run faster hit the optimal post-refractory excitable state of the receiving cells, and tend to gain traction in entraining them downstream. This sets up a hierarchy where increasing salience, whether established through intrinsic inputs, or through top-down attention, can be encoded in higher, stronger gamma frequencies, winning this race to entrain downstream cells. This explains to some degree why EEG patterns of the brain are so busy and chaotic at the gamma wave level. There are always competing processes going on, with coalitions forming and reforming in various frequencies of this wave, chasing their tails as they compete for salience.
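This race between faster and slower rhythms can be illustrated with a toy simulation- not the CTC model itself, just a caricature of its timing argument. A downstream cell fires whenever an input spike arrives after its refractory period has elapsed; all the numbers here (15 ms refractory period, 16 ms vs 22 ms input periods, the phase offsets) are illustrative choices, not parameters from the paper:

```python
# Toy illustration of the CTC claim that a slightly faster input rhythm
# wins the race to entrain a downstream cell. The cell fires whenever an
# input spike arrives after its refractory period has elapsed. All numbers
# (15 ms refractory period, 16 ms vs 22 ms input periods) are illustrative.

REFRACTORY_MS = 15.0

def spike_train(period_ms, phase_ms, duration_ms):
    """Yield spike times for a perfectly regular input train."""
    t = phase_ms
    while t < duration_ms:
        yield t
        t += period_ms

def compete(duration_ms=1000.0):
    # Label each spike by its source train, merge, and sort by time.
    fast = [(t, "fast") for t in spike_train(16.0, 0.0, duration_ms)]
    slow = [(t, "slow") for t in spike_train(22.0, 5.0, duration_ms)]
    wins = {"fast": 0, "slow": 0}
    last_fire = float("-inf")
    for t, src in sorted(fast + slow):
        if t - last_fire > REFRACTORY_MS:   # cell is excitable again
            last_fire = t
            wins[src] += 1                  # this train drove the spike
    return wins

print(compete())  # → {'fast': 63, 'slow': 0}
```

The faster train arrives just as the refractory window closes and entrains the cell completely, while the slower train's spikes keep landing inside the refractory window and are blocked- the "winner-take-all" flavor of the competition described above.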

There are often bidirectional processes in the brain, where downstream units talk back to upstream ones. While originally imagined to be bidirectionally entrained in the same gamma rhythm, the CTC theory now recognizes that the distance / lag in signaling would make this impossible, and separates them as distinct streams, observing that the cellular targets of backwards streams are typically not identical to those generating the forward streams. So a one-cycle offset, with a few intermediate cells, would account for this type of interaction, still in gamma rhythm.

Lastly, attention remains an important focus of this theory, so to speak. How are inputs chosen, if not by their intrinsic salience, such as flashes in a visual scene? How does a top-down, intentional search of a visual scene, or a desire to remember an event, work? CTC posits that two other wave patterns are operative. First is the theta rhythm of about 4-8 Hz, which is slow enough to encompass many gamma cycles and offer a reset to the system, overpowering other waves with its inhibitory phase. The idea is that salience needs to be re-established each theta cycle freshly, (such as in eye saccades), with maybe a dozen gamma cycles within each theta that can grow and entrain necessary higher level processing. Note how this agrees with our internal sense of thoughts flowing and flitting about, with our attention rapidly darting from one thing to the next.
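The "dozen gamma cycles per theta cycle" figure follows directly from the frequencies quoted above; picking illustrative mid-range values:

```python
# How many gamma cycles nest inside one theta cycle, using the frequency
# ranges quoted in the text (theta 4-8 Hz, gamma ~30-200 Hz).
# The specific values chosen are illustrative mid-range picks.

theta_hz, gamma_hz = 5, 60
print(gamma_hz / theta_hz)   # → 12.0 gamma cycles per theta cycle
```

So each theta-paced "reset" leaves room for roughly a dozen rounds of gamma-scale competition before the slate is wiped again.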

"The experimental evidence presented and the considerations discussed so far suggest that top-down attentional influences are mediated by beta-band synchronization, that the selective communication of the attended stimulus is implemented by gamma-band synchronization, and that gamma is rhythmically reset by a 4 Hz theta rhythm."

Attention itself, as a large-scale backward flowing process, is hypothesized to operate in the alpha/beta bands of oscillations, about 8 - 30 Hz. It reaches backward over distinct connections (indeed, distinct anatomical layers of the cortex) from the forward connections, into lower areas of processing, such as locations in the visual scene, or colors sought after, or a position on a page of text. This slower rhythm could entrain selected lower level regions, setting some to have in-phase and stronger gamma rhythms vs other areas not activated in this way. Why the theta and the alpha/beta rhythms have dramatically different properties is not dwelt on by this paper. One can speculate that each can entrain other areas of the brain, but the theta rhythm is long and strong enough to squelch ongoing gamma rhythms and start many off at the same time in a new competitive race, while the alpha/beta rhythms are brief enough, and perhaps weak enough and focused enough, to start off new gamma rhythms in selected regions that quickly form winning coalitions heading upstream.

Experiments on the nature of attention. The stimulus shown to a subject (probably a monkey) is in A. In E, the monkey was trained to attend to the same spots as in A, even though both were visible. V1 refers to the lowest level of the visual processing area of the brain, which shows activity when stimulated (B, F) whether or not attention is paid to the stimulus. On the other hand, V4 is a much higher level in the visual processing system, subject to control by attention. There, (C, G), the gamma rhythm shows clearly that only one stimulus is being fielded.

The paper discussing this hypothesis cites a great deal of supporting empirical work, and much more has accumulated in the ensuing eight years. While plenty of loose ends remain and we can not yet visualize this mechanism in real time, (though faster MRI is on the horizon), this seems the leading hypothesis that both explains the significance and prevalence of neural oscillations, and goes some distance to explaining mental processing in general, including abstraction, binding, and attention. Progress has not been made by great theoretical leaps by any one person or institution, but rather by the slow process of accumulation of research that is extremely difficult to do, but of such great interest that there are people dedicated enough to do it (with or without the willing cooperation of countless poor animals) and agencies willing to fund it.


  • Local media is a different world now.
  • Florida may not be a viable place to live.
  • Google is god.

Saturday, December 3, 2022

Senescent, Cancerous Cannibals

Tumor cells not only escape normal cell proliferation controls, but some of them eat nearby cells.

Our cells live in an uneasy truce. Cooperation is prized and specialization into different cell types, tissues, and organs is pervasive. But deep down, each cell wants to grow and survive, prompting many mechanisms of control, such as cell suicide (apoptosis) and immunological surveillance (macrophages, killer T-cells). Cancer is the ultimate betrayal, not only showing disregard for the ruling order, but in its worst forms killing the whole organism in a pointless drive for growth.

A fascinating control mechanism that has come to prominence recently is cellular senescence. In petri dishes, cells can only be goosed along for a few dozen cycles of division until they give out, and become senescent. Which is to say, they cease replicating but remain alive. It was first thought that this was another mechanism to keep cancer under control, restricting replication to "stem" cells and their recent progeny. But a lot of confusing and interesting observations indicate that the deeper meaning of senescence lies in development, where it appears to function as an alternate form of cell suicide, delayed so that tissues are less disrupted. 

Apoptosis is used very widely during development to reshape tissues, and senescence is used extensively as well in these programs. Senescent cells are far from quiescent, however. They have high metabolic activity and are particularly notorious for secreting a witches' brew of inflammatory cytokines and other proteins- the senescence-associated secretory phenotype, or SASP. In the normal course of events, this attracts immune system cells which initiate repair and clearance operations that remove the senescent cells and make sure the tissue remains on track to fulfill its program. These SASP products can turn nearby cells to senescence as well, and form an inflammatory micro-environment that, if resolved rapidly, is harmless, but if persistent, can lead to bad, even cancerous local outcomes. 

The significance of senescent cells has been highlighted in aging, where they are found to be immensely influential. To quote the wiki site:

"Transplantation of only a few (1 per 10,000) senescent cells into lean middle-aged mice was shown to be sufficient to induce frailty, early onset of aging-associated diseases, and premature death."

The logic behind all this seems to be another curse of aging: while we are young, senescent cells are cleared with very high efficiency. But as the immune system ages, a very small proportion of senescent cells are missed. These escapees are, evolutionarily speaking, an afterthought, but they gradually accumulate with age and powerfully push the aging process along- not least through the chronic inflammation we are so anxious to reduce. A quest for "senolytic" therapies to clear senescent cells is becoming a big theme in academia and the drug industry and may eventually have very significant benefits. 

Another odd property of senescent cells is that their program, and the effects they have on nearby cells, partially resemble those of stem cells. That is, the prevention of cell death is a common property, as is the evasion of certain controls that enforce differentiation. This brings us to tumor cells, which frequently enter senescence under stress, like that of chemotherapy. This fate is highly ambivalent. It would have been better for such cells to die outright, of course. Most senescent tumor cells stay in senescence, which is bad enough, given their SASP effects on the local environment. But a few tumor cells emerge from senescence, (whether due to further mutations or other sporadic properties is as yet unknown), and they do so with more stem-like character that makes them more proliferative and malignant.

A recent paper offered a new wrinkle on this situation, finding that senescent tumor cells have a novel property- that of eating neighboring cells. As mentioned above, senescent cells have high metabolic demands, as do tumor cells, so finding enough food is always an issue. But in the normal body, only very few cells are empowered to eat other cells- i.e. those of the immune system. To find other cells doing this is highly unusual, interesting, and disturbing. It is one more item in the list of bad things that happen when senescence and cancer combine forces.

A senescent tumor cell (green) phagocytoses and digests a normal cell (red).


  • Shockingly, some people are decent.
  • Tangling with the medical system carries large risks.
  • Is stem cell therapy a thing?
  • Keep cats indoors.

Saturday, November 5, 2022

LPS: Bacterial Shield and Weapon

Some special properties of the super-antigen lipopolysaccharide.

Bacteria don't have it easy in their tiny Brownian world. While we have evolved large size and cleverness, they have evolved miracles of chemistry and miniaturization. One of the key classifications of bacteria is between Gram positive and negative, which refers to the Gram stain. This chemical stains the peptidoglycan layer of all bacteria, which is their "cell wall", wrapped around the cell membrane and providing structural support against osmotic pressure. For many bacteria, a heavy layer of peptidoglycan is all they have on the outside, and they stain strongly with the Gram stain. But other bacteria, like the paradigmatic E. coli, stain weakly, because they have a thin layer of peptidoglycan, outside of which is another membrane, the outer membrane (OM, whereas the inner membrane is abbreviated IM).

Structure of the core of LPS, not showing the further "poly" saccharide tails that would go upwards, hitched to the red sugars. At bottom are the lipid tails that form a strong membrane barrier. These, plus the blue sugar core, form the lipid-A structure that is highly antigenic.

This outer membrane doesn't do much osmotic regulation or active nutrient trafficking, but it does face the outside world, and for that, Gram-negative bacteria have developed a peculiar membrane component called lipopolysaccharide, or LPS for short. The outer membrane is asymmetric, with normal phospholipids used for the inner leaflet, and LPS used for the outside leaflet. Maintaining such asymmetry is not easy, requiring special "flippases" that know which side is which and continually send the right lipid type to its correct side. LPS is totally different from other membrane lipids, using a two-sugar core to hang six lipid tails (a structure called lipid-A), which is then decorated with chains of additional sugars (the polysaccharide part) going off in the other direction, towards the outside world.

The long, strange trip that LPS takes to its destination. Synthesis starts on the inner leaflet of the inner membrane, at the cytoplasm of the bacterial cell. The lipid-A core is then flipped over to the outer leaflet, where extra sugar units are added, sometimes in great profusion. Then a train of proteins (LptA through LptG) extract the enormous LPS molecule out of the inner membrane, ferry it through the periplasm, through the peptidoglycan layer, and through to the outer leaflet of the outer membrane.

A recent paper provided the structural explanation behind one transporter, LptDE, from Neisseria gonorrhoeae. This is the protein that receives LPS after its synthesis inside the cell and its prior transport through the inner membrane and inter-membrane space (including the peptidoglycan layer), and places LPS on the outer leaflet of the outer membrane. It is an enormous barrel, with a dynamic crack in its side where LPS can squeeze out, to the right location. It is a structure that neatly explains how directionality can be imposed on this transport, which is driven by ATP hydrolysis (by LptB) at the inner membrane, loading a sequence of transporters that send LPS outward.

Some structures of LptD (teal or red), and LPS (teal, lower) with LptE (yellow), an accessory protein that loads LPS into LptD. This structure is on the outer leaflet of the outer membrane, and releases LPS (bottom center) through its "lateral gate" into the right position to join other LPS molecules on the outer leaflet.

LPS shields Gram-negative bacteria from outside attack, particularly from antibiotics and antimicrobial peptides. These are molecules made by all sorts of organisms, from other bacteria to ourselves. The peptides typically insert themselves into bacterial membranes, assemble into pores, and kill the cell. LPS is resistant to this kind of attack, due to its different structure from normal phospholipids that have only two lipid tails each. Additionally, the large, charged sugar decorations outside fend off large hydrophobic compounds. LPS can be (and is) altered in many additional ways by chemical modifications, changes to the sugar decorations, extra lipid attachments, etc. to fend off newly evolved attacks. Thus LPS is the result of a slow motion arms race, and differs in its detailed composition between different species of bacteria. One way that LPS can be further modified is with extra hydrophobic groups such as lipids, to allow the bacteria to clump together into biofilms. These are increasingly understood as a key mode of pathogenesis that allow bacteria to both physically stick around in very dangerous places (such as catheters), and also form a further protective shield against attack, such as by antibiotics or whatever else their host throws at them.

In any case, the lipid-A core has been staunchly retained through evolution and forms a super-antigen that organisms such as ourselves have evolved to sense at incredibly low levels. We encode a small protein, called LY96 (or MD-2), that binds the lipid-A portion of LPS very specifically at infinitesimal concentrations, complexes with cell surface receptor TLR4, and sets off alarm bells through the immune system. Indeed, this chemical was originally called "endotoxin", because cholera bacteria, even after being killed, caused an enormous and toxic immune response- a response that was later, through painstaking purification and testing, isolated to the lipid-A molecule.

LPS (in red) as it is bound and recognized by human protein MD-2 (LY96) and its complex partner TLR4. TLR4 is one of our key immune system alarm bells, detecting LPS at picomolar levels. 

LPS is the kind of antigen that is usually great to detect with high sensitivity- we don't even notice that our immune system has found, moved in on, and resolved all sorts of minor infections. But if bacteria gain a foothold in the body and pump out a lot of this antigen, the result can be overwhelmingly different- cytokine storm, septic shock, and death. Rats and mice, for instance, have a fraction of our sensitivity to LPS, sparing them from systemic breakdown from common exposures brought on by their rather more gritty lifestyles.


  • Econometrics gets some critique.
  • Clickbait is always bad information, but that is the business model.
  • Monster bug wars.
  • Customer-blaming troll due to lose a great deal of money.

Saturday, October 15, 2022

From Geo-Logic to Bio-Logic

Why did ATP become the central energy currency and all-around utility molecule, at the origin of life?

The exploration of the solar system and astronomical objects beyond has been one of the greatest achievements of humanity, and of the US in particular. We should be proud of expanding humanity's knowledge using robotic spacecraft and space-based telescopes that have visited every planet and seen incredibly far out in space, and back in time. But one thing we have not found is life. The Earth is unique, and it is unlikely that we will ever find life elsewhere within traveling distance. While life may conceivably have landed on Earth from elsewhere, it is more probable that it originated here. Early Earth had as conducive conditions as anywhere we know of, to create the life that we see all around us: carbon-based, water-based, precious, organic life.

Figuring out how that happened has been a side-show in the course of molecular biology, whose funding is mostly premised on medical rationales, and of chemistry, whose funding is mostly industrial. But our research enterprise thankfully has a little room for basic research and fundamental questions, of which this is one of the most frustrating and esoteric, if philosophically meaningful. The field has coalesced in recent decades around the idea that oceanic hydrothermal vents provided some of the likeliest conditions for the origin of life, due to the various freebies they offer.

Early Earth, as today, had very active geology that generated a stream of hydrogen and other reduced compounds coming out of hydrothermal vents, among other places. There was no free oxygen, and conditions were generally reducing. Oxygen was bound up in rocks, water, and CO2. The geology is so reducing that water itself was and still is routinely reduced on its trip through the mantle by processes such as serpentinization.

The essential problem is how to jump the enormous gap from the logic of geology and chemistry, over to the logic of biology. It is not a question of raw energy- the earth has plenty of energetic processes, from volcanoes and tectonics to incoming solar energy. The question is how a natural process that has resolutely chemical logic, running down the usual chemical and physical gradients from lower to higher entropy, could have generated the kind of replicating and coding molecular system where biological logic starts. A paper from 2007 gives a broad and scrupulous overview of the field, featuring detailed arguments supporting the RNA world as the probable destination (from chemical origins) where biological logic really began. 

To rehearse very briefly, RNA has, and still retains in life today, both coding capacity and catalytic capacity, unifying in one molecule the most essential elements of life. So RNA is thought to have been the first molecule with truly biological ... logic, being replaced later with DNA for some of its more sedentary roles. But there is no way to get to even very short RNA molecules without some kind of metabolic support. There has to be an organic soup of energy and small organic molecules- some kind of pre-biological metabolism- to give this RNA something to do and chemical constituents to replicate itself out of. And that is the role of the hydrothermal vent system, which seems like a supportive environment. For the trick in biology is that not everything is coded explicitly. Brains are not planned out in the DNA down to their crenelations, and membranes are not given size and weight blueprints. Biology relies heavily on natural chemistry and other unbiological physical processes to channel its development and ongoing activity. The coding for all this, which seems so vast with our 3 Gb genome, is actually rather sparse, specifying some processes in exquisite detail, (large proteins, after billions of years of jury-rigging, agglomeration, and optimization), while leaving a tremendous amount still implicit in the natural physical course of events.

A rough sketch of the chemical forces and gradients at a vent. CO2 is reduced into various simple organic compounds at the rock interfaces, through the power of the incoming hydrogen rich (electron-rich) chemicals. Vents like this can persist for thousands of years.

So the origin of life does not have to build the plane from raw aluminum, as it were. It just has to explain how a piece of paper got crumpled in a peculiar way that allowed it to fly, after which evolution could take care of the rest of the optimization and elaboration. Less metaphorically, if a supportive chemical environment could spontaneously (in geo-chemical terms) produce an ongoing stream of reduced organic molecules like ATP and acyl groups and TCA cycle intermediates out of the ambient CO2, water, and other key elements common in rocks, then the leap to life is a lot less daunting. And hydrothermal vents do just that- they conduct a warm and consistent stream of chemically reduced (i.e. extra electrons) and chemical-rich fluid out of the sea floor, while gathering up the ambient CO2 (which was highly concentrated on the early Earth) and making it into a zoo of organic chemicals. They also host the iron and other minerals useful in catalytic conversions, which remain at the heart of key metabolic enzymes to this day. And they also contain bubble-like structures that could have confined and segregated all this activity in pre-cellular forms. In this way, they are thought to be the most probable locations where many of the ingredients of life were being generated for free, making the step over to biological logic much less daunting than was once thought.

The rTCA cycle, portrayed in the reverse from our oxidative version, as a cycle of compounds that spontaneously generate out of simple ingredients, due to their step-wise reduction and energy content values. The fact that the output (top) can be easily cleaved into the inputs provides a "metabolic" cycle that could exist in a reducing geological setting, without life or complicated enzymes.

The TCA cycle, for instance, is absolutely at the core of metabolism, a flow of small molecules that disassemble (or assemble, if run in reverse) small carbon compounds in stepwise fashion, eventually arriving back at the starting constituents, with only outputs (inputs) of hydrogen reduction power, CO2, and ATP. In our cells, we use it to oxidize (metabolize) organic compounds to extract energy. Its various stations also supply the inputs to innumerable other biosynthetic processes. But other organisms, admittedly rare in today's world, use it in the forward direction to create organic compounds from CO2, where it is called reductive or reverse (rTCA). An article from 2004 discusses how this latter cycle and set of compounds very likely predates any biological coding capacity, and represents an intrinsically natural flow of carbon reduction that would have been seen in a pre-biotic hydrothermal vent setting. 

What sparked my immediate interest in all this was a recent paper that described experiments focused on showing why ATP, of all the other bases and related chemicals, became such a central part of life's metabolism, including as a modern accessory to the TCA cycle. ATP is the major energy currency in cells, giving the extra push to thousands of enzymes, forming the cores of additional central metabolic cofactors like NAD (nicotinamide adenine dinucleotide) and acetyl-CoA (the A is for adenine), and participating as one of the bases of DNA and RNA in our genetic core processes. 

Of all nucleoside diphosphates, ADP is most easily converted to ATP under the very simple conditions of added acetyl phosphate and Fe3+ in water, at ambient temperatures or warmer. Note that the trace for ITP shows the same absorbance before and after the reaction; the other nucleotides likewise show no reaction. Panel F shows a time course of the ADP reaction, in hours. The X axis refers to the time of chromatography of the sample, not of the reaction.

Why ATP, and not the other bases, or other chemicals? Well, bases appear as early products out of pre-biotic reaction mixtures, so while somewhat complicated, they are a natural part of the milieu. The current work compares how phosphorylation of all the possible di-phosphate bases works, (that is, adenosine, cytidine, guanosine, inosine, and uridine diphosphates), using the plausible prebiotic ingredients ferric ion (Fe3+) and acetyl phosphate. They found, surprisingly, that only ADP can be productively converted to ATP in this setting, and the reaction was pretty insensitive to pH, other ions, etc. This was apparently due to ADP's special capability to coordinate Fe3+, via a purine ring nitrogen and the neighboring amino group, which allows easy electron transfer to the incoming phosphate group. Iron remains common as an enzymatic cofactor today, and it is obviously highly plausible in this free form as a critical catalyst in a pre-biotic setting. Likewise, acetyl phosphate could hardly be simpler, occurs naturally under prebiotic conditions, and remains an important element of bacterial metabolism (and transiently one of eukaryotic metabolism) today. 

Ferric iron and ADP make a unique mechanistic pairing that enables easy phosphorylation at the third position, making ATP out of ADP and acetyl phosphate. At step b, the incoming acetyl phosphate is coordinated by the amino group while the iron is coordinated by a purine nitrogen and the two existing phosphates.

The point of this paper was simply to reveal why ATP, of all the possible bases and related chemicals, gained its dominant position of core chemical and currency. It is rare in origin-of-life research to gain a definitive insight like this, amid the masses of speculation and modeling, however plausible. So this is a significant step ahead for the field, while it continues to refine its ideas of how this amazing transition took place. Whether it can demonstrate the spontaneous rTCA cycle in a reasonable experimental setting is perhaps the next significant question.


  • How China extorts celebrities, even Taiwanese celebrities, to toe the line.
  • Stay away from medicare advantage, unless you are very healthy, and will stay that way.
  • What to expect in retirement.

Saturday, September 17, 2022

Death at the Starting Line- Aneuploidy and Selfish Centromeres

Mammalian reproduction is unusually wasteful, due to some interesting processes and tradeoffs.

Now that we have settled the facts that life begins at conception and abortion is murder, a minor question arises. There is a lot of murder going on in early embryogenesis, and who is responsible? Probably god. Roughly two-thirds of embryos that form are aneuploid (have an extra chromosome or lack a chromosome) and die, usually very soon. Those that continue to later stages of pregnancy cause a high rate of miscarriages- about 15% of pregnancies. A recent paper points out that these rates are unusual compared with most eukaryotes. Mammals are virtually alone in exhibiting such high wastefulness, and the author proposes an interesting explanation for it.

First, some perspective on aneuploidy. Germ cells go through a two-stage process of meiosis where their DNA is divided two ways, first by homolog pairs, (that is, the sets inherited from each parent, with some amount of crossing-over that provides random recombination), and second by individual chromosomes. In more primitive organisms (like yeast) this is an efficient, symmetrical, and not-at-all wasteful process. Any loss of genetic material would be abhorrent, as the cells are putting every molecule of their being into the four resulting spores, each of which is viable.

A standard diagram of meiosis. Note that the microtubules (yellow) engage in a gradual and competitive process of capturing centromeres of each chromosome to arrive at the final state of regular alignment, which can then be followed by even division of the genetic material and the cell.


In animals, on the other hand, meiosis of egg cells is asymmetric, yielding one ovum / egg and three polar bodies, which have various roles in some species to assist development, but are ultimately discarded. This asymmetric division sets up a competition between chromosomes to get into the egg, rather than into a polar body. One would think that chromosomes don't have much say in the matter, but actually, cell division is a very delicate process that can be gamed by "strong" centromeres.

Centromeres are the central structures on chromosomes that form attachments to the microtubules forming the mitotic spindle. This attachment process is highly dynamic and even competitive, with microtubules testing out centromere attachment sites, and using tension ultimately as the mark of having a properly oriented chromosome with microtubules from each side of the dividing cell (i.e. each microtubule organizing center) attached to each of the centromeres, holding them steady and in tension at the midline of the cell. Well, in oocytes, this does not happen at the midline, but lopsidedly towards one pole, given that one of the product cells is going to be much larger than the others. 

In oocytes, cell division is highly asymmetric with a winner-take-all result. This opens the door to a mortal competition among chromosomes to detect which side is which and to get on the winning side. 

One of the mysteries of biology is why the centromere is a highly degenerate, and also a speedily evolving, structure. Centromeres are made up of huge regions of monotonously repeated DNA, which have been especially difficult to sequence accurately. Well, this competition to get into the next generation can go some way toward explaining this structure, and also why it changes rapidly, (on evolutionary time scales), as centromeric repeats expand to capture more microtubules and get into the egg, while other portions of the machinery evolve to dampen this unsociable behavior and keep everyone in line. It is a veritable arms race. 

But the funny thing is that it is only mammals that show a particularly wasteful form of this behavior, in the form of frequent aneuploidy. The competition is so brazen that some centromeres force their way into the egg when there is already another copy there, generating at best a syndrome like Down, and, for any chromosome other than #21, certain death. This seems rather self-defeating. Or does it?

The latest paper observes that mammals devote a great deal of care to their offspring, making them different from fish, amphibians, and even birds, which put most of their effort into producing the very large egg, and relatively less (though still significant amounts) into care of infants. This huge investment of resources means that causing a miscarriage or earlier termination is not a total loss at all, for the rudely trisomic extra chromosome. No, it allows resource recovery in the form of another attempt at pregnancy, typically quite soon thereafter, at which point the pushy chromosome gets another chance to form a proper egg. It is a classic case of extortion at the molecular scale. 
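The arithmetic of this extortion can be sketched with a toy simulation. Assuming, purely for illustration, (neither the numbers nor the independence of the two events come from the paper), that a driving centromere gets into the egg 70% of the time, and that two-thirds of embryos are lost to aneuploidy and the pregnancy simply retried, the driver's share of live births is untouched by all that early death:

```python
import random

# Toy model of centromere drive in a species that can quickly re-attempt
# pregnancy after an early embryonic loss. All parameter values are
# illustrative, not taken from the paper.

def live_birth_driver_freq(p_drive, p_aneuploid, births_wanted=100_000, seed=1):
    """Fraction of live births carrying the driving centromere, when
    each aneuploid embryo dies early and the pregnancy is retried."""
    rng = random.Random(seed)
    births = carriers = 0
    while births < births_wanted:
        if rng.random() < p_aneuploid:
            continue  # embryo dies early; mother re-attempts at little cost
        births += 1
        if rng.random() < p_drive:  # drive is assumed independent of ploidy
            carriers += 1
    return carriers / births

# The driver's share of live births stays near p_drive, whether
# aneuploidy kills two-thirds of embryos or none at all:
print(live_birth_driver_freq(p_drive=0.7, p_aneuploid=0.67))
print(live_birth_driver_freq(p_drive=0.7, p_aneuploid=0.0))
```

The point of the sketch is that when a failed embryo costs only a quick retry, aneuploidy barely dents the driver's transmission. In an animal that loses a whole season's enormous egg with each failure, the aneuploidy rate would act as real selection against such brinksmanship.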


  • Do we have rules, or not?
  • How low will IBM go, vs its retirees?

Sunday, August 21, 2022

What Holds up the Nucleus?

Cell nuclei are consistently sized with respect to cell volume, and pleasingly round. How does that happen?

An interesting question in biology is why things are the size they are. Why are cells so small, and what controls their size? Why are the various organelles within them a particular size and shape, and is that controlled in some biologically significant way, or just left to some automatic homeostatic process? An interesting paper came out recently about the size of the nucleus, home of our DNA and all DNA-related transactions like transcription and replication. (Note to reader/pronouncer: "new clee us", not "new cue lus".) 

The nucleus, with parts labeled. Pores are large structures that control traffic in and out. 

The nucleus is surrounded by a double membrane (the nuclear membrane) studded with structurally complex and interesting pores. These pores are totally permeable to small molecules like ions, water, and very small proteins, but restrict the movement of larger proteins and RNAs, and naturally, DNA. To get out, (or in), these molecules need to have special tags, and cooperate with nuclear transport proteins. But very large complexes can be transported in this way, such as just-transcribed RNAs and half-ribosomes that get assembled in the nucleolus, a small sub-compartment within the nucleus (which has no membrane, just a higher concentration of certain molecules, especially the portion of the genomic DNA that encodes ribosomal RNA). So the nuclear pore is restrictive in some ways, but highly permissive in other ways, accommodating transmitted materials of vastly different sizes.

Nuclear pores are basket-shaped structures that are festooned, particularly inside the channel, with disordered phenylalanine/glycine rich protein strands that act as size, tag, and composition-based filters over what gets through.

The channels of nuclear pores have a peculiar composition, containing waving strands of protein with repetitive glycine/phenylalanine composition, plus interspersed charged segments (FG domains). This unstructured material forms a unique phase, neither oily nor watery, that restricts the passage of immiscible molecules, (i.e., most larger molecules), unless accompanied by partners that bind specifically to these FG strands, and thus melt right through the barrier. This mechanism explains how one channel can, at the same time, block all sorts of small to medium sized RNAs and proteins, but let through huge ribosomal components and specifically tagged and spliced mRNAs intended for translation.

But getting back to the overall shape and size of the nucleus, a recent paper made the case in some detail that colloid pressure is all that is required. As noted above, all small molecules equilibrate easily across the nuclear membrane, while larger molecules do not. It is these larger molecules that are proposed to provide a special form of osmotic pressure, called colloid osmotic pressure, which gently inflates the nucleus, against the opposing force of the nuclear membrane's surface tension. No special mechanical receptors are needed, or signaling pathways, or stress responses.

The paper, and an important antecedent paper, make some interesting points. First is that DNA takes up very little of the nuclear volume. Despite being a huge molecule (lengthwise), DNA makes up less than 1% of nuclear volume in typical mammalian cells. Ribosomal RNA, partially constructed ribosomal components, tRNAs, and other materials are far more abundant and make up the bulk of large molecules. This means that nuclear size is not very sensitive to genome copy number, or ploidy in polyploid species. Secondly, they mention that a vanishingly small number of mutants have been found that affect nuclear size specifically. This is what one would expect for a simple- even chemical- homeostatic process, not dependent on the usual signaling pathways of cellular stress, growth regulation, etc., of which there are many.

Where does colloid osmotic pressure come from? That is a bit obscure, but this Wiki site gives a decent explanation. When large molecules exist in solution, they exclude smaller molecules from their immediate vicinity, just by taking up space, including a surface zone of exclusion, a bit like national territorial waters. That means that the effective volume available to the small solutes (which generally control osmotic pressure) is slightly reduced. But when two large molecules collide by random diffusion, the points where they touch represent overlapping exclusion zones, which means that globally, the net exclusion zone from large molecules has decreased, giving small solutes slightly more room to move around. And this increased entropy of the smaller solutes drives the colloid osmotic pressure, which rises quite rapidly as the concentration of large molecules increases. The prior paper argues that overall, cells have quite low colloid osmotic pressure, despite their high concentrations of complex large molecules. They are, in chemical terms, dilute. This helps our biochemistry do its thing with unexpectedly rapid diffusion, and is explained by the fact that much of our molecular machinery is bound up in large complexes that reduce the number of independent colloidal particles, even while increasing their individual size.
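As a rough numerical illustration, colloid osmotic pressure is often written as a virial expansion- an ideal van 't Hoff term plus an excluded-volume correction that grows with the square of concentration. The molar mass and second virial coefficient below are made-up illustrative values, not measurements from either paper:

```python
# Sketch of why colloid osmotic pressure rises faster than linearly
# with concentration: a dilute-solution (van 't Hoff) term plus a
# second-virial term for excluded volume,
#   pi = R*T * (c/M + B*c^2)
# The values of M and B are illustrative assumptions, not from the papers.

R = 8.314     # gas constant, J/(mol*K)
T = 310.0     # K, roughly body temperature
M = 50_000.0  # g/mol, a generic "large molecule" molar mass (assumed)
B = 1e-7      # mol*L/g^2, second virial coefficient (assumed)

def colloid_pressure(c_g_per_L):
    """Osmotic pressure in Pa for colloid concentration c (g/L)."""
    c = c_g_per_L
    ideal = R * T * (c / M) * 1000.0            # mol/L -> mol/m^3
    excluded_volume = R * T * (B * c**2) * 1000.0
    return ideal + excluded_volume

# The excluded-volume term overtakes the ideal term as crowding grows:
for c in (10, 50, 100, 200):
    print(f"{c:4d} g/L -> {colloid_pressure(c):8.0f} Pa")
```

At low concentrations the ideal term dominates, but by a few hundred g/L the excluded-volume term catches up, which is the rapid rise with concentration described above- and also why bundling machinery into a few large complexes (fewer independent particles) keeps the pressure low.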

So much for theory- what about the experiment? The authors used yeast cells (Schizosaccharomyces pombe), which are a common model system. But they have cell walls, which the researchers digested off before treating them with a variety of osmolytes, mostly sorbitol, to alter their osmotic environment (not to mention adding fluorescent markers for the nuclear and plasma membranes, so they could see what was going on). Isotonic concentration was about 0.4 Molar (M) sorbitol, with treatments going up to 4M sorbitol (hypertonic). The question was: is the nucleus (and the cell as a whole) a simple osmometer, reacting as physical chemistry would expect to variations in osmotic pressure from outside? Recall that high concentrations of any chemical outside a cell will draw water out of it, to equalize the overall water / osmotic pressure on both sides of the membrane.
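The "simple osmometer" expectation can be put in one line- the Boyle-van 't Hoff relation, in which the osmotically active volume scales inversely with external osmolarity. The 0.4 M isotonic point is from the experiment described here; the cell and nuclear volumes, and the assumption of zero osmotically inactive volume, are illustrative:

```python
# Boyle-van 't Hoff behavior of a simple osmometer: the osmotically
# active volume scales inversely with external osmolarity,
#   V(C) = V_inactive + (V_iso - V_inactive) * (C_iso / C)
# C_iso = 0.4 M sorbitol is from the experiment; the volumes and the
# zero osmotically-inactive fraction are illustrative assumptions.

C_ISO = 0.4  # M, isotonic sorbitol concentration

def osmometer_volume(v_iso, c_external, v_inactive=0.0):
    """Equilibrium volume at external osmolarity c_external (M)."""
    return v_inactive + (v_iso - v_inactive) * (C_ISO / c_external)

# Assumed volumes: a 100 fL cell containing a 10 fL nucleus.
cell_iso, nucleus_iso = 100.0, 10.0

for c in (0.4, 0.8, 1.6, 4.0):
    cell = osmometer_volume(cell_iso, c)
    nuc = osmometer_volume(nucleus_iso, c)
    # If both compartments are simple osmometers, the nuclear-to-cell
    # volume ratio stays constant at every sorbitol concentration.
    print(f"{c:.1f} M: cell {cell:6.1f} fL, nucleus {nuc:5.2f} fL, "
          f"ratio {nuc / cell:.2f}")
```

If both compartments obey this relation, the nuclear-to-cell volume ratio stays fixed under any sorbitol treatment- exactly the strictly proportional shrinkage the authors went looking for.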

Schizosaccharomyces pombe are oblong cells (left) with plasma membrane marked with a green fluorescent marker, and the nuclear membrane marked with a purple fluorescent marker. If one removes the chitin-rich cell wall, the cells turn round, and one can experiment on their size response to osmotic pressure/treatment. Hypertonic (high-sorbitol, top) treatment causes the cell to shrink, and causes the nucleus to shrink in strictly proportional fashion, indicating that both have simple composition-based responses to osmotic variation.


They found that not only does the outer cell membrane shrink as the cell comes under hypertonic shock, but the nucleus shrinks proportionately. A number of other experiments followed, all consistent with the same model. One of the more interesting was treatment with leptomycin B (LMB), which is a nuclear export inhibitor. Some materials build up inside the nucleus, and one would expect that, under this simple model of nuclear volume homeostasis, the nuclei would gradually gain size relative to the surrounding cell, breaking the general observation of strict proportionality of nuclear to cell volumes.

Schizosaccharomyces pombe cells treated with a drug that inhibits nuclear export of certain proteins causes the nuclear volume to blow up a little bit, relative to the rest of the cell.

That is indeed what is seen- not immediately discernible by eye, but evident, after measuring the volumes from micrographs, in the accompanying graph (panel C). So this looks like a solid model of nuclear size control, elegantly explaining a small problem in basic cell biology. While there is plenty of regulation occurring over traffic into and out of the nucleus- regulation with critical effects on gene expression, translation, replication, division, and other processes- the nucleus can leave its size and shape to simple biophysics and not worry about piling on yet more ornate mechanisms.


  • About implementing the climate bill and related policies.
  • We should have given Ukraine to Russia, apparently. Or something.
  • Big surprise- bees suffer from insecticides.

Sunday, July 10, 2022

Tooth Development and Redevelopment

Wouldn't it be nice to regrow teeth? Sharks do.

Imagine for a minute if instead of fillings, crowns, veneers, posts, bridges, and all the other advanced technologies of dental restoration, a tooth could be removed, and an injection could prompt the growth of a complete replacement tooth. That would be amazing, right? Other animals, such as sharks and fish, regrow teeth all the time. But we only get two sets- our milk teeth and mature teeth. While mature mammalian teeth are incredibly tough and generally last a lifetime, modern agriculture and other conditions have thrown a wrench into human dental health, which modern dentistry has only partially restored. As evolution proceeded into the mammalian line, tooth development became increasingly restricted and specialized, so that the generic teeth that sharks spit out throughout their lives have become tailored for various needs across the mouth, firmly anchored into the jaw bone, and precisely shaped to fit against each other. But the price for this high-level feature set seems to be that we have lost the ability to replace them.

So researchers are studying tooth development in other animals- wondering how similar they are to human development, and whether some of their tricks can be atavistically re-stimulated in our own tissues. While the second goal remains a long way off, the first has been productively pursued, with teeth forming a model system of complex tissue development. A recent paper (with review) looked at similarities between molecular details of shark and mammalian tooth development.

Teeth are the result of an interaction between epithelial and mesenchymal tissues- two of the three fundamental tissues of early embryogenesis. Patches of epithelium form dental arches around the two halves of the future mouth. Spots around these arches expand into dental placodes, which grow into buds and, as they interact continuously with the inner mesenchyme, form enamel knots. The epithelial cells of the knot eventually start producing enamel as they pull away from the interface, while the mesenchymal cells produce dentin, and then the pulp and other bone-anchoring tissues of the inner tooth and root, as they pull away in the opposite direction.

Embryonic tooth development, which depends heavily on the communication between epithelial tissue (white) and mesenchymal tissue (pink). An epithelial "enamel knot" (PEK/SEK) develops at the future cusp(s), where enamel will be laid down by the epithelial cells, and dentin by the mesenchymal cells. Below are some of the molecules known to orchestrate the activities of all these cells. Some of these molecules are extracellular signals (BMP, FGF, WNT), while others are cell-internal components of the signaling systems (LEF, PAX, MSX).

Naturally, all this doesn't happen by magic, but by a symphony of gene expression and molecular signals going back and forth. These signals are used in various combinations in many developmental processes, but here- given the cell types present, the prior location-based patterning of the embryo in larger coordinate schemes, and the particular combination of signals- they orchestrate tooth development. Over evolution, these signals have diversified enormously across mammals, creating teeth of all sorts of conformations and functions, from rodent incisors to elephant tusks. The question these researchers posed was whether sharks use the same mechanisms to make their teeth, which across that lineage are also highly diverse in form, including complicated cusp patterns. Indeed, sharks even develop teeth on their skin- miniature teeth called denticles.

Shark skin is festooned with tiny teeth, or denticles.

These authors show detailed patterns of expression of a variety of the known gene-encoded components of tooth development in a shark. For example, WNT11 (panel C) is expressed right at the future cusp, also known as the enamel knot, an organizing center for tooth development. Dental epithelium (de) and dental mesenchyme (dm) are indicated. Cell nuclei are stained with DAPI, in gray. Dotted lines outline the dental lamina, composed of the dental epithelium, and large arrows indicate the presumptive enamel knot, which prefigures the cusp of the tooth and future enamel deposition.

The answer- yes indeed. For instance, sharks use the WNT pathway (panel C) and associated proteins (panels A, B, D) in the same places as mammals do, to determine the enamel knot, cusp formation, and the rest. The researchers used chemical enhancers and inhibitors of WNT signaling to demonstrate relatively mild effects, with the inhibitor reducing tooth size and development, and the enhancer producing bigger teeth, occasionally with additional cusps. While a few differences were seen, overall tooth development in sharks and mammals is quite similar in molecular detail.

The researchers even went on to deploy a computer model of tooth development, incorporating twenty-six genetic and cellular parameters, that had originally been developed for mammals. They could use it to model the development of shark teeth quite well, and also to model their manipulations of the WNT pathway, with realistic results. But they did not show that the overall differences in detail between mouse and shark tooth development were recapitulated faithfully by these model alterations. So it is unlikely that strict correspondence of all the network functions could be achieved, even though the overall system works similarly.
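The published model is far more elaborate, but the core principle behind such tooth-patterning simulations is reaction-diffusion: a slowly diffusing, self-amplifying activator paired with a fast-diffusing inhibitor spontaneously turns a uniform tissue into regularly spaced peaks, which is how models of this kind position enamel knots. The sketch below is not the authors' model- it is a minimal, hypothetical one-dimensional Gierer-Meinhardt system with purely illustrative parameter values, just to show the mechanism.

```python
import numpy as np

def gierer_meinhardt_1d(n=100, steps=20000, dt=0.01,
                        d_a=0.5, d_h=20.0, mu=1.0, nu=1.5, seed=0):
    """Minimal 1D activator-inhibitor (Turing) pattern sketch.

    The activator a autocatalyzes itself (a**2 / h) but diffuses slowly;
    the inhibitor h is produced by the activator (a**2) and diffuses fast.
    Small random fluctuations around the uniform steady state grow into
    regularly spaced activator peaks. All parameters are illustrative.
    """
    rng = np.random.default_rng(seed)
    # Uniform steady state (a* = nu/mu, h* = a*^2/nu = 1.5 here) plus noise:
    a = 1.5 + 0.01 * rng.standard_normal(n)
    h = 1.5 + 0.01 * rng.standard_normal(n)
    for _ in range(steps):
        # Discrete Laplacian on a ring (periodic boundary, dx = 1):
        lap_a = np.roll(a, 1) - 2 * a + np.roll(a, -1)
        lap_h = np.roll(h, 1) - 2 * h + np.roll(h, -1)
        a = a + dt * (a * a / h - mu * a + d_a * lap_a)
        h = h + dt * (a * a - nu * h + d_h * lap_h)
        np.clip(a, 1e-6, 1e6, out=a)  # guard against numerical blow-up
        np.clip(h, 1e-6, 1e6, out=h)
    return a, h

a, h = gierer_meinhardt_1d()
n_peak_cells = int((a > a.mean() + a.std()).sum())  # cells inside activator peaks
```

Diffusion rates here are chosen so that the inhibitor spreads about forty times faster than the activator; shrinking that ratio toward one abolishes the pattern, which is the essence of the Turing instability that these developmental models exploit.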

The authors offer a general comparison of mouse and shark tooth development, centered around the dental epithelium, with mesenchyme in gray. Most genes are the same (that is, orthologous) and expressed in the same places, especially including an enamel knot organizing center. For mouse, a WNT analog is not indicated, but does exist and is an important class of signal.

These authors did not, however, touch on the question of why tooth production stops in mammals but is continuous in sharks. That is probably determined at an earlier point in the tissue identity program. Another paper indicated that a few of the epithelial stem cells that drive tooth development remain in our mouths through adulthood. Indeed, these cells cause rare cancers (ameloblastoma). It is these cells that might be harnessed to create new teeth, if they could be prodded to multiply and re-enter their developmental program.


  • Boring, condescending, disposable, and modern architecture is hurting us.
  • Maybe attacking Russia is what is needed here.