Saturday, December 31, 2022

Hand-Waving to God

A decade on, the Discovery Institute is still cranking out skepticism, diversion, and obfuscation.

A post a couple of weeks ago mentioned that the Discovery Institute offered a knowledgeable critique of the lineages of the Ediacaran fauna. They have raised their scientific game significantly, and so I wanted to review what they are doing these days, focusing on two of their most recent papers. The Discovery Institute has a lineage of its own, from creationism. It has adapted to the derision that creationism attracted by retreating to "intelligent design", which is creationism without naming the creators, nailing down the schedule of creation, or providing any detail of how and from where creation operates. Their review of the Ediacaran fauna raised some highly skeptical points about whether these organisms were animals or not. In particular, they suggested that cholesterol is not really restricted to animals, so the chemical traces of cholesterol that were so clearly found in the Dickinsonia fossil layers might not really mean that these were animals- they might also be unusual protists of gigantic size, or odd plant forms, etc. While the critique is not unreasonable, it does not alter the balance of the evidence, which does indeed point to an animal affinity. These fauna are so primitive and distant that it is fair to say that we can not be sure, and particularly we can not be sure that they had any direct ancestral relationship to any later organisms of the ensuing Cambrian period, when recognizable animals emerged.

Fair enough. But what of their larger point? The Discovery Institute is trying to make a point, I believe, about the suddenness of early Cambrian animal evolution, and thus its implausibility under conventional evolutionary theory. But we are traversing tens of millions of years through these intervals, which is a long time, even in evolutionary terms. Secondly, the Ediacaran period, though now represented by several exquisite fossil beds, spanned a hundred million years and is still far from completely characterized paleontologically, even supposing that early true animals would have fossilized, rather than being infinitesimal and very soft-bodied. So the Cambrian biota could easily have had predecessors in the Ediacaran, whether or not they have yet been observed- it is as yet hard to say. But what we can not claim is the negative, that no predecessors existed before some time X- say the 540 MYA point at the base of the Cambrian. So the inference that the Discovery Institute is attempting to draw has very little merit, particularly since everything that they themselves cite about the molecular and paleontological sequence is so clearly progressive and in proper time sequence, in complete accord with the overall theory of evolution.

For we should always keep in mind that an intelligent designer has a free hand, and can make all of life in a day (or in six, if absolutely needed). The fact that this designer works in the shadows of slightly altered mutation rates, or in a few million years rather than twenty million, and never puts fossils out of sequence in the sedimentary record, is an acknowledgement that this designer is a bit dull, and bears a strong resemblance to evolution by natural selection. To put it in psychological terms, the institute is in the "negotiation" stage of grief- over the death of god.

Saturday, December 24, 2022

Brain Waves: Gaining Coherence

Current thinking about communication in the brain: the Communication Through Coherence framework.

Eyes are windows to the soul. They are visible outposts of the brain that convey outwards what we are thinking, as they gather in the riches of our visible surroundings. One of their less appreciated characteristics is that they flit from place to place as we observe a scene, never resting in one place. These movements are called saccades, and they represent an involuntary redirection of attention all over a visual scene that we are studying, in order to gather high resolution impressions from places of interest. Saccades happen at a variety of intervals, centered around 0.1 second. And just as the raster scanning of a TV or monitor can tell us something about how it or its signal works, the eye saccade is thought, by the theory presented below, to represent a theta rhythm in the brain that is responsible for resetting attention- here, in the visual system.

That theory is Communication Through Coherence (CTC), which appears to be the dominant theory of how neural oscillations (aka brain waves) function. (This post is part of what seems like a yearly series of updates on the progress in neuroscience in deciphering what brain waves do, and how the brain works generally.) This paper appeared in 2014, but it expressed ideas that were floating around for a long time, and has since been taken up by numerous other groups that provide empirical and modeling support. A recent paper (titled "Phase-locking patterns underlying effective communication in exact firing rate models of neural networks") offers full-throated support from a computer modeling perspective, for instance. But I would like to go back and explore the details of the theory itself.

The communication part of the theory is how thoughts get communicated within the brain. Communication and processing are simultaneous in the brain, since it is physically arranged to connect processing chains (such as visual processing) together as cells that communicate consecutively, for example creating increasingly abstract representations during sensory processing. While the anatomy of the brain is pretty well set in a static way, it is the dynamic communication among cells and regions of the brain that generates our unconscious and conscious mental lives. Not all parts can be talking at the same time- that would be chaos. So there must be some way to control mental activity to manageable levels of communication. That is where coherence comes in. The theory (and a great deal of observation) posits that gamma waves in the brain, which run from about 30 Hz upwards all the way to 200 Hz, link together neurons and larger assemblages / regions into transient co-firing coalitions that send thoughts from one place to another, precisely and rapidly, insulated from the noise of other inputs. This is best studied in the visual system which has a reasonably well-understood and regimented processing system that progresses from V1 through V4 levels of increasing visual field size and abstraction, and out to cortical areas of cognition.

The basis of brain waves is that neural firing is rapid, and is followed by a refractory period where the neuron is resistant to another input, for a few milliseconds. Then it can fire again, and will do so if there are enough inputs to its dendrites. There are also inhibitory cells all over the neural system, dampening down the system so that it is tuned to not run to epileptic extremes of universal activation. So if one set of cells entrains the next set of cells in a rhythmic firing pattern, those cells tend to stay entrained for a while, and then get reset by way of slower oscillations, such as the theta rhythm, which runs at about 4-8 Hz. Those entrained cells are, at their refractory periods, also resistant to inputs that are not synchronized, essentially blocking out noise. In this way trains of signals can selectively travel up from lower processing levels to higher ones, over large distances and over multiple cell connections in the brain.
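This gating logic can be caricatured in a few lines of code- a single integrate-and-fire unit with a refractory period, driven by two equally strong pulse trains. All the numbers here are illustrative toy values, not physiological ones; the CTC literature itself works with full spiking-network models, not this cartoon.

```python
# Toy sketch of entrainment-based gating, in the spirit of Communication
# Through Coherence. One integrate-and-fire unit, 1 ms time steps.

def simulate(t_max=1000, driver_period=25, noise_offset=2,
             pulse=1.2, threshold=1.0, leak=0.7, refractory=3):
    """The 'driver' train arrives in phase with the unit's excitable
    window; the 'noise' train is equally strong but lands just after
    each spike, inside the refractory window."""
    driver = set(range(0, t_max, driver_period))
    noise = set(range(noise_offset, t_max, driver_period))
    v, ref_until = 0.0, -1
    spikes, blocked = [], 0
    for t in range(t_max):
        if t < ref_until:              # refractory: all inputs ignored
            blocked += t in noise
            continue
        v *= leak                      # passive decay of the membrane
        v += pulse * (t in driver) + pulse * (t in noise)
        if v >= threshold:
            spikes.append(t)
            v, ref_until = 0.0, t + refractory
    return spikes, blocked

spikes, blocked = simulate()
print(len(spikes), blocked)  # -> 40 40
```

The unit fires in lockstep with the in-phase train, while every out-of-phase pulse- despite being strong enough to trigger a spike on its own (1.2 > 1.0)- lands in the refractory window and is discarded. That timing-based exclusion, rather than any difference in strength, is the essence of noise blocking by coherence.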

An interesting part of the theory is that frequency is very important. There is a big difference between slower and faster entraining gamma rhythms. Ones that run slower than the going rate do not get traction and die out, while those that run faster hit the optimal post-refractory excitable state of the receiving cells, and tend to gain traction in entraining them downstream. This sets up a hierarchy where increasing salience, whether established through intrinsic inputs, or through top-down attention, can be encoded in higher, stronger gamma frequencies, winning this race to entrain downstream cells. This explains to some degree why EEG patterns of the brain are so busy and chaotic at the gamma wave level. There are always competing processes going on, with coalitions forming and reforming in various frequencies of this wave, chasing their tails as they compete for salience.

There are often bidirectional processes in the brain, where downstream units talk back to upstream ones. While originally imagined to be bidirectionally entrained in the same gamma rhythm, the CTC theory now recognizes that the distance / lag in signaling would make this impossible, and separates them as distinct streams, observing that the cellular targets of backwards streams are typically not identical to those generating the forward streams. So a one-cycle offset, with a few intermediate cells, would account for this type of interaction, still in gamma rhythm.

Lastly, attention remains an important focus of this theory, so to speak. How are inputs chosen, if not by their intrinsic salience, such as flashes in a visual scene? How does a top-down, intentional search of a visual scene, or a desire to remember an event, work? CTC posits that two other wave patterns are operative. First is the theta rhythm of about 4-8 Hz, which is slow enough to encompass many gamma cycles and offer a reset to the system, overpowering other waves with its inhibitory phase. The idea is that salience needs to be re-established each theta cycle freshly, (such as in eye saccades), with maybe a dozen gamma cycles within each theta that can grow and entrain necessary higher level processing. Note how this agrees with our internal sense of thoughts flowing and flitting about, with our attention rapidly darting from one thing to the next.

"The experimental evidence presented and the considerations discussed so far suggest that top-down attentional influences are mediated by beta-band synchronization, that the selective communication of the attended stimulus is implemented by gamma-band synchronization, and that gamma is rhythmically reset by a 4 Hz theta rhythm."

Attention itself, as a large-scale backward flowing process, is hypothesized to operate in the alpha/beta bands of oscillations, about 8 - 30 Hz. It reaches backward over distinct connections (indeed, distinct anatomical layers of the cortex) from the forward connections, into lower areas of processing, such as locations in the visual scene, or colors sought after, or a position on a page of text. This slower rhythm could entrain selected lower level regions, setting some to have in-phase and stronger gamma rhythms vs other areas not activated in this way. Why the theta and the alpha/beta rhythms have dramatically different properties is not dwelt on by this paper. One can speculate that each can entrain other areas of the brain, but the theta rhythm is long and strong enough to squelch ongoing gamma rhythms and start many off at the same time in a new competitive race, while the alpha/beta rhythms are brief enough, and perhaps weak enough and focused enough, to start off new gamma rhythms in selected regions that quickly form winning coalitions heading upstream.

Experiments on the nature of attention. The stimulus shown to a subject (probably a monkey) is in A. In E, the monkey was trained to attend to the same spots as in A, even though both were visible. V1 refers to the lowest level of the visual processing area of the brain, which shows activity when stimulated (B, F) whether or not attention is paid to the stimulus. On the other hand, V4 is a much higher level in the visual processing system, subject to control by attention. There, (C, G), the gamma rhythm shows clearly that only one stimulus is being fielded.

The paper discussing this hypothesis cites a great deal of supporting empirical work, and much more has accumulated in the ensuing eight years. While plenty of loose ends remain and we can not yet visualize this mechanism in real time, (though faster MRI is on the horizon), this seems the leading hypothesis that both explains the significance and prevalence of neural oscillations, and goes some distance to explaining mental processing in general, including abstraction, binding, and attention. Progress has not been made by great theoretical leaps by any one person or institution, but rather by the slow process of accumulation of research that is extremely difficult to do, but of such great interest that there are people dedicated enough to do it (with or without the willing cooperation of countless poor animals) and agencies willing to fund it.


  • Local media is a different world now.
  • Florida may not be a viable place to live.
  • Google is god.

Saturday, December 17, 2022

The Pillow Creatures That Time Forgot

Did the Ediacaran fauna lead to anything else, or was it a dead end?

While to a molecular biologist, the evolution of the eukaryotic cell is probably the greatest watershed event after the advent of life itself, most others would probably go with the rise of animals and plants, after about three billion years of exclusively microbial life. This event is commonly located at the base of the Cambrian, (i.e. the Cambrian explosion), which is where the fossils that Darwin and his contemporaries were familiar with began, about 540 million years ago. Darwin was puzzled by this sudden start of the fossil record, from apparently nothing, and presciently held (as he did in the case of the apparent age of the sun) that the data were faulty, and that the ancient character of life on earth would leave other traces much farther back in time.

That has indeed proved to be the case. There are signs of microbial life going back over three billion years, and whole geologies in the subsequent time dependent on its activity, such as the banded iron formations prevalent around two billion years ago that testify to the slow oxygenation of the oceans by photosynthesizing microbes. And signs of animal life prior to the Cambrian, going back roughly 600 million years, have also turned up, after much deeper investigation of the fossil record. This immediately pre-Cambrian period is labeled the Ediacaran, for one of its fossil-bearing sites in Australia. A recent paper looked over this whole period to ask whether the evolution of proto-animals during this time was a steady process, or punctuated by mass extinction event(s). They conclude that, despite the patchy record, there is enough to say that there was a steady (if extremely slow) march of ecological diversification and specialization through the time, until the evolution of true animals in the Cambrian literally ate up all the Ediacaran fauna.

Fossil impression of Dickinsonia, with trailing impressions that some think might be a trail from movement. Or perhaps just friends in the neighborhood.
 
For the difference between the Ediacaran fauna and that of the Cambrian is stark. The Ediacaran fauna is beautiful, but simple. There are no backbones, no sensory organs. No mouth, no limbs, no head. In developmental terms, they seem to have had only two embryological cell layers, rather than our three, which makes all the difference in terms of complexity. How they ate remains a mystery, but they are assumed to have simply osmosed nutrients from their environment, thanks to their apparently flat forms. A bit like sponges today. As they were the most complex animals at the time, (and some were large, up to 2 meters long), they may have had an easy time of it, simply plopping themselves on top of rich microbial mats, oozing with biofilms and other nutrients.

The paper provides a schematic view of the ecology at single locations, and also of longer-term evolution, from a sequence of views (i.e. fossils) obtained from different locations around the world of roughly ten million year intervals through the Ediacaran. One noticeable trend is the increasing development or prevalence of taller fern-like forms that stick up into the water over time, versus the flatter bottom-dwelling forms. This may reflect some degree of competition, perhaps after the bottom microbial mats have been over-"grazed". A second trend is towards slightly more complexity at the end of the period, with one very small form (form C (a) in the image below) even marked by shell remains, though what its animal inhabitant looked like is unknown. 

Schematic representation of putative animals observed during the Ediacaran epoch, from early, (A, ~570 MYA, Avalon assemblage), middle, (B, ~554 MYA, White Sea and other assemblages), and late (C, ~545 MYA, Nama assemblage). The A panel is also differentiated by successional forms from early to mature ecosystems, while the C panel is differentiated by ocean depth, from shallow to deep. The persistence of these forms is quite impressive overall, as is their common simplicity. But lurking somewhere among them are the makings of far more complicated animals.

Very few of these organisms have been linked to actual animals of later epochs, so virtually all of them seem to have been superseded by the wholly different Cambrian fauna- much of which itself remains perplexing. One remarkable study used mass-spec chemical analysis on some Dickinsonia fossils from the late Ediacaran to determine that they bore specific traces of cholesterol, marking them as probable animals, rather than overgrown protists or seaweed. But beyond that, there is little that can be said. (Note a very critical and informed review of all this from the Discovery Institute, of all places.) Their preservation is often remarkable, considering the age involved, and they clearly form the sole fauna known from pre-Cambrian times.

But the core question of how the Cambrian (and later) animals came to be remains uncertain, at least as far as the fossil record is concerned. One relevant observation is that there is no sign of burrowing through the sediments of the Ediacaran epoch. So the appearance of more complex animals, while it surely had some kind of precedent deep in the Ediacaran, or even before, did not make itself felt in any macroscopic way then. It is evident that once the triploblastic developmental paradigm arose, out of the various geologic upheavals that occurred at the bases of both the Ediacaran and the Cambrian, its new design including mouths, eyes, spines, bones, plates, limbs, guts, and all the rest that we are now so very familiar with, utterly overran everything that had gone before.

Some more fine fossils from Canada, ~ 580 MYA.


  • A video tour of some of the Avalon fauna.
  • An excellent BBC podcast on the Ediacaran.
  • We need to measure the economy differently.
  • Deep dive on the costs of foreign debt.
  • Now I know why chemotherapy is so horrible.
  • Waste on an epic scale.
  • The problem was not the raids, but the terrible intelligence... by our intelligence agency.

Saturday, December 10, 2022

Mechanics of the ATP Synthesizing Machine

ATP synthase is a generator with two rotors, just like any other force-transducing generator.

Protein structural determination has progressed tremendously with the advent of cryo-electron microscopy, which allows much faster determinations of more complex structures than previously. One beneficiary is the enzyme at the heart of the mitochondrion that harnesses the proton motive force (pmf; difference of pH and charge across the inner mitochondrial membrane) to make ATP. The pmf is created by the electron transport chains of respiration, powered by the breakdown of our food, and ATP is the most general currency of energy in our cells. And in bacteria as well. The work discussed today was all done using E. coli, which in this ancient and highly conserved respect is a very close stand-in for our own biology.

The ATP synthase is a rotary device. Just like a water wheel has one wheel that harnesses a running stream, linked by gears or other mechanism to a second wheel that grinds corn, or generates electricity, the ATP synthase has one wheel that is powered by protons flowing inwards, linked to another wheel that synthesizes ATP. The second wheel doesn't turn. Rather, the linking rotor from the proton wheel (called Fo) has an asymmetric cam at the end that pokes into the center of the ATP synthase wheel, (called F1), and deforms that second wheel as it rotates around inside. The deformations are what induce the ATP synthase to successively (1) bind ADP and phosphate, (2) close access and join them together into ATP, and lastly (3) release the ATP back out. This wheel has three sections, thus one turn yields three ATPs, and it takes 120 degrees of turn to create one ATP. This mechanism is nicely illustrated in a few videos.

The ATP synthase has several parts. The top rotor (yellow, orange; proton rotor, or "c" rotor) is embedded in the inner mitochondrial membrane, and rotates as it conducts protons from outside (top) inwards. The center rotor (white, red) is attached to it and also rotates as it sticks into the bottom ATP synthesizing subunits (green, khaki). That three-fold symmetric protein complex is static, (held in place by the non-moving stator subunits (blue, teal)), and synthesizes ATP as its conformation is progressively banged around by the rotor. At the bottom are diagrams of the ATP generating strokes (three per rotation), with pauses (green) reflecting the strain of synthesizing ATP. All this was detected from the single molecules tracked by polarized light coming from the polarizing gold rods attached to the proton rotor (AuNR- gold nano rod).


Some recent papers focus on the other end of the machine- the proton rotor. It has ten subunits, (termed "c", so this is also called the c rotor), each of which binds a proton. Thus the ultimate stoichiometry is that 10 protons yield 3 ATP, for a 3.33 protons per ATP efficiency. (The pH difference needs to be about 3 units, or 1000 to 5000 fold in proton concentration, to create sufficient pmf.) But there are certain asymmetries involved. For one, there is a "stator" that holds the ATP synthase stable vs the proton rotor and spans across them, attaching stably to the former and gliding along the rotations of the latter. This stator creates some variation in how the rotors at both ends operate. Also, the 10:3 ratio means that some power strokes that force the ATP synthase along will behave differently, either with more power at the beginning or at the end of the 120 degree arc.
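The stoichiometry above reduces to a few lines of arithmetic (the ~100 rotations per second figure comes from the single-molecule measurements discussed later in the post):

```python
# Arithmetic behind the ATP synthase stoichiometry described above.
c_subunits = 10          # protons carried per full turn of the c-ring
atp_per_rotation = 3     # three catalytic sections on the F1 wheel

protons_per_atp = c_subunits / atp_per_rotation   # ~3.33 H+ per ATP
degrees_per_atp = 360 / atp_per_rotation          # 120 degrees per ATP

# At ~100 rotations per second, a single synthase produces:
rotations_per_sec = 100
atp_per_sec = rotations_per_sec * atp_per_rotation

print(round(protons_per_atp, 2), degrees_per_atp, atp_per_sec)
# -> 3.33 120.0 300
```

Note that the 10:3 mismatch means each 120 degree power stroke consumes a non-integer number of protons- which is exactly why the elastic give in the coupling, discussed next, is needed to smooth out the ebbs and flows.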

These papers posit that there is enough flexibility in the linkage to smooth out these ebbs and flows. Within the stator is a critical subunit ("a") which conducts the protons in both directions, both from outside onto the "c" rotor, and then off the "c" rotor and into the inner mitochondrial matrix. Interestingly, the proton rotor of "c" subunits ferries those protons all the way around, so that they come in and go back off at nearly the same point, at the "a" subunit surface. This means that they are otherwise stably bound to the proton rotor as it flies around in the membrane, a hydrophobic environment that presumably offers no encouragement for those protons to leave. So in summary, the protons from outside (the intermembrane space of the mitochondrion) enter by the outer "a" channel, then land on one of the proton rotor's "c" subunits, take one trip around the rotor, and then exit off via the inner "a" channel.

One question is the nature of these channels. There are, elsewhere in biology, channels that find ways to conduct protons in specific fashion, despite their extremely small size and similarity to other cations like sodium and potassium. But a more elegant way has been devised, called the Grotthuss mechanism. The current authors conduct extensive analysis of key mutations in these channels to show that this mechanism is used by the "a" subunit of the Fo protein. By this mechanism, a chain of water molecules is very carefully lined up through the protein. The natural hydrogen exchange property of water, by which the pH character and so many other properties of water occur, then allows an incoming proton to create a chain reaction of protonations and de-protonations along the water chain (nicely illustrated on the Wikipedia page) that, without really moving any of the water molecules, (or requiring much movement of the protons either), effectively conducts a net proton inwards with astonishing efficiency.

It is evident that the interface of the "a" and "c" subunits is such that a force-fed sequence of protons creates power that induces the rotation and eventually through the rotor linkage, the energy to synthesize ATP against its concentration gradient. It should be said parenthetically that this enzyme complex can be driven in reverse, and E. coli do occasionally use up ATP in reverse to re-establish their pmf gradient, which is used for many other processes.

One technical note is of interest. The authors of the main paper used single molecules of the whole ATP synthase, embedded in nano-membranes that they could observe optically and treat with different pH levels on each side to drive their activity. They also attached tiny gold bars (35 × 75 nm) to the top of each proton rotor to track its rotation by polarized light. This allowed very fine observations, which they used to look at the various pauses induced by the jump of each ATP synthesis event, and of each proton as it hopped on/off. Then they mutated selected amino acids in the supposed water channels that conduct protons through the "a" subunit, which created greater delays, diagnostic of the Grotthuss mechanism. The channel is not lined with ions or ionizable groups, but is simply polar to accommodate a string of waters threading through the membrane and the "a" protein. Additionally, they estimate an "antenna" of considerable size composed of a "b" subunit and some of the "a" subunit of Fo that is exposed to the outside and by its negatively charged nature attracts and lines up plenty of protons, ready to transit through the rotor.

Another presentation of the proton rotor behavior. The stator "a" subunit is orange, and the "c" subunits are circles arranged in a rotor, seen from the top. The graph at right shows some of the matches or mismatches between the three-fold ATP synthesizing rotor (F1) and the ten-fold symmetric proton rotor (Fo, or "c"), leading to quite variable coupling of their power strokes. Yet there is enough elastic give in their coupling to allow continuous and reasonably rapid rotation (100 / sec).

In the end, incredible technical feats of optics, chemistry, and molecular biology are needed to decipher increasing levels of detail about the incredible feat of evolution that is embodied in this tiny powerhouse.


Saturday, December 3, 2022

Senescent, Cancerous Cannibals

Tumor cells not only escape normal cell proliferation controls, but some of them eat nearby cells.

Our cells live in an uneasy truce. Cooperation is prized and specialization into different cell types, tissues, and organs is pervasive. But deep down, each cell wants to grow and survive, prompting many mechanisms of control, such as cell suicide (apoptosis) and immunological surveillance (macrophages, killer T-cells). Cancer is the ultimate betrayal, not only showing disregard for the ruling order, but in its worst forms killing the whole organism in a pointless drive for growth.

A fascinating control mechanism that has come to prominence recently is cellular senescence. In petri dishes, cells can only be goosed along for a few dozen cycles of division until they give out, and become senescent. Which is to say, they cease replicating but remain alive. It was first thought that this was another mechanism to keep cancer under control, restricting replication to "stem" cells and their recent progeny. But a lot of confusing and interesting observations indicate that the deeper meaning of senescence lies in development, where it appears to function as an alternate form of cell suicide, delayed so that tissues are less disrupted. 

Apoptosis is used very widely during development to reshape tissues, and senescence is used extensively as well in these programs. Senescent cells are far from quiescent, however. They have high metabolic activity and are particularly notorious for secreting a witches' brew of inflammatory cytokines and other proteins- the senescence-associated secretory phenotype, or SASP. In the normal course of events, this attracts immune system cells which initiate repair and clearance operations that remove the senescent cells and make sure the tissue remains on track to fulfill its program. These SASP products can push nearby cells into senescence as well, and form an inflammatory micro-environment that, if resolved rapidly, is harmless, but if persistent, can lead to bad, even cancerous local outcomes.

The significance of senescent cells has been highlighted in aging, where they are found to be immensely influential. To quote the wiki site:

"Transplantation of only a few (1 per 10,000) senescent cells into lean middle-aged mice was shown to be sufficient to induce frailty, early onset of aging-associated diseases, and premature death."

The logic behind all this seems to be another curse of aging: while we are young, senescent cells are cleared with very high efficiency, but as the immune system ages, a small proportion of them are missed. These escapees are, evolutionarily speaking, an afterthought, yet they gradually accumulate with age and powerfully push the aging process along- not least by way of chronic inflammation, which we are otherwise so anxious to reduce. A quest for "senolytic" therapies to clear senescent cells is becoming a big theme in academia and the drug industry and may eventually have very significant benefits.

Another odd property of senescent cells is that their program, and the effects they have on nearby cells, resemble to some partial degree those of stem cells. That is, the prevention of cell death is a common property, as is the loosening of certain controls that enforce differentiation. This brings us to tumor cells, which frequently enter senescence under stress, like that of chemotherapy. This fate is highly ambivalent. It would have been better for such cells to die outright, of course. Most senescent tumor cells stay in senescence, which is bad enough for their SASP effects in the local environment. But a few tumor cells emerge from senescence, (whether due to further mutations or other sporadic properties is as yet unknown), and they do so with more stem-like character that makes them more proliferative and malignant.

A recent paper offered a new wrinkle on this situation, finding that senescent tumor cells have a novel property- that of eating neighboring cells. As mentioned above, senescent cells have high metabolic demands, as do tumor cells, so finding enough food is always an issue. But in the normal body, only very few cells are empowered to eat other cells- i.e. those of the immune system. To find other cells doing this is highly unusual, interesting, and disturbing. It is one more item in the list of bad things that happen when senescence and cancer combine forces.

A senescent tumor cell (green) phagocytoses and digests a normal cell (red).


  • Shockingly, some people are decent.
  • Tangling with the medical system carries large risks.
  • Is stem cell therapy a thing?
  • Keep cats indoors.