
Saturday, August 13, 2022

Titrations of Consciousness

In genetics, we have mutation. In biochemistry, we have titration. In neuroscience, we have brain damage.

My thesis advisor had a favorite saying: "When in doubt, titrate!" That is to say, if you think you have your hands on a key biochemical component, its amount should clearly influence the reaction you are studying. Adding more might make its role clearer, or bring out other dynamics, or, at the very least, titration might keep you from wasting it, by using just the right amount.

Neuroscience has reached that stage in studies of consciousness. While philosophers wring their hands about the "hardness" of the problem, scientists are realizing that it can be broken down like any other, and studied through its various broken states and disorders, and through the myriad levels and types induced by drugs, damage, and evolution in other organisms. A decade ago, a paper showed that the thalamus, a region of the brain right on top of the brain stem and the conduit of much of its traffic with the cortex, has a graded (i.e. titratable) relationship between severity of damage and severity of effects on consciousness. This led to an influential idea- the mesocircuit hypothesis, which portrays wide-ranging circuitry by which the thalamus activates cortical regions, and is in turn partially inhibited by circuits coming back.


Degree of damage to a central part of the brain, the thalamus, correlates closely with degree of consciousness disability.

A classification of consciousness / cognition / communication deficits, ranging from coma to the normal state. LIS = locked-in state, MCS = minimally conscious state, VS = vegetative state (now called unresponsive wakefulness syndrome), which may be persistent (PVS).

The anatomy is pretty clear, and subsequent work has focused on the dynamics, which naturally are the heart of consciousness. A recent paper, while rather turgid, supports the mesocircuit hypothesis by analyzing the activation dynamics of damaged brains, both in the vegetative state (now called unresponsive wakefulness syndrome, UWS) and in the less severe minimally conscious state (MCS). The authors applied unbiased mathematical processing to find the relevant networks and reverberating communication modes. For example, in healthy brains there are several networks of activity that hum along at rest, such as the default mode network and the visual network. These networks are then replaced or supplemented by other dynamics when activity takes place, like viewing something, doing something, etc. The researchers measured the metastability or "jumpiness" of these networks, and their frequencies (eigenmodes).
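To make the "jumpiness" measure concrete, here is a minimal sketch in Python of one common operationalization- the fluctuation over time of the Kuramoto order parameter computed from regional phase signals. This is a generic illustration, not the paper's actual pipeline, and the toy data are made up.

```python
import numpy as np

def metastability(phases):
    """Standard deviation over time of the Kuramoto order parameter.

    phases: array of shape (n_regions, n_timepoints), the instantaneous
    phase of each brain region's signal (e.g. from a Hilbert transform
    of band-passed fMRI time series).
    """
    # Order parameter R(t): how coherent all regional phases are at time t.
    R = np.abs(np.mean(np.exp(1j * phases), axis=0))
    # Metastability: how much that coherence fluctuates over time.
    # Damaged brains are reported to show lower values (less jumpiness).
    return np.std(R)

# Toy usage: 10 regions, 200 timepoints of random-walk phases.
rng = np.random.default_rng(0)
phases = np.cumsum(rng.normal(0.0, 0.1, size=(10, 200)), axis=1)
print(metastability(phases))
```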

Naturally, there is a clear relationship between dynamics and consciousness. The worse off the patient, the less variable the dynamics, and the fewer distinct frequencies are observed. But the data is hardly crystal clear, so it got published in a minor journal. It is clear that these researchers have some more hunting to do to find better correlates of consciousness. This might come from finer anatomical localization (hard to do with fMRI), or perhaps from more appropriate math that isolates the truly salient aspects of the phenomenon. In studies of other phenomena such as vision, memory, and place-sensing, the analysis of correlates between measurable brain activity and the subjective or mental aspects of that activity has become immensely more sophisticated and sensitive over time, and one can assume that will be true in this field as well.

Severity of injury correlates with metastability (i.e., jumpiness) of central brain networks, and with consciousness. (HC = healthy control)


  • Senator Grassley can't even remember his own votes anymore.
  • How are the Balkans doing?
  • What's in the latest Covid strain?
  • Economic graph of the week. Real income has not really budged in a very long time.

Saturday, May 14, 2022

Tangling With the Network

Molecular biology needs better modeling.

Molecular biologists think in cartoons. It takes a great deal of work to establish the simplest points, like that two identifiable proteins interact with each other, or that one phosphorylates the other, which has some sort of activating effect. So biologists have been satisfied to achieve such critical identifications, and move on to other parts of the network. With 20,000 genes in humans, expressed in hundreds of cell types, regulated states, and disease settings, work at this level has plenty of scope to fill years of research.

But the last few decades have brought larger scale experimentation, such as chips that can determine the levels of all proteins or mRNAs in a tissue, or the sequences of all the mRNAs expressed in a cell. And more importantly, the recognition has grown that any scientific field that claims to understand its topic needs to be able to model it, in comprehensive detail. We are not at that point in molecular biology, at all. Our experiments, even those done at large scale and with the latest technology, are in essence qualitative, not quantitative. They are also crudely interventionistic, maybe knocking out a gene entirely to see what happens in response. For a system as densely networked as the eukaryotic cell, it will take a lot more to understand and model it.

One might imagine that this is a highly detailed model of cellular responses to outside stimuli. But it is not. Some of the connections are much less important than others. Some may take hours to have the indicated effect, while others happen within seconds or less. Some labels hide vast sub-systems with their own dynamics. Important items may still be missing, or assumed into the background. Some connections may be contingent on (or even reversed by) other conditions that are not shown. This kind of cartoon is merely a suggestive gloss and far from a usable computational (or true) model of how a biological regulatory system works.


The field of biological modeling has spawned communities interested in detailed modeling of metabolic networks, up to whole cells. But these remain niche activities, mostly because of a lack of data. Experiments remain steadfastly qualitative, given the difficulty of performing them at all, and the vagaries of the subjects being interrogated. So we end up with cartoons, which lack not only quantitative detail on the relative levels of each molecule, but also the critical dynamics of how each relationship develops in time- whether on a time scale of seconds or milliseconds, as in the phosphorylation cascades that enable our vision, for example, or on a time scale of minutes, hours, or days, the scale of changes in gene expression and longer-term developmental changes in cell fate.

These time and abundance variables are naturally critical to developing dynamic and accurate models of cellular activities. But how to get them? One approach is to work with simple systems- perhaps a bacterial cell rather than a human cell, or a stripped down minimal bacterial cell rather than the E. coli standard, or a modular metabolic sub-network. Many groups have labored for years to nail down all the parameters of such systems, work which remains only partially successful at the organismal scale.
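To illustrate what "nailing down the parameters" means in practice, here is a toy sketch of a two-tier phosphorylation cascade, integrated to steady state. The rate constants and species names are made-up illustrations, not measured values from any of the systems mentioned above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical two-tier kinase cascade: a signal S activates kinase A,
# and phospho-A activates kinase B; phosphatases reverse each step.
k_act_A, k_deact_A = 1.0, 0.5   # 1/s, illustrative values
k_act_B, k_deact_B = 2.0, 0.8   # 1/s, illustrative values

def cascade(t, y, S):
    Ap, Bp = y  # fractions of A and B that are phosphorylated
    dAp = k_act_A * S * (1 - Ap) - k_deact_A * Ap
    dBp = k_act_B * Ap * (1 - Bp) - k_deact_B * Bp
    return [dAp, dBp]

# Integrate 10 seconds of constant signal; the cascade settles within
# seconds, the fast time scale noted above for phosphorylation dynamics.
sol = solve_ivp(cascade, (0, 10), [0.0, 0.0], args=(1.0,))
print(sol.y[:, -1])  # approximate steady-state phospho-fractions
```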

Another approach is to assume that co-expressed genes are yoked together in expression modules, or regulated by the same upstream circuitry. This is one of the earliest forms of analysis for large scale experiments, but it ignores all the complexity of the network being observed, and indeed hardly counts as modeling at all. All the activated genes are lumped together on one side, and all the down-regulated genes on the other side, perhaps filtered by biggest effect. The resulting collections are clustered by some annotation of those genes' functions, thereby helping the user infer what general cell function was being regulated in her experiment / perturbation. This could be regarded perhaps as the first step on a long road from correlation analysis of gene activities to a true modeling analysis that operates with awareness of how individual genes and their products interact throughout a network.
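The statistical core of that annotation-clustering step is usually a simple over-representation test. A minimal sketch of the hypergeometric version, with all counts invented for illustration:

```python
from scipy.stats import hypergeom

# Is an annotation category over-represented among up-regulated genes?
N = 20000   # genes in the genome
K = 300     # genes annotated with the category (e.g. "DNA repair")
n = 500     # up-regulated genes in the experiment
k = 25      # up-regulated genes carrying the annotation

# P(X >= k) when drawing n genes without replacement from the genome.
p = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p-value: {p:.2e}")
```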

Another approach is to resort to a lot of fudge factors, while attempting to make a detailed model of the cell and its components. Assume a stable network, and fill in all the values that could get you there, given the initial cartoon version of molecular interactions. Simple models thus become heuristic tools to hunt for missing factors that affect the system, which are then progressively filled in, hopefully by doing new experiments. Such factors could be new components, or could be unsuspected dynamics or unknown parameters of those already known. This is, incidentally, of intense interest to drug makers, whose drugs are intended to tweak just the right part of the system in order to send it to a new state- say, from cancerous back to normal, well-behaved quiescence.

A recent paper offered a version of this approach, modular response analysis (MRA). The authors use perturbation data from other labs, such as the inhibition of 1000 different genes in separately assayed cells, combined with a tentative model of the components of the network, and then deploy mathematical techniques to infer / model the dynamics of how that cellular system works in the normal case. What is observed in either case- the perturbed version, or the wild-type version- is typically a system (cell) at steady state, especially if the perturbation is something like knocking out a gene or stably expressing an inhibitor of its mRNA message. Thus, figuring out the (hidden) dynamic in between- how one stable state gets to another one after a discrete change in one or more components- is the object of this quest. Molecular biologists and geneticists have been doing this kind of thing off-the-cuff forever (with mutations, for instance, or drugs). But now we have technologies (like siRNA silencing) to do this at large scale, altering many components at will and reading off the results.
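At small scale, the core of MRA is a matrix inversion: from the global steady-state responses of each module to perturbations of every other module, one recovers the direct, network-local interaction strengths. Here is a minimal sketch of that classic calculation (after Kholodenko and colleagues), with an invented toy response matrix; the paper's large-scale extension layers modularization and other machinery on top of this.

```python
import numpy as np

def local_responses(R):
    """Classic modular response analysis (MRA) inversion.

    R[i, j]: global steady-state response of module i to a perturbation
    of module j (e.g. log fold-change after an siRNA knockdown).
    Returns r, where off-diagonal r[i, j] is the direct, local effect of
    module j on module i, and the diagonal is normalized to -1.
    """
    Rinv = np.linalg.inv(R)
    return -Rinv / np.diag(Rinv)[:, None]

# Toy 3-module example with a made-up global response matrix.
R = np.array([[-1.0,  0.2,  0.0],
              [ 0.5, -1.0,  0.1],
              [ 0.0,  0.4, -1.0]])
print(local_responses(R))
```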

This paper extends MRA to this large scale, and finds that, with a bit of extra data and some simplifications, it is competitive with other methods (such as mutual information) in creating dynamic models of cellular activities at the scale of a thousand components, which is apparently unprecedented. At the heart of MRA are, as its name implies, modules, which break the problem down into manageable portions and allow variable amounts of detail / resolution. For their interaction model, the authors use a database of protein interactions, which is a reasonably comprehensive, though simplistic, place to start.

What they find is that they can assemble an effective system that handles both real and simulated data, creating quantitative networks from their inputs of gene expression changes upon inhibition of large numbers of individual components, plus a basic database of protein relationships. And they can do so at reasonable scale, though that is dependent on the ability to modularize the interaction network, which is dangerous, as it may ignore important interactions. As a state of the art molecular biology inference system, it is hardly at the point of whole cell modeling, but is definitely a few steps ahead of the cartoons we typically work with.

The authors offer this as one result of their labors. Grey nodes are proteins, colored lines (edges) are activating or inhibiting interactions. Compared to the drawing above, it is decidedly more quantitative, with strengths of interactions shown. But timing remains a mystery, as do many other details, such as the mechanisms of the interactions.


  • Fiscal contraction + interest rate increase + trade deficit = recession.
  • The lies come back to roost.
  • Status of carbon removal.
  • A few notes on stuttering.
  • A pious person, on shades of abortion.
  • Discussion on the rise of China.

Saturday, April 2, 2022

E. O. Wilson, Atheist

Notes on the controversies of E. O. Wilson.

E. O. Wilson was one of our leading biologists and intellectuals, combining a scholarly career of love for the natural world (particularly ants) with a cultural voice of concern about what we as a species are doing to it. He was also a dedicated atheist, perched in his ivory tower at Harvard and tilting at various professional and cultural windmills. I feature below a long quote from one of his several magnum opuses, Sociobiology (1975). This was putatively a textbook, by which he wanted to establish a new field within biology- the study of social structures and evolution. This was a time when molecular biology was ascendant, in his department and in biology broadly, and he wanted to push back and assert that truly important and relevant science was waiting to be done at higher levels of biology, indeed the highest level- that of whole societies. It is a vast tome, in which he attempted to synthesize everything known in the field. But it met with significant resistance across the board, even though most of its propositions are now taken as a matter of course ... that our social instincts and structures are heavily biological, and have evolved just as our physical features have.

Saturday, February 19, 2022

DNA Mambo in the Nucleus

Some organizational principles by which nuclear DNA organizes genes for local regulation.

There has been a long and productive line of research on the mechanisms of transcription from DNA to RNA- the process that reads the genome and turns its code into a running stream of instructions going out to the cell, through development and all through life. This search has generally gone from the core of the process outwards to its regulatory apparatus. The opening of DNA by simple RNA polymerases was one of the first topics of study, followed by how the polymerase is positioned at the start site by "promoter" DNA sequences, with ever more ornate and distant surrounding machinery coming under scrutiny over time, as researchers climbed the evolutionary trajectory of life, from viruses and bacteria to mammals.

But how this process fits into the larger structure of the nucleus, and how it is globally organized in eukaryotes, has long been an intriguing question, and tools are finally available to bring this level of organization into focus. For example, genes are known to be activated by direct contact with "enhancer" elements located thousands, even many tens of thousands, of basepairs away on the DNA- so why can't those enhancers activate other genes elsewhere in the nucleus, rather than the genes they are nearest to on the one-dimensional DNA? The nucleus is a small place with a lot of DNA. Roughly 1/100 of its physical space is taken up by DNA, and it is highly likely that such enhancers could be closer in 3-D space to other genes than to the ones they are supposed to regulate, if everything were arranged randomly. Similarly, how do such enhancer elements find their proper targets, amid the welter of other DNA and proteins? A hundred thousand base pairs is long enough to traverse the entire nucleus.

So there has to be some organization, and new techniques have come along to illuminate it. These are crosslinking methods where the cells are treated with a chemical to crosslink / freeze a fraction of protein and DNA interactions in place, then enzymes are introduced to chop everything up, to various degrees of completeness. What is left are little clumps of DNA and protein that hopefully include distant cross-links, between enhancers and promoters, between key organizational sites and the genes they interact with, etc. Then comes the sequencing magic. These clumped stray DNAs are diluted and ligated together (only to local ends), amplified and sequenced, generating a slew of DNA sequences. Those hybrid sequences can be interpreted, (given the known sequence of the reference genome), to say whether some genomic location X got tangled up with some other location Y, reflecting their 3-D interaction in the cell when it was originally treated.
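The computational end of that interpretation starts with something simple: binning the mapped read-pairs into a contact matrix, the raw object behind Micro-C and Hi-C heatmaps. A minimal sketch, with invented coordinates and a single chromosome assumed:

```python
import numpy as np

def contact_matrix(pairs, chrom_len, bin_size):
    """Bin mapped contact read-pairs into a symmetric contact matrix.

    pairs: (x, y) genomic coordinates of the two crosslinked ends.
    """
    n = chrom_len // bin_size + 1
    M = np.zeros((n, n))
    for x, y in pairs:
        i, j = x // bin_size, y // bin_size
        M[i, j] += 1
        if i != j:
            M[j, i] += 1  # a contact between X and Y is symmetric
    return M

# Toy usage: three read-pairs on a 100 kb chromosome, 1 kb bins.
pairs = [(1200, 1800), (1500, 96000), (95000, 97000)]
M = contact_matrix(pairs, chrom_len=100_000, bin_size=1_000)
print(M.sum())  # total counts (off-diagonal contacts counted twice)
```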

A recent paper pushed this method forward a bit, with finer-grained enzymatic digestion and deeper sequencing, to come up with the most detailed look ever at the Drosophila genome, and at some particular genes that have long held interest as key regulators of development. This refined detail, plus some experiments mutating some of the key DNA sites involved, allowed the authors to come up with a new class of organizing elements and a theory of how the nuclear tangle works.

Long range contacts in the Antennapedia locus of flies. Micro-C refers to the crosslinking and sequencing method that maps long-range DNA contacts mediated by proteins. Pyramids in the top diagram map binary location-to-location contacts. Local contacts generally predominate over distant ones, but a few distant connections are visible, such as between the ends of the ftz gene. TAD stands for topologically associating domain, mapping out the connections seen above between pink sites. This line also lists the genes residing in each zone (Deformed, micro RNA 10, Sex combs reduced, fushi tarazu, and Antennapedia promoters P1 and P2). The contacts track shows where the authors map specific sites where organizing factors (including Trl (trithorax-like) and CP190 (centrosomal protein of 190 kDa)) bind. The overall idea is that there are two kinds of contacts, boundaries and tethers. Boundaries insulate one region from the next, preventing regulatory spill-over to the wrong gene. Tethers serve as pro-regulatory staging points, helping enhancers contact their proper promoter targets, even though the tether complex does not itself promote RNA transcription.

Insulator elements have been recognized for some time. These are locations that seem to block regulatory interactions across them, thus defining, between two such sites, a topologically associating domain (TAD). How they work is not entirely clear, but they may stitch themselves to the nuclear membrane. They are thought to interact with a DNA pump called cohesin to extrude a loop of DNA between two insulator sites, thereby keeping that DNA clear of other interactions, at least temporarily, and locally clumped. The authors claim to find a new element called a distal tethering element (DTE), which works like an enhancer in promoting interaction between distant activating regulatory sites and genes, but doesn't actually activate. DTEs just structure the region so that when a signal comes, the gene is ready to be activated efficiently.
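TAD boundaries like these are typically called from the contact matrix with an insulation score: slide a square window along the diagonal and look for positions that few contacts cross. A generic sketch of that standard approach (not the authors' specific pipeline; the window size is an arbitrary assumption), building on the contact_matrix sketch above:

```python
import numpy as np

def insulation_score(M, w=5):
    """Mean contact frequency crossing each position of a contact matrix.

    M: binned contact matrix; w: window size in bins (an assumption).
    Insulator sites / TAD boundaries show up as local minima, since
    contacts rarely cross them.
    """
    n = M.shape[0]
    score = np.full(n, np.nan)
    for i in range(w, n - w):
        # Contacts between the w bins upstream and w bins downstream of i.
        score[i] = M[i - w:i, i + 1:i + 1 + w].mean()
    return score
```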

One theory of how insulator elements work. The insulator sites ("CTCF motif") are marked on the DNA with dark blue arrowheads. They control the boundaries of action by the protein complex cohesin, which forms dimeric doughnuts around DNA and can pump DNA. Cohesins are central to the mechanisms of meiosis and mitosis. The net effect is to produce a segregated region of DNA, as portrayed at the bottom, which should have a much higher rate of local interactions (as seen in the Micro-C method) than distant interactions.

At the largest scale, these authors claim that there are, in the whole fly genome and at this particular (early) point in development, 2034 insulator locations (TADs) and 620 tethering elements (TEs or DTEs). They show that DTEs in the locus they study closely play an active role in turning the nearby genes on at early times in development, and in directing activation from enhancers near the DTE, rather than ones farther away. What binds to the DTEs? So-called "pioneer" regulatory factors (such as Zelda) that have the power to make way through nucleosomes and other chromatin proteins to bind their target DNA. The authors say that these tether sites, once set up, are then stable on a permanent basis, through all developmental stages, even though the genes they assist may only be active transiently.

The "poised" nature of some genes had been observed long ago, so it is not entirely surprising to see this mechanism get fleshed out a little, as a structural connection that is made between genes and their regulatory sites in advance of the actual activator proteins arriving at the associated enhancers and turning them on.

 

Final model: the normal case around the Antennapedia locus is shown at top, with insulator sites shown in pink, and tethering sites shown in teal. If one of the tethering elements is removed (middle), then the enhancer EE has less effect on the gene Scr, whose expression is reduced. If an insulator is removed (bottom), the re-organized domain allows the ftz gene's regulators, including the enhancer AE1, to affect Scr expression, altering its timing and location of expression.


  • Don't hold your breath for capitalism to address climate change.
  • How the Russian skating machine works.
  • Russia, solved.
  • Solar tax for all! Or at least a separation of grid costs and electricity generation costs.

Sunday, January 16, 2022

Choices, Choices

Hippocampal maps happen in many modes and dimensions. How do they relate to conscious navigation?

How do our brains work? A question that was once mystical is now tantalizingly concrete. Neurobiology is, thanks to the sacrifices of countless rats, mice, and undergraduate research subjects, slowly bringing to light mechanisms by which thoughts flit about the brain. The parallel processing of vision, progressively through the layers of the visual cortex, was one milestone. Another has been work in the hippocampus, which is essential for memory formation as well as mapping and navigation. Several kinds of cells have been found there (or in associated brain areas) which fire when the animal is in a certain place, or crosses a subjective navigational grid boundary, or points its head in a certain direction. 

A recent paper reviewed recent findings about how such navigation signals are bound together and interact with the prefrontal cortex during decision making. One finding is that locations are encoded in a peculiar way, within the brain wave known as the theta oscillation. These waves run at about 4 to 12 cycles per second, and as an animal moves or thinks, place cells corresponding to locations behind it fire at the trough of the cycle, while those for locations progressively closer, and then in front of, the animal fire correspondingly higher on the wave. So the conscious path that the animal is contemplating is replayed on a sort of continuous loop, in highly time-compressed fashion. And this happens not only while the animal is on the path, but at other times as well, if it is dreaming about its day, or is resting and thinking about its future options.
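A toy sketch of that phase code: positions behind the animal map to early (trough) phases of an 8 Hz theta cycle, positions ahead to later phases. The linear mapping, ranges, and frequency are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

theta_freq = 8.0                           # Hz, within the 4-12 Hz band
rel_positions = np.linspace(-20, 20, 9)    # cm relative to the animal

# Map position in [-20, 20] cm linearly onto phase in [0, 2*pi):
# behind -> near the trough (phase 0), ahead -> later in the cycle.
phases = (rel_positions + 20) / 40 * 2 * np.pi
spike_times = phases / (2 * np.pi * theta_freq)  # seconds into the cycle

# A ~40 cm stretch of path replays within one ~125 ms theta cycle:
# the time compression described above.
for x, t in zip(rel_positions, spike_times):
    print(f"place cell at {x:+5.1f} cm fires {1000 * t:6.1f} ms into the cycle")
```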

"For hippocampal place cells to code for both past and future trajectories while the animal navigates through an environment, the hippocampus needs to integrate multiple sensory inputs and self-generated cues by the animal’s movement for both retrospective and prospective coding."


These researchers describe a new piece of the story: alternate theta cycles can encode different paths. That is, as the wave repeats, the first cycle may encode one future path out of a T-maze, while the next may encode another path out of the same maze, and then the alternation repeats- A, B, A, B, etc. It is evident that the animal is trying to decide what to do, and its hippocampus (with associated regions) is helpfully providing mappings of the options. Not only that, but the connecting brain areas heading towards the prefrontal cortex (the nucleus reuniens, entorhinal cortex, and parahippocampal gyrus) separate these path representations into different cell streams, (still on the theta oscillation), and progressively filter one out. Ultimately, the prefrontal cortex represents only one path ... the one that the rat actually chooses to go down. The regions are connected in both directions, so there is clearly top-down as well as bottom-up processing going on. The conclusion is that, in general, the hippocampus and allied areas provide relatively unbiased mapping services, while the cortex does the decision making about where to go.

    "This alternation between left and right begins as early as 25 cm prior to the choice point and will continue until the animal makes its turn"


A rat considers its options. Theta waves are portrayed, as they appear in different anatomical locations in the brain. Hippocampal place cells, on the bottom right, give a mapping of the relevant path repeatedly encoded across single theta wave cycles. One path is encoded in one cycle, the other in the next. Further anatomical locations (heading left) separate the maps into different channels / cells, from which the prefrontal cortex finally selects only the one it intends to actually use.

The hippocampus is not just for visual navigation, however. It is now known to map many other senses in spatial terms, like sounds and smells. It also maps the flow of time in cognitive space, such as in memories, quite apart from spatial mapping. It seems to be a general facility for creating cognitive maps of the world, given whatever the animal has experienced and is interested in, at any scale, and in many modalities. The theta wave embedding gives a structure that is highly compressed, and repeated, so that it is available to higher processing levels for review, re-enactment, dreaming, and modification for future planning.

Thus using the trusty maze test on rats and mice, neuroscientists are slowly, and very painfully, getting to the point of deciphering how certain kinds of thoughts happen in the brain- where they are assembled, how their components combine, and how they relate to behavior. How they divide between conscious and unconscious processes naturally awaits more insight into what this dividing line really consists of.


  • Biochar!
  • More about the coup attempt.
  • Yes, there was electoral fraud.
  • The more you know about fossil fuels, the worse it gets.
  • Graph of the week. Our local sanitation district finds over a thousand omicron genomes per milliliter of intake, which seems astonishing.




Saturday, December 4, 2021

Supergroups in Search of Their Roots

The early stages of eukaryotic evolution are proving hard to reconstruct.

There is normal evolution, and then there are great evolutionary transitions. Not to say that the latter don't obey the principles of normal evolution, but they go by so fast, and render so many transitional forms obsolete along the way, that there is little record left of what happened. Among those great transitions are the origin of life itself, the origin of humans, and the origin of eukaryotes. We are slowly piecing together human evolution, from the exceedingly rare fossils of intermediate forms and off-shoot branches. But looking at the current world, we are the lone hominin, having displaced or killed off all competitors and predecessors to stand alone atop the lineage of primates, and over the biosphere generally. Human evolution didn't violate any natural laws, but it seems to have operated under uniquely directional selection, especially for intelligence and social sophistication, which led to a sort of arms race of rapid evolution that laid the groundwork for an exponential rate of invention of technologies and collective social forms over the last million years.

Similarly, it is clear that however life originated, it was a very humble affair, with each innovation quickly displacing its progenitors, just as the early cell phones came out in quick succession, until a technological plateau was reached from which further development was / is less obvious. While the origin and success of eukaryotes did not erase the prokaryotic kingdoms from which they sprang, it does seem to have erased the early stages of their own development, to the point that those stages are very hard to reconstruct, especially given the revolutionary and multifarious nature of their innovations.

Eukaryotes differ from prokaryotes in possessing: nuclei and a nuclear membrane with specialized pores; mitochondria descended from a separate bacterial ancestor (and photosynthetic plastids descended from yet other bacterial ancestors in some cases); sex and meiosis; greater size by several orders of magnitude; phagocytosis by amoeboid cells; internal membrane organelles like golgi, peroxisomes, lysosomes, endocytic and exocytic vesicles; cyclins that run the cell cycle; microtubules that participate in the cell cycle, cytoskeleton, and cilia; cilia, as distinct from flagella; an active actin-based cytoskeleton, with novel motor proteins; a greatly elaborated transcriptional apparatus with modular enhancers and novel classes of transcription regulators; histones; mRNA splicing and introns; nucleolus and small nucleolar RNAs; telomeres on linear chromosomes; a significant increment in the size of both ribosomal subunits. Indeed, the closer one looks at the molecular landscape, the more differences accumulate. This was quite simply a quantum leap in cellular organization, which happened sometime between 1.8 and 3 billion years ago. Indeed, eukaryotes are not just the McMansions of the microbial world, but the Downton Abbeys- with dutiful servants and complex and luxurious internal economies that prokaryotic cells couldn't conceive of.

Major lineages of eukaryotes are traced back to their origins in a molecular-based phylogeny. Animals (and fungi!) are in the Opisthokonta, plants in the Chloroplastida. So many groups connect right to the "root" of this tree that there is little way to figure out which came first. Also, the dashed lines indicate uncertainty about those orderings / rootings as well, which leaves a great deal of early eukaryotic evolution obscure. Some abbreviations: CRuMs = collodictyonids (syn. diphylleids) + rigifilida + mantamonas; TSAR = telonemids, stramenopiles, alveolates, and rhizaria; also shown are the excavates, hemimastigophora, and haptista.


A recent paper recounts the current phylogenetic state of affairs, and a variety of other papers over the last decade delve into the many questions surrounding eukaryotic origins. While molecular phylogenies have improved tremendously with the advent of faster, whole-genome sequencing and the continued collection of obscure single-celled eukaryotes (aka protists), the latest phylogeny, as shown above, remains inconclusive. The deepest root is uncertain both with regard to its bacterial progenitor, and with regard to which current eukaryotes bear the closest relation to it. There are occasional fossil kelps, algae, and other biochemical traces going back 2.0 to 2.7 billion years (though some do not put the origin earlier than 1.8 billion years ago), but these have not been able to shed any light on the order of events either.

Nevertheless, the field can agree on a few ideas. One is that the assimilation of mitochondria (whether willing or unwilling) is perhaps the dominant event in the sequence. That doesn't mean it was necessarily the first event, but means that it created a variety of conditions that led to a cascade of other consequences and features. The energy mitochondria provided enabled large cell sizes and the accumulation of a whole new household full of junk, like lipids in several new membrane compartments. The genome that they contributed brought in thousands of new genes, including introns. 

Secondly, the loss of cell walls and the adoption of amoeboid carnivory was likely one of the first events in the evolutionary sequence. Shedding the obligatory cell wall that all bacteria have necessitates a cytoskeleton of some kind, and it is also conducive to the engulfment of the proto-mitochondrion. For while complicated co-symbiotic metabolic arguments have been devised to explain why these two cells may have engaged in a long-term mutual relationship long before their ultimate consummation, the most convenient hypothesis for assimilation remains the simplest- that one engulfed the other, in a meal that lasted well over a billion years.

Thirdly, the question of what the progenitor cell was has been refined somewhat. One of the most intriguing findings of the last half-century of biology was the discovery of the archaebacteria (also called archaea)- a whole new kingdom of microbes characterized by their tendency to occupy extreme habitats, their clear separation from bacteria by chemical and genetic criteria, and also their close relationship to eukaryotes, especially to what is presumed to be the original host genome. Many proposals have been made, (including that archaea are the original cell, preceding other bacteria), but the best one currently is that archaea split from the rest of the bacteria rather late, after which eukaryotes split off from archaea, thus making the latter two sister groups. This explains the many common traits they share, while allowing significant divergence, plus the incorporation of many bacterial features into eukaryotes, either through the original lineage, or by later transfer from the proto-mitochondrion. So here at last is one lineage that survived out of the gradual development of eukaryotes- the archaea, though one wouldn't guess it from looking at them. It took analysis at the molecular level to even know that archaea existed, let alone that they are the last extant eukaryotic sister group.

A comically overstuffed figure from an argument for the late development of archaebacteria out of pre-existing bacteria (prokaryotes), with subsequent split and diversification of eukaryotes out of a proto-archaeal lineage. Many key molecular and physiological characters are mentioned.

Lastly, surveying the various outlying protist lineages for clues about which might hearken back to primitive eukaryotic forms, one research group suggests that the collodictyonids might fit the bill. Being an ancient lineage means that it is lonesome, without a large family of evolutionary relatives to show diversification and change. It also means that in molecular terms, it is highly distinct, branching deeply from all other groups. Whether all that means it resembles an ancient / early form of the eukaryotic cell, or went its own way on a unique evolutionary trajectory, is difficult to say. For each trait (including sequence traits), a phylogenetic analysis is done to figure out whether it is differential- shared with some other lineages but not all- and whether those without the trait lost it at some later point, or whether it was gained by a sub-group. After analyzing enough such traits, one can make a statement about the overall picture, and thus the "ancient-ness", of an organism.
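The standard tool for that gain-versus-loss question is small parsimony over a tree. Here is a minimal sketch of the Fitch algorithm for one binary trait (0 = absent, 1 = present); the toy tree and tip states are invented for illustration.

```python
def fitch(node, states):
    """Fitch small parsimony for one binary trait on a rooted binary tree.

    node: a leaf name (str) or a (left, right) tuple of subtrees.
    Returns (possible ancestral states, minimum number of gain/loss events).
    """
    if isinstance(node, str):          # leaf: state is observed
        return {states[node]}, 0
    left, right = node
    s1, c1 = fitch(left, states)
    s2, c2 = fitch(right, states)
    common = s1 & s2
    if common:                         # children agree: no event needed
        return common, c1 + c2
    return s1 | s2, c1 + c2 + 1        # disagreement: one gain or loss

# Toy tree of five lineages with a scattered trait distribution.
tree = (("A", "B"), (("C", "D"), "E"))
states = {"A": 1, "B": 1, "C": 0, "D": 1, "E": 0}
print(fitch(tree, states))  # ({0, 1}, 2): two events minimum
```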

Is anything special about Collodictyon? Not really. It is predatory, and has four flagella and a feeding groove, which functions as a sort of mouth. It can make pseudopods, has normal microtubule organizing centers for its flagella, and generally all the accoutrements of a eukaryotic cell. It lacks nothing, and thus may be an early-branching eukaryote, but is not in any way a transitional form.

An unassuming protist (Collodictyon) as a possible representative of early eukaryotes. Its cilia are numbered.


At this point, we are left still peering darkly into the past, through obscure living protists and their molecular fossils, trying to figure out what happened when they split from the bacteria and archaea. A tremendous amount happened, but little record survives of the path along the way. That tends to be characteristic of the most momentous evolutionary events, which cause internal and external cataclysms, (including the opening of whole new lifestyles to exploit), that necessitate a rapid dynamic of further adaptation before their descendants achieve a stable and successful state sufficient to ride out the ensuing billion or more years ... before we come on the scene with the ability and interest to contemplate what went before.


  • Red regions have three times the death rates from Covid as blue regions. Will that change electoral math?
  • Annals of secession, cont.
  • Sad spectacle at the court.
  • Analysis of how the energy transition might go. Again, a carbon tax would help.

Saturday, August 14, 2021

All Facts are Theories, But Not All Theories are Facts

Are theories and facts different in kind, or are they related and transform into each other?

During the interminable debates about "Intelligent Design" and evolution, there was much hand-wringing about fact vs theory. Evolution was, to some, "just" a theory, to others a well-attested theory, and to others, a fact, whether in the observation of life's change through time (vs the straight creationists), or in the causal mechanism of natural selection (vs the so-called intelligent design proponents). Are theories just speculations, or are they, once accepted by their relevant community, the rock-like edifice of science? And are facts even plain as such, or are they infected by theory? Our late descent into unhinged right-wingery poses related, though far more complex, questions about the nature of facts and who or what can warrant them. But here, I will stick to the classic question as posed in philosophy and science- what is the distinction and/or relation between facts and theories? This follows, but disagrees with, a recent discussion in Free Inquiry.

The official scientific organs (such as the NCSE) have generally taken the position that theories are different from facts, making a pedagogically bright-line distinction where things like plate tectonic theory and evolution are theories, while rock compositions and biochemistry are facts. In this way, science is made up, at a high level, of theories, which constantly evolve and broaden in their scope, while the facts they are built on arrive on a conveyor belt of normal scientific progress, via lab experiments, field work, etc. Facts help to support or refute theories, which are such abstract, dynamic, and wide-ranging bags of concepts that they cannot rightly be regarded as facts.

All very pat, but what are facts? It turns out that nothing we observe and call a fact escapes some amount of interpretation, or the need to be based on theories of how the world works. We grow up with certain axiomatic and built-in conditions, like gravity, vision, and physical cause and effect. Thus we think that anything we "observe" directly is a fact. But all such observations are built on a history of learning about how things work, which is in essence starting with a bunch of theories, some instinctively inborn, which are gradually satisfied by evidence as we grow up ... to the extent that we take many things for granted as fact, like being able to count on gravity as we are walking, that the sun comes up every day, etc. Facts are not automatic or self-attested, but rather are themselves essentially theories, however simple, that have been put to the test and found reliable.

And therein lies a clue to how we, and especially scientists, evaluate information and use the categories of fact and theory in a practical and dynamic way. Lawyers often talk of coming up with a theory of the case, which is to say, a story that is going to convince a jury, which has the job of finding the facts of the case. When the jury finds the theory convincing, and votes for the lawyer's side, the facts are found, insofar as the law is concerned. Their determination may come far short of philosophic rigor, but the movement is typical- the movement from theory to fact.

On the other hand, what is a theory? I think it can be described as a proposed fact. No one would propose a theory if they didn't think it was true and explanatory of reality. Whether broad or narrow, it is a set of interpretations that seek to make sense of the world in a way that we limited humans can categorize, into our store of knowledge. For instance, Freudian theories of repression, Oedipal complexes, castration fears, etc. would have been, if borne out, facts about our mental lives. Being rather vague, they may have needed a great deal of refinement before getting there, but all the same, they were proposed facts regarding what we feel and do, and the psychic mechanisms that lead to those feelings. 

In science, it is the experiment and its communication that is the key event in the alchemy of transforming theories into facts. Science is unusual in its explicit and purposeful interaction with theories that are unproven. Tectonic theory was once a mere theory, and a crackpot one at that. But as observations came in, which were proposed on the basis of that theory, or retroactively appreciated as support for it, such as the lengthy hunt for mid-ocean ridges where tectonic plates separate, and other faults where they converge, that theory gained "fact-ness". Now it is simply a fact, and the science of geology has gone on to other frontiers of theory, working to transform them into fact, or back off and try some others.

The mid-Atlantic ridge, straining to be understood by observers equipped with the theory of plate tectonics. Also, a video of the longer term.

Another example is the humble molecular biology experiment, such as cloning a gene responsible for some disease. The theory can be so simple as to be hardly enunciated- that disease X is in part genetic, and the responsible mutation must occur in some gene, and thus if we find it, we can establish a new fact about that disease as well as about that gene. Then the hunt goes on, the family lineages are traced, the genetic mapping happens, and the sequencing is done, and the gene is found. What was once a theory, if an unsurprising one, has now been transformed into a fact, one perhaps with practical, medical applications.

But the magic of experiments is usually only discernible to the few people who are sufficiently knowledgeable or interested to appreciate the transformation that just happened. The boundary between theory and fact depends on the expertise of the witnesses, and can be sociologically hazy. Does homeopathy cure disease? Well, homeopathic practitioners regard that as fact, and have gone on to an elaborate practice and pharmacopeia of dilute solutions to effect various cures. Others disagree and regard the whole thing as not only a theory, but a stunningly wrong-headed one at that- as far as can be imagined from having gained fact-hood. Real science revolves around experiments done to what is essentially a standard of philosophical proof. Techniques are reported and consistently applied, controls are done to isolate variables of interest, materials are described and made publicly available, and the logic of the demonstration is clarified so that readers knowledgeable in the arts of the field can be confident that the conclusions follow from the premises. And the practitioners themselves are culturally vetted through lengthy apprenticeships of training and critique.

The practice of peer review is a natural part of this series of events, putting the experiment through a critique by the (hopefully) most knowledgeable practitioners in the field, who can stand in for the intended audience for whom the experiment is supposed to perform this alchemical transformation of theory. The scientific literature is full of the most varied and imaginative efforts to "factify" hypotheses, hunches, and theories. Very few of these will ever be appreciated by the lay public, but they lay the ever-advancing frontier of facts from which new hypotheses are made, new theories tested, and occasionally, some of their resulting facts are discovered to be useful, such as the advent of gene therapy via the CRISPR/Cas9 gene editing system, liposomes, and associated technologies.

Another aspect of the public nature of science and peer critique is that if a knowledgeable observer disagrees with the theory-fact transition purported by some experiment, they are duty-bound and encouraged to replicate those experiments themselves, or do other experiments to demonstrate their countervailing ideas. On a cheaper level, they are welcome and encouraged to ask uncomfortable questions during seminars and write tart letters to the editors of journals, since pointing out the errors of others is one of the most enjoyable activities humans pursue, and doubles as a core of the integrity that characterizes the culture of science. In this way, facts sometimes reverse course and travel back into the realm of theory, to sweat it out in the hands of some disgruntled grad student and her overbearing supervisor, destined to never again see the light of day.

Experiments crystallize most clearly the transition from theory to fact. They create, through careful construction, a situation that banishes incidental distractions, focuses attention on a particular phenomenon, and establishes a logic of causation that forms (hopefully) convincing evidence for a theory, transforming it into fact, for knowledgeable observers. They create controlled and monitored conditions where knowledgeable people can "see" the truth of a theory being put to a decisive test. Just as we can now see the truth of the heliocentric theory directly with the use of spaceships sent out across the solar system, the observation of a fact is a matter of the prepared mind meeting with a set of observations, either tailored specifically in the form of an experiment to test a theory, or else taken freely from nature to illuminate a theory's interpretation of reality. Nothing is intrinsically obvious; everything needs an educated observer to discern its truth. Nothing is completely theory-free. Nevertheless, facts can be established.


  • Lies are power.
  • On social contagion.
  • Code red.
  • The electricity interconnect of the Eastern US slowly grapples with reality.
  • How many has Covid killed?
  • In Afghanistan, the US has spent decades building a political and military paper tiger.

Saturday, July 24, 2021

American Occupations and Preoccupations

Douglass North on the role of institutions in our society, part 2. "Understanding the process of economic change". Also, "Violence and Social Orders". American occupations of Germany, Japan, and Afghanistan and Iraq are case studies of institutions at work. 

In part 1, I discussed the role of ideology and thought patterns in the context of institutional economics, which is the topic of North's book. This post will look at the implications for developmental economics. In this modern age, especially with the internet, information has never been more free. All countries have access to advanced technological information, as well as to the vast corpus of economics literature on how to harness it for economic development and the good of their societies. Yet everywhere we look, developing economies are in chains. What is the problem? Another way to put it is that we have always had competition among relatively free and intelligent people, but we have not always had civilization, and we have had the modern civilization we know today, characterized by democracy and relatively free economic diversity, for only a couple of centuries, in a minority of countries. This is not the normal state of affairs, despite being a very good state of affairs.

The problem is clearly not that of knowledge, per se, but of its diffusion (human capital), and far more critically, the social institutions that put it to work. The social sciences, including economics, are evidently still in their infancy when it comes to understanding the deep structure of societies and how to make them work better. North poses the basic problem of the transition between primitive ("natural") economies, which are personal and small-scale, to advanced economies that grew first in the West after the Renaissance, and are characterized by impersonal, rule-based exchange, with a flourishing of independent organizations. Humans naturally operate on the first level, and it requires the production of a "new man" to suit him and her to the impersonal system of modern political economies. 

This model of human takes refuge in the state as the guarantor of property, contracts, money, security, law, political fairness, and many other institutions foundational to the security and prosperity of life as we know it. This model of human is comfortable interacting with complete strangers for all sorts of transactions, from mundane products using the price system, to complex and personal products like loans and health care using other institutions, all regulated by norms of behavior as well as by the state, where needed. This model of human develops intense specialization, after a long education, in very narrow productive skills, in order to live in a society of astonishing diversity of work. There is an organized and rule-based competition to develop such skills in the most detailed and extensive manner. This model of human relies on other social institutions, such as the legal system, consumer review services, and standards of practice in each field, to ensure that the vast asymmetry of information held by the specialized sellers of the goods and services she needs is not used against her, in fraud and other breaches of implicit faith.

All this is rather unlike the original model, who took refuge in his or her clan, relying on the social and physical power of that group to access economic power. That is, one has to know someone to use land or get a job, to deal with other groups, to make successful trades, and for basic security. North characterizes this society as "limited access", since it is run by and for coalitions of the powerful, like the lords and nobility of medieval Europe or the warlords of Afghanistan today. For such non-modern states, the overwhelming problem is not that of economic efficiency, but of avoiding disintegration and civil war. They are made up of elite coalitions that limit violence by allocating economic rewards according to political / military power. If done accurately on that basis, each lord gets a stable share, and has little incentive to start a civil war, since his (or her) power is already reflected in his or her economic share, and a war would necessarily reduce the whole economic pie, and additionally risks reducing the lord to nothing at all. This is a highly personalized, and dynamic system, where the central state's job is mostly to make sure that each of the coalition members is getting their proper share, with changes reflecting power shifts through time.

Norman castle locations in Britain. The powers distributed through the country were a coalition that required constant maintenance and care from the center to keep privileges and benefits balanced and shared out according to the power of each local lord.

For example, the Norman invasion of Britain installed a new set of landlords, who cared nothing for the English peasants, but carried on an elite society full of jealousies and warfare amongst themselves to grab more of the wealth of the country. Most of the time, however, there was a stable balance of power, thus of land allotments, and thus of economic shares, making for a reasonably peaceful realm. All power flowed through the state, (the land allotments were all ultimately granted by the king, and in the early days were routinely taken away again if the king was displeased by the lord's loyalty or status), which is to say through this coalition of the nobles, and they had little thought for economic efficiency, innovation, legal niceties, or perpetual non-political institutions to support trade, scholarship, and innovation. (With the exception of the church, which was an intimate partner of the state.)

Notice that in the US and other modern political systems, the political system is almost slavishly devoted to "the economy", whereas in non-modern societies, the economy is a slave to the political system, which cavalierly assigns shares to the powerful and nothing to anyone else, infeudating them to the lords of the coalition. The economy is assumed to be static in its productivity and role, thus a sheer source of plunder and social power, rather than a subject of nurture and growth. And the state is composed of the elite whose political power translates immediately into shares of a static economic pie. No notion of democracy here!

This all comes to mind when considering the rather disparate fates of the US military occupations that have occurred over the last century, where we have come directly up against societies that we briefly controlled and tried to steer in economically as well as socially positive directions. The occupations of Germany, Japan, Afghanistan, and Iraq came to dramatically different ends, principally due to the differing levels of ingrained beliefs and institutional development of each culture (one could add the quasi-occupation of Vietnam here as well). While Germany and Japan were each devastated by World War 2, and took decades to recover, their people had long been educated into an advanced institutional framework of economic and civic activity. Some of the devastation was indeed political and social, since the Nazis (as well as the imperial Japanese system) had set up an almost medieval (i.e. fascist) system of economic control, putting the state in charge of directing production in a cabal with leading industrialists. Yet despite all that, the elements were still in place for both nations to put their economies back together and in short order rejoin the fully developed world, in political and economic terms. How much of that was due to the individual human capital of each nation, (i.e. education in both technical and civic aspects), how much was due to the residual organizational and institutional structures, such as impersonal legal and trade expectations, and how much to the instructive activities of the occupying administration?

One would have to conclude that very little was due to the latter, for try as we might in Iraq and Afghanistan, their cultures were not ready for full-blown modernity (elections, democracy, capitalism, rule of law, etc.) in the political-economic sense. Many of their people were ready, and the models abroad were and remain ready for application. Vast amounts of information and good will are at their disposal to build a modern state. But, alas, their real power structures were not receptive. Indeed, in Afghanistan, each warlord continued to maintain his own army, and civil war was a constant danger, until today, when a civil war is in full swing, conducted by the Taliban against a withering central state. The Taliban has historically been the only group with the wide-spread cultural support (at least in rural areas), and the ruthlessness, to bring order to (most of) Afghanistan. Its coalition with the other elites is based partly on doctrinaire Islam (which all parties across the spectrum pay lip service to) and partly on brutal / effective authoritarianism. When the US invaded, we took advantage of the few portions outside the existing power coalition, (in the north), arming them to defeat the Taliban. That was an instance of working with the existing power structures.

But replacing or reforming them was an entirely different project. The fact is that the development of modern economies took Western countries centuries, and takes even the most avid students (Taiwan, South Korea, China to a partial degree) several decades of work to retrace. North emphasizes that development from primitive to modern political-economic systems is not a given, and progress is as likely to go backward as forward, depending at each moment on the incentives of those in power. To progress, they need to see more benefit in stability and durable institutions, as opposed to their own freedom of action to threaten the other members of the coalition, keep armies, extort economic rents, etc. Only as chaos recedes, stability starts being taken for granted, and the cost of keeping armies exceeds their utility, does the calculus gradually shift. That process is fundamentally psychological- it reflects the observations and beliefs of the actors, and takes a long time, especially in a country such as Afghanistan with such a durable tradition of militarized independence and plunder.

So what should we have done, instead of dreaming that we could build, out of the existing culture and distribution of power, a women-friendly capitalist modern democracy in Afghanistan? We should have seen clearly at the outset that we had only two choices. The first was to take over the culture root and branch, with a million soldiers. The other was to work within the culture on a practical program of reform, whose goal would have been to take the country a few steps down the road from a "fragile" limited access state- where civil war is a constant threat- to a "basic" limited access state, where the elites are starting to accept some rules, and the state is stable, but still exists mostly to share out the economic pie to current power holders. Indeed, the "basic" state is the only substantial social organization- all other organizations have to be created by it or affiliated with it, because any privilege worth having is jealously guarded by the state, in very personal terms.

Incidentally, the next step in North's taxonomy of states would be the mature limited access order, where laws begin to be made in a non-personal way and non-state organizations, like commercial guilds, are allowed to exist more broadly, but the concepts of complete equality before the law and free access to standardized organization types have not yet been achieved. The latter would be an "open access order", which modern states occupy. There, the military is entirely under the democratic and lawful control of a central state, and the power centers that remain in the society have become more diffuse, all willing to compete within an open, egalitarian legal framework in economic as well as political matters. It was this overall bargain that was being tested by the last administration's flirtation with an armed coup at the Capitol earlier this year.

In the case of Afghanistan, there is a wild card in the form of the Taliban, which is not really a localized, warlord kind of power that can simply be dealt a share of the local and national economic pie. They are an amalgam of local powers from many parts of the country, plus an ideological movement, plus a pawn of Pakistan, the Gulf states, and the many other funders of fundamentalist Islam. Whatever they are, they are a power the central government has to reckon with, both via recognition and acceptance, and via competition and strategies to blunt their power.

Above all, peace and security have always been the main goal. It is peace that moderates the need for every warlord to maintain his own army, nudges all the actors toward a more rule-based, regular way to harvest economic rents from the rest of the economy, and helps that economy grow. The lack of security is also the biggest calling card of the Taliban, an organization that terrorizes the countryside and foments insecurity as its principal policy (an odd theology, one might think!). How did we do on that front? Not very well at all. The presence of the US and its allies was in the first place an irritant. Second, our profusion of reform policies, from poppy eradication, to women's education, to showpiece elections, to relentless, and often aimless, bombing, took our eyes off the ball and generated ill will virtually across the spectrum. One gets the sense that Hamid Karzai was trying very hard to keep it all together in the classic pattern of a fragile state, by dealing out favors to each of the big powers across the country in a reasonably effective way, and calling out the US occasionally for its excesses. But from a modern perspective, that all looks like hopeless corruption, and we installed the next government, under Ashraf Ghani, which tried to step up modernist reforms without the necessary conditions of even having progressed from a fragile to a basic state, let alone to a mature state or any hint of the "doorstep conditions" of modernity that North emphasizes. This is not even to mention that we seem to have set up the central state's military on an unsustainable basis, dependent on modern (foreign) hardware, expertise, and funding that were always destined to dry up eventually.

So, nation-building? Yes, absolutely. But smarter nation-building that doesn't ask too much of the society being put through the wringer. Nation-building happens in gradual steps, not all at once, not by fiat, and certainly not by imposition from outsiders (unless we have a couple of centuries to spare, as the Normans did). Our experience with the post-World War 2 reconstructions was deeply misleading if we came away with the idea that those countries did nothing but learn at America's knee and copy the American template, and were not themselves abundantly prepared for institutional and economic reconstruction.

Saturday, June 12, 2021

Mitochondria and the Ratchet of Doom

How do mitochondria escape Muller's ratchet, the genetic degradation of non-mating cells?

Muller's ratchet is one of the more profound concepts in genetics and evolution. Mutations build up constantly, and are overwhelmingly detrimental. So a clonal population of cells which simply divide and live out their lives will all face degradation, and no matter how intense the selection, will eventually end up mutated in some essential function or set of functions, and die out. This gives rise to an intense desire for organisms to exchange and recombine genetic information. This shuffling process can, while producing a lot of bad hands, also deal out some genetically good hands, purifying away deleterious mutations and combining beneficial ones.
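To make the ratchet concrete, here is a minimal simulation sketch- my own illustration, with made-up parameters, not drawn from any particular paper- of a clonal population under selection but without recombination. The least-loaded class of individuals can be lost by chance, and once gone it can never be rebuilt; each such loss is one click of the ratchet.

```python
import random

# Toy model of Muller's ratchet: a clonal population accumulating
# deleterious mutations. All parameters are illustrative.
POP_SIZE = 200        # number of clonal individuals
MUT_PROB = 0.3        # chance of one new deleterious mutation per birth
COST = 0.02           # multiplicative fitness cost per mutation
GENERATIONS = 1000

def fitness(n_mutations):
    return (1 - COST) ** n_mutations

# Each individual is represented by its count of deleterious mutations.
population = [0] * POP_SIZE

for gen in range(GENERATIONS + 1):
    if gen % 200 == 0:
        print(f"gen {gen}: least-loaded individual has {min(population)} mutations")
    # Selection: parents are chosen in proportion to fitness.
    weights = [fitness(n) for n in population]
    parents = random.choices(population, weights=weights, k=POP_SIZE)
    # Reproduction: mutations can be gained but, with no recombination,
    # never lost- so min(population) can only ratchet upward over time.
    population = [n + (random.random() < MUT_PROB) for n in parents]
```

Even with selection favoring the least-mutated individuals, the best class eventually drifts to extinction, and the whole distribution shifts irreversibly upward. Recombination breaks the ratchet by letting two mutated parents produce offspring with fewer mutations than either one- exactly the service that sex and bacterial gene exchange provide.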

This is the principle behind the meiotic sex of eukaryotes with large genomes, and also the widespread genetic exchange done by bacterial cells, via conjugation and other means. In this way, bacteria can stave off genetic obsolescence, and also pick up useful tricks like antibiotic resistance. But what about our mitochondria? These are also, in origin and essence, bacterial cells with tiny genomes which are critically essential to our well-being. They are maternally inherited, which means that the mitochondria from sperm cells, which could have provided new genetic diversity, are, without the slightest compunction, thrown away. This seriously limits opportunities for genetic exchange and improvement, for a genome that is roughly 16 thousand bases long and codes for 37 genes, many of which are central to our metabolism.

One solution to the problem has been to move genes to the nucleus. Most bacteria have a few thousand genes, so the 37 genes of the mitochondrial genome are a small remnant, specialized to keep local regulation intact, while the vast majority of needed proteins are encoded in the nucleus and imported through rather ornate mechanisms to take their places in one of the organelle's various locations- the inner matrix, inner membrane, intermembrane space, or outer membrane.

The more intriguing solution, however, has been to perform constant and intensive quality control (with recombination) on mitochondria via a fission and fusion cycle. It turns out that mitochondria are constantly dividing and re-fusing into large networks in our cells. And there are a lot of them- typically thousands per cell. Mitochondria are also capable of recombination and gene conversion, where parts of one DNA molecule are over-written by copying another. This allows a modicum of gene shuffling among the mitochondria of a cell.

The fusion and fission cycle of mitochondria, where fissioned mitochondria are subject to evaluation for function, and disposal.

Lastly, there is a tight control process, called mitophagy, that eliminates poorly functioning mitochondria. Since mitochondria function like little batteries, their charge state is a fundamental measure of health. A nuclear-encoded protein called PINK1 is normally imported into mitochondria and degraded; if the charge state is poor, import fails and PINK1 accumulates on the outer membrane, where it recruits other proteins, including parkin and ubiquitin, which jointly mark the defective mitochondrion for degradation through mitophagy. That means that it is engulfed in an autophagosome and fused with a lysosome, the garbage disposal / recycling center of the cell, acidic and filled with degradative enzymes.

The key point is that during the fission / fusion cycle of mitochondria, which happens over tens of minutes, the fissioned state allows individual or small numbers of genomes to be evaluated and, if defective, disposed of. Meanwhile, the fused state allows genetic recombination and shuffling, to recreate genetic diversity from the ambient mutation rate. Since mitochondria are the centers of metabolism, especially redox reactions, they are particularly prone to high rates of mutation, so this surveillance is essential. If all else fails, the whole cell may be disposed of via apoptosis, which is also quite sensitive to the mitochondrial state.
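As a rough illustration of why this cycle acts as purifying selection, consider the following toy model- my own sketch, with invented numbers, since the real rates and thresholds are not established in such simple terms. Genomes mutate at some ambient rate; fusion pools them; fission repackages them into small units; and a PINK1/parkin-style cull removes units whose mutant load (a proxy for poor charge state) is too high.

```python
import random

GENOMES = 1000        # mitochondrial genomes in the cell (illustrative)
MUT_RATE = 0.002      # chance a genome mutates per cycle (illustrative)
UNIT_SIZE = 5         # genomes per fissioned mitochondrion
CULL_AT = 0.4         # mutant fraction treated as "depolarized"
CYCLES = 300

def run(with_mitophagy):
    genomes = [False] * GENOMES          # False = wild type, True = mutant
    for _ in range(CYCLES):
        # Mutation: wild-type genomes occasionally go bad.
        genomes = [g or (random.random() < MUT_RATE) for g in genomes]
        # Fusion then fission: shuffle genomes into small units.
        random.shuffle(genomes)
        units = [genomes[i:i + UNIT_SIZE]
                 for i in range(0, GENOMES, UNIT_SIZE)]
        if with_mitophagy:
            # Cull units whose mutant load crosses the threshold,
            # mimicking PINK1/parkin-triggered mitophagy.
            units = [u for u in units if sum(u) / len(u) < CULL_AT]
        # Regrow the population by replicating surviving genomes.
        genomes = [g for u in units for g in u]
        genomes += random.choices(genomes, k=GENOMES - len(genomes))
    return sum(genomes) / GENOMES

print("mutant load with the cull:   ", run(True))
print("mutant load without the cull:", run(False))
```

Without the cull, the mutant fraction climbs steadily toward fixation; with it, the load settles at a low equilibrium, because fission keeps repackaging genomes into units small enough for a few bad genomes to be visible to selection.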

In oocytes, mitochondria appear to go through a particularly stringent period of fission, allowing a high level of quality control at this key point. Mitochondria then go through exponential growth and energy generation to build up the oocyte, at which point a further round of quality control discards the oocytes that are not up to snuff.

All this adds up to a pretty thorough method of purifying selection. Admittedly, little or no genetic material comes from outside the clonal maternal genetic lineage, but mutations are probably common enough that beneficial ones arise occasionally, and one can imagine that there may be additional levels of selection for more successful mitochondria over less successful ones, in addition to the charge-dependent rough cut made by this mitophagy selection.

As the penetrating reader may guess, parkin is related to Parkinson's disease, as one of its causal genes when defective. Neurons are particularly prone to mitochondrial dysfunction, due to their sprawled-out geography. First, the nuclear genes needed for mitochondria are expressed only in the cell body / nucleus, and their products (either as proteins, or sometimes as mRNAs) have to be ferried out to the axonal and dendritic periphery to supply their targets with new materials. Neurons have very active transport systems to do this, but it remains a significant challenge. Second, the local population of mitochondria in the outlying processes of neurons is going to be small, making the fission/fusion cycle much less effective and less likely to eliminate defective genes and individual mitochondria, or to make up for their absence if they are eliminated, leading to local energetic crises.

Cross-section of a neuronal synapse, with a sprinkling of mitochondria available locally to power local operations.

  • Get back to work. A special, CEO-sponsored cartoon from Tom Tomorrow.
  • They are everywhere.
  • Shouldn't taxes be even a little bit fair?
  • The economics of shame.

Saturday, April 24, 2021

Way Too Much Dopamine

Schizophrenia and associated delusions/hallucinations as a Bayesian logic defect of putting priors over observations, partly due to excess dopamine or sensitivity to dopamine.

It goes without saying that our brains are highly tuned systems, both through evolution and through development. They are constantly active, with dynamic coalitions of oscillatory synchrony and active anatomical connection that appear to create our mental phenomena, conscious and unconscious. Neurotransmitters have long been talismanic keys to this kingdom, there being relatively few of them, with somewhat distinct functions. GABA, dopamine, serotonin, glutamate, acetylcholine are perhaps the best known, but there are dozens of others. Each transmitter tends to have a theme associated with it, like GABA being characteristic of inhibitory neurons, and glutamate the most common excitatory neurotransmitter. Each tends to have drugs associated with it as well, often from natural sources. Psilocybin stimulates serotonin receptors, for instance. Dopamine is central to reward pathways, making us feel good. Cocaine raises dopamine levels, making us feel great without having done anything particularly noteworthy.

As is typical, scientists thought they had found the secret to the brain when they found neurotransmitters and the variety of drugs that affect them. New classes of drugs like serotonin reuptake inhibitors (imipramine, Prozac) and dopamine receptor antagonists (haloperidol) took the world by storm. But they didn't turn out to have the surgical effects that were touted. Neurotransmitters function all over the brain, and while some have major themes in one area or another, they might be doing very different things elsewhere, and not overlap very productively with a particular syndrome such as depression or schizophrenia. Which is to say that such major syndromes are not simply tuning problems of one neurotransmitter or another. Messing with transmitters turned out to be a rather blunt medical instrument, if a helpful one.

All this comes to mind with a recent report of a connection between dopamine and hallucinations. As noted above, dopamine antagonists are widely used as antipsychotics (following the dopamine hypothesis of schizophrenia), but the premier hallucinogens are serotonin activators, such as psilocybin and LSD, though their mode of action remains not fully worked out. (Indeed, ketamine, another hallucinogen, inhibits NMDA-type glutamate receptors.) There is nothing neat here, except that nature, and the occasional chemical accident, have uncovered amazing ways to affect our minds. Insofar as schizophrenia is characterized by over-active dopamine activity in some areas (though with a curious lack of joy, so the reward circuitry seems to have been left out), and involves hallucinations which are reduced by dopamine antagonists, a connection between dopamine and hallucinations makes sense.

"... there are multiple genes and neuronal pathways that can lead to psychosis and that all these multiple psychosis pathways converge via the high-affinity state of the D2 receptor, the common target for all antipsychotics, typical or atypical." - Wiki


So what do they propose? These researchers came up with a complex system to fool mice into pressing a lever based on uncertain (auditory) stimuli. If the mouse really thought the sound had happened, it would wait around longer for the reward, giving researchers a measure of its internal confidence in a signal that may never have actually been presented. The researchers presented a joint image and sound, but sometimes left out the sound, eliciting what they claim to be a hallucinated perception of the sound. Thus the mice, amid all this confusion, generated some hallucinations, in the form of positive thinking that something good was coming their way. Ketamine increased this presumed hallucination rate, suggestively. The experiment was then to boost dopamine signaling (via new-fangled optogenetic methods, which can be highly controlled in time and space) at a key area known to be involved in schizophrenia: the striatum, a key interface between the cortex and the lower/inner areas of the brain involved in motion, emotion, reward, and cognition.

Normal perception is composed of a balance of bottom up observation and top-down organization. Too much of either one is problematic, sometimes hallucinatory.

This exercise did indeed mimic the action of a general dose of ketamine, increasing false positives, aka hallucinations, and confirming that dopamine is involved. The work relates to a very abstract body of work on Bayesian logic in cognition, recognizing that perception rests on modeling. We need to have some model of the world before we can fit new observations into it, and we continually update this model by "noticing" salient "news" that differs from our current model. In the parlance, we use observations to update our priors to more accurate posterior probability distributions. The idea is that, in the case of hallucination, the top-down model is out-weighing (or making up for a lack of) bottom-up observation, running amok, and thus exposing errors in this otherwise carefully tuned Bayesian system. One aspect of the logic is that some evaluation needs to be made of the salience of a new bit of news. How much does it differ from what is current in the model? How reliable is the observation? How reliable is the model? The systems gone awry in schizophrenia appear to mess with all of these key functions, awarding salience to unimportant things and great reliability to shaky models of reality.
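The precision-weighting at the heart of this idea can be shown with a one-line calculation. Here is a minimal sketch, assuming Gaussian priors and likelihoods (the standard textbook form, not anything specific to this paper); the "dopamine" knob is purely illustrative, inflating the confidence placed in the prior.

```python
def posterior_mean(prior_mean, prior_precision, obs, obs_precision):
    """Combine a Gaussian prior with a Gaussian observation.
    Precision = 1/variance; the posterior mean is simply a
    precision-weighted average of expectation and evidence."""
    total = prior_precision + obs_precision
    return (prior_precision * prior_mean + obs_precision * obs) / total

# The animal expects a tone (prior belief ~ 1.0), but this trial's
# sensory evidence is near zero (no tone was played).
expectation, evidence = 1.0, 0.05

# Balanced system: observation dominates, percept tracks reality.
print(posterior_mean(expectation, 1.0, evidence, 4.0))   # ~0.24

# Prior precision inflated (the hypothetical "too much dopamine" case):
# the percept tracks expectation, and a tone is "heard" that never was.
print(posterior_mean(expectation, 20.0, evidence, 4.0))  # ~0.84
```

The pathology here is not in the arithmetic but in the weights: award the internal model too much reliability relative to the senses, and the posterior- the percept- detaches from the world.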

Putting neurotransmitters together with much finer anatomical specification is surely a positive step toward figuring out what is going on, even if this mouse model of hallucination is rather sketchy. This new work thus constitutes a small step toward boring, anatomically and chemically, into one tiny aspect of this vast syndrome, and into the interesting area of the mental construction of perceptions.


  • Another crisis of overpopulation.
  • And another one.
  • Getting China to decarbonize will take a stiff carbon price, about $500 to $1000/ton.

Various policy scenarios of decarbonization in China, put into a common cost framework of carbon pricing (y-axis). Some policies are a lot more efficient than others.