
Saturday, January 3, 2026

Tiny Tunings in a Buzz of Neural Activity

Lots of what neurons do adds up to zero: how the cerebellum controls muscle movement.

Our brains are always active. Meditating, relaxing, sleeping ... whatever we are up to, the brain doesn't take a holiday, except in the deepest levels of sleep, when slow waves help the brain to reset and heal itself. Otherwise, it is always going, with only slight variations based on computational needs and transient coalitions, which are extremely difficult to pick out of the background (task-related fMRI signals are typically under 3% of the background signal). That leads to the general principle that action in a productive sense is merely a tuned overlay on top of a baseline of constant activity, through most of the brain.

A recent paper discussed how this property manifests in the cerebellum, the "little" brain attached to our big brain, which is a fine-tuning engine especially for motion control, but also for many other functions. The cerebellum has relatively simple and massively parallelized circuitry, a bit like a GPU to the neocortex's CPU. It gets inputs from places like the spinal cord and sensory areas of the brain, and relays a tuned signal out to, in the case of motor control, the premotor cortex. The current authors focused on the control of eye movements ("saccades"), a well-characterized and experimentally tractable system, in marmoset monkeys.

After poking a massive array of electrodes into their poor monkeys' brains, they recorded from hundreds of cells, including all the relevant neuron types (of which there are only about six). They found that inside the cerebellum, and inside the region they already knew is devoted to eye movement, neurons form small-world groups that interact closely with each other, revealing a new level of organization for this organ.

More significantly, they were able to figure out the central tendency, or preferred direction vector, of the Purkinje (P) cells they recorded. These are the core cells of the cerebellar circuit, so their firing should correlate in principle with the eye movements that were simultaneously tracked in these monkeys. So one cell might be an upward-directing cell, while another might be leftward, and so forth. They also found that P cells come in two types- those (bursters) that fire more around the preferred direction, and others (pausers) that fire all the other times, but fire less when the eyes are heading in the "preferred" direction. These properties are already telling us that the neural system does not much care to save energy. Firing most of the time, but then pausing when some preferred direction is hit, is perfectly OK.

Two representative cells are recorded, around the cardinal/radial directions. The first of each pair is recorded at 90 degrees from its preferred direction, and adding up the two perpendicular vectors (bottom) gives zero net activity. The second of each pair is recorded along the preferred direction (or potent axis), vs 180 degrees away, and when these are subtracted, a net signal is visible (difference). Theta is the direction the eyes are heading.

Their main finding, though, was that there is a lot of vector arithmetic going on, in which cells fire all over their preferred fields, and only when you carefully net their activity over the radial directions can you discern a preferred direction. The figure above shows a couple of cells, firing away no matter which direction the eyes are going. But if you subtract the forward direction from the backward direction, a small net signal is left over, which these authors claim is the real output of the cerebellum, signaling a change in direction. When this behavior is summed across a large population (below), the bursters and pausers cancel out, as do the signals going in stray directions. All you are left with is a relatively clean signal centered on the direction being instructed to the eye muscles. Wasteful? Yes. Effective? Apparently.

The same analysis as above, but on a population basis. Now, once the canceling activity is netted out and the pauser and burster contributions are combined, strong signals arise from the cerebellum as a whole, in this anatomical region, directing eye saccades.
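
To make the arithmetic concrete, here is a toy simulation in Python (not the authors' analysis; the cosine tuning model, the 50/50 burster/pauser split, and all numbers are invented for illustration) of how tiny directional modulations riding on a high baseline can be read out as a clean population vector:

    import numpy as np

    rng = np.random.default_rng(0)
    n_cells = 400
    baseline = 60.0      # constant "chatter", in spikes/s
    gain = 3.0           # tiny directional modulation riding on the baseline
    preferred = rng.uniform(0.0, 2.0 * np.pi, n_cells)   # each cell's preferred direction
    is_burster = rng.random(n_cells) < 0.5                # half bursters, half pausers

    def population_rates(theta):
        # Cosine tuning: bursters fire a bit more toward their preferred direction,
        # pausers a bit less; everything sits on the same high baseline plus noise.
        modulation = gain * np.cos(theta - preferred)
        modulation = np.where(is_burster, modulation, -modulation)
        return baseline + modulation + rng.normal(0.0, 5.0, n_cells)

    theta_true = np.deg2rad(30.0)          # actual saccade direction
    rates = population_rates(theta_true)
    print(f"one cell: {rates[0]:.1f} spikes/s, on a baseline of {baseline}")

    # Flip the pausers' sign and sum each cell's preferred-direction unit vector,
    # weighted by its deviation from baseline: the direction emerges from the noise.
    sign = np.where(is_burster, 1.0, -1.0)
    weights = sign * (rates - baseline)
    vector = weights @ np.column_stack([np.cos(preferred), np.sin(preferred)])
    decoded = np.degrees(np.arctan2(vector[1], vector[0])) % 360
    print(f"decoded direction: {decoded:.1f} degrees (true direction: 30)")

Any single cell looks like undifferentiated chatter; only the signed, vector-weighted sum across many cells betrays the instructed direction.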

Why does the system work this way? One idea is that, like for the rest of the brain, default activity is the norm, and learning is, perforce, a matter of tuning ongoing activity, not of turning on cells that are otherwise off. The learning (or error) signal is part of the standard cerebellum circuitry- a signal so strong that it temporarily shuts off the normal chatter of the P cell output and retunes it slightly, rendering the subsequent changes seen in the net vector calculation.

A second issue raised by these authors is the nature of inputs to these calculations. In order to know whether and how to change the direction of eye movement, these cells must be getting information about the visual scene and also about the current state of the eyes and direction of movement. 

"The burst-pause pattern in the P cells implied a computation associated with predicting when to stop the saccade. For this to be possible, this region of the cerebellum should receive two kinds of information—a copy of the motor commands in muscle coordinates and a copy of the goal location in visual coordinates."

These inputs come from two types of mossy fiber neurons, which are the main inputs to the typical cerebellar circuit. One "state" type encodes the state of the motor system and the current position of the eye. The other "goal" type encodes, only in the case of a reward, where the eye "wants" to move, based on other cortical computations, such as the attention system. The two inputs go through separate individual cerebellar circuits, and then get added up on a population basis. When movement is unmotivated, the goal inputs are silent, and the resulting cerebellar-directed saccades are more sporadic, scanning the scene haphazardly.

The end result is that we are delving ever more deeply into the details of mental computation. This paper trumpets itself as having revealed "vector calculus" in the cerebellum. However, to me it looks much more like arithmetic than calculus, nor is the overall finding of small signals amid a welter of noise and default activity particularly novel. All the same, more detail, deeper technical means, and greater understanding of exactly how all these billions of dumb cells somehow add up to smart activity continue to make for a great, and fascinating, quest.


  • How to have Christmas without the religion.
  • Why is bank regulation run by banks? "Member banks ... elect six of the nine members of each Federal Reserve Banks' boards of directors." And the other three typically represent regional businesses as well- not citizens or consumers.

Sunday, December 28, 2025

Lipid Pumps and Fatty Shields- Asymmetry in the Plasma Membrane

The two leaflets that constitute eukaryotic cell membranes are asymmetric.

Membranes are one of those incredibly elegant things in biology. Simple chemical forces are harnessed to create a stable envelope for the cell, with no need to encode the structure in complicated ways. Rather, it self-assembles, using the oil-vs-water forces of surface tension to form a huge structure with virtually no instruction. Eukaryotes decided to take membranes to the max, growing huge cells with an army of internal membrane-bound organelles, individually managed- each with its own life cycle and purposes.

Yet, there are complexities. How do proteins get into this membrane? How do they orient themselves? Does it need to be buttressed against rough physical insult, with some kind of outer wall? How do nutrients get across, while the internal chemistry is maintained as different from the outside? How does it choose which other cells to interact with, preventing fusion with some, but pursuing fusion with others? For all the simplicity of the basic structure, the early history of life had to come up with a lot of solutions to tough problems, before membrane management became such a snap that eukaryotes became possible.

The authors present their model (in atomic simulation) of a plasma membrane. Constituents are cholesterol, sphingomyelin (SM), phosphatidyl choline (PC), phosphatidyl serine (PS), phosphatidyl ethanolamine (PE), and phosphatidyl ethanolamine plasmalogen. Note how in this portrayal, there is far more cholesterol in the outer leaflet (top), facing the outside world, than there is in the inner leaflet (bottom).

The major constituents of the lipid bilayer are cholesterol, phospholipids, and sphingomyelin. The latter two have charged head groups and long lipid (fatty) tails. The head groups keep that side of the molecule (and the bilayer) facing water. The tails hate water and like to arrange themselves in the facing sheets that make up the inner part of the bilayer. Cholesterol, on the other hand, has only a mildly polar hydroxyl group at one end, and a very hydrophobic, stiff, and flat multi-ring body, which keeps strictly with the lipid tails. The lack of a charged head group means that cholesterol can easily flip between the bilayer leaflets- something that the other molecules with charged headgroups find very difficult. It has long been known that our genomes code for flippases and floppases: ATP-driven enzymes that can flip the charged phospholipids and sphingomyelin from one leaflet to the other. Why these enzymes exist, however, has been a conundrum.

Pumps that drive phospholipids against their natural equilibrium distribution, into one or the other leaflet.

It is not immediately apparent why it would be helpful to give up the natural symmetry and fluidity of the bilayer, and invest a lot of energy in keeping the compositions of each leaflet different. But that is the way it is. The outer leaflet of the plasma membrane tends to have more sphingomyelin and cholesterol, and the inner leaflet has more phospholipids. Additionally, those phospholipids tend to have unsaturated tails- that is, they have double bonds that put kinks in the fatty tails, unlike the straight tails typical of sphingomyelin. Membrane asymmetry has a variety of biological effects, especially when it is missing. Cells that lose their asymmetry are marked for cell suicide and for intervention by the immune system, and also trigger coagulation in the blood. It is a signal that they have broken open or died. But these are doubtless later (maybe convenient) organismal consequences of universal membrane asymmetry. They do not explain its origin.

A recent paper delved into the question of how and why this asymmetry happens, particularly in regard to cholesterol. Whether cholesterol even is asymmetric is controversial in the field, since measuring its location is very difficult. Yet these authors carefully show, by direct measurement and also by computer simulation, that cholesterol, which makes up roughly forty percent of the membrane (its most significant single constituent, actually), is highly asymmetric in human erythrocyte membranes- about threefold more abundant in the outer leaflet than in the cytoplasmic leaflet.

Cholesterol migrates to the more saturated leaflet. B shows a simulation where a poly-unsaturated phospholipid (DAPC) with 4 double bonds (blue) is contrasted with a saturated phospholipid (DPPC) with straight lipid tails (orange). In this simulation, cholesterol naturally migrates to the DPPC side as more DAPC is loaded, relieving the tension (and extra space) on the inner leaflet. Panel D shows that in real cells, scrambling the leaflet composition leads to greater cholesterol migration to the inner leaflet. This is a complex experiment, where the fluorescent signal (on the right-side graph) comes from a dye in an introduced cholesterol analog, which is FRET-quenched by a second dye that the experimenters introduced, confined to the outer leaflet. In the natural case (purple), the signal is more quenched, since more cholesterol is in the outer leaflet, while after phospholipid scrambling, less quenching of the cholesterol signal is seen. Scrambling is verified (left side) by fluorescently marking the erythrocytes with Annexin 5, which binds phosphatidylserine, a phospholipid generally restricted to the inner leaflet.

But no cholesterol flippase is known. Indeed, such a thing would be futile, since cholesterol equilibrates between the leaflets so rapidly. (The rate is estimated at milliseconds, in very rough terms.) So what is going on? These authors argue via experiment and chemical simulation that it is the pumped phospholipids that drive the other asymmetries. It is the straight lipid tails of sphingomyelin that attract the cholesterol, as a much more congenial environment than the broken/bent tails of the other phospholipids that are concentrated in the cytoplasmic leaflet. In turn, the cholesterol also facilitates the extreme phospholipid asymmetry. The authors show that without the extra cholesterol in the outer leaflet, bilayers of that extreme phospholipid composition break down into lipid globs.

When treated (time course) with a chemical that scrambles the plasma membrane leaflet lipid compositions, a test protein (top series) that normally (0 minutes) attaches to the inner leaflet floats off and distributes all over the cell. The bottom series shows binding of a protein (applied from outside these cells) that only binds phosphatidylserine, showing that scrambling is taking place.

This sets up the major compositional asymmetry between the leaflets that creates marked differences in their properties. For example, the outer leaflet, due to the straight sphingomyelin tails and the cholesterol, is much stiffer, and packed much tighter, than the cytoplasmic leaflet. It forms a kind of shield against the outside world, which goes some way toward explaining the whole phenomenon. It is also almost twice as impermeable to water. Conversely, the cytoplasmic leaflet is more loosely packed, and indeed frequently suffers gaps (or defects) in its lipid integrity. This has significant consequences, because many cellular proteins, especially those involved in signaling from the surface into the rest of the cytoplasm, have small lipid tails or similar anchors that direct them (temporarily) to the plasma membrane. The authors show that such proteins localize to the inner leaflet precisely because that leaflet has this loose, accepting structure, and that they bind less well if the leaflets are scrambled / homogenized.

When the fluid mosaic model of biological membranes was first conceived, it didn't enter into anyone's head that the two leaflets could be so different, or that cells would have an interest in making them so. Sure, proteins in those membranes are rigorously oriented, so that they point in the right direction. But the lipids themselves? What for? Well, they are oriented too, and now there are some glimmerings of reasons why. Whether this has implications for human disease and health is unknown, but just as a matter of understanding biology, it is deeply interesting.


Saturday, August 2, 2025

The Origin of Life

What do we know about how it all began? Will we ever know for sure?

Of all the great mysteries of science, the origin of life is maybe the one least likely to ever be solved. It was a singular event that happened four billion years ago, in a world vastly different from ours. Scientists have developed a lot of ideas about it and increased our knowledge of that original environment, but in the end, despite intense interest, the best we will be able to do is informed speculation. Which is, sure, better than uninformed speculation (aka theology), but still unsatisfying.

A recent paper about sugars and early metabolism (and a more fully argued precursor) piqued my interest in this area. It claimed that there are non-enzymatic ways to generate most or all of the central carbohydrates of glycolysis and CO2 fixation around pentose sugars, which lie at the core of metabolism and supply sugars like ribose that form RNA, ATP, and other key compounds. The general idea is that at the very beginning of life, there were no enzymes and proteins, so our metabolism is patterned on reactions that originally happened naturally, with some kind of kick from environmental energy sources and mineral catalysts, like iron, which was very abundant.

That is wonderful, but first, we had better define what we mean by life, and figure out what the logical steps are to cross this momentous threshold. Life is any chemical process that can accomplish Darwinian evolution. That is, it replicates in some fashion, and it has to encode those replicated descendants in some way that is subject to mutation and selection. With those two ingredients, we are off to the races. Without them, we are merely complex minerals. Crystals replicate, sometimes quite quickly, but they do not encode descendent crystals in a way that is complex at all- you either get the parent crystal, or you get a mess. This general theory is why the RNA world hypothesis was, and remains, so powerful. 

The RNA world hypothesis holds that RNA was likely the first genetic material, before DNA (which is about 200 times more stable) was devised. RNA also has catalytic capabilities, so it could encode in its own structure some of the key mechanisms of life, thereby embodying both of the critical characteristics of life specified above. The fact that some key processes remain catalyzed by RNA today, such as ribosomal synthesis of proteins, spliceosomal re-arrangement of RNAs, and cutting of RNAs by RNase P, suggests that proteins (as well as DNA) were the Johnny-come-latelies of the chemistry of life, after RNA had, in its lumbering, inefficient way, blazed the trail.


In this image of the ribosome, RNA is gray and proteins are yellow. The active site is marked with a bright light. Which came first here- protein or RNA?


But what kind of setting would have been needed for RNA to appear? Was metabolism needed? Does genetics come first, or does metabolism come first? If one means a cyclic system of organic transformations encoded by protein or RNA enzymes, then obviously genetics had to come first. But if one means a mess of organic chemicals that allowed some RNA to be made and provide modest direction to its own chemical fate, and to a few other reactions, then yes, those chemicals had to come first. A great deal of work has been done speculating about what kind of peculiar early-earth conditions might have been conducive to such chemistries. Hydrothermal vents, with their constant input of energy and rich environment of metallic catalysts? Clay particles, with their helpful surfaces that can faux-crystalize formation of RNAs? Warm ponds, hot ponds, UV light ... the suggestions are legion. The main thing to realize is that the early earth was surely highly diverse, had a lot of energy, and had lots of carbon, with a CO2-rich atmosphere. UV would have created a fair amount of carbon monoxide, which is the feedstock of the Fischer-Tropsch reactions that create complex organic compounds, including lipids, which are critical for the formation of cells. The early earth very likely had pockets that could produce abundant complex organic molecules.

Thus early life was surely heterotrophic, taking in organic chemicals that were given by the ambient conditions for free. And before life really got going, there was no competition- there was nothing else to break those chemicals down, so in a sort of chemical pre-Darwinian setting, life could progress very slowly (though RNA has some instability in water, so there are limits). Later, when some of the scarcer chemicals were eaten up by other already-replicating life forms, then the race was on to develop those enzymes, of what we now recognize as metabolism, which could furnish those chemicals out of more common ingredients. Onwards the process then went, hammering out ever more extensive metabolic sequences to take in what was common and make what was precious- those ribose sugars, or nucleoside rings that originally had arrived for free. The first enzymes would have been made of RNA, or metals, or whatever was at hand. It was only much later that proteins, first short, then longer, came on the scene as superior catalysts, extensively assisted by metals, RNAs, vitamins, and other cofactors.

Where did the energy for all this come from? To cross the first threshold, only chemicals (which embodied outside energy cycles) were needed, not a separate energy source. Energy requirements accompanied the development of metabolism, as the complex chemicals became scarcer and needed to be made internally. Only when the problem of making complex organic chemicals from simpler ones presented itself did it also become important to find some separate energy source to do that organic chemistry. Of course, the first complex chemicals absolutely needed were copies of the original RNA molecules. How that process was promoted, through some kind of activated intermediates, remains particularly unclear.

All this happened long before the last universal common ancestor, termed "LUCA", which was already an advanced cell just prior to the split into the archaeal and bacterial lineages (much later to rejoin to create the most amazing form of life- eukaryotes). There has been quite a bit of analysis of LUCA to attempt to figure out the basic requirements of life, and what happened at the origin. But this ("top-down") approach is not useful. The original form of life was vastly more primitive, and was wholly re-written in countless ways before it became the true bacterial cell, and later still, LUCA. Only the faintest traces remain in our RNA-rich biochemistry. Just think about the complexity of the ribosome as an RNA catalyst, and one can appreciate the ragged nature of the RNA world, which was probably full of similar lumbering catalysts for other processes, each inefficient and absurdly wasteful of resources. But it could reproduce in Darwinian fashion, and thus it could improve.

Today we find on earth a diversity of environments, from the bizarre mineral-driven hydrothermal vents under the ocean to the hot springs of Yellowstone. The geology of earth is wondrously varied, making it quite possible to credit one or more of the many theories of how complex organic molecules may have become a "soup" somewhere on the early Earth. When that soup produces ribose sugars and the other rudiments of RNA, we have the makings of life. The many other things that have come to characterize it, such as lipid membranes and the metabolism of compounds, are fundamentally secondary, though critically important for progress beyond that so-pregnant moment.


Saturday, April 5, 2025

Psilocybin and the Normie Network

Psychedelic mushrooms desynchronize parts of the brain, especially the default mode network, disrupting our normal sense of reality.

Default mode network- sounds rather dull, doesn't it? But somewhere within that pattern of brain activity lies at least some of our consciousness. It is the hum of the brain while we are resting without focus. When we are intensely focused on something outside, in contrast, it is turned down, as we "lose ourselves" in the flow of other activities. This network remains active during light sleep and has altered patterns during REM sleep and dreaming, as one might expect. A recent paper tracked its behavior during exposure to the psychedelic drug psilocybin.

These researchers measured the level of active connectivity between brain regions (by functional MRI) in human subjects given a high dose of psilocybin, or Ritalin- which stands in as another psychoactive stimulant- or nothing. Compared with the control of no treatment, Ritalin caused a slight loss of connectivity (or desynchronization) (below, b), while psilocybin (a) caused a huge loss of connectivity, clearly correlated with the subjective intensity of the trip. They also found that if, while on psilocybin, they gave their subjects some task to focus on, their connectivity increased again, tracking directly with the subjective experiences of their subjects.

The researchers show a metric of connectivity between distinct brain regions, under three conditions. FC stands for functional connectivity, where high numbers (and brighter colors) stand for more distance, i.e. less connectivity/synchrony. Methylphenidate is Ritalin. Synchrony is heavily degraded under psilocybin.
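
As a rough illustration of what a functional connectivity metric involves (a generic Python sketch, not the paper's own normalization; the time series are invented), one can correlate regional time courses and express the result as a distance, where larger values mean less synchrony:

    import numpy as np

    rng = np.random.default_rng(1)
    n_regions, n_timepoints = 8, 300

    # Toy BOLD-like time series: a shared slow component plus region-specific noise.
    shared = rng.standard_normal(n_timepoints)
    bold = 0.6 * shared + rng.standard_normal((n_regions, n_timepoints))

    # Functional connectivity as pairwise Pearson correlation of region time courses,
    # expressed as a distance (larger = less connectivity), as in the figure above.
    fc = np.corrcoef(bold)
    fc_distance = 1.0 - fc
    print(np.round(fc_distance, 2))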

Of all the networks they analyzed, the default mode network (DMN) was most affected. This network runs between the prefrontal cortex, the posterior cingulate cortex, the hippocampus, the angular gyrus, and the temporoparietal junction, among others. These are key areas for awareness, memory, time, social relations, and much else that is highly relevant to conscious awareness and personhood. There is a long thread of work in the same vein showing that psychedelic drugs have these highly correlated effects on subjective experience and brain patterns. A recent review suggested that while subnetworks like the DMN are weakened, the brain as a whole experiences higher synchrony as well as higher receptivity to outside influence, in an edgy process that increases its level of (to put it in terms of complexity theory) chaos or criticality.

So that is what is happening! But that is not all. The effects of psychedelics, even from one dose, can be long-lasting, even life-changing. The current researchers note that the DMN desynchronization they see persists, at weaker levels, for weeks. This correlates with the subjective experience of changed senses that can result from a significant drug trip. And that trip, as noted above regarding receptivity and chaos, is a delicate thing, highly subject to the environment and mood the subject is experiencing at the time.

But when a task is being done, the subjects come back down towards normalcy.

These researchers note that brain plasticity comes into play, evidently as a homeostatic response to the wildly novel patterns of brain activation that took the subject out of their rut of DMN self-hood. Synapses change, genes are activated, and the brain reshapes itself, usually in a way that increases receptivity to novel experiences and lifts mood. For example, depression correlates with strong DMN coherence, while successful treatment correlates with less connectivity.

"We propose that psychedelics induce a mode of brain function that is more dynamically flexible, diverse, integrated, and tuned for information sharing, consistent with greater criticality."

So, consider the mind blown. Psychedelics appear to disrupt the usual day-to-day, which is apparently a strongly positive thing to do, both subjectively and clinically. That raises the question of why. Why do we settle into such durable and often negative images of the self and ways of thinking? And why does shaking things up have such positive effects? Jung had a deep conviction that our brains are healing machines, with deep wisdom and powers that are exposed during unusual events, like intense dreams. While there are bad trips, and people who go over the edge from excessive psychedelic use, it appears that with a modicum of care, positive trips, taken in a mood of physical and social safety, let the mind reset in important ways, back to its natural openness.


Sunday, July 7, 2024

Living on the Edge of Chaos

 Consciousness as a physical process, driven by thalamic-cortical communication.

Who is conscious? This is not only a medical and practical question, but a deep philosophical and scientific one. Over the last century, and especially the last few decades, we have become increasingly comfortable assigning consciousness to other animals, in ever-wider circles of understanding and empathy. The complex lives of chimpanzees, as brought into our consciousness by Jane Goodall, are one example. Birds are increasingly appreciated for their intelligence and strategic maneuvering. Insects are another frontier, as we appreciate the complex communication and navigation strategies honeybees, for one example, employ. Where does it end? Are bacteria conscious?

I would define consciousness as responsiveness crosschecked with a rapidly accessible model of the world. Responsiveness alone, such as in a thermostat, is not consciousness. But a Nest thermostat that checks the weather, knows its occupants' habits and schedules, and prices for gas and electricity ... that might be a little conscious(!) But back to the chain of being- are jellyfish conscious? They are quite responsive, and have a few thousand networked neurons that might well be computing the expected conditions outside, so I would count them as borderline conscious. That is generally where I would put the dividing line, with plants, bacteria, and sponges as not conscious, and organisms with brains, even quite decentralized ones like octopi and snails, as conscious, with jellyfish as slightly conscious. Consciousness is an infinitely graded condition, which in us reaches great heights of richness, but presumably starts at a very simple level.

These classifications imply that consciousness is a property of very primitive brains, and thus, in our brains, is likely to be driven by very primitive levels of its anatomy. And that brings us to the two articles for this week, about current theories and work centered on the thalamus as a driver of human consciousness. One paper relates some detailed experimental and modeled tests of information transfer that characterize a causal back-and-forth between the thalamus and the cortex, in which there is a frequency division between thalamic 10 - 20 Hz oscillations, whose information is then re-encoded and reflected in cortical oscillations of much higher frequency, at about 50 - 150 Hz. It also continues a long-running theme in the field, characterizing the edge-of-chaos nature of electrical activity in these thalamus-cortex communications as being just the kind of signaling suited to consciousness, and tracks the variation of chaoticity during anesthesia, waking, psychedelic drug usage, and seizure. A background paper provides a general review of this field, showing that the thalamus seems to be a central orchestrator of both the activation and maintenance of consciousness, as well as its contents and form.

The thalamus is at the very center of the brain, and, as is typical for primitive parts of the brain, it packs a lot of molecular diversity, cell types, and anatomy into a very small region. More recently evolved areas of the brain tend to be more anatomically and molecularly uniform, while supporting more complexity at the computational level. The thalamus has about thirty "nuclei", or anatomical areas that have distinct patterns of connections and cell types. It is known to relay sensory signals to the cortex, and to be central to sleep control and alertness. It sits right over the brain stem, and has radiating connections out to, and back from, the cerebral cortex, suggestive of a hub-like role.

The thalamus is networked with recurrent connections all over the cortex.


The first paper claims firstly that electrical oscillations in the thalamus and the cortex are interestingly related. Mice, rats, and humans were all used as subjects and gave consistent results across the testing, supporting the idea that, at the very least, we think alike, even if what we think about may differ. What is encoded in the awake brain at 1-13 Hz in the thalamus appears in correlated form (that is, in a transformed way) at 50-100+ Hz in the cortex. They study the statistics of recordings from both areas to claim that there is directional information flow, not just marked by the anatomical connections, but by active, harmonic entrainment and recoding. But this relationship fails to occur in unconscious states, even though the thalamus is at this time (in sleep, or anesthesia) helping to drive slow wave sleep patterns directly in the cortex. This supports the idea that there is a summary function going on, where richer information processed in the cortex is reduced in dimension into the vaguer hunches and impressions that make up our conscious experience. Even when our feelings and impressions are very vivid, they probably do not do justice to the vast processing power operating in the cortex, which is mostly unconscious.

Administering psychedelic drugs to their experimental animals caused greater information transfer backwards from the cortex to the thalamus, suggesting that the animal's conscious experience was being flooded. They also observe that these loops from the thalamus to cortex and back have an edge-of-chaos form. They are complex, ever-shifting, and information-rich. Chaos is an interesting concept in information science, quite distinct from noise. Chaos is deterministic, in that the same starting conditions should always produce the same results. But chaos is non-linear, in that small changes to initial conditions can generate large swings in the output. Limited chaos is characteristic of living systems, which have feedback controls to limit the range of activity, but also have high sensitivity to small inputs, new information, etc., and thus are highly responsive. Noise, by contrast, consists of random changes to a signal that are not reproducible and are not part of a control mechanism.
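
The distinction can be seen in a classic toy example (a generic Python sketch, not from the paper): the logistic map is completely deterministic, yet two trajectories starting a billionth apart diverge into entirely different futures within a few dozen steps.

    # Deterministic chaos: same rule, nearly identical starting points, wildly
    # different outcomes. Noise, by contrast, would not even be repeatable.
    r = 3.9
    x, y = 0.4, 0.4 + 1e-9
    for step in range(51):
        if step % 10 == 0:
            print(f"step {step:2d}:  x={x:.6f}  y={y:.6f}  gap={abs(x - y):.2e}")
        x = r * x * (1 - x)
        y = r * y * (1 - y)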

Unfortunately, I don't think the figures from this paper support their claims very well, or at least not clearly, so I won't show them. It is exploratory work, on the whole. At any rate, they are working from, and contributing to, a by now quite well-supported paradigm that puts the thalamus at the center of conscious experience. For example, direct electrical stimulation of the center of the thalamus can bring animals immediately up from unconsciousness induced by anesthesia. Conversely, stimulation at the same place, with a different electrical frequency (10 Hz, rather than 50 Hz), causes immediate freezing and vacancy of expression in the animal, suggesting interference with consciousness. Secondly, the thalamus is known to be the place which gates what sensory data enters consciousness, based on a long history of attention, binocular rivalry, and blind-sight experiments.

A schematic of how stimulation of the thalamus (in the middle) interacts with the overlying cortex. CL stands for the central lateral nucleus of the thalamus, where these stimulation experiments were targeted. The Greek letters alpha and gamma stand for different frequency bands of neural oscillation.

So, from both anatomical perspectives and functional ones, the thalamus appears at the center of conscious experience. This is a field that is not going to be revolutionized by a lightning insight or a new equation. It is looking very much like a matter of normal science slowly accumulating ever-more refined observations, simulations, better technologies, and theories that gain, piece by piece, on this most curious of mysteries.


Saturday, February 17, 2024

A New Form of Life is Discovered

An extremely short RNA is infectious and prevalent in the human microbiome.

While the last century might be called the DNA century, at least for molecular biology, the current century might be called that of RNA. A blizzard of new RNA types and potentials has been discovered in the normal eukaryotic milieu, including miRNA, eRNA, and lincRNA. An RNA virus caused a pandemic, which was remedied by an RNA vaccine. Nobel prizes have been handed out in these fields, and we are also increasingly aware that RNA lies at the origin of life itself, as the first genetic and catalytic mechanism.

One of these Nobel prize winners recently undertook a hunt for small RNAs that might be lurking in the human microbiome- the soup of bacteria, fungi, and all the combined products that cover our surfaces, inside and out. What they found was astonishing- an RNA of merely 1164 nucleotides, which folds up into a rigid, linear rod, a form they call an "obelisk". This is not a product of the host genome, nor of any other known organism, but is rather some kind of extremely minimal pathogen that, like a transposon or self-splicing intron, is entirely nucleic-acid based. And the more they hunted, the more they found, ultimately turning up thousands of obelisk-like entities hidden in the many databases of the world drawn from various environmental and microbiome samples. There is some precedent for this kind of structure, in the form of hepatitis D. This "viroid" of only 1682 nucleotides is a parasite of hepatitis B virus, depending on that virus for key replication functions. While normal viruses (like hepatitis B) encode many key functions of their own, like envelope proteins, genome packaging proteins, and replication enzymes, viroids tend to not encode anything, though hepatitis D does encode one antigenic protein, which exacerbates hepatitis B infections.

The obelisk RNA viroid-like species appear to encode one or two proteins, and possibly a ribozyme as well. The functions of all these are as yet unknown, but the RNAs necessarily rely entirely on host cell functions (the host is currently unknown) to do their thing, such as an RNA polymerase to create copies of themselves. Unknown also is whether they depend on other viruses, or only on cells, for their propagation. With the discovery in hand, the researchers can do a great deal of bioinformatics, such as predicting the structure of the encoded protein, and the structure of the RNA genome. But key biology, like how they interact with host cells, what functions the host provides, and how they replicate, not to mention possible pathogenic consequences, remains unknown.

The highly self-complementary structure of one obelisk RNA sequence, leading to its identification and naming. In green is one reading frame, which codes for the main protein, of unknown function.

The curious thing about these new obelisk viroid-like RNAs is that, while present in human microbiomes both oral and gut-derived, they are found in only 5-10% of samples, not in all of them. This sort of suggests that they may account for some of the variability traceable to microbiomes, such as autoimmune issues, chronic ailments, nutritional variations, even effects on mood, etc.

Once a lot of databases were searched, obelisk RNAs turn up everywhere, even in some bacteria.

This work was done entirely in silico. Not a single wet-lab experiment was performed. It is a testament to the power of having a lot of genomes at our disposal, and of modern computational firepower. This lab just had the idea that novel small viroid-like RNAs might exhibit certain types of (circular, self-complementary) structure, which led to this discovery of a novel form of "life". Are these RNAs alive? Certainly not. They are mere molecules and parasites that feed off, and transport themselves between, more fully functional cells. But they are part of the tapestry of life, which itself is wholly molecular, with many amazing emergent properties. Whether these obelisks turn out to have any medical or ecological significance, they are one more example of the lengths (and shorts) to which Darwinian selection has gone in the struggle for existence.
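
To give a flavor of the kind of signature such a search keys on, here is a crude heuristic in Python (emphatically not the authors' actual pipeline; the pairing rule and the toy sequences are invented for illustration) that scores how well an RNA sequence can base-pair with itself when folded end-over-end into a rod:

    # Watson-Crick plus G-U wobble pairs.
    PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

    def rod_score(seq):
        # Fraction of positions that can pair when the RNA folds back on itself.
        half = len(seq) // 2
        hits = sum((seq[i], seq[-1 - i]) in PAIRS for i in range(half))
        return hits / half

    print(rod_score("GGGCGAAAGCGCCC"))   # pairs well with itself: rod-like
    print(rod_score("AAAAAAGGGGGG"))     # cannot pair with itself: no rod

A real search would run proper RNA secondary-structure prediction over circularly permuted candidates and demand near-complete, rod-like pairing; this is only the cartoon version of the idea.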


Saturday, December 2, 2023

Preliminary Pieces of AI

We already live in an AI world, and really, it isn't so bad.

It is odd to hear all the hyperventilating about artificial intelligence of late. One would think it is a new thing, or some science-fiction-y entity. Then there are fears about the singularity and loss of control by humans. Count me a skeptic on all fronts. Man is, and remains, wolf to man. To take one example, we are contemplating the election of perhaps the dumbest person ever to hold the office of president. For the second time. How an intelligence, artificial or otherwise, is supposed to worm its way into power over us is not easy to understand, looking at the nature of humans and of power.

So let's take a step back and figure out what is going on, and where it is likely to take us. AI has become a catch-all for a diversity of computer methods, mostly characterized by being slightly better at doing things we have long wanted computers to do, like interpreting text, speech, and images. But I would offer that it should include much more- all the things we have computers do to manage information. In that sense, we have been living among shards of artificial intelligence for a very long time. We have become utterly dependent on databases, for instance, for our memory functions. Imagine having to chase down a bank balance or a news story without access to the searchable memories that modern databases provide. They are breathtakingly superior to our own intelligence when it comes to the amount they can remember, the accuracy with which they can remember it, and the speed with which they can find it. The same goes for calculations of all sorts, and more recently, complex scientific math like solving atomic structures, creating wonderful CGI graphics, or predicting the weather.

We should view AI as a cabinet filled with many different tools, just as our own bodies and minds are filled with many kinds of intelligence. The integration of our minds into a single consciousness tends to blind us to the diversity of what happens under the hood. While we may want gross measurements like "general intelligence", we also know increasingly that it (whatever "it" is, and whatever it fails to encompass of our many facets and talents) is composed of many functions that several decades of work in AI, computer science, and neuroscience have shown are far more complicated and difficult to replicate than the early AI pioneers imagined, once they got hold of their Turing machine with its infinite potential. 

Originally, we tended to imagine artificial intelligence as a robot- humanoid, slightly odd looking, but just like us in form and abilities. That was a natural consequence of our archetypes and narcissism. But AI is nothing like that, because full-on humanoid consciousness is an impossibly high bar, at least for the foreseeable future, and requires innumerable capabilities and forms of intelligence to be developed first. 

The autonomous car drama is a good example of this. It has taken every ounce of ingenuity and high tech to get to a reasonably passable vehicle, which is able to "understand" key components of the world around it. That a blob in front is a person, instead of a motorcycle, or that a light is a traffic light instead of a reflection of the sun. Just as our brain has a stepwise hierarchy of visual processing, we have recapitulated that evolution here by harnessing cameras in these cars (and lasers, etc.) to not just take in a flat visual scene, which by itself is meaningless, but to parse it into understandable units like ... other cars, crosswalks, buildings, bicyclists, etc. Visual scenes are very rich, and figuring out what is in them is a huge accomplishment.

But is it intelligence? Yes, it certainly is a fragment of intelligence, but it isn't consciousness. Imagine how effortless this process is for us, and how effortful and constricted it is for an autonomous vehicle. We understand everything in a scene within a much wider model of the world, where everything relates to everything else. We evaluate and understand innumerable levels of our environment, from its chemical makeup to its social and political implications. Traffic cones do not freak us out. The bare obstacle course of getting around, such as in a vehicle, is a minor aspect, really, of this consciousness, and of our intelligence. Autonomous cars are barely up to the level of cockroaches, on balance, in overall intelligence.

The AI of text and language handling is similarly primitive. Despite the vast improvements in text translation and interpretation, the underlying models these mechanisms draw on are limited. Translation can be done without understanding text at all, merely by matching patterns from pre-digested pools of pre-translated text, regurgitated as cued by the input text. Siri-like spoken responses, on the other hand, do require some parsing of meaning out of the input, to decide what the topic and the question are. But the scope of these tools tends to be very limited, and the wider the scope they are allowed, the more embarrassing their performance becomes, since they are essentially scraping web sites and text pools for question-response patterns, instead of truly understanding the user's request or any field of knowledge.

Lastly, there are the generative ChatGPT style engines, which also regurgitate text patterns reformatted from public sources in response to topical requests. The ability to re-write a Wikipedia entry through a Shakespeare filter is amazing, but it is really the search / input functions that are most impressive- being able, like the Siri system, to parse through the user's request for all its key points. This betokens some degree of understanding, in the sense that the world of the machine (i.e. its database) is parceled up into topics that can be separately gathered and reshuffled into a response. This requires a pretty broad and structured ontological / classification system, which is one important part of intelligence.

Not only is there a diversity of forms of intelligence to be considered, but there is a vast diversity of expertise and knowledge to be learned. There are millions of jobs and professions, each with their own forms of knowledge. Back in the early days of AI, we thought that expert systems could be instructed by experts, formalizing their expertise. But that turned out to be not just impractical, but impossible, since much of that expertise, formed out of years of study and experience, is implicit and unconscious. That is why apprenticeship among humans is so powerful, offering a combination of learning by watching and learning by doing. Can AI do that? Only if it gains several more kinds of intelligence, including an ability to learn in very un-computer-ish ways.

This analysis has emphasized the diverse nature of intelligences, and the uneven, evolutionary development they have undergone. How close are we to a social intelligence that could understand people's motivations and empathize with them? Not very close at all. How close are we to a scientific intelligence that could find holes in the scholarly literature and manage a research enterprise to fill them? Not very close at all. So it is very early days in terms of anything that could properly be called artificial intelligence, even while bits and pieces have been with us for a long time. We may be in for fifty to a hundred more years of hearing every advance in computer science billed as artificial intelligence.


Uneven development is going to continue to be the pattern, as we seize upon topics that seem interesting or economically rewarding, and do whatever the current technological frontier allows. Memory and calculation were the first to fall, being easily formalizable. Communication network management is similarly positioned. Game learning was next, followed by the Siri / Watson systems for question answering. Then came a frontal assault on language understanding, using the neural network systems, which discard the former expert system's obsession with grammar and rules, for much simpler statistical learning from large pools of text. This is where we are, far from fully understanding language, but highly capable in restricted areas. And the need for better AI is acute. There are great frontiers to realize in medical diagnosis and in the modeling of biological systems, to only name two fields close at hand that could benefit from a thoroughly systematic and capable artificial intelligence.

The problem is that world modeling, which is what languages implicitly stand for, is very complicated. We do not even know how to do this properly in principle, let alone have the mechanisms and scale to implement it. What we have in terms of expert systems and databases does not have the kind of richness or accessibility needed for a fluid and wide-ranging consciousness. Will neural nets get us there? Or ontological systems / databases? Or some combination? However it is done, full world modeling, and the ability to learn continuously into those models, are key capabilities needed for significant artificial intelligence.

After world modeling come other forms of intelligence, like social / emotional intelligence and agency / management intelligence with motivation. I have no doubt that we will get to full machine consciousness at some point. The mechanisms of biological brains are just not sufficiently mysterious to think that they cannot be replicated or improved upon. But we are nowhere near that yet, despite bandying about the term artificial intelligence. When we get there, we will have to pay special attention to the forms of motivation we implant, to mitigate the dangers of making beings who are even more malevolent than those that already exist... us.

Would that constitute some kind of "singularity"? I doubt it. Among humans there are already plenty of smart people and plenty of diversity, which result in niches in which everyone has something useful to do. Technology has been replacing human labor forever, and will continue moving up the chain of capability. And when machines exceed the level of human intelligence, in some general sense, they will get all the difficult jobs. But the job of president? That will still go to a dolt, I have no doubt. Selection for some jobs is by criteria that artificial intelligence, no matter how astute, is not going to fulfill.

Risks? In the current environment, there are plenty of risks, which are typically cases where technology has outrun our will to regulate its social harm. Fake information, thanks to the chatbots and image makers, can now flood the zone. But this is hardly a new phenomenon, and perhaps we need to get back to a position where we do not believe everything we read, in the National Enquirer or on the internet. The quality of our sources may become once again an important consideration, as it always should have been.

Another current risk is that automation invites chaos. For example, in the financial markets, the new technologies seem to calm the markets most of the time, arbitraging with relentless precision. But when things go out of bounds, flash crashes can happen, very destructively. The SEC has sifted through some past events of this kind and set up regulatory guard rails. But they will probably be perpetually behind the curve. Militaries are itching to use robots instead of pilots and soldiers, and to automate killing from afar. But ultimately, control of the military comes down to social power, which comes down to people of not necessarily great intelligence.

The biggest risk from these machines is that of security. If we have our financial markets run by machine, or our medical system run by super-knowledgeable artificial intelligences, or our military by some panopticon neural net, or even just our electrical grid run by super-computers, the problem is not that they will turn against us of their own volition, but that some hacker somewhere will turn them against us. Countless hospitals have already faced ransomware attacks. This is a real problem, growing as machines become more capable and indispensable. If and when we make artificial people, we will need the same kind of surveillance and social control mechanisms over them that we do over everyone else, but with the added option of changing their programming. Again, powerful intelligences made for military purposes to kill our enemies are, by the reflexive property of all technology, prime targets for being turned against us. So just as we have locked up our nuclear weapons and managed to not have them fall into enemy hands (so far), similar safeguards would need to be put on similar powers arising from these newer technologies.

We may have been misled by the many AI and super-beings of science fiction, Nietzsche's Übermensch, and similar archetypes. The point of Nietzsche's construction is moral, not intellectual or physical- a person who has thrown off all the moral boundaries of civilization, especially Christian civilization. But that is a phantasm. The point of most societies is to allow the weak to band together to control the strong and malevolent. A society where the strong band together to enslave the weak ... well, that is surely a nightmare, and more unrealistic the more concentrated the power. We must simply hope that, given the ample time we have before truly comprehensive and superior artificial intelligent beings exist, we will have exercised sufficient care in their construction, and in the health of our political institutions, to control them as we have many other potentially malevolent agents.


  • AI in chemistry.
  • AI to recognize cells in images.
  • Ayaan Hirsi Ali becomes Christian. "I ultimately found life without any spiritual solace unendurable."
  • The racism runs very deep.
  • An appreciation of Stephen J. Gould.
  • Forced arbitration against customers and employees is OK, but fines against frauds... not so much?
  • Oil production still going up.

Saturday, September 30, 2023

Are we all the Same, or all Different?

Refining diversity.

There has been some confusion and convenient logic around diversity. Are we all the same? Conventional wisdom makes us legally the same, and the same in terms of rights, in an ever-expanding program of level playing fields- race, gender, gender preference, neurodiversity, etc. At the same time, conventional wisdom treasures diversity, inclusion, and difference. Educational conventional wisdom assumes all children are the same, and deserve the same investments and education, until some magic point when diversity flowers, and children pursue their individual dreams, applying to higher educational institutions, or not. But here again, selectiveness and merit are highly contested- should all ethnic groups be equally represented at universities, or are we diverse on that plane as well?

It is quite confusing, under the new political correctness program, to tell who is supposed to be the same and who different, in what ways, and for what ends. Some acute social radar is called for to navigate this woke world, and one can sympathize, though not too much, with those who are sick of it and want to go back to simpler times of shameless competition: black and white.

The fundamental tension is that a society needs some degree of solidarity and cohesion to satisfy our social natures and to get anything complex done. At the same time, Darwinian and economic imperatives have us competing with each other at all levels- among nations, ethnicities, states, genders, families, work groups, individuals. We are wonderfully sensitive to infinitesimal differences, which form the soul of Darwinian selection. Woke efforts clearly try to separate differences that are essential and involuntary (which should in principle be excluded from competition) from those that are not fixed, such as personal virtue and work ethic, thus forming the proper field of education and competition.

But that is awfully abstract. Reducing that vague principle to practice is highly fraught. Race, insofar as it can be defined at all, is clearly an essential trait. So race should not be a criterion for any competitive aspect of the society- job hunting, education, customer service. But what about "diversity", and what about affirmative action? Should the competition be weighted a little to make up for past wrongs? How about intelligence? Intelligence is heritable, but we can't call it essential, lest virtually every form of competition in our society be brought to a halt. Higher education and business, and the general business of life, are extremely competitive on the field of intelligence- who can con whom, who can come up with great ideas, write books, do the work, and manage others.

These impulses towards solidarity and competition define our fundamental political divides, with Republicans glorying in the unfairness of life and the success of the rich, while Democrats want everyone to get along, with care for the unfortunate and oppressed. Our social schizophrenia over identity and empathy is expressed in the crazy politics of today. And Republicans reflect contemporary identity politics as well, just in their twisted, white-centric way. We are coming apart socially, and losing key cooperative capacity, which puts our national project in jeopardy. We can grant that the narratives and archetypes that have glued the civic culture together have been fantasies- that everyone is equal, or that the founding fathers were geniuses who selflessly wrought the perfect union. But at the same time, the new mantras of diversity have dangerous aspects as well.


Each side, in archetypal terms, is right and each is an essential element in making society work. Neither side's utopia is either practical or desirable. The Democratic dream is for everyone to get plenty of public services and equal treatment at every possible nexus of life, with morally-informed regulation of every social and economic harm, and unions helping to run every workplace. In the end, there would be little room for economic activity at all- for the competition that undergirds innovation and productivity, and we would find ourselves poverty-stricken, which was what led other socialist/communist states to drastic solutions that were not socially progressive at all.

On the other hand is a capitalist utopia where the winners take not just hundreds of billions of dollars, but everything else as well, such as the freedom of workers to organize or resist, and political power itself. The US would turn into a moneyed class system, just like the old "nobility" of Europe, with serfs. It is the Nietzschean, Randian ideal of total competition, enabling winners to oppress everyone else in perpetuity and, into the bargain, write themselves into the history books as gods.

These are not (and were not, historically) appetizing prospects, and we need the tension of mature and civil political debate between them to find a middle ground that is both fertile and humane. Nature is, as in so many other things, an excellent guide. Cooperation is a big theme in evolution, from the assembly of the eukaryotic cell from prokaryotic precursors, to its wondrous elaboration into multicellular bodies and later into societies such as our own and those of the insects. Cooperation is the way to great accomplishments. Yet competition is the baseline that is equally essential. Diversity, yes, but it is competition and selection among that diversity, joined to cooperative enterprise, that turn the downward trajectory of entropy and decay (as dictated by physics and time) into flourishing progress.


  • Identity, essentialism, and postmodernism.
  • Family structure, ... or diversity?
  • Earth in the far future.
  • Electric or not, cars are still bad.
  • Our non-political and totally not corrupt supreme court.
  • The nightmare of building in California.

Saturday, March 11, 2023

An Origin Story for Spider Venom

Phylogenetic analysis shows that the major component of spider venom derives from one ancient ancestor.

One reason why biologists are so fully committed to the Darwinian account of natural selection and evolution is that it keeps explaining and organizing what we see. Despite the almost incredible diversity and complexity of life, every close look keeps confirming what Darwin sensed and outlined so long ago. In the modern era, biology has gone through the "Modern Synthesis", bringing genetics, molecular biology, and evolutionary theory into alignment with mutually supporting data and theories. For example, it was Linus Pauling and colleagues (after they lost the race to determine the structure of DNA) who proposed that the composition of proteins (hemoglobin, in their case) could be used to estimate evolutionary relationships, both among those molecules, and among their host species.

Naturally, these methods have become vastly more powerful, to the point that most phylogenetic analyses of the relationship between species (including the definition of what species are, vs subspecies, hybrids, etc.) are led these days by DNA analysis, which provides the richest possible trove of differentiating characters- a vast spectrum from universally conserved to highly (and forensically) varying. And, naturally, it also constitutes a record of the mutational steps that make up the evolutionary process. The correlation of such analyses with other traditionally used diagnostic characters, and with the paleontological record, is a huge area of productive science, which leads, again and again, to new revelations about life's history.
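
As an aside, the basic logic of distance-based tree building is simple enough to sketch in a few lines. The following is a toy illustration only- made-up sequences and a bare-bones UPGMA-style joining of the closest clusters- not the authors' actual pipeline, which uses far more sophisticated models of sequence evolution.

from itertools import combinations

# Toy, made-up aligned DNA sequences standing in for homologous gene regions.
seqs = {
    "speciesA": "ATGGCTTGTAAACCGTGC",
    "speciesB": "ATGGCTTGCAAACCGTGC",
    "speciesC": "ATGGCATGCAAGCCTTGC",
    "speciesD": "ATGTCATGCAAGCCTTGT",
}

def p_distance(a, b):
    """Fraction of aligned positions that differ: the crudest possible distance."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def cluster_distance(c1, c2):
    """Average pairwise distance between members of two clusters (UPGMA-style)."""
    pairs = [(a, b) for a in c1 for b in c2]
    return sum(p_distance(seqs[a], seqs[b]) for a, b in pairs) / len(pairs)

# Start with each sequence as its own cluster, then repeatedly merge the
# closest pair; the order of merges is a rough stand-in for tree topology.
clusters = [(name,) for name in seqs]
while len(clusters) > 1:
    i, j = min(combinations(range(len(clusters)), 2),
               key=lambda ij: cluster_distance(clusters[ij[0]], clusters[ij[1]]))
    print("join", clusters[i], "+", clusters[j],
          "at distance %.3f" % cluster_distance(clusters[i], clusters[j]))
    clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [clusters[i] + clusters[j]]

Run on these toy sequences, the two most similar "species" join first, recapitulating in miniature how molecular differences get organized into a tree.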


One sample structure of a DRP, the disulfide-rich protein that makes up most of spider venoms. The disulfide bond (between two cysteines) is shown in red. There is usually another disulfide helping to hold the two halves of the molecule together as well. The rest of the molecule is (evolutionarily and structurally) free to change shape and character, in order to carry out its neuron-channel-blocking or other toxic function.

One small example was published recently, in a study of spider venoms. Spiders arose, by current estimates, about 375 million years ago, and comprise the second most prevalent form of animal life, second only to their cousins, the insects. They generally have a hunting lifestyle, using venom to immobilize their prey after capture and before digestion. These venoms are highly complex brews that can have over a hundred distinct molecules, including potassium, acids, tissue- and membrane-digesting enzymes, nucleosides, pore-forming peptides, and neurotoxins. At over three-fourths of the venom, the protein-based neurotoxins are the most interesting and best-studied of the venom components, and a spider typically deploys dozens of types in its venom. They are also called cysteine-rich peptides or disulfide-rich peptides (DRPs) due to their composition. The fact that spiders tend to each have a large variety of these DRPs in their collection argues that a lot of gene duplication and diversification has occurred.

A general phylogenetic tree of spiders (left). On the right are the signal peptides of a variety of venoms from some of these species. The identity of many of these signal sequences, which are not present in the final active protein, is a sign that these venom genes were recently duplicated.

So where do they come from? Sequences of the peptides themselves are of limited assistance, being small (averaging ~60 amino acids) and under extensive selection to diversify. But they are processed from larger proteins (pro-proteins) and genes that show better conservation, providing the present authors with more material for their evolutionary studies. The figure above, for example, shows, on the far right, the signal peptides from families of these DRP genes from single species. Signal peptides are the small leading section of a translated protein that directs it to be secreted rather than being kept inside the cell. Once the protein has been delivered to the right place, this signal is clipped off, and thus it is not part of the mature venom protein. These signal peptides tend to be far more conserved than the mature venom protein, despite the fact that they have little to do- just send the protein to the right place, which can be accomplished by all sorts of sequences. This contrast is a sign that the mature venoms are under positive evolutionary pressure- to be more effective, to extend the range of possible victims, and to overcome whatever resistance the victims might evolve against them.
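
The conservation contrast is also easy to quantify. Below is a minimal sketch- an invented toy alignment, with a hypothetical boundary of eight residues between signal and mature regions- showing how per-column identity can be compared across the two regions; the real analysis of course uses the authors' actual alignments and far more sequences.

# Toy, made-up aligned precursor peptides: signal peptide = first 8 columns,
# mature toxin = the rest. Purely illustrative, not sequences from the paper.
alignment = [
    "MKTLLVFACLFGSACERCTG",
    "MKTLLVFAVMYGQNCDKCSG",
    "MKTLLVFAGLWGRDCEHCAG",
]
SIGNAL_END = 8  # hypothetical signal/mature boundary

def percent_identity(columns):
    """Fraction of alignment columns in which every sequence has the same residue."""
    return sum(len(set(col)) == 1 for col in columns) / len(columns)

columns = list(zip(*alignment))
print("signal region identity: %.0f%%" % (100 * percent_identity(columns[:SIGNAL_END])))
print("mature region identity: %.0f%%" % (100 * percent_identity(columns[SIGNAL_END:])))

In this toy example the signal region is fully conserved while the mature region has diverged, with the cysteines as the conspicuous exception- the same qualitative picture the authors describe.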

Indeed, these authors show specifically that strong positive selection is at work, which is one more insight that molecular data can provide: first, by comparing the rates of change at protein-coding positions that are neutral with respect to the genetic code (synonymous) versus those that change the protein sequence (non-synonymous); and second, by the pattern and tempo of evolution of venom sequences compared with the mass of neutral sequences of the species.
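
For readers unfamiliar with that first test, here is a bare-bones sketch of how codon differences get classified as synonymous or non-synonymous. It uses made-up sequences and only counts raw codon changes; a real dN/dS estimate (such as the Nei-Gojobori method or a likelihood approach) also has to normalize by the number of synonymous and non-synonymous sites available.

from itertools import product

# Standard genetic code, built from the usual NCBI table layout (bases in TCAG order).
AAS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {"".join(c): aa for c, aa in zip(product("TCAG", repeat=3), AAS)}

def classify_codon_changes(cds1, cds2):
    """Count differing codons as synonymous (same amino acid) or non-synonymous.
    Assumes the two coding sequences are aligned, in frame, and of equal length."""
    syn = nonsyn = 0
    for i in range(0, len(cds1), 3):
        c1, c2 = cds1[i:i+3], cds2[i:i+3]
        if c1 == c2:
            continue
        if CODON_TABLE[c1] == CODON_TABLE[c2]:
            syn += 1
        else:
            nonsyn += 1
    return syn, nonsyn

# Toy, made-up in-frame coding sequences.
syn, nonsyn = classify_codon_changes("ATGTGTAAACGTTGCGGA", "ATGTGCAGACGTTGTGGA")
print("synonymous changes:", syn, " non-synonymous changes:", nonsyn)

An excess of non-synonymous over synonymous change (after site normalization) is the molecular signature of positive selection that the authors report for the mature venom region.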

"Given their significant sequence divergence since their deep-rooted evolutionary origin, the entire protein-coding gene, including the signal and propeptide regions, has accumulated significant differences. Consistent with this hypothesis, the majority of positively selected sites (~96%) identified in spider venom DRP toxins (all sites in Araneomorphae, and all but two sites in Mygalomorphae) were restricted to the mature peptide region, whereas the signal and propeptide regions harboured a minor proportion of these sites (1% and 3%, respectively)."

 

Phylogenetic tree (left), connecting up venom genes from across the spider phylogeny. On right, some of the venom sequences are shown just by their cysteine (C) locations, which form the basic structural scaffold of these proteins (top figure).


The more general phylogenetic analysis from all their sequences tells these authors that all the venom DRP genes, from all spider species, came from one origin. One easy way to see this is in the image above on the right, where just the cysteine scaffolds of these proteins from around the phylogeny are lined up, showing that this scaffold is very highly conserved, regardless of the rest of the sequence. This finding (which confirms prior work) is surprising, since venoms of other animals, like snakes, tend to incorporate a motley bunch of active enzymes and components, recruited from a variety of ancestral sources. So to see spiders sticking so tenaciously to this fundamental structure and template for the major component of their venom is impressive- clearly it is a very effective molecule. The authors point out that cone snails, another notorious venom-maker, originated much more recently (about 45 million years ago), and show the same pattern of using one ancestral form to evolve a diversified blizzard of venom components, which have been of significant interest to medical science.
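
The cysteine-scaffold comparison is also simple to illustrate. The sketch below uses invented peptide sequences (not ones from the paper) and just extracts the spacing pattern between cysteines- the "framework" that stays put while the intervening residues churn under selection.

# Toy, made-up mature DRP sequences from hypothetical spider lineages.
peptides = {
    "lineage1": "ACKGVFDACTPGKNECCPNRVCSDKHKWCKWKL",
    "lineage2": "GCLGENVPCDENDPRCCSGLVCLKPTLHCIWKY",
}

def cysteine_framework(seq):
    """Return the spacing pattern between cysteines, e.g. C-x6-C-x6-C-C..."""
    positions = [i for i, aa in enumerate(seq) if aa == "C"]
    gaps = [b - a - 1 for a, b in zip(positions, positions[1:])]
    return "C" + "".join("-C" if g == 0 else "-x%d-C" % g for g in gaps)

for name, seq in peptides.items():
    print(name, cysteine_framework(seq))

Despite having quite different sequences, the two toy peptides print the same framework- the population-level pattern visible in the authors' figure.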


  • Example: a spider swings a bolas to snare a moth.

Saturday, February 18, 2023

Everything is Alive, but the Gods are all Dead

Barbara Ehrenreich's memoir and theological ruminations in "Living with a Wild God".

It turns out that everyone is a seeker. Somewhere there must be something or someone to tell us the meaning of life- something we don't have to manufacture with our own hands, but rather can go into a store and buy. Atheists are just as much seekers as anyone else, only they never find anything worth buying. The late writer Barbara Ehrenreich was such an atheist, as well as a remarkable writer and intellectual who wrote a memoir of her formation. Unusually and fruitfully, it focuses on those intense early and teen years when we are reaching out with both hands to seize the world- a world that is maddeningly just beyond our grasp, full of secrets and codes it takes a lifetime and more to understand. Religion is the ultimate hidden secret, the greatest mystery which has been solved in countless ways, each of them conflicting and confounding.

Ehrenreich's tale is more memoir than theology, taking us on a tour through a dysfunctional childhood with alcoholic parents and tough love. A story of growth, striking out into the world, and sad coming-to-terms with the parents who each die tragically. But it also turns on a pattern of mystical experiences that she keeps having, throughout her adult life, which she ultimately diagnoses as dissociative states where she zones out and has a sort of psychedelic communion with the world.

"Something peeled off the visible world, taking with it all meaning, inference, association, labels, and words. I was looking at a tree, and if anyone had asked, that's what I would have said I was doing, but the word "tree" was gone, along with all the notions of tree-ness that had accumulated in the last dozen years or so since I had acquired language. Was it a place that was suddenly revealed to me? Or was it a substance- the indivisible, elemental material out of which the entire known and agreed-upon world arises as a fantastic elaboration? I don't know, because this substance, this residue, was stolidly, imperturbably mute. The interesting thing, some might say alarming, was that when you take away all the human attributions- the words, the names of species, the wisps of remembered tree-related poetry, the fables of photosynthesis and capillary action- that when you take all this this away, there is still something left."

This is not very hard to understand as a neurological phenomenon of some kind of transient disconnection of just the kind of brain areas she mentions- those that do all the labeling, name-calling, and boxing-in. In schizophrenia, it runs to the pathological, but in Ehrenreich's case, she does not regard it as pathological at all, as it is always quite brief. But obviously, the emotional impact and weirdness of the experience- that is something else altogether, and something that humans have been inducing with drugs, and puzzling over, forever. 


As a memoir, the book is very engaging. As a theological quest, however, it doesn't work as well, because the mystical experience is, as noted above, resolutely meaningless. It compels Ehrenreich neither to take up Christianity, as after a Pauline conversion, nor any other faith or belief system. It offers a peek behind the curtain, but, stripped of meaning as this view is, Ehrenreich is perhaps too skeptical or too bereft of imagination to give it another, whether of her own making or one available from the conventional array of sects and religions. So while the experiences are doubtless mystical, one cannot call them religious, let alone god-given, because Ehrenreich hasn't interpreted them that way. This hearkens back to the writings of William James, who declined to assign general significance to mystical experiences, while freely admitting their momentous and convincing nature to those who experienced them.

Only in one brief section (which had clearly been originally destined for an entirely different book) does she offer a more interesting and insightful analysis. There, Ehrenreich notes that the history of religion can be understood as a progressive bloodbath of deicide. At first, everything is alive and sacred, to an animist mind. Every leaf and grain of sand holds wonders. Every stream and cloud is divine. This is probably our natural state, which a great deal of culture has been required to stamp out of us. Next is a hunting kind of religion, where deities are concentrated in the economic objects (and social patterns) of the tribe- the prey animals, the great plants that are eaten, and perhaps the more striking natural phenomena and powerful beasts. But by the time of paganism, the pantheon is cut down still more and tamed into a domestic household, with its soap-opera dramas and an increasingly tight focus on the major gods- the head of the family, as it were. 

Monotheism comes next, doing away with all the dedicated gods of the ocean, of medicine, of amor and war, etc., cutting the cast down to one. One, which is inflated to absurd proportions with all-goodness, all-power, all-knowledge, etc. A final and terrifying authoritarianism, probably patterned on the primitive royal state. This is the phase when the natural world is left in the lurch, as an undeified and unprotected zone where human economic greed can run rampant, safe in the belief that the one god is focused entirely on man's doings, whether for good or for ill, not on that of any other creature or feature of the natural world. A phase when even animals, who are so patently conscious, can, through the narcissism of primitive science and egoistic religion, be deemed mere mechanisms without feeling. This process doesn't even touch on the intercultural deicide committed by colonialism and conquest.

This in turn invites the last deicide- that by rational people who toss aside this now-cartoonish super-god, and return to a simpler reverence for the world as we naturally respond to it, without carting in a lot of social power-and-drama baggage. It is the cultural phase we are in right now, but the transition is painfully slow, uneven, and drawn-out. For Ehrenreich, there are plenty of signs- in the non-linear chemical phenomena of her undergraduate research, in the liveliness of quantum physics even into the non-empty vacuum, in the animals who populate our world and are perhaps the alien consciousnesses that we should be seeking in place of the hunt through outer space, and in our natural delight in, and dreams about, nature at large. So she ends the book as atheist as ever, but hinting that perhaps the liveliness of the universe around us holds some message that we are not the only thinking and sentient beings.

"Ah, you say, this is all in your mind. And you are right to be skeptical; I expect no less. It is in my mind, which I have acknowledged from the beginning is a less than perfect instrument. but this is what appears to be the purpose of my mind, and no doubt yours as well, its designed function beyond all the mundane calculations: to condense all the chaos and mystery of the world into a palpable Other or Others, not necessarily because we love it, and certainly not out of any intention to "worship" it. But because ultimately we may have no choice in the matter. I have the impression, growing out of the experiences chronicled here, that it may be seeking us out." 

Thus the book ends, and I find it a rather poor ending. It feels ripped from an X-Files episode, highly suggestive and playing into all the Deepak Chopra and similar mystical tropes of cosmic consciousness. That is, if this passage really means much at all. Anyhow, the rest of the trip is well worth it, and it is appropriate to return to the issue of the mystical experience, which is here handled with such judicious care and restraint. Where imagination could have run rampant, the coolly scientific view (Ehrenreich had a doctorate in biology) is that the experiences she had, while fascinating and possibly book-proposal-worthy, did not force a religious interpretation. This is radically unlike the treatment of such matters in countless other hands, needless to say. Perhaps our normal consciousness should not be automatically valued less than rarer and more esoteric states, just because it is common, or because it is even-tempered.


  • God would like us to use "they".
  • If you are interested in early Christianity, Gnosticism is a good place to start.
  • Green is still an uphill battle.