
Sunday, July 7, 2024

Living on the Edge of Chaos

 Consciousness as a physical process, driven by thalamic-cortical communication.

Who is conscious? This is not only a medical and practical question, but a deep philosophical and scientific one. Over the last century, and especially the last few decades, we have become increasingly comfortable assigning consciousness to other animals, in ever-wider circles of understanding and empathy. The complex lives of chimpanzees, as brought into our consciousness by Jane Goodall, are one example. Birds are increasingly appreciated for their intelligence and strategic maneuvering. Insects are another frontier, as we come to appreciate the complex communication and navigation strategies that honeybees, for one example, employ. Where does it end? Are bacteria conscious?

I would define consciousness as responsiveness crosschecked with a rapidly accessible model of the world. Responsiveness alone, such as in a thermostat, is not consciousness. But a Nest thermostat that checks the weather, knows its occupant's habits and schedules, and tracks the prices of gas and electricity ... that might be a little conscious(!) But back to the chain of being- are jellyfish conscious? They are quite responsive, and have a few thousand networked neurons that might well be computing the expected conditions outside, so I would count them as borderline conscious. That is generally where I would put the dividing line, with plants, bacteria, and sponges as not conscious, and organisms with brains, even quite decentralized ones like octopuses and snails, as conscious, with jellyfish as slightly conscious. Consciousness is an infinitely graded condition, which in us reaches great heights of richness, but presumably starts at a very simple level.

These classifications imply that consciousness is a property of very primitive brains, and thus, in our brains, is likely to be driven by very primitive levels of their anatomy. And that brings us to the two articles for this week, about current theories and work centered on the thalamus as a driver of human consciousness. One paper relates some detailed experimental and modeled tests of information transfer that characterize a causal back-and-forth between the thalamus and the cortex, in which there is a frequency division between thalamic 10 - 20 Hz oscillations, whose information is then re-encoded and reflected in cortical oscillations of much higher frequency, at about 50 - 150 Hz. It also continues a long-running theme in the field, characterizing the edge-of-chaos nature of electrical activity in these thalamus-cortex communications as just the kind of signaling suited to consciousness, and tracks the variation of chaoticity during anesthesia, waking, psychedelic drug usage, and seizure. A background paper provides a general review of this field, showing that the thalamus seems to be a central orchestrator of both the activation and maintenance of consciousness, as well as of its contents and form.

The thalamus is at the very center of the brain, and, as is typical for primitive parts of the brain, it packs a lot of molecular diversity, cell types, and anatomy into a very small region. More recently evolved areas of the brain tend to be more anatomically and molecularly uniform, while supporting more complexity at the computational level. The thalamus has about thirty "nuclei", or anatomical areas that have distinct patterns of connections and cell types. It is known to relay sensory signals to the cortex, and to be central to sleep control and alertness. It sits right over the brain stem, and has radiating connections out to, and back from, the cerebral cortex, suggestive of a hub-like role.

The thalamus is networked with recurrent connections all over the cortex.


The first paper's first claim is that electrical oscillations in the thalamus and the cortex are interestingly related. Mice, rats, and humans were all used as subjects and gave consistent results over the testing, supporting the idea that, at the very least, we think alike, even if what we think about may differ. What is encoded in the awake brain at 1-13 Hz in the thalamus appears in correlated form (that is, in a transformed way) at 50-100+ Hz in the cortex. The authors study the statistics of recordings from both areas to argue that there is directional information flow, marked not just by the anatomical connections, but by active, harmonic entrainment and recoding. But this relationship fails to occur in unconscious states, even though the thalamus is at this time (in sleep, or anesthesia) helping to drive slow wave sleep patterns directly in the cortex. This supports the idea that there is a summary function going on, where richer information processed in the cortex is reduced in dimension into the vaguer hunches and impressions that make up our conscious experience. Even when our feelings and impressions are very vivid, they probably do not do justice to the vast processing power operating in the cortex, which is mostly unconscious.
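To make "directional information flow" concrete, here is a toy sketch in Python- entirely synthetic signals and invented parameters, not the paper's actual analysis, which uses far more sophisticated directed-information measures. A slow "thalamic" signal modulates the amplitude of a fast "cortical" oscillation after a transmission delay, and a simple time-lagged correlation then recovers the direction of flow:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n, lag = 1000, 60_000, 25              # 1 kHz sampling, 60 s, 25 ms delay
t = np.arange(n) / fs

# A slowly varying, random "thalamic" signal (correlation time ~30 ms)
drive = np.convolve(rng.standard_normal(n), np.ones(30) / 30, mode="same")
thalamus = drive + 0.1 * rng.standard_normal(n)

# "Cortex": an 80 Hz carrier whose amplitude re-encodes the drive, 25 ms later
cortex = np.sin(2 * np.pi * 80 * t) * (1 + 2 * np.roll(drive, lag))
cortex += 0.1 * rng.standard_normal(n)

# Recover the cortical gamma envelope by rectifying and smoothing
envelope = np.convolve(np.abs(cortex), np.ones(10) / 10, mode="same")

def lagged_corr(x, y, k):
    """Correlation of x(t) with y(t+k); positive k means x leads y."""
    return np.corrcoef(x[:-k], y[k:])[0, 1]

# The first value should come out far higher than the second: the thalamic
# signal predicts the future cortical envelope, but not the reverse.
print("thalamus -> cortex:", lagged_corr(thalamus, envelope, lag))
print("cortex -> thalamus:", lagged_corr(envelope, thalamus, lag))
```

The point is just that directional information flow cashes out as an asymmetry in time-lagged prediction between the two recordings; the real analyses control for many more confounds.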

Administering psychedelic drugs to their experimental animals caused greater information transfer backwards, from the cortex to the thalamus, suggesting that the animal's conscious experience was being flooded. They also observe that these loops from the thalamus to cortex and back have an edge-of-chaos form. They are complex, ever-shifting, and information-rich. Chaos is an interesting concept in information science, quite distinct from noise. Chaos is deterministic, in that the same starting conditions should always produce the same results. But chaos is non-linear, in that small changes to initial conditions can generate large swings in the output. Limited chaos is characteristic of living systems, which have feedback controls to limit the range of activity, but also have high sensitivity to small inputs, new information, etc., and thus are highly responsive. Noise, in contrast, consists of random changes to a signal that are not reproducible and are not part of any control mechanism.
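The difference is easy to demonstrate. Here is a minimal sketch using the logistic map, a standard toy model of deterministic chaos (the map and its parameters are illustrative only, nothing from the paper):

```python
import numpy as np

def logistic(x0, r=3.9, steps=50):
    """Iterate the logistic map x -> r*x*(1-x), chaotic at r = 3.9."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return np.array(xs)

a = logistic(0.400000)
b = logistic(0.400000)          # identical start: identical trajectory
c = logistic(0.400001)          # start perturbed by one part in a million

print(np.allclose(a, b))        # True: deterministic, fully reproducible
print(np.abs(a - c).max())      # large: the tiny perturbation has exploded

# Noise, by contrast, is not reproducible from run to run at all;
# two differently seeded generators stand in for two "runs" here.
r1, r2 = np.random.default_rng(1), np.random.default_rng(2)
print(r1.standard_normal(3), r2.standard_normal(3))
```

Living systems appear to sit in between: deterministic and feedback-controlled, but poised where small inputs can still tip large-scale outcomes.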

Unfortunately, I don't think the figures from this paper support their claims very well, or at least not clearly, so I won't show them. It is exploratory work, on the whole. At any rate, they are working from, and contributing to, a by now quite well-supported paradigm that puts the thalamus at the center of conscious experience. For example, direct electrical stimulation of the center of the thalamus can bring animals immediately up from unconsciousness induced by anesthesia. Conversely, stimulation at the same place, but with a different electrical frequency (10 Hz, rather than 50 Hz), causes immediate freezing and vacancy of expression in the animal, suggesting interference with consciousness. Second, the thalamus is known to be the place that gates what sensory data enters consciousness, based on a long history of attention, binocular rivalry, and blindsight experiments.

A schematic of how stimulation of the thalamus (in middle) interacts with the overlying cortex. CL stands for the central lateral nucleus of the thalamus, where these stimulation experiments were targeted. The Greek letters alpha and gamma stand for different frequency bands of the neural oscillation.

So, from both anatomical perspectives and functional ones, the thalamus appears at the center of conscious experience. This is a field that is not going to be revolutionized by a lightning insight or a new equation. It is looking very much like a matter of normal science slowly accumulating ever-more refined observations, simulations, better technologies, and theories that gain, piece by piece, on this most curious of mysteries.


Saturday, February 17, 2024

A New Form of Life is Discovered

An extremely short RNA is infectious and prevalent in the human microbiome.

While the last century might be called the DNA century, at least for molecular biology, the current century might be called that of RNA. A blizzard of new RNA types and potentials has been discovered in the normal eukaryotic milieu, including miRNAs, eRNAs, and lincRNAs. An RNA virus caused a pandemic, which was remedied by an RNA vaccine. Nobel prizes have been handed out in these fields, and we are also increasingly aware that RNA lies at the origin of life itself, as the first genetic and catalytic mechanism.

One of these Nobel prize winners recently undertook a hunt for small RNAs that might be lurking in the human microbiome- the soup of bacteria, fungi, and all the combined products that cover our surfaces, inside and out. What they found was astonishing- an RNA of merely 1164 nucleotides, which folds up into rigid, linear rods that they call "obelisks". This is not a product of the host genome, nor of any other known organism, but is rather some kind of extremely minimal pathogen that, like a transposon or self-splicing intron, is entirely nucleic-acid based. And the more they hunted, the more they found, ultimately turning up thousands of obelisk-like entities hidden in the many databases of the world drawn from various environmental and microbiome samples. There is some precedent for this kind of structure, in the form of hepatitis D. This "viroid" of only 1682 nucleotides is a parasite of hepatitis B virus, depending on that virus for its envelope protein, and thus for packaging and transmission. While normal viruses (like hepatitis B) encode many key functions of their own, like envelope proteins, genome packaging proteins, and replication enzymes, viroids tend to not encode anything, though hepatitis D does encode one antigenic protein, which exacerbates hepatitis B infections.

The obelisk RNA viroid-like species appear to encode one or two proteins, and possibly a ribozyme as well. The functions of all these are as yet unknown, but the RNAs necessarily rely entirely on the functions of some host cell (currently unknown) to do their thing, such as an RNA polymerase to create copies of them. Unknown also is whether they are dependent on other viruses, or only on cells, for their propagation. With only sequences in hand, the researchers can do a great deal of bioinformatics, such as predicting the structure of the encoded protein, and the structure of the RNA genome. But key biology, like how they interact with host cells, what functions the host provides, and how they replicate, not to mention possible pathogenic consequences, remains unknown.

The highly self-complementary structure of one obelisk RNA sequence, leading to its identification and naming. In green is one reading frame, which codes for the main protein, of unknown function.

The curious thing about these new obelisk viroid-like RNAs is that, while common in human microbiomes, both oral and gut-derived, they are found in only 5-10% of samples, not in all of them. This suggests that they may account for some of the variability traceable to microbiomes, such as autoimmune issues, chronic ailments, nutritional variations, even effects on mood.

Once a lot of databases were searched, obelisk RNAs turned up everywhere, even in some bacteria.

This work was done entirely in silico. Not a single wet-lab experiment was performed. It is a testament to the power of having a lot of genomes at our disposal, and of modern computational firepower. This lab just had the idea that novel small viroid-like RNAs might exhibit certain types of (circular, self-complementary) structure, which led to this discovery of a novel form of "life". Are these RNAs alive? Certainly not. They are mere molecules and parasites that feed off, and transport themselves between, more fully functional cells. But they are part of the tapestry of life, which itself is wholly molecular, with many amazing emergent properties. Whether or not these obelisks turn out to have any medical or ecological significance, they are one more example of the lengths (and shorts) to which Darwinian selection has gone in the struggle for existence.
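For flavor, here is a toy version of that structural search idea- a scoring of how well a sequence can fold back on itself into a rod. The sequences are invented, the scoring is naive (a linear fold-back at the midpoint, ignoring the actual circularity of obelisks), and the real pipeline handled much more; this only illustrates the principle:

```python
# Watson-Crick pairs plus G:U wobble
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def rod_score(seq):
    """Fraction of positions that can base-pair when the sequence is
    folded back on itself at the midpoint (a rod / hairpin geometry)."""
    n = len(seq) // 2
    first, second = seq[:n], seq[::-1][:n]   # second half, read backwards
    return sum((a, b) in PAIRS for a, b in zip(first, second)) / n

arbitrary = "AUGGCUACGAUCGGAUACGCUAGC"        # an arbitrary invented sequence
rod_like  = "GGGGCCAUGCAUAAAAAUGCAUGGCCCC"    # invented to fold on itself

print(rod_score(arbitrary))   # 0.5 here, not far above the ~0.37 chance level
print(rod_score(rod_like))    # ~0.86: everything but the loop pairs up
```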


Saturday, December 2, 2023

Preliminary Pieces of AI

We already live in an AI world, and really, it isn't so bad.

It is odd to hear all the hyperventilating about artificial intelligence of late. One would think it is a new thing, or some science-fiction-y entity. Then there are fears about the singularity and loss of control by humans. Count me a skeptic on all fronts. Man is, and remains, wolf to man. To take one example, we are contemplating the election of perhaps the dumbest person ever to hold the office of president. For the second time. How an intelligence, artificial or otherwise, is supposed to worm its way into power over us is not easy to understand, looking at the nature of humans and of power.

So let's take a step back and figure out what is going on, and where it is likely to take us. AI has become a catch-all for a diversity of computer methods, mostly characterized by being slightly better at doing things we have long wanted computers to do, like interpreting text, speech, and images. But I would offer that it should include much more- all the things we have computers do to manage information. In that sense, we have been living among shards of artificial intelligence for a very long time. We have become utterly dependent on databases, for instance, for our memory functions. Imagine having to chase down a bank balance or a news story without access to the searchable memories that modern databases provide. They are breathtakingly superior to our own intelligence when it comes to the amount of things they can remember, the accuracy with which they can remember them, and the speed with which they can find them. The same goes for calculations of all sorts, and more recently, complex scientific math like solving atomic structures, creating wonderful CGI graphics, or predicting the weather.

We should view AI as a cabinet filled with many different tools, just as our own bodies and minds are filled with many kinds of intelligence. The integration of our minds into a single consciousness tends to blind us to the diversity of what happens under the hood. While we may want gross measurements like "general intelligence", we also know increasingly that it (whatever "it" is, and whatever it fails to encompass of our many facets and talents) is composed of many functions- functions that several decades of work in AI, computer science, and neuroscience have shown to be far more complicated and difficult to replicate than the early AI pioneers imagined, once they got hold of their Turing machine with its infinite potential.

Originally, we tended to imagine artificial intelligence as a robot- humanoid, slightly odd looking, but just like us in form and abilities. That was a natural consequence of our archetypes and narcissism. But AI is nothing like that, because full-on humanoid consciousness is an impossibly high bar, at least for the foreseeable future, and requires innumerable capabilities and forms of intelligence to be developed first. 

The autonomous car drama is a good example of this. It has taken every ounce of ingenuity and high tech to get to a reasonably passable vehicle, one able to "understand" key components of the world around it: that a blob in front is a person, instead of a motorcycle, or that a light is a traffic light instead of a reflection of the sun. Just as our brain has a stepwise hierarchy of visual processing, we have recapitulated that evolution here by harnessing cameras in these cars (and lasers, etc.) to not just take in a flat visual scene, which by itself is meaningless, but to parse it into understandable units like other cars, crosswalks, buildings, bicyclists, etc. Visual scenes are very rich, and figuring out what is in them is a huge accomplishment.

But is it intelligence? Yes, it certainly is a fragment of intelligence, but it isn't consciousness. Imagine how effortless this process is for us, and how effortful and constricted it is for an autonomous vehicle. We understand everything in a scene within a much wider model of the world, where everything relates to everything else. We evaluate and understand innumerable levels of our environment, from its chemical makeup to its social and political implications. Traffic cones do not freak us out. The bare obstacle course of getting around, such as in a vehicle, is a minor aspect, really, of this consciousness, and of our intelligence. Autonomous cars are barely up to the level of cockroaches, on balance, in overall intelligence.

The AI of text and language handling is similarly primitive. Despite the vast improvements in text translation and interpretation, the underlying models these mechanisms draw on are limited. Translation can be done without understanding text at all, merely by matching patterns from pre-digested pools of pre-translated text, regurgitated as cued by the input text. Siri-like spoken responses, on the other hand, do require some parsing of meaning out of the input, to decide what the topic and the question are. But the scope of these tools tends to be very limited, and the wider the scope they are allowed, the more embarrassing their performance, since they are essentially scraping web sites and text pools for question-response patterns, instead of truly understanding the user's request or any field of knowledge.

Lastly, there are the generative ChatGPT style engines, which also regurgitate text patterns reformatted from public sources in response to topical requests. The ability to re-write a Wikipedia entry through a Shakespeare filter is amazing, but it is really the search / input functions that are most impressive- being able, like the Siri system, to parse through the user's request for all its key points. This betokens some degree of understanding, in the sense that the world of the machine (i.e. its database) is parceled up into topics that can be separately gathered and reshuffled into a response. This requires a pretty broad and structured ontological / classification system, which is one important part of intelligence.

Not only is there a diversity of forms of intelligence to be considered, but there is a vast diversity of expertise and knowledge to be learned. There are millions of jobs and professions, each with their own forms of knowledge. Back in the early days of AI, we thought that expert systems could be instructed by experts, formalizing their expertise. But that turned out to be not just impractical, but impossible, since much of that expertise, formed out of years of study and experience, is implicit and unconscious. That is why apprenticeship among humans is so powerful, offering a combination of learning by watching and learning by doing. Can AI do that? Only if it gains several more kinds of intelligence, including an ability to learn in very un-computer-ish ways.

This analysis has emphasized the diverse nature of intelligences, and the uneven, evolutionary development they have undergone. How close are we to a social intelligence that could understand people's motivations and empathize with them? Not very close at all. How close are we to a scientific intelligence that could find holes in the scholarly literature and manage a research enterprise to fill them? Not very close at all. So it is very early days in terms of anything that could properly be called artificial intelligence, even while bits and pieces have been with us for a long time. We may be in for fifty to a hundred more years of hearing every advance in computer science being billed as artificial intelligence.


Uneven development is going to continue to be the pattern, as we seize upon topics that seem interesting or economically rewarding, and do whatever the current technological frontier allows. Memory and calculation were the first to fall, being easily formalizable. Communication network management is similarly positioned. Game learning was next, followed by the Siri / Watson systems for question answering. Then came a frontal assault on language understanding, using the neural network systems, which discard the former expert systems' obsession with grammar and rules in favor of much simpler statistical learning from large pools of text. This is where we are, far from fully understanding language, but highly capable in restricted areas. And the need for better AI is acute. There are great frontiers to realize in medical diagnosis and in the modeling of biological systems, to name only two fields close at hand that could benefit from a thoroughly systematic and capable artificial intelligence.

The problem is that world modeling, which is what languages implicitly stand for, is very complicated. We do not even know how to do this properly in principle, let alone having the mechanisms and scale to implement it. What we have in terms of expert systems and databases does not have the kind of richness or accessibility needed for a fluid and wide-ranging consciousness. Will neural nets get us there? Or ontological systems / databases? Or some combination? However it is done, full world modeling, with the ability to learn continuously into those models, is a key capability needed for significant artificial intelligence.

After world modeling come other forms of intelligence like social / emotional intelligence and agency / management intelligence with motivation. I have no doubt that we will get to full machine consciousness at some point. The mechanisms of biological brains are just not sufficiently mysterious to think that they can not be replicated or improved upon. But we are nowhere near that yet, despite bandying about the word artificial intelligence. When we get there, we will have to pay special attention to the forms of motivation we implant, to mitigate the dangers of making beings who are even more malevolent than those that already exist... us.

Would that constitute some kind of "singularity"? I doubt it. Among humans there are already plenty of smart people and diversity, which result in niches for everyone having something useful to do. Technology has been replacing human labor forever, and will continue moving up the chain of capability. And when machines exceed the level of human intelligence, in some general sense, they will get all the difficult jobs. But the job of president? That will still go to a dolt, I have no doubt. Selection for some jobs is by criteria that artificial intelligence, no matter how astute, is not going to fulfill.

Risks? In the current environment, there are plenty of risks, typically cases where technology has outrun our will to regulate its social harm. Fake information, thanks to the chatbots and image makers, can now flood the zone. But this is hardly a new phenomenon, and perhaps we need to get back to a position where we do not believe everything we read, in the National Enquirer or on the internet. The quality of our sources may become once again an important consideration, as it always should have been.

Another current risk is chaos from automation. In the financial markets, for example, the new technologies seem to calm the markets most of the time, arbitraging with relentless precision. But when things go out of bounds, flash crashes can happen, very destructively. The SEC has sifted through some past events of this kind and set up regulatory guard rails. But they will probably be perpetually behind the curve. Militaries are itching to use robots instead of pilots and soldiers, and to automate killing from afar. But ultimately, control of the military comes down to social power, which comes down to people of not necessarily great intelligence.

The biggest risk from these machines is that of security. If we have our financial markets run by machine, or our medical system run by super-knowledgeable artificial intelligences, or our military by some panopticon neural net, or even just our electrical grid run by super-computers, the problem is not that they will turn against us of their own volition, but that some hacker somewhere will turn them against us. Countless hospitals have already faced ransomware attacks. This is a real problem, growing as machines become more capable and indispensable. If and when we make artificial people, we will need the same kind of surveillance and social control mechanisms over them that we do over everyone else, but with the added option of changing their programming. Again, powerful intelligences made for military purposes to kill our enemies are, by the reflexive property of all technology, prime targets for being turned against us. So just as we have locked up our nuclear weapons and managed to not have them fall into enemy hands (so far), similar safeguards would need to be put on similar powers arising from these newer technologies.

We may have been misled by the many AI and super-beings of science fiction, Nietzsche's Übermensch, and similar archetypes. The point of Nietzsche's construction is moral, not intellectual or physical- a person who has thrown off all the moral boundaries of civilization, especially Christian civilization. But that is a phantasm. The point of most societies is to allow the weak to band together to control the strong and malevolent. A society where the strong band together to enslave the weak... well, that is surely a nightmare, and more unrealistic the more concentrated the power. We must simply hope that, given the ample time we have before truly comprehensive and superior artificial intelligent beings exist, we will have exercised sufficient care in their construction, and in the health of our political institutions, to control them as we have many other potentially malevolent agents.


  • AI in chemistry.
  • AI to recognize cells in images.
  • Ayaan Hirsi Ali becomes Christian. "I ultimately found life without any spiritual solace unendurable."
  • The racism runs very deep.
  • An appreciation of Stephen J. Gould.
  • Forced arbitration against customers and employees is OK, but fines against frauds... not so much?
  • Oil production still going up.

Saturday, September 30, 2023

Are we all the Same, or all Different?

Refining diversity.

There has been some confusion and convenient logic around diversity. Are we all the same? Conventional wisdom makes us legally the same, and the same in terms of rights, in an ever-expanding program of level playing fields- race, gender, gender preference, neurodiversity, etc. At the same time, conventional wisdom treasures diversity, inclusion, and difference. Educational conventional wisdom assumes all children are the same, and deserve the same investments and education, until some magic point when diversity flowers, and children pursue their individual dreams, applying to higher educational institutions, or not. But here again, selectiveness and merit are highly contested- should all ethnic groups be equally represented at universities, or are we diverse on that plane as well?

It is quite confusing, on the new political correctness program, to tell who is supposed to be the same and who different, in what ways, and for what ends. Some acute social radar is called for to navigate this woke world, and one can sympathize, though not too much, with those who are sick of it and want to go back to simpler times of shameless competition, black and white.

The fundamental tension is that a society needs some degree of solidarity and cohesion to satisfy our social natures and to get anything complex done. At the same time, Darwinian and economic imperatives have us competing with each other at all levels- among nations, ethnicities, states, genders, families, work groups, individuals. We are wonderfully sensitive to infinitesimal differences, which form the soul of Darwinian selection. Woke efforts clearly try to separate differences that are essential and involuntary (which should in principle be excluded from competition) from those that are not fixed, such as personal virtue and work ethic, which thus form the proper field of education and competition.

But that is awfully abstract. Reducing that vague principle to practice is highly fraught. Race, insofar as it can be defined at all, is clearly an essential trait. So race should not be a criterion for any competitive aspect of the society- job hunting, education, customer service. But what about "diversity", and what about affirmative action? Should the competition be weighted a little to make up for past wrongs? How about intelligence? Intelligence is heritable, but we can't call it essential, lest virtually every form of competition in our society be brought to a halt. Higher education and business, and the general business of life, are extremely competitive on the field of intelligence- who can con whom, who can come up with great ideas, write books, do the work, and manage others.

These impulses towards solidarity and competition define our fundamental political divides, with Republicans glorying in the unfairness of life and the success of the rich, while Democrats want everyone to get along, with care for the unfortunate and oppressed. Our social schizophrenia over identity and empathy is expressed in the crazy politics of today. And Republicans reflect contemporary identity politics as well, just in their twisted, white-centric way. We are coming apart socially, losing key cooperative capacity, and that puts our national project in jeopardy. We can grant that the narratives and archetypes that have glued the civic culture together have been fantasies- that everyone is equal, or that the founding fathers were geniuses who selflessly wrought the perfect union. But at the same time, the new mantras of diversity have dangerous aspects as well.


Each side, in archetypal terms, is right and each is an essential element in making society work. Neither side's utopia is either practical or desirable. The Democratic dream is for everyone to get plenty of public services and equal treatment at every possible nexus of life, with morally-informed regulation of every social and economic harm, and unions helping to run every workplace. In the end, there would be little room for economic activity at all- for the competition that undergirds innovation and productivity, and we would find ourselves poverty-stricken, which was what led other socialist/communist states to drastic solutions that were not socially progressive at all.

On the other hand is a capitalist utopia where the winners take not just hundreds of billions of dollars, but everything else, such as the freedom of workers to organize or resist, and political power as well. The US would turn into a moneyed class system, just like the old "nobility" of Europe, with serfs. It is the Nietzschean, Randian ideal of total competition, enabling winners to oppress everyone else in perpetuity, and, into the bargain, write themselves into the history books as gods.

These are not (and were not, historically) appetizing prospects, and we need the tension of mature and civil political debate between them to find a middle ground that is both fertile and humane. Nature is, as in so many other things, an excellent guide. Cooperation is a big theme in evolution, from the assembly of the eukaryotic cell from prokaryotic precursors, to its wondrous elaboration into multicellular bodies and later into societies such as our own and those of the insects. Cooperation is the way to great accomplishments. Yet competition is the baseline that is equally essential. Diversity, yes, but it is competition and selection among that diversity, together with cooperative enterprise, that turn the downward trajectory of entropy and decay (as dictated by physics and time) into flourishing progress.


  • Identity, essentialism, and postmodernism.
  • Family structure, ... or diversity?
  • Earth in the far future.
  • Electric or not, cars are still bad.
  • Our non-political and totally not corrupt supreme court.
  • The nightmare of building in California.

Saturday, March 11, 2023

An Origin Story for Spider Venom

Phylogenetic analysis shows that the major component of spider venom derives from one ancient ancestor.

One reason why biologists are so fully committed to the Darwinian account of natural selection and evolution is that it keeps explaining and organizing what we see. Despite the almost incredible diversity and complexity of life, every close look keeps confirming what Darwin sensed and outlined so long ago. In the modern era, biology has gone through the "Modern Synthesis", bringing genetics, molecular biology, and evolutionary theory into alignment with mutually supporting data and theories. For example, it was Linus Pauling and colleagues (after they lost the race to determine the structure of DNA) who proposed that the composition of proteins (hemoglobin, in their case) could be used to estimate evolutionary relationships, both among those molecules, and among their host species.

Naturally, these methods have become vastly more powerful, to the point that most phylogenetic analyses of the relationship between species (including the definition of what species are, vs subspecies, hybrids, etc.) are led these days by DNA analysis, which provides the richest possible trove of differentiating characters- a vast spectrum from universally conserved to highly (and forensically) varying. And, naturally, it also constitutes a record of the mutational steps that make up the evolutionary process. The correlation of such analyses with other traditionally used diagnostic characters, and with the paleontological record, is a huge area of productive science, which leads, again and again, to new revelations about life's history.


One sample structure of a DRP- the disulfide-rich protein class that makes up most of spider venom. The disulfide bond (between two cysteines) is shown in red. There is usually another disulfide helping to hold the two halves of the molecule together as well. The rest of the molecule is (evolutionarily and structurally) free to change shape and character, in order to carry out its neuron-channel-blocking or other toxic function.

One small example was published recently, in a study of spider venoms. Spiders arose, by current estimates, about 375 million years ago, and comprise the second most prevalent form of animal life, after their cousins, the insects. They generally have a hunting lifestyle, using venom to immobilize their prey after capture and before digestion. These venoms are highly complex brews that can have over a hundred distinct molecules, including potassium, acids, tissue- and membrane-digesting enzymes, nucleosides, pore-forming peptides, and neurotoxins. At over three-fourths of the venom, the protein-based neurotoxins are the most interesting and best studied of the venom components, and a spider typically deploys dozens of types in its venom. They are also called cysteine-rich peptides or disulfide-rich peptides (DRPs) due to their composition. The fact that spiders tend to each have a large variety of these DRPs in their collection argues that a lot of gene duplication and diversification has occurred.

A general phylogenetic tree of spiders (left). On the right are the signal peptides of a variety of venoms from some of these species. The identity of many of these signal sequences, which are not present in the final active protein, is a sign that these venom genes were recently duplicated.

So where do they come from? Sequences of the peptides themselves are of limited assistance, being small (averaging ~60 amino acids) and under extensive selection to diversify. But they are processed from larger proteins (pro-proteins) and genes that show better conservation, providing the present authors more material for their evolutionary studies. The figure above, for example, shows, on the far right, the signal peptides from families of these DRP genes from single species. Signal peptides are the small leading section of a translated protein that directs it to be secreted rather than being kept inside the cell. Once the protein has been routed to the right place, this signal is clipped off, and thus it is not part of the mature venom protein. These signal peptides tend to be far more conserved than the mature venom protein, despite the fact that they have little to do- just send the protein to the right place, which can be accomplished by all sorts of sequences. This contrast is a sign that the venoms are under positive evolutionary pressure- to be more effective, to extend the range of possible victims, and to overcome whatever resistance the victims might evolve against them.

Indeed, these authors show specifically that strong positive selection is at work, which is one more insight that molecular data can provide: first, by comparing the rates of change at protein-coding positions that are neutral under the genetic code (synonymous) with those that alter the protein sequence (non-synonymous); and second, by comparing the pattern and tempo of evolution of venom sequences with the mass of neutral sequences of the species.

"Given their significant sequence divergence since their deep-rooted evolutionary origin, the entire protein-coding gene, including the signal and propeptide regions, has accumulated significant differences. Consistent with this hypothesis, the majority of positively selected sites (~96%) identified in spider venom DRP toxins (all sites in Araneomorphae, and all but two sites in Mygalomorphae) were restricted to the mature peptide region, whereas the signal and propeptide regions harboured a minor proportion of these sites (1% and 3%, respectively)."

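To see what the synonymous / non-synonymous comparison amounts to, here is a bare-bones sketch. The "venom gene" fragments are invented, and a real analysis (dN/dS under codon models, as in PAML-style tools) would also normalize by the number of possible synonymous and non-synonymous sites, which this toy skips:

```python
# The standard genetic code, laid out in TCAG order
BASES = "TCAG"
AMINO = ("FFLLSSSSYY**CC*W" "LLLLPPPPHHQQRRRR"
         "IIIMTTTTNNKKSSRR" "VVVVAAAADDEEGGGG")
CODON = {a + b + c: AMINO[16 * i + 4 * j + k]
         for i, a in enumerate(BASES)
         for j, b in enumerate(BASES)
         for k, c in enumerate(BASES)}

def classify_differences(seq1, seq2):
    """Count synonymous vs non-synonymous codon differences between two
    aligned, in-frame coding sequences. Crude: it ignores multiple hits
    per codon and does no normalization by the number of possible sites."""
    syn = nonsyn = 0
    for i in range(0, min(len(seq1), len(seq2)) - 2, 3):
        c1, c2 = seq1[i:i + 3], seq2[i:i + 3]
        if c1 != c2:
            if CODON[c1] == CODON[c2]:
                syn += 1         # different codon, same amino acid
            else:
                nonsyn += 1      # the protein sequence itself changed
    return syn, nonsyn

# Two invented toy "venom gene" fragments
gene_a = "ATGTGTAAACGCTGC"
gene_b = "ATGTGCCGACGATGC"
print(classify_differences(gene_a, gene_b))   # (2, 1) for these toy sequences
```

An excess of non-synonymous over synonymous change, relative to the neutral expectation, is the molecular signature of positive selection.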
 

Phylogenetic tree (left), connecting up venom genes from across the spider phylogeny. On right, some of the venom sequences are shown just by their cysteine (C) locations, which form the basic structural scaffold of these proteins (top figure).


The more general phylogenetic analysis from all their sequences tells these authors that all the venom DRP genes, from all spider species, came from one origin. One easy way to see this is in the image above on the right, where just the cysteine scaffolds of these proteins from around the phylogeny are lined up, showing that this scaffold is very highly conserved, regardless of the rest of the sequence. This finding (which confirms prior work) is surprising, since the venoms of other animals, like snakes, tend to incorporate a motley bunch of active enzymes and components, sourced from a variety of ancestral origins. So to see spiders sticking so tenaciously to this fundamental structure and template for the major component of their venom is impressive- clearly it is a very effective molecule. The authors point out that cone snails, another group of notorious venom-makers, originated much more recently (about 45 million years ago), and show the same pattern of using one ancestral form to evolve a diversified blizzard of venom components, which have been of significant interest to medical science.
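The cysteine-scaffold comparison in the figure above can be mimicked in a few lines. The peptides below are invented for illustration; the point is that wildly divergent sequences can share an identical cysteine spacing, betraying a common ancestral template:

```python
def cys_scaffold(seq):
    """Reduce a peptide to its cysteine scaffold: keep the Cs, and
    collapse everything between them to a residue count."""
    return "C".join(str(len(part)) for part in seq.split("C"))

# Invented, highly divergent "venom peptides" sharing one scaffold
venoms = [
    "GACKGVFDACTPGKNECCPNRVLCSDKHKCKWKL",
    "AECLGVMRQCDPSTNKCCQGLVSCGRTFKCVMPQ",
]
for v in venoms:
    print(cys_scaffold(v))    # both print 2C6C6C0C5C5C4
```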


  • Example: a spider swings a bolas to snare a moth.

Saturday, February 18, 2023

Everything is Alive, but the Gods are all Dead

Barbara Ehrenreich's memoir and theological ruminations in "Living with a Wild God".

It turns out that everyone is a seeker. Somewhere there must be something or someone to tell us the meaning of life- something we don't have to manufacture with our own hands, but rather can go into a store and buy. Atheists are just as much seekers as anyone else, only they never find anything worth buying. The late writer Barbara Ehrenreich was such an atheist, as well as a remarkable writer and intellectual who wrote a memoir of her formation. Unusually and fruitfully, it focuses on those intense early and teen years when we are reaching out with both hands to seize the world- a world that is maddeningly just beyond our grasp, full of secrets and codes it takes a lifetime and more to understand. Religion is the ultimate hidden secret, the greatest mystery which has been solved in countless ways, each of them conflicting and confounding.

Ehrenreich's tale is more memoir than theology, taking us on a tour through a dysfunctional childhood with alcoholic parents and tough love. A story of growth, striking out into the world, and sad coming-to-terms with the parents who each die tragically. But it also turns on a pattern of mystical experiences that she keeps having, throughout her adult life, which she ultimately diagnoses as dissociative states where she zones out and has a sort of psychedelic communion with the world.

"Something peeled off the visible world, taking with it all meaning, inference, association, labels, and words. I was looking at a tree, and if anyone had asked, that's what I would have said I was doing, but the word "tree" was gone, along with all the notions of tree-ness that had accumulated in the last dozen years or so since I had acquired language. Was it a place that was suddenly revealed to me? Or was it a substance- the indivisible, elemental material out of which the entire known and agreed-upon world arises as a fantastic elaboration? I don't know, because this substance, this residue, was stolidly, imperturbably mute. The interesting thing, some might say alarming, was that when you take away all the human attributions- the words, the names of species, the wisps of remembered tree-related poetry, the fables of photosynthesis and capillary action- that when you take all this away, there is still something left."

This is not very hard to understand as a neurological phenomenon of some kind of transient disconnection of just the kind of brain areas she mentions- those that do all the labeling, name-calling, and boxing-in. In schizophrenia, it runs to the pathological, but in Ehrenreich's case, she does not regard it as pathological at all, as it is always quite brief. But obviously, the emotional impact and weirdness of the experience- that is something else altogether, and something that humans have been inducing with drugs, and puzzling over, forever. 


As a memoir, the book is very engaging. As a theological quest, however, it doesn't work as well, because the mystical experience is, as noted above, resolutely meaningless. It neither compels Ehrenreich to take up Christianity, as after a Pauline conversion, nor any other faith or belief system. It offers a peek behind the curtain, but, stripped of meaning as this view is, Ehrenreich is perhaps too skeptical or bereft of imagination to give it one, whether of her own making or from the conventional array of sects and religions. So while the experiences are doubtless mystical, one can not call them religious, let alone god-given, because Ehrenreich hasn't interpreted them that way. This hearkens back to the writings of William James, who declined to assign general significance to mystical experiences, while freely admitting their momentous and convincing nature to those who experienced them.

Only in one brief section (which had clearly been originally destined for an entirely different book) does she offer a more interesting and insightful analysis. There, Ehrenreich notes that the history of religion can be understood as a progressive bloodbath of deicide. At first, everything is alive and sacred, to an animist mind. Every leaf and grain of sand holds wonders. Every stream and cloud is divine. This is probably our natural state, which a great deal of culture has been required to stamp out of us. Next is a hunting kind of religion, where deities are concentrated in the economic objects (and social patterns) of the tribe- the prey animals, the great plants that are eaten, and perhaps the more striking natural phenomena and powerful beasts. But by the time of paganism, the pantheon is cut down still more and tamed into a domestic household, with its soap-opera dramas and an increasingly tight focus on the major gods- the head of the family, as it were. 

Monotheism comes next, doing away with all the dedicated gods of the ocean, of medicine, of amor and war, etc., cutting the cast down to one. One, which is inflated to absurd proportions with all-goodness, all-power, all-knowledge, etc. A final and terrifying authoritarianism, probably patterned on the primitive royal state. This is the phase when the natural world is left in the lurch, as an undeified and unprotected zone where human economic greed can run rampant, safe in the belief that the one god is focused entirely on man's doings, whether for good or for ill, not on that of any other creature or feature of the natural world. A phase when even animals, who are so patently conscious, can, through the narcissism of primitive science and egoistic religion, be deemed mere mechanisms without feeling. This process doesn't even touch on the intercultural deicide committed by colonialism and conquest.

This in turn invites the last deicide- that by rational people who toss aside this now-cartoonish super-god, and return to a simpler reverence for the world as we naturally respond to it, without carting in a lot of social power-and-drama baggage. It is the cultural phase we are in right now, but the transition is painfully slow, uneven, and drawn-out. For Ehrenreich, there are plenty of signs- in the non-linear chemical phenomena of her undergraduate research, in the liveliness of quantum physics even into the non-empty vacuum, in the animals who populate our world and are perhaps the alien consciousnesses that we should be seeking in place of the hunt through outer space, and in our natural delight in, and dreams about, nature at large. So she ends the book as atheist as ever, but hinting that perhaps the liveliness of the universe around us holds some message that we are not the only thinking and sentient beings.

"Ah, you say, this is all in your mind. And you are right to be skeptical; I expect no less. It is in my mind, which I have acknowledged from the beginning is a less than perfect instrument. But this is what appears to be the purpose of my mind, and no doubt yours as well, its designed function beyond all the mundane calculations: to condense all the chaos and mystery of the world into a palpable Other or Others, not necessarily because we love it, and certainly not out of any intention to "worship" it. But because ultimately we may have no choice in the matter. I have the impression, growing out of the experiences chronicled here, that it may be seeking us out."

Thus the book ends, and I find it a rather poor ending. It feels ripped from an X-Files episode, highly suggestive and playing into all the Deepak Chopra-style tropes of cosmic consciousness. That is, if this passage really means much at all. Anyhow, the rest of the trip is well worth it, and it is appropriate to return to the issue of the mystical experience, which is here handled with such judicious care and restraint. Where imagination could have run rampant, the coolly scientific view (Ehrenreich had a doctorate in biology) is that the experiences she had, while fascinating and possibly book-proposal-worthy, did not force a religious interpretation. This is radically unlike the treatment of such matters in countless other hands, needless to say. Perhaps our normal consciousness should not be automatically valued less than rarer and more esoteric states, just because it is common, or because it is even-tempered.


  • God would like us to use "they".
  • If you are interested in early Christianity, Gnosticism is a good place to start.
  • Green is still an uphill battle.

Saturday, December 31, 2022

Hand-Waving to God

A decade on, the Discovery Institute is still cranking out skepticism, diversion, and obfuscation.

A post a couple of weeks ago mentioned that the Discovery Institute had offered a knowledgeable critique of the lineages of the Ediacaran fauna. They have raised their scientific game significantly, and so I wanted to review what they are doing these days, focusing on two of their most recent papers. The Discovery Institute has a lineage of its own, from creationism. It has adapted to the derision that creationism attracted by retreating to "intelligent design", which is creationism without naming the creators, nailing down the schedule of creation, or providing any detail of how and from where creation operates. Their review of the Ediacaran fauna raised some highly skeptical points about whether these organisms were animals at all. In particular, they suggested that cholesterol is not really restricted to animals, so the chemical traces of cholesterol that were so clearly found in the Dickinsonia fossil layers might not really mean that these were animals- they might also be unusual protists of gigantic size, or odd plant forms, etc. While the critique is not unreasonable, it does not alter the balance of the evidence, which does indeed point to an animal affinity. These fauna are so primitive and distant that it is fair to say that we can not be sure, and particularly we can not be sure that they had any direct ancestral relationship to any later organisms of the ensuing Cambrian period, when recognizable animals emerged.

Fair enough. But what of their larger point? The Discovery Institute is trying to make a point, I believe, about the suddenness of early Cambrian animal evolution, and thus its implausibility under conventional evolutionary theory. But we are traversing tens of millions of years through these intervals, which is a long time, even in evolutionary terms. Secondly, the Ediacaran period, though now represented by several exquisite fossil beds, spanned a hundred million years and is still far from completely characterized paleontologically- even supposing that early true animals, probably infinitesimal and very soft-bodied, would have fossilized at all. So the Cambrian biota could easily have predecessors in the Ediacaran that may or may not yet have been observed- it is as yet not easy to say. But what we can not claim is the negative, that no predecessors existed before some time X- say the 540 MYA point at the base of the Cambrian. So the implication that the Discovery Institute is attempting to suggest has very little merit, particularly since everything that they themselves cite about the molecular and paleontological sequence is so clearly progressive and in proper time sequence, in complete accord with the overall theory of evolution.

For we should always keep in mind that an intelligent designer has a free hand, and can make all of life in a day (or in six, if absolutely needed). The fact that this designer works in the shadows of slightly altered mutation rates, or in a few million years rather than twenty million, and never puts fossils out of sequence in the sedimentary record, is an acknowledgement that this designer is a bit dull, and bears a strong resemblance to evolution by natural selection. To put it in psychological terms, the institute is in the "bargaining" stage of grief- over the death of god.

Saturday, December 24, 2022

Brain Waves: Gaining Coherence

Current thinking about communication in the brain: the Communication Through Coherence framework.

Eyes are windows to the soul. They are visible outposts of the brain that convey outwards what we are thinking, as they gather in the riches of our visible surroundings. One of their less appreciated characteristics is that they flit from place to place as we observe a scene, never resting in one place. These movements are called saccades, and they represent an involuntary redirection of attention all over a visual scene that we are studying, in order to gather high resolution impressions from places of interest. Saccades happen at a variety of rates, with intervals centered around 0.1 second. And just as the raster scanning of a TV or monitor can tell us something about how it or its signal works, the eye saccade is thought, by the theory presented below, to reflect a theta rhythm in the brain that is responsible for resetting attention- here, in the visual system.

That theory is Communication Through Coherence (CTC), which appears to be the dominant theory of how neural oscillations (aka brain waves) function. (This post is part of what seems like a yearly series of updates on the progress neuroscience is making in deciphering what brain waves do, and how the brain works generally.) The paper presenting the theory appeared in 2014, but it expressed ideas that had been floating around for a long time, and it has since been taken up by numerous other groups that provide empirical and modeling support. A recent paper (titled "Phase-locking patterns underlying effective communication in exact firing rate models of neural networks") offers full-throated support from a computer modeling perspective, for instance. But I would like to go back and explore the details of the theory itself.

The communication part of the theory is how thoughts get communicated within the brain. Communication and processing are simultaneous in the brain, since it is physically arranged to connect processing chains (such as visual processing) together as cells that communicate consecutively, for example creating increasingly abstract representations during sensory processing. While the anatomy of the brain is pretty well set in a static way, it is the dynamic communication among cells and regions of the brain that generates our unconscious and conscious mental lives. Not all parts can be talking at the same time- that would be chaos. So there must be some way to control mental activity to manageable levels of communication. That is where coherence comes in. The theory (and a great deal of observation) posits that gamma waves in the brain, which run from about 30 Hz upwards all the way to 200 Hz, link together neurons and larger assemblages / regions into transient co-firing coalitions that send thoughts from one place to another, precisely and rapidly, insulated from the noise of other inputs. This is best studied in the visual system which has a reasonably well-understood and regimented processing system that progresses from V1 through V4 levels of increasing visual field size and abstraction, and out to cortical areas of cognition.

The basis of brain waves is that neural firing is rapid, and is followed by a refractory period of a few milliseconds, during which the neuron is resistant to further input. Then it can fire again, and will do so if there are enough inputs at its dendrites. There are also inhibitory cells all over the neural system, damping the system down so that it does not run to epileptic extremes of universal activation. So if one set of cells entrains the next set in a rhythmic firing pattern, those cells tend to stay entrained for a while, and then get reset by way of slower oscillations, such as the theta rhythm, which runs at about 4-8 Hz. Those entrained cells are, during their refractory periods, also resistant to inputs that are not synchronized, essentially blocking out noise. In this way trains of signals can selectively travel up from lower processing levels to higher ones, over large distances and over multiple cell connections in the brain.
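This selectivity can be illustrated with a toy integrate-and-fire neuron- all parameters invented, and vastly simpler than real neurons or the models used in the field. Synchronized volleys arriving at a gamma rate drive it reliably and at a fixed phase, while the same total input shuffled in time mostly fails to:

```python
import numpy as np

rng = np.random.default_rng(42)
dt = 0.001                                 # 1 ms time step
t = np.arange(0.0, 2.0, dt)                # 2 s of simulated input

def lif_spikes(drive, tau=0.005, v_thresh=1.0, t_refract=0.004):
    """Leaky integrate-and-fire neuron with an absolute refractory period."""
    v, block, spikes = 0.0, 0, []
    for i, inp in enumerate(drive):
        if block:                          # refractory: all input is ignored
            block -= 1
            continue
        v += dt * (-v / tau + inp)         # leak plus input current
        if v >= v_thresh:
            spikes.append(t[i])
            v, block = 0.0, int(t_refract / dt)
    return np.array(spikes)

gamma = 60.0                               # Hz
# Synchronized volleys: a brief burst of input once per gamma cycle
rhythmic = 600.0 * (np.sin(2 * np.pi * gamma * t) > 0.9)
# Control: the same total input, shuffled in time (no synchrony)
shuffled = rng.permutation(rhythmic)

for name, drive in [("rhythmic", rhythmic), ("shuffled", shuffled)]:
    s = lif_spikes(drive)
    phases = (s * gamma) % 1.0             # spike phase within the gamma cycle
    print(name, len(s), "spikes, phase spread:", round(phases.std(), 3))
# The rhythmic drive fires once per cycle, locked to one phase; the shuffled
# drive, with identical average input, fires far less and at random phases.
```

The same setup hints at the frequency competition discussed next: volleys that arrive just as the refractory window ends are the ones best placed to win.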

An interesting part of the theory is that frequency is very important. There is a big difference between slower and faster entraining gamma rhythms. Ones that run slower than the going rate do not get traction and die out, while those that run faster hit the optimal post-refractory excitable state of the receiving cells, and tend to gain traction in entraining them downstream. This sets up a hierarchy where increasing salience, whether established through intrinsic inputs, or through top-down attention, can be encoded in higher, stronger gamma frequencies, winning this race to entrain downstream cells. This explains to some degree why EEG patterns of the brain are so busy and chaotic at the gamma wave level. There are always competing processes going on, with coalitions forming and reforming in various frequencies of this wave, chasing their tails as they compete for salience.

There are often bidirectional processes in the brain, where downstream units talk back to upstream ones. While originally imagined to be bidirectionally entrained in the same gamma rhythm, the CTC theory now recognizes that the distance / lag in signaling would make this impossible, and separates them as distinct streams, observing that the cellular targets of backwards streams are typically not identical to those generating the forward streams. So a one-cycle offset, with a few intermediate cells, would account for this type of interaction, still in gamma rhythm.

Lastly, attention remains an important focus of this theory, so to speak. How are inputs chosen, if not by their intrinsic salience, such as flashes in a visual scene? How does a top-down, intentional search of a visual scene, or a desire to remember an event, work? CTC posits that two other wave patterns are operative. First is the theta rhythm of about 4-8 Hz, which is slow enough to encompass many gamma cycles and offer a reset to the system, overpowering other waves with its inhibitory phase. The idea is that salience needs to be re-established freshly each theta cycle (as in eye saccades), with maybe a dozen gamma cycles within each theta cycle that can grow and entrain the necessary higher level processing. Note how this agrees with our internal sense of thoughts flowing and flitting about, with our attention rapidly darting from one thing to the next.

"The experimental evidence presented and the considerations discussed so far suggest that top-down attentional influences are mediated by beta-band synchronization, that the selective communication of the attended stimulus is implemented by gamma-band synchronization, and that gamma is rhythmically reset by a 4 Hz theta rhythm."

Attention itself, as a large-scale backward-flowing process, is hypothesized to operate in the alpha/beta bands of oscillations, about 8 - 30 Hz. It reaches backward over connections distinct from the forward ones, (indeed, through distinct anatomical layers of the cortex), into lower areas of processing, such as locations in the visual scene, colors sought after, or a position on a page of text. This slower rhythm could entrain selected lower-level regions, setting some to have in-phase and stronger gamma rhythms vs. other areas not activated in this way. Why the theta and the alpha/beta rhythms have dramatically different properties is not dwelt on by this paper. One can speculate that each can entrain other areas of the brain, but the theta rhythm is long and strong enough to squelch ongoing gamma rhythms and start many off at the same time in a new competitive race, while the alpha/beta rhythms are brief enough, and perhaps weak enough and focused enough, to start off new gamma rhythms in selected regions that quickly form winning coalitions heading upstream.
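
This kind of nesting of gamma inside slower rhythms is something that can actually be measured in recordings. A common recipe, sketched below on a synthetic, fabricated signal (the band edges and the Tort-style modulation index are standard choices in the field, not anything specific to this paper), extracts theta phase and gamma amplitude with a Hilbert transform and asks whether gamma power clusters at particular theta phases:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0
t = np.arange(0, 10, 1 / fs)

# Synthetic signal: 6 Hz theta whose peaks modulate 60 Hz gamma bursts, plus noise.
theta = np.sin(2 * np.pi * 6 * t)
gamma = (1 + theta) * 0.3 * np.sin(2 * np.pi * 60 * t)
sig = theta + gamma + 0.2 * np.random.default_rng(1).standard_normal(t.size)

def bandpass(x, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

phase = np.angle(hilbert(bandpass(sig, 4, 8)))    # theta phase
amp = np.abs(hilbert(bandpass(sig, 40, 80)))      # gamma amplitude envelope

# Bin gamma amplitude by theta phase; a flat profile means no coupling.
bins = np.linspace(-np.pi, np.pi, 19)
profile = np.array([amp[(phase >= lo) & (phase < hi)].mean()
                    for lo, hi in zip(bins[:-1], bins[1:])])
p = profile / profile.sum()
modulation_index = (np.log(len(p)) + (p * np.log(p)).sum()) / np.log(len(p))
print(f"modulation index: {modulation_index:.3f}")   # near 0 if uncoupled
```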

Experiments on the nature of attention. The stimulus shown to a subject (probably a monkey) is in A. In E, the monkey was trained to attend to the same spots as in A, even though both were visible. V1 refers to the lowest level of the visual processing area of the brain, which shows activity when stimulated (B, F) whether or not attention is paid to the stimulus. On the other hand, V4 is a much higher level in the visual processing system, subject to control by attention. There, (C, G), the gamma rhythm shows clearly that only one stimulus is being fielded.

The paper discussing this hypothesis cites a great deal of supporting empirical work, and much more has accumulated in the ensuing eight years. While plenty of loose ends remain and we cannot yet visualize this mechanism in real time, (though faster MRI is on the horizon), this seems to be the leading hypothesis that both explains the significance and prevalence of neural oscillations, and goes some distance toward explaining mental processing in general, including abstraction, binding, and attention. Progress has come not by great theoretical leaps from any one person or institution, but rather by the slow accumulation of research that is extremely difficult to do, but of such great interest that there are people dedicated enough to do it (with or without the willing cooperation of countless poor animals) and agencies willing to fund it.


  • Local media is a different world now.
  • Florida may not be a viable place to live.
  • Google is god.

Saturday, December 17, 2022

The Pillow Creatures That Time Forgot

Did the Ediacaran fauna lead to anything else, or was it a dead end?

While to a molecular biologist, the evolution of the eukaryotic cell is probably the greatest watershed event after the advent of life itself, most others would probably go with the rise of animals and plants, after about three billion years of exclusively microbial life. This event is commonly located at the base of the Cambrian, (i.e. the Cambrian explosion), which is where the fossils that Darwin and his contemporaries were familiar with began, about 540 million years ago. Darwin was puzzled by this sudden start of the fossil record, from apparently nothing, and presciently held (as he did in the case of the apparent age of the sun) that the data were faulty, and that the ancient character of life on earth would leave other traces much farther back in time.

That has indeed proved to be the case. There are signs of microbial life going back over three billion years, and whole geologies through the subsequent eons that depend on its activity, such as the banded iron formations prevalent around two billion years ago that testify to the slow oxygenation of the oceans by photosynthesizing microbes. And signs of animal life prior to the Cambrian, going back roughly to 600 million years ago, have also turned up after much deeper investigation of the fossil record. This immediately pre-Cambrian period is labeled the Ediacaran, for one of its fossil-bearing sites in Australia. A recent paper looked over this whole period to ask whether the evolution of proto-animals during this time was a steady process, or punctuated by mass extinction event(s). They conclude that, despite the patchy record, there is enough to say that there was a steady (if extremely slow) march of ecological diversification and specialization through the time, until the evolution of true animals in the Cambrian literally ate up all the Ediacaran fauna.

Fossil impression of Dickinsonia, with trailing impressions that some think might be a trail from movement. Or perhaps just friends in the neighborhood.
 
For the difference between the Ediacaran fauna and that of the Cambrian is stark. The Ediacaran fauna is beautiful, but simple. There are no backbones, no sensory organs. No mouth, no limbs, no head. In developmental terms, they seem to have had only two embryological cell layers, rather than our three, which makes all the difference in terms of complexity. How they ate remains a mystery, but they are assumed to have simply osmosed nutrients from their environment, thanks to their apparently flat forms. A bit like sponges today. As they were the most complex animals at the time, (and some were large, up to 2 meters long), they may have had an easy time of it, simply plopping themselves on top of rich microbial mats, oozing with biofilms and other nutrients.
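
The geometry behind that osmotic-feeding guess is easy to check with a back-of-the-envelope calculation (the body size and thickness below are rough, assumed figures, not measurements). Flattening the same volume multiplies the absorptive surface and, just as importantly, keeps every cell within diffusion range of the outside:

```python
import math

volume = 1000.0                          # cm^3: a fair-sized Ediacaran body (assumed)

# Shape 1: a sphere of the same volume.
r = (3 * volume / (4 * math.pi)) ** (1 / 3)
sphere_surface = 4 * math.pi * r ** 2

# Shape 2: a flat sheet half a centimeter thick (assumed).
thickness = 0.5
sheet_surface = 2 * volume / thickness   # top and bottom faces dominate

print(f"sphere: {sphere_surface:.0f} cm^2, deepest cell {r:.1f} cm from surface")
print(f"sheet:  {sheet_surface:.0f} cm^2, deepest cell {thickness/2:.2f} cm from surface")
# ~484 vs 4000 cm^2: roughly eight times the surface, and no cell far from food.
```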

The paper provides a schematic view of the ecology at single locations, and also of longer-term evolution, from a sequence of views (i.e. fossils) obtained from different locations around the world of roughly ten million year intervals through the Ediacaran. One noticeable trend is the increasing development or prevalence of taller fern-like forms that stick up into the water over time, versus the flatter bottom-dwelling forms. This may reflect some degree of competition, perhaps after the bottom microbial mats have been over-"grazed". A second trend is towards slightly more complexity at the end of the period, with one very small form (form C (a) in the image below) even marked by shell remains, though what its animal inhabitant looked like is unknown. 

Schematic representation of putative animals observed during the Ediacaran period, from early, (A, ~570 MYA, Avalon assemblage), middle, (B, ~554 MYA, White Sea and other assemblages), and late (C, ~545 MYA, Nama assemblage). The A panel is also differentiated by successional forms from early to mature ecosystems, while the C panel is differentiated by ocean depth, from shallow to deep. The persistence of these forms is quite impressive overall, as is their common simplicity. But lurking somewhere among them are the makings of far more complicated animals.

Very few of these organisms have been linked to actual animals of later epochs, so virtually all of them seem to have been superseded by the wholly different Cambrian fauna- much of which itself remains perplexing. One remarkable study used mass-spec chemical analysis on some Dickinsonia fossils from the late Ediacaran to determine that they bore specific traces of cholesterol, marking them as probable animals, rather than overgrown protists or seaweed. But beyond that, there is little that can be said. (Note a very critical and informed review of all this from the Discovery Institute, of all places.) Their preservation is often remarkable, considering the age involved, and they clearly form the sole fauna known from pre-Cambrian times.

But the core question of how the Cambrian (and later) animals came to be remains uncertain, at least as far as the fossil record is concerned. One relevant observation is that there is no sign of burrowing through the sediments of the Ediacaran epoch. So the appearance of more complex animals, while it surely had some kind of precedent deep in the Ediacaran, or even before, did not make itself felt in any macroscopic way then. It is evident that once the triploblastic developmental paradigm arose, out of the various geologic upheavals that occurred at the bases of both the Ediacaran and the Cambrian, its new design including mouths, eyes, spines, bones, plates, limbs, guts, and all the rest that we are now so very familiar with, utterly over-ran everything that had gone before.

Some more fine fossils from Canada, ~ 580 MYA.


  • A video tour of some of the Avalon fauna.
  • An excellent BBC podcast on the Ediacaran.
  • We need to measure the economy differently.
  • Deep dive on the costs of foreign debt.
  • Now I know why chemotherapy is so horrible.
  • Waste on an epic scale.
  • The problem was not the raids, but the terrible intelligence... by our intelligence agency.

Saturday, October 15, 2022

From Geo-Logic to Bio-Logic

Why did ATP become the central energy currency and all-around utility molecule, at the origin of life?

The exploration of the solar system and astronomical objects beyond has been one of the greatest achievements of humanity, and of the US in particular. We should be proud of expanding humanity's knowledge using robotic spacecraft and space-based telescopes that have visited every planet and seen incredibly far out in space, and back in time. But one thing we have not found is life. The Earth is unique, and it is unlikely that we will ever find life elsewhere within traveling distance. While life may conceivably have landed on Earth from elsewhere, it is more probable that it originated here. Early Earth had conditions as conducive as anywhere we know of for creating the life that we see all around us: carbon-based, water-based, precious, organic life.

Figuring out how that happened has been a side-show in the course of molecular biology, whose funding is mostly premised on medical rationales, and of chemistry, whose funding is mostly industrial. But our research enterprise thankfully has a little room for basic research and fundamental questions, of which this is one of the most frustrating and esoteric, if philosophically meaningful. The field has coalesced in recent decades around the idea that oceanic hydrothermal vents provided some of the likeliest conditions for the origin of life, due to the various freebies they offer.

Early Earth, as today, had very active geology that generated a stream of hydrogen and other reduced compounds coming out of hydrothermal vents, among other places. There was no free oxygen, and conditions were generally reducing. Oxygen was bound up in rocks, water, and CO2. The geology is so reducing that water itself was, and still is, routinely reduced on its trip through the mantle by processes such as serpentinization.

The essential problem is how to jump the enormous gap from the logic of geology and chemistry over to the logic of biology. It is not a question of raw energy- the earth has plenty of energetic processes, from volcanoes and tectonics to incoming solar energy. The question is how a natural process that has resolutely chemical logic, running down the usual chemical and physical gradients from lower to higher entropy, could have generated the kind of replicating and coding molecular system where biological logic starts. A paper from 2007 gives a broad and scrupulous overview of the field, featuring detailed arguments supporting the RNA world as the probable destination (from chemical origins) where biological logic really began.

To rehearse very briefly, RNA has, and still retains in life today, both coding capacity and catalytic capacity, unifying in one molecule the most essential elements of life. So RNA is thought to have been the first molecule with truly biological ... logic, being replaced later by DNA for some of its more sedentary roles. But there is no way to get to even very short RNA molecules without some kind of metabolic support. There has to be an organic soup of energy and small organic molecules- some kind of pre-biological metabolism- to give this RNA something to do and chemical constituents to replicate itself out of. And that is the role of the hydrothermal vent system, which seems like a supportive environment. For the trick in biology is that not everything is coded explicitly. Brains are not planned out in the DNA down to their crenellations, and membranes are not given size and weight blueprints. Biology relies heavily on natural chemistry and other non-biological physical processes to channel its development and ongoing activity. The coding for all this, which seems so vast with our 3 Gb genome, is actually rather sparse, specifying some processes in exquisite detail, (large proteins, after billions of years of jury-rigging, agglomeration, and optimization), while leaving a tremendous amount still implicit in the natural physical course of events.
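
That sparseness is easy to put rough numbers on. Here is a back-of-the-envelope sketch (the synapse count is the usual order-of-magnitude estimate, not a measurement):

```python
genome_bp = 3.1e9            # human genome size, base pairs
genome_bits = genome_bp * 2  # four possible bases = 2 bits per position

print(f"genome: ~{genome_bits / 8 / 1e6:.0f} MB of raw information")

synapses = 1e14              # commonly cited order-of-magnitude estimate
print(f"available bits per synapse: {genome_bits / synapses:.6f}")
# ~775 MB in total, and far less than one bit per synapse: brain wiring
# cannot be specified explicitly, only channeled by developmental rules.
```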

A rough sketch of the chemical forces and gradients at a vent. CO2 is reduced into various simple organic compounds at the rock interfaces, through the power of the incoming hydrogen-rich (electron-rich) chemicals. Vents like this can persist for thousands of years.

So the origin of life does not have to build the plane from raw aluminum, as it were. It just has to explain how a piece of paper got crumpled in a peculiar way that allowed it to fly, after which evolution could take care of the rest of the optimization and elaboration. Less metaphorically, if a supportive chemical environment could spontaneously (in geo-chemical terms) produce an ongoing stream of reduced organic molecules like ATP and acyl groups and TCA cycle intermediates out of the ambient CO2, water, and other key elements common in rocks, then the leap to life is a lot less daunting. And hydrothermal vents do just that- they conduct a warm and consistent stream of chemically reduced (i.e. extra electrons) and chemical-rich fluid out of the sea floor, while gathering up the ambient CO2 (which was highly concentrated on the early Earth) and making it into a zoo of organic chemicals. They also host the iron and other minerals useful in catalytic conversions, which remain at the heart of key metabolic enzymes to this day. And they also contain bubble-like structures that could have confined and segregated all this activity in pre-cellular forms. In this way, they are thought to be the most probable locations where many of the ingredients of life were being generated for free, making the step over to biological logic much less daunting than was once thought.
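
The chemical "downhill" here can be made quantitative. The sketch below works through one representative vent reaction, CO2 reduction to methane by hydrogen; the standard free energy is the familiar textbook figure, while the dissolved concentrations are invented, merely vent-flavored guesses:

```python
import math

# CO2 + 4 H2 -> CH4 + 2 H2O
dG0 = -131e3                 # J/mol: standard transformed free energy (textbook value)
R, T = 8.314, 298.15         # gas constant (J/mol/K), temperature (K)

# Assumed dissolved concentrations (mol/L) -- illustrative guesses only.
co2, h2, ch4 = 1e-2, 1e-3, 1e-6

Q = ch4 / (co2 * h2 ** 4)    # reaction quotient (water activity taken as 1)
dG = dG0 + R * T * math.log(Q)
print(f"dG under these conditions: {dG / 1000:.0f} kJ/mol")   # ~ -85: still downhill
```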

The rTCA cycle, portrayed in reverse relative to our oxidative version, as a cycle of compounds that generate spontaneously out of simple ingredients, due to their step-wise reduction and energy content values. The fact that the output (top) can be easily cleaved into the inputs provides a "metabolic" cycle that could exist in a reducing geological setting, without life or complicated enzymes.

The TCA cycle, for instance, is absolutely at the core of metabolism, a flow of small molecules that disassemble (or assemble, if run in reverse) small carbon compounds in stepwise fashion, eventually arriving back at the starting constituents, with only outputs (inputs) of hydrogen reduction power, CO2, and ATP. In our cells, we use it to oxidize (metabolize) organic compounds to extract energy. Its various stations also supply the inputs to innumerable other biosynthetic processes. But other organisms, admittedly rare in today's world, use it in the forward direction to create organic compounds from CO2, where it is called the reductive or reverse TCA cycle (rTCA). An article from 2004 discusses how this latter cycle and its set of compounds very likely predate any biological coding capacity, and represent an intrinsically natural flow of carbon reduction that would have been seen in a pre-biotic hydrothermal vent setting.
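
The forward/reverse symmetry is concrete enough to write down explicitly. Here is a minimal sketch (a simplification that ignores cofactors, energetics, and the acetyl-CoA entry/exit point) listing the canonical intermediates and the two decarboxylation steps, which release CO2 when run oxidatively and fix it when run in the reductive, rTCA direction:

```python
# Oxidative TCA order; reading it backwards gives the reductive (rTCA) direction.
cycle = ["oxaloacetate", "citrate", "isocitrate", "alpha-ketoglutarate",
         "succinyl-CoA", "succinate", "fumarate", "malate"]

# Steps that release CO2 in the oxidative direction (and fix it when reversed).
decarboxylations = {("isocitrate", "alpha-ketoglutarate"),
                    ("alpha-ketoglutarate", "succinyl-CoA")}

def walk(reverse=False):
    order = cycle[::-1] if reverse else cycle
    for a, b in zip(order, order[1:] + order[:1]):
        oxidative_step = (b, a) if reverse else (a, b)
        note = ""
        if oxidative_step in decarboxylations:
            note = " [fixes CO2]" if reverse else " [releases CO2]"
        print(f"{a} -> {b}{note}")

walk(reverse=True)   # the putative pre-biotic, carbon-fixing direction
```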

What sparked my immediate interest in all this was a recent paper that described experiments focused on showing why ATP, of all the other bases and related chemicals, became such a central part of life's metabolism, including as a modern accessory to the TCA cycle. ATP is the major energy currency in cells, giving the extra push to thousands of enzymes, and forming the cores of additional central metabolic cofactors like NAD (nicotinamide adenine dinucleotide) and acetyl-CoA (whose CoA carries an adenine nucleotide), and participating as one of the bases of DNA and RNA in our genetic core processes.

Of all the nucleoside diphosphates, ADP is most easily converted to ATP in the very simple conditions of added acetyl phosphate and Fe3+ in water, at ambient temperatures or warmer. Note that the trace for ITP shows the same absorbance before and after the reaction, i.e. no conversion; the other nucleotides likewise show no reaction. Panel F shows a time course of the ADP reaction, in hours. The X axis refers to the time of chromatography of the sample, not of the reaction.

Why ATP, and not the other bases, or other chemicals? Well, bases appear as early products out of pre-biotic reaction mixtures, so while somewhat complicated, they are a natural part of the milieu. The current work compares how phosphorylation of all the possible di-phosphate bases works, (that is, adenosine, cytidine, guanosine, inosine, and uridine diphosphates), using the plausible prebiotic ingredients ferric ion (Fe3+) and acetyl phosphate. They found, surprisingly, that only ADP can be productively converted to ATP in this setting, and that the reaction was pretty insensitive to pH, other ions, etc. This was apparently due to the special Fe3+-coordinating capability that ADP has, via a purine ring nitrogen and the neighboring amino group, which allows easy electron transfer to the incoming phosphate group. Iron remains common as an enzymatic cofactor today, and it is obviously highly plausible in this free form as a critical catalyst in a pre-biotic setting. Likewise, acetyl phosphate could hardly be simpler, occurs naturally under prebiotic conditions, and remains an important element of bacterial metabolism (and transiently of eukaryotic metabolism) today.
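
Time courses like the one in panel F are typically summarized with a pseudo-first-order fit, which is a simple analysis to sketch. The data points below are invented placeholders, not the paper's numbers; only the form of the fit is the point:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical ADP -> ATP time course: hours vs. fraction converted.
t = np.array([0.0, 2.0, 4.0, 8.0, 16.0, 24.0])
y = np.array([0.0, 0.18, 0.33, 0.52, 0.71, 0.78])

def first_order(t, ymax, k):
    """Fraction converted, assuming pseudo-first-order formation kinetics."""
    return ymax * (1.0 - np.exp(-k * t))

(ymax, k), _ = curve_fit(first_order, t, y, p0=(0.8, 0.1))
print(f"plateau yield ~{ymax:.2f}, rate constant ~{k:.3f} per hour")
```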

Ferric iron and ADP make a unique mechanistic pairing that enables easy phosphorylation at the third position, producing ATP out of ADP and acetyl phosphate. At step b, the incoming acetyl phosphate is coordinated by the amino group, while the iron is coordinated by a purine ring nitrogen and the two existing phosphates.

The point of this paper was simply to reveal why ATP, of all the possible bases and related chemicals, gained its dominant position as core chemical and energy currency. It is rare in origin-of-life research to gain a definitive insight like this, amid the masses of speculation and modeling, however plausible. So this is a significant step forward for the field, as it continues to refine its ideas of how this amazing transition took place. Whether the spontaneous rTCA cycle can be demonstrated in a reasonable experimental setting is perhaps the next significant question.


  • How China extorts celebrities, even Taiwanese celebrities, to toe the line.
  • Stay away from medicare advantage, unless you are very healthy, and will stay that way.
  • What to expect in retirement.