Saturday, March 11, 2023

An Origin Story for Spider Venom

Phylogenetic analysis shows that the major component of spider venom derives from one ancient ancestor.

One reason why biologists are so fully committed to the Darwinian account of natural selection and evolution is that it keeps explaining and organizing what we see. Despite the almost incredible diversity and complexity of life, every close look keeps confirming what Darwin sensed and outlined so long ago. In the modern era, biology has gone through the "Modern Synthesis", bringing genetics, molecular biology, and evolutionary theory into alignment with mutually supporting data and theories. For example, it was Linus Pauling and colleagues (after they lost the race to determine the structure of DNA) who proposed that the composition of proteins (hemoglobin, in their case) could be used to estimate evolutionary relationships, both among those molecules, and among their host species.

Naturally, these methods have become vastly more powerful, to the point that most phylogenetic analyses of the relationship between species (including the definition of what species are, vs subspecies, hybrids, etc.) are led these days by DNA analysis, which provides the richest possible trove of differentiating characters- a vast spectrum from universally conserved to highly (and forensically) varying. And, naturally, it also constitutes a record of the mutational steps that make up the evolutionary process. The correlation of such analyses with other traditionally used diagnostic characters, and with the paleontological record, is a huge area of productive science, which leads, again and again, to new revelations about life's history.
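The core move in such analyses- turning sequence differences into a measure of relatedness- can be sketched in a few lines. This is a toy illustration with invented sequences and the crudest possible metric (the p-distance, the fraction of mismatched positions); real phylogenetic work uses substitution models and tree-search algorithms on top of this basic idea.

```python
def p_distance(a, b):
    """Fraction of aligned positions at which two sequences differ."""
    assert len(a) == len(b), "sequences must be aligned to equal length"
    return sum(x != y for x, y in zip(a, b)) / len(a)

# Invented, pre-aligned sequences for three hypothetical species.
seqs = {
    "species_A": "ACGTACGTAC",
    "species_B": "ACGTACGTTC",
    "species_C": "ACGAACCTTC",
}
names = sorted(seqs)
for i, n1 in enumerate(names):
    for n2 in names[i + 1:]:
        print(n1, n2, p_distance(seqs[n1], seqs[n2]))
# species_A and species_B are the closest pair (distance 0.1), so a
# distance-based tree method would group them before joining species_C.
```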


One sample structure of a DRP- the disulfide-rich protein that makes up most of spider venoms.
The disulfide bond (between two cysteines) is shown in red. There is usually another disulfide bond helping to hold the two halves of the molecule together as well. The rest of the molecule is (evolutionarily and structurally) free to change shape and character, in order to carry out its neuron-channel-blocking or other toxic function.

One small example was published recently, in a study of spider venoms. Spiders arose, by current estimates, about 375 million years ago, and comprise the second most prevalent form of animal life, after their cousins, the insects. They generally have a hunting lifestyle, using venom to immobilize their prey after capture and before digestion. These venoms are highly complex brews that can contain over a hundred distinct molecules, including potassium, acids, tissue- and membrane-digesting enzymes, nucleosides, pore-forming peptides, and neurotoxins. At over three-fourths of the venom, the protein-based neurotoxins are the most interesting and best-studied of the venom components, and a spider typically deploys dozens of types in its venom. They are also called cysteine-rich peptides or disulfide-rich peptides (DRPs), due to their composition. The fact that each spider tends to have a large variety of these DRPs in its collection argues that a lot of gene duplication and diversification has occurred.

A general phylogenetic tree of spiders (left). On the right are the signal peptides of a variety of venoms from some of these species. The identity of many of these signal sequences, which are not present in the final active protein, is a sign that these venom genes were recently duplicated.

So where do they come from? Sequences of the peptides themselves are of limited assistance, being small (averaging ~60 amino acids) and under extensive selection to diversify. But they are processed from larger proteins (pro-proteins), encoded by genes that show better conservation, providing the present authors with more material for their evolutionary studies. The figure above, for example, shows, on the far right, the signal peptides from families of these DRP genes from single species. Signal peptides are the small leading section of a translated protein that directs it to be secreted rather than kept inside the cell. Once the protein has been routed to the right place, this signal is clipped off, and thus it is not part of the mature venom protein. These signal peptides tend to be far more conserved than the mature venom protein, despite the fact that they have little to do- just send the protein to the right place, which can be accomplished by all sorts of sequences. But this is a sign that the venoms are under positive evolutionary pressure- to be more effective, to extend the range of possible victims, and to overcome whatever resistance the victims might evolve against them.

Indeed, these authors show specifically that strong positive selection is at work, which is one more insight that molecular data can provide: first, by comparing the rates of change at protein-coding positions that are neutral under the genetic code (synonymous) versus those that change the protein sequence (non-synonymous); and second, by the pattern and tempo of evolution of venom sequences compared with the mass of neutral sequences of the species.
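The synonymous/non-synonymous comparison can be illustrated with a toy calculation. The sequences and the (partial) codon table below are invented for the example, and real dN/dS methods (e.g. Nei-Gojobori or codon-based likelihood models) also normalize each count by the number of possible synonymous and non-synonymous sites, which this sketch omits.

```python
# Partial genetic code: only the codons appearing in this toy example.
CODE = {"ATG": "M", "GCT": "A", "GCC": "A",
        "AAA": "K", "AGA": "R", "TGT": "C"}

def classify_substitutions(seq1, seq2):
    """Count synonymous vs non-synonymous codon differences between
    two aligned coding sequences (no site normalization)."""
    syn = nonsyn = 0
    for i in range(0, len(seq1), 3):
        c1, c2 = seq1[i:i + 3], seq2[i:i + 3]
        if c1 == c2:
            continue
        if CODE[c1] == CODE[c2]:
            syn += 1     # nucleotide changed, protein did not
        else:
            nonsyn += 1  # protein changed: visible to selection
    return syn, nonsyn

# GCT->GCC is silent (both alanine); AAA->AGA changes lysine to arginine.
print(classify_substitutions("ATGGCTAAATGT", "ATGGCCAGATGT"))  # (1, 1)
```

An excess of non-synonymous over synonymous change (after site normalization) is the signature of positive selection the authors report.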

"Given their significant sequence divergence since their deep-rooted evolutionary origin, the entire protein-coding gene, including the signal and propeptide regions, has accumulated significant differences. Consistent with this hypothesis, the majority of positively selected sites (~96%) identified in spider venom DRP toxins (all sites in Araneomorphae, and all but two sites in Mygalomorphae) were restricted to the mature peptide region, whereas the signal and propeptide regions harboured a minor proportion of these sites (1% and 3%, respectively)."

 

Phylogenetic tree (left), connecting up venom genes from across the spider phylogeny. On right, some of the venom sequences are shown just by their cysteine (C) locations, which form the basic structural scaffold of these proteins (top figure).


The more general phylogenetic analysis from all their sequences tells these authors that all the venom DRP genes, from all spider species, came from one origin. One easy way to see this is in the image above on the right, where just the cysteine scaffolds of these proteins from around the phylogeny are lined up, showing that this scaffold is very highly conserved, regardless of the rest of the sequence. This finding (which confirms prior work) is surprising, since venoms of other animals, like snakes, tend to incorporate a motley bunch of active enzymes and components, drawn from a variety of ancestral sources. So to see spiders sticking so tenaciously to this fundamental structure and template for the major component of their venom is impressive- clearly it is a very effective molecule. The authors point out that cone snails, another notorious venom-maker, originated much more recently (about 45 million years ago), and show the same pattern of using one ancestral form to evolve a diversified blizzard of venom components, which have been of significant interest to medical science.


  • Example: a spider swings a bolas to snare a moth.

Saturday, March 4, 2023

New World Order, or Old World Order?

As we gaze into the future, are we looking at a new Cold War?

The international landscape is taking on a tone of deja vu these days, as we return to Kremlinology and proxy wars. It feels like a new Cold War is upon us. The familiar lineup of Russia and China, with various other formerly communist states, is aligned against "the West" writ large: the US with core European countries, plus those European post-Soviet states that turned in revulsion against their former captor. Only Belarus was left behind as a pawn of Russia. Iran is perhaps the one large country that was previously part of the US coalition and has decisively switched to the other side, though several countries, like Turkey, Indonesia, and Pakistan, are non-aligned or hostile.

But the familiar names and grudges belie vast changes in the landscape. The principal shift is that the Soviet economic system, and the communist economic systems more widely, are no longer the albatross they once were. Virtually every country has ditched communism (or, more precisely, top-down planning), discovered capitalism, and put it to work resolving fundamental economic problems, building modern economies, more or less. North Korea may be the only exception (and perhaps Cuba), showing its entrepreneurial spirit in the sphere of international crime, but otherwise hewing doggedly to a fully planned economy. China is foremost in this new authoritarian movement, having mastered a hybrid system of one-party politics and multi-party economics. This new model (perhaps pioneered ultimately by Singapore) is a far more concerning and long-term threat than communism ever was. Straight communism was brutally impractical, and demanded correspondingly brutal methods of implementation. As it turned out, it was only attractive to the most extreme authoritarians, such as Lenin, Stalin, Mao, and Ho Chi Minh, and their starry-eyed believers.


Prevalence of government types, world-wide. There has been a noticeable regression over the last decade or two.

But the new model is much more widely applicable and attractive. Even we in the US had a narrow escape during the last administration, and half the country remains in thrall to its lure. If power is one's goal, autocracy, or "managed democracy", is far more attractive than a truly competitive democracy. While in the 20th century many authoritarian states transitioned to democracy, this century has shown a different trend, as countries like Hungary, India, Venezuela, and Russia head away from more or less functional democracy. 

The Ukraine war has obviously broken all this open, manifesting Vladimir Putin's seething resentment that yet one more former Soviet "Republic" and Imperial satrapy resisted all his efforts at corruption and cooptation, and through force of popular will dared to align itself with the West. Every country has had to choose a position, and those positions were enunciated in the recent UN assembly vote against the aggressor. China has exposed itself as fatally hypocritical to its former mantras of non-interference, peaceful coexistence, and national self-determination. Support of its fellow authoritarian, in a war that so closely mirrors the one it contemplates against Taiwan, takes precedence over any lip service to principle or peace. The Western coalition of democracies fell naturally into line as well, in reaction to the horror that was unfolding, which everyone thought the experiences of the last century had made impossible. Not so! The new authoritarian model has an ancillary and deeply related property, which is revived imperialist ambitions, just as in World War 2, and earlier.

An interesting question is where India lands in this new alignment. It would seem a pretty simple proposition for India to condemn the appalling and cruel invasion (couched in the clearest imperial and anti-democratic ambitions). India itself has been nibbled at by the bellicose ambitions of both Pakistan and China. But no. India abstained from the UN vote, along with China. India is propping up Russia by buying its discounted oil, and is otherwise mum, harking back to its non-aligned status during the Cold War. This is not helpful from the world's largest country by population, and its largest putative democracy. But India itself has been heading in an authoritarian, in its case Hindutva, direction, and is clearly torn regarding its allegiances: whether to true democracy, or to managed democracy and its long-time quasi-friend, Russia. While it doubtless seeks to avoid the looming world where it ends up on the opposite side from an alliance between China, Russia, and possibly Pakistan and Iran (which also abstained), that world is coming regardless. India flirted with alliance with the US over the last two decades, but this recent stance would seem to doom that relationship, or make it a non-reciprocal one. That India's stand is unprincipled goes without saying. Whether it will be tactically effective is another matter. Unlike smaller countries, India does not rely on rules in the international arena, but rather on power. Failure to support others in the face of unjustified and brutal invasion and spiteful bombardment of civilians saps international solidarity, impairs India's international reputation, and weakens its own future claims to sympathy when the wolf is at its own door. But its relationship with Russia may be valuable enough to repay those costs.

It should be obvious that, as a collective action problem, the way to avoid war is for all other countries to band together to forestall, condemn, reverse, and punish belligerent invasions like that started by Russia. Allowing Russia to get away with a half-a-loaf negotiated takeover would only invite future attempts by it or other aggressors. Punishment, when concentrated on the perpetrators (not necessarily their national and captive populations), is critical to deterrence.

So it was heartening that most countries were not so cynical, and saw the general danger well enough to support the UN resolution, toothless as it was. The question overall is whether international relations progress to a new world order, or regress to an old one. Since World War 2, Europe has enjoyed substantial and deepening peace, with an especially peaceful re-integration of Germany and re-establishment of the Baltic nations, Poland, and most nations of Eastern Europe. Yugoslavia was the only region that fell into warfare, and it continues in an uneasy constellation of truces, mostly enforced by the dream of joining the peaceful and prosperous European community. While the breakup of the Soviet Union was caused by, and furthered, long-standing nationalist sentiments, those sentiments were kept in check by the guardrails of the "new", or liberal, international order, which prizes peace and tranquility, under the policing of NATO combined arms, with those of the US at the forefront. Of course Russia had its role as well in managing the post-breakup nationalisms, for instance in Chechnya and Georgia, and it was not interested in any liberal order. But the assault on Ukraine is an entirely new line that has been crossed.

The old world order is one that foreign policy "realists" relish. The old spheres of influence, and balances of power warm the hearts of Metternichian traditionalists, savoring the way it has always been. There, guile and propaganda, selective alliances and stealth were the order of the day, throwing small countries to the dogs while the big countries do what they wish, each pursuing imperial dreams. They claim that this is just the way things are, there is no alternative, and any hopey-changey ambitions for a better international system amount to just another League of Nations or toothless UN. 

One can grant that the international scene is not, yet, bound by a legal system or effective police powers. The US has tried to be the policeman, and done a generally well-intentioned, but poor, job of it. We run a vast network of military bases that stretches over the globe, and exert soft power of many kinds. This has given room for countless small nations to pursue their dreams, subject to, but not crushed by, great power spheres and pressures. Taiwan grew into a flourishing independent democracy; Poland shook off centuries of partitions and subjugation. People power rose up in the Philippines, in Ukraine, and in the Middle East. Africa has had a fitful time, but generally has been able to at least breathe free of explicit colonial oppression. US policy over the last few decades has been a race to establish a civil international order that is entrenched enough to survive our own demise as a superpower. Even the Iraq war was, at least in spirit, intended to break the patterns of authoritarianism that plague the Middle East, and implant a new, prosperous democracy. But bringing a new and happy dispensation on a plate of hellfire did not work out so well. Indeed, the implanted Western democracy of Israel shows more signs of aligning with the local political patterns than of changing them.

So, change is hard, as is management of international relations in the absence of rules and police. The realists would say that other nations, both major competitors and spoilers, always line up against the powerful nation of the moment, due to natural competitiveness. But the Ukraine war should be, if any international event can be, the most glaring example of the boundary line between possible systems, and possible futures.


One can liken the old order to a city with gangs or mafia families. The gangs are always in flux, growing, shrinking, and competing. Long times may go by with relatively stable constellations, but then all hell breaks loose and the warfare is in brutal earnest. The new international order is, in contrast, more like a modern city, with representative government, laws, and a police force. Its violence is confined to small-time spoilers, criminals, and malcontents. Large-scale warfare is unknown. Who wouldn't want the second over the first? Well, that takes solidarity- that nations not only look out for themselves for the moment, but take a long view of the system and their long-term interests, and band together globally to make that future happen.

Perhaps the US is uniquely able to pursue this vision of international relations due to (in addition to its wealth) its makeup as a polyglot nation, its long experience with self-government at all levels, its fascination with the Western and the police procedural as its reigning entertainment forms, and its modest remove from the European wars of imperialism and domination. We were motive forces behind both the League of Nations and the UN. Whatever the cause, the logic remains that international peace relies entirely on the collective will to hold it as an ideal, and then to enforce it. This future does take some imagination, which realists seem to be lacking. But international standards have advanced significantly. Slavery, too, used to be just the way things were in a naturally competitive world. Poison gas used to be a standard weapon of war. We can change the landscape of international competition.

With a modicum of international solidarity and policing, the international community can put an end to imperialistic wars of aggression. And that movement starts now, in Ukraine, by beating back Russia and all the lies, cruelty, and stupid condescension that it stands for.


Saturday, February 25, 2023

Drought Causes Cultural Breakdown

What happened to the Hittites, and the late Bronze Age?

Climate change is already causing wars and migration, misery on a vast scale. The global South takes the heat, while the global North keeps making it, pumping out the CO2. Can we adapt, or is the human population going to decrease, either gently or not so gently, as conditions deteriorate? The answer is not at all clear. The adaptation measures taken by the rich world involve highly contentious politics, and uncertain technology that, at best, requires a great deal more resource extraction. The poor, on the other hand, are left to either try developing (if they can maintain good political and economic governance) to join the rich in their extractive ways, (China, India), or migrate en masse to rich countries (Africa, Central America). All this is going to get worse, not better, since we are still at peak CO2 emissions and only beginning the process of global heating.

Our emissions of CO2 are still going up, not down. Therefore climate change will be getting worse, faster. Conflict is one likely outcome.


Well, migrations and dislocation have happened before. Over the last millennium, it was cold temperatures, not hot, that correlated with conflict. Epic migrations occurred in the declining days of the Roman Empire, when the Huns drove a domino series of migrations of Germanic tribes that fought their way throughout Europe. What prompted the Huns out of the Asian steppe is unknown, however. Jared Diamond wrote of several other cultures that met their end after exhausting their resources and technologies. A recent paper added one more such case- the Hittites of the late Bronze Age.

The Hittites were a big deal in their time (1700 to 1200 BCE, very roughly), running what is now Eastern and Southern Turkey, and occasionally Syria and points south. They were an early offshoot of the Indo-European migrations, and had a convulsive (though not very well understood) history of rises and falls, mostly due to their political dynamics. At the height of Hittite power, they fought Egypt directly at the battle of Kadesh (1274 BCE), which occurred just a little north of current-day Lebanon. This was the complex frontier between Assyria / Babylon, the Hittites, and Egypt. Egyptian history is full of expeditions- military, economic, and diplomatic- through the Levant.

The Hittites were artists as well as warriors.

The Hittites were also one of several communities around the Mediterranean that shared in the late Bronze Age collapse. This is the epic time that saw the Greek siege of Troy (~1200 BCE) and the "Sea Peoples'" invasion of Egypt. Its causes and details remain a long-standing historical mystery. But its scale was vast. Greece entered a dark age that lasted from 1200 to the 800s BCE. North Africa, the Balkans, Turkey, the Levant, and the Caucasus all declined. Assyria and Egypt were weakened, but did not collapse. The latest paper uses tree-ring data from junipers around the former Hittite capital, in what is now central Turkey, to more precisely date a severe drought that may have caused this collapse. Drought is just the kind of cause that would have been widespread enough and foundational enough to destroy the regional economies and prompt migrations and wars. Wars... there are always wars, but no single war would have caused the collapse of cultures on such a wide scale, including a weakening of Egypt. Plagues are also not a great candidate, since they do not harm a society's resource base, but only its population. Such population reductions typically benefit the survivors, who rebuild in short order.

Moisture levels inferred from tree-ring data, with lower values drier. There are three consecutive catastrophic years dated to 1198-1196 BCE in this region, which is around the ancient Hittite capital. The ensuing decade was also unusually dry and likely poor for agriculture. The 20% and 6.25% levels of drought are by comparison to wider sampling, including modern data.


The drought these authors identified and located with precision was extraordinary. They note that, using modern data for indexing, the 20% level (representing about 30 cm of annual rain) is the minimum viable threshold for growing wheat. The 6.25% level is far below that, and represents widespread crop failure. They developed two types of data from the tree rings, drawn from 18 individual trees whose rings spanned about a thousand years across the second millennium BCE. First is the size of the rings themselves, whose data are shown above. Second is the carbon-13 isotope ratio, which is a separate index of dryness, based on the isotopic discrimination that plants exercise over CO2 respiration under different climatic conditions.
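The percentile thresholds behind these drought levels are simple to illustrate. The sketch below flags years whose ring-width index falls below a low percentile of a broader reference distribution; the numbers are invented for the example, not the paper's actual series.

```python
def percentile(values, p):
    """p-th percentile of values (0 <= p <= 100), by linear interpolation."""
    s = sorted(values)
    k = (len(s) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

def drought_years(index_by_year, reference, p=6.25):
    """Years whose ring-width index falls below the p-th percentile
    of the reference distribution."""
    threshold = percentile(reference, p)
    return [year for year, value in index_by_year.items() if value < threshold]

# Invented reference distribution and yearly ring-width indices (years BCE).
reference = [float(i) for i in range(1, 101)]
ring_index = {1198: 3.0, 1197: 5.5, 1196: 6.9, 1190: 40.0}
print(drought_years(ring_index, reference))  # [1198, 1197, 1196]
```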

The same tree rings that provided the inferences above from their geometry (width) also here provided carbon 13 isotope data that lead to a similar conclusion, though with much less precision. High proportions of C13 indicate drier climate, here continuous around 1200 BCE.

The paper shows three consecutive years at the 6.25% level of rainfall, starting in 1198 BCE. The ensuing decade was also harshly dry. All this correlates with cuneiform texts found in the Levant that were letters from the Hittites, bemoaning their drought and begging for assistance. But everyone in the region was in a similar position. The Hittite culture never recovered.

So drought is now a leading hypothesis for the ultimate cause of the late Bronze Age collapse around many parts of the Mediterranean, with Greece and Anatolia particularly affected. While it is reasonable to imagine that such conditions would lead to desperation, migration, and war, there is no direct link yet. The nature and origin of the Sea Peoples who attacked Egypt remain unknown, for instance. The reasons for the siege of Troy are lost to myth. The Iliad never mentions drought, nor would Troy have been in a much better position than Mycenaean Greece, climatically speaking. But the consequences of geopolitical shifts in alignment can be unpredictable, as we continue to experience today. It is exciting (as well as sobering) to get a glimpse into this cloudy history- into a vast swath of human experience that built great cultures and suffered epic defeats.


Saturday, February 18, 2023

Everything is Alive, but the Gods are all Dead

Barbara Ehrenreich's memoir and theological ruminations in "Living with a Wild God".

It turns out that everyone is a seeker. Somewhere there must be something or someone to tell us the meaning of life- something we don't have to manufacture with our own hands, but rather can go into a store and buy. Atheists are just as much seekers as anyone else, only they never find anything worth buying. The late writer Barbara Ehrenreich was such an atheist, as well as a remarkable writer and intellectual who wrote a memoir of her formation. Unusually and fruitfully, it focuses on those intense early and teen years when we are reaching out with both hands to seize the world- a world that is maddeningly just beyond our grasp, full of secrets and codes it takes a lifetime and more to understand. Religion is the ultimate hidden secret, the greatest mystery which has been solved in countless ways, each of them conflicting and confounding.

Ehrenreich's tale is more memoir than theology, taking us on a tour through a dysfunctional childhood with alcoholic parents and tough love. A story of growth, striking out into the world, and sad coming-to-terms with the parents who each die tragically. But it also turns on a pattern of mystical experiences that she keeps having, throughout her adult life, which she ultimately diagnoses as dissociative states where she zones out and has a sort of psychedelic communion with the world.

"Something peeled off the visible world, taking with it all meaning, inference, association, labels, and words. I was looking at a tree, and if anyone had asked, that's what I would have said I was doing, but the word "tree" was gone, along with all the notions of tree-ness that had accumulated in the last dozen years or so since I had acquired language. Was it a place that was suddenly revealed to me? Or was it a substance- the indivisible, elemental material out of which the entire known and agreed-upon world arises as a fantastic elaboration? I don't know, because this substance, this residue, was stolidly, imperturbably mute. The interesting thing, some might say alarming, was that when you take away all the human attributions- the words, the names of species, the wisps of remembered tree-related poetry, the fables of photosynthesis and capillary action- that when you take all this this away, there is still something left."

This is not very hard to understand as a neurological phenomenon of some kind of transient disconnection of just the kind of brain areas she mentions- those that do all the labeling, name-calling, and boxing-in. In schizophrenia, it runs to the pathological, but in Ehrenreich's case, she does not regard it as pathological at all, as it is always quite brief. But obviously, the emotional impact and weirdness of the experience- that is something else altogether, and something that humans have been inducing with drugs, and puzzling over, forever. 


As a memoir, the book is very engaging. As a theological quest, however, it doesn't work as well, because the mystical experience is, as noted above, resolutely meaningless. It neither compels Ehrenreich to take up Christianity, as after a Pauline conversion, nor any other faith or belief system. It offers a peek behind the curtain, but, stripped of meaning as this view is, Ehrenreich is perhaps too skeptical or bereft of imagination to give it another, whether of her own or one available from the conventional array of sects and religions. So while the experiences are doubtless mystical, one cannot call them religious, let alone god-given, because Ehrenreich hasn't interpreted them that way. This hearkens back to the writings of William James, who declined to assign general significance to mystical experiences, while freely admitting their momentous and convincing nature to those who experienced them.

Only in one brief section (which had clearly been originally destined for an entirely different book) does she offer a more interesting and insightful analysis. There, Ehrenreich notes that the history of religion can be understood as a progressive bloodbath of deicide. At first, everything is alive and sacred, to an animist mind. Every leaf and grain of sand holds wonders. Every stream and cloud is divine. This is probably our natural state, which a great deal of culture has been required to stamp out of us. Next is a hunting kind of religion, where deities are concentrated in the economic objects (and social patterns) of the tribe- the prey animals, the great plants that are eaten, and perhaps the more striking natural phenomena and powerful beasts. But by the time of paganism, the pantheon is cut down still more and tamed into a domestic household, with its soap-opera dramas and an increasingly tight focus on the major gods- the head of the family, as it were. 

Monotheism comes next, doing away with all the dedicated gods of the ocean, of medicine, of amor and war, etc., cutting the cast down to one. One, which is inflated to absurd proportions with all-goodness, all-power, all-knowledge, etc. A final and terrifying authoritarianism, probably patterned on the primitive royal state. This is the phase when the natural world is left in the lurch, as an undeified and unprotected zone where human economic greed can run rampant, safe in the belief that the one god is focused entirely on man's doings, whether for good or for ill, not on that of any other creature or feature of the natural world. A phase when even animals, who are so patently conscious, can, through the narcissism of primitive science and egoistic religion, be deemed mere mechanisms without feeling. This process doesn't even touch on the intercultural deicide committed by colonialism and conquest.

This in turn invites the last deicide- that by rational people who toss aside this now-cartoonish super-god, and return to a simpler reverence for the world as we naturally respond to it, without carting in a lot of social power-and-drama baggage. It is the cultural phase we are in right now, but the transition is painfully slow, uneven, and drawn-out. For Ehrenreich, there are plenty of signs- in the non-linear chemical phenomena of her undergraduate research, in the liveliness of quantum physics even into the non-empty vacuum, in the animals who populate our world and are perhaps the alien consciousnesses that we should be seeking in place of the hunt through outer space, and in our natural delight in, and dreams about, nature at large. So she ends the book as atheist as ever, but hinting that perhaps the liveliness of the universe around us holds some message that we are not the only thinking and sentient beings.

"Ah, you say, this is all in your mind. And you are right to be skeptical; I expect no less. It is in my mind, which I have acknowledged from the beginning is a less than perfect instrument. but this is what appears to be the purpose of my mind, and no doubt yours as well, its designed function beyond all the mundane calculations: to condense all the chaos and mystery of the world into a palpable Other or Others, not necessarily because we love it, and certainly not out of any intention to "worship" it. But because ultimately we may have no choice in the matter. I have the impression, growing out of the experiences chronicled here, that it may be seeking us out." 

Thus the book ends, and I find it a rather poor ending. It feels ripped from an X-Files episode, highly suggestive and playing into all the Deepak Chopra and similar mystical tropes of cosmic consciousness. That is, if this passage really means much at all. Anyhow, the rest of the trip is well worth it, and it is appropriate to return to the issue of the mystical experience, which is here handled with such judicious care and restraint. Where imagination could have run rampant, the coolly scientific view (Ehrenreich had a doctorate in biology) is that the experiences she had, while fascinating and possibly book-proposal-worthy, did not force a religious interpretation. This is radically unlike the treatment of such matters in countless other hands, needless to say. Perhaps our normal consciousness should not be automatically valued less than more rare and esoteric states, just because it is common, or because it is even-tempered.


  • God would like us to use "they".
  • If you are interested in early Christianity, Gnosticism is a good place to start.
  • Green is still an uphill battle.

Saturday, February 11, 2023

A Gene is Born

Yes, genes do develop out of nothing.

The "intelligent" design movement has long made a fetish of information. As science has found, life relies on encoded information for its genetic inheritance and the reliable expression of its physical manifestations. The ID proposition is, quite simply, that all this information could not have developed out of a mindless process, but only through "design" by a conscious being. Evidently, Darwinian natural selection still sticks on some people's craw. Michael Behe even developed a pseudo-mathematical theory about how, yes, genes could be copied mindlessly, but new genes could never be conjured out of nothing, due to ... information.

My understanding of information science equates information to a loss of entropy, and sets a minimal cost for the energy needed to create, compute, or transmit information- that is, the Shannon limits. A quite different concept comes from physics, in the form of information conservation in places like black holes. This form of information is really the implicit information of the wave functions and states of physical matter, not anything encoded or transmitted in the sense of biology or communication. Physical state information may be indestructible (and un-create-able) on this principle, but coded information is an entirely different matter.
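To make the coded, Shannon-style sense of information concrete, here is a minimal sketch (my own illustration, not drawn from any particular paper) of Shannon entropy computed over a DNA sequence:

```python
from collections import Counter
from math import log2

def shannon_entropy(seq: str) -> float:
    """Shannon entropy, in bits per symbol, of a sequence."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# An even mix of the four bases carries the maximum 2 bits per symbol;
# a homopolymer carries essentially none.
high = shannon_entropy("ACGT" * 25)   # 2.0 bits/symbol
low = shannon_entropy("A" * 100)      # 0 bits/symbol
```

Note that nothing here says anything about where such a sequence came from; entropy measures statistical structure, not provenance, which is part of why "information" arguments against natural gene origination fall flat.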

In a parody of scientific discussion, intelligent design proponents are hosted by the once-respectable Hoover Institution for a discussion about, well, god.

So the fecundity that life shows in creating new genes out of existing genes (duplications), and even making whole-chromosome or whole-genome duplications, has long been a problem for creationists. Energetically, it is easy to explain as a mere side-effect of having plenty of energy to work with, combined with error-prone methods of replication. But creationistically, god must come into play somewhere, right? Perhaps in the creation of really new genes- those that arise from nothing, such as at the origin of life?

A recent paper discussed genes in humans that have arisen from essentially nothing over our recent evolutionary history. It drew on prior work in yeast that elegantly laid out a spectrum, or life cycle, of genes, from birth to death. It turns out that there is an active literature on the birth of genes, which shows that, just like duplication processes, it is entirely natural for genes to develop out of humble, junky precursors. And no information theory needs to be wheeled in to show that this is possible.

Yeast provides the tools to study novel genes in some detail, with rich genetics and lots of sequenced relatives, near and far. Portrayed here is a general life cycle of a gene: birth out of non-gene DNA sequences (left), through the key step of translation, and on to becoming a subject of normal natural selection ("Exposed") for some function. But if that function decays or is replaced, the gene may also die by mutation, becoming a pseudogene, and eventually just some more genomic junk.

The death of genes is quite well understood. The databases are full of "pseudogenes" that are very similar to active genes, but are disabled for some reason, such as a truncation somewhere or loss of reading frame due to a point mutation or splicing mutation. Their annotation status is dynamic, as they are sometimes later found to be active after all, under obscure conditions or to some low level. Our genomes are also full of transposons and retroviruses that have died in this fashion, by mutation.

Duplications are also well-understood, some of which have over evolutionary time given rise to huge families of related proteins, such as kinases, odorant receptors, or zinc-finger transcription factors. But the hunt for genes that have developed out of non-gene materials is a relatively new area, due to its technical difficulty. Genome annotators were originally content to pay attention to genes that coded for a hundred amino acids or more, and ignore everything else. That became untenable when a huge variety of non-coding RNAs came on the scene. Also, occasional cases of very small genes that encoded proteins came up from work that found them by their functional effects.

As genome annotation progressed, it became apparent that, while a huge proportion of genes are conserved between species (or are members of families of related proteins), other genes had no relatives at all, and would never provide information by this highly convenient route of computer analysis. They are orphans, and must either have been so heavily mutated since divergence that their relationships have become unrecognizable, or have arisen recently (that is, since the evolutionary divergence from the related species used for sequence comparison) from novel sources that provide no clue about their function. Finer analysis of ever more closely related species is often informative in these cases.

The recent paper on human novel genes makes the finer point that splicing and export from the nucleus constitute the major threshold between junk genes and "real" genes. Once an RNA gets out of the nucleus, any reading frame it may have will be translated and exposed to selection. So the acquisition of splicing signals is a key step, in their argument, to get a randomly expressed bit of RNA over the threshold.

This paper provides a remarkable example of novel gene origination. It uncovered a series of 74 human genes that are not shared with macaque (which the authors took as their reference), have a clear path of origin from non-coding precursors, and in some cases have significant biological effects on human development. The authors point to a gradual process whereby promiscuous transcription from the genome gave rise by chance to RNAs that acquired splice sites, which piped them into the nuclear export machinery and out to the cytoplasm. Once there, they could be translated, over whatever small coding region they might possess, after which selection could operate on their small protein products. A few appear to have gained enough function to encourage expansion of the coding region, resulting in growth of the gene and entrenchment as part of the developmental program.

Brain "organoids" grown from genetically manipulated human stem cells. On left is the control, in middle is where ENSG00000205704 was deleted, and on the right is where ENSG00000205704 is over-expressed. The result is very striking, as an evolutionarily momentous effect of a tiny and novel gene.

One gene, "ENSG00000205704" is shown as an example. Where in macaque, the genomic region corresponding to this gene encodes at best a non-coding RNA that is not exported from the nucleus, in humans it encodes a spliced and exported mRNA that encodes a protein of 107 amino acids. In humans it is also highly expressed in the brain, and when the researchers deleted it in embryonic stem cells and used those cells to grow "organoids", or clumps of brain-like tissue, the growth was significantly reduced by the knockout, and increased by the over-expression of this gene. What this gene does is completely unknown. Its sequence, not being related to anything else in human or other species, gives no clue. But it is a classic example of gene that arose from nothing to have what looks like a significant effect on human evolution. Does that somehow violate physics or math? Nothing could be farther from the truth.

  • Will nuclear power get there?
  • What the heck happened to Amazon shopping?

Saturday, February 4, 2023

How Recessive is a Recessive Mutation?

Many relationships exist between mutation, copy number, and phenotype.

The traditional setup of Mendelian genetics is that an allele of a gene is either recessive or dominant. Blue eyes are recessive to brown eyes, for the simple reason that blue arises from the absence of an enzyme, due to a loss of function mutation. So having some of that enzyme, from even one "brown" copy of that gene, is dominant over the defective "blue" copy. You need two "blue" alleles to have blue eyes. This could be generalized to most genes, especially essential genes, where lacking both copies is lethal, while having one working copy will get you through, and cover for a defective copy. Most gene mutations are, by this model, recessive. 

But most loci and mutations implicated in disease don't really work like that. Some recent papers delved into the genetics of such mutations, and observed that their recessiveness was all over the map, a spectrum, really, of effects from fully recessive to dominant, with most in the middle ground. This is informative for clinical genetics, but also for evolutionary studies, suggesting that evolution is not, after all, blind to the majority of mutations, which are mostly deleterious, exist most of the time in the haploid (one-copy) state, and would be wholly recessive by the usual assumption.

The first paper describes a large study of the Finnish population, which benefited from several advantages. First, Finns have a good health system with thorough records, which are housed in a national biobank; the study used 177,000 health records and 83,000 variants in coding regions of genes collected from sequencing studies. Second, the Finnish population is relatively small and has experienced bottlenecks from smaller founding populations, which amplifies the prevalence of variants that those founders carried. That allows those variants to rise to higher rates of appearance, especially in the homozygous state, which generally causes more noticeable disease phenotypes. Both the detectability and the statistics were powered by this higher incidence of some deleterious mutations (while others, naturally, would have been more rare than the world-wide average, or absent altogether).
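The reason a founder bottleneck is so helpful here follows from Hardy-Weinberg proportions: the homozygote frequency goes as the square of the allele frequency, so even a modest boost in a variant's frequency sharply enriches the homozygotes that produce visible disease. A small sketch (the frequencies are illustrative, not taken from the Finnish data):

```python
def genotype_freqs(q: float) -> dict:
    """Hardy-Weinberg genotype frequencies for a biallelic locus,
    where q is the frequency of the variant allele."""
    p = 1.0 - q
    return {"wild_type": p * p, "heterozygote": 2 * p * q, "homozygote": q * q}

# Doubling a rare allele's frequency (as a founder bottleneck can do)
# quadruples the expected rate of affected homozygotes.
before = genotype_freqs(0.005)["homozygote"]   # ~2.5e-05
after = genotype_freqs(0.010)["homozygote"]    # ~1.0e-04, four times higher
```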

Third, the authors emphasize that they searched for various levels of recessive effect, contrary to the usual practice of just assuming a linear effect. A linear model says that one copy of a mutation has half the effect of two copies- which is true sometimes, but not most of the time, especially in more typical cases of recessive effect where one copy has a good deal less effect, if not zero. Returning to eye color: if one looks in detail, there are many shades of eyes, even of blue eyes, so it is evident that the alleles affecting eye color are various, and express to different degrees (have various penetrance, in the parlance). While complete recessiveness happens frequently, it is not the most common case, since we do not routinely express excess amounts of proteins from our genes, making loss of one copy noticeable most of the time, to some degree. This is why the lack of a whole chromosome, or an excess of a whole chromosome, has generally devastating consequences. Trisomies of only three chromosomes (13, 18, and 21) are viable (that is, not lethal), and confer various severe syndromes.
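The linear model and full recessiveness are just two points on a dial that geneticists call the dominance coefficient, conventionally written h. A minimal sketch of the idea (my illustration, not the papers' code):

```python
def variant_effect(copies: int, full_effect: float, h: float) -> float:
    """Phenotypic effect of carrying 0, 1, or 2 copies of a variant.
    h = 0.5 gives the linear (additive) model, where one copy has half
    the effect of two; h = 0.0 gives full recessiveness; intermediate
    values are the middle ground these studies find most common."""
    return {0: 0.0, 1: h * full_effect, 2: full_effect}[copies]

additive = variant_effect(1, 1.0, h=0.5)    # 0.5: half the homozygous effect
recessive = variant_effect(1, 1.0, h=0.0)   # 0.0: one copy fully masked
partial = variant_effect(1, 1.0, h=0.2)     # 0.2: the typical middle ground
```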

A population proportion plot vs age of disease diagnosis for three different diseases and an associated genetic variant. In blue is the normal ("wild-type") case, in yellow the heterozygote, and in red the homozygote with two variant alleles. For "b", the total lack of XPA causes skin cancer with juvenile onset, and the homozygotic case is not shown. The Finnish data allowed detection of rather small recessive effects from variations that are common in that population. For instance, "a" shows the barely discernible advancement of age of diagnosis for a disease (hearing loss) that in the homozygotic state is universal by age 10, caused by mutations in GJB2.

The second paper looked more directly at the fitness cost of variations over large populations, in the heterozygous state. They looked at loss-of-function (LOF) mutations of over 17,000 genes, studying their rate of appearance and loss from human populations, as well as in pedigrees. These rates were turned, by a modeling system, into fitness costs, which are stated in percentage terms, vs wild type. A fitness cost of 1% is pretty mild, (though highly significant over longer evolutionary time), while a fitness cost of 10% is quite severe, and one of 100% is immediately lethal and would never be observed in the population. For example, a mutation that is seen rarely, and in pedigrees only persists for a couple of generations, implies a fitness cost of over 10%.

They come up with a parameter "hs", which is the fitness cost "s" of losing both copies of a gene, multiplied by "h", a measure of the dominance of the mutation in a single copy.
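In the standard population-genetic convention, relative fitness is 1 for wild type, 1 - hs for heterozygotes, and 1 - s for homozygotes, so heterozygote data alone only pins down the product hs, not h and s separately. A sketch of why (illustrative values, not from the paper):

```python
def fitness(copies: int, s: float, h: float) -> float:
    """Relative fitness given copy number of a loss-of-function allele:
    1 for wild type, 1 - h*s for heterozygotes, 1 - s for homozygotes,
    where s is the cost of losing both copies and h the dominance."""
    return {0: 1.0, 1: 1.0 - h * s, 2: 1.0 - s}[copies]

# The same heterozygous cost hs = 0.01 can come from a mildly deleterious
# but fairly dominant variant, or from a lethal but nearly recessive one.
w1 = fitness(1, s=0.02, h=0.5)    # 0.99
w2 = fitness(1, s=1.00, h=0.01)   # 0.99 as well
```

This is why the paper reports the composite "hs" with substantial error bars: it is the quantity that the population data actually constrain.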


In these graphs, human genes are stacked along the Y axis, sorted by their computed "hs" fitness cost in the heterozygous state. Error bars are in blue, showing that this is naturally a rather error-prone exercise in estimation. But what is significant is that most genes are somewhere on the spectrum, with very few having negligible effects (bottom), and many having highly significant effects (top). Genes on the X chromosome are naturally skewed to much higher significance when mutated, since in males there is no other copy, and even in females, one X chromosome is (randomly) inactivated to provide dosage compensation- that is, to match the male dosage of production of X genes- which results in much higher penetrance for females as well.


So the bottom line is that while diploidy helps to hide a lot of variation in sexual organisms, and in humans in particular, it does not hide it completely. We are each estimated to receive, at birth, about 70 new mutations, of which roughly 1/1000 are the kind of total loss of gene function studied here. This work then estimates that 20% of those mutations have a severe fitness effect of >10%, meaning that about one in seventy zygotes carries such a new mutation, not counting what it has inherited from its parents, and will suffer ill effects immediately, even though it also has a wild-type copy of that gene.
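The "one in seventy" figure follows directly from the numbers in the paragraph above:

```python
new_mutations = 70        # de novo mutations per zygote
lof_fraction = 1 / 1000   # fraction that are total loss-of-function
severe_fraction = 0.20    # fraction of those with a >10% fitness cost

per_zygote = new_mutations * lof_fraction * severe_fraction  # ~0.014
one_in = 1 / per_zygote                                      # ~71 zygotes
```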

Humans, as other organisms, have a large mutational load that is constantly under surveillance by natural selection. The fact that severe mutations routinely still have significant effects in the heterozygous state is both good and bad news. Good in the sense that natural selection has more to work with, and can gradually whittle down their frequency without necessarily waiting for the chance of two meeting in an unfortunate homozygous state. But bad in the sense that it adds to our overall phenotypic variation and health difficulties a whole new set of deficiencies that, while individually and typically minor, are also legion.


Saturday, January 28, 2023

Building the Middle Class

Why are poor people in the US enslaved to tyrannical, immiserating institutions?

Santa Claus brought an interesting gift this Christmas, Barbara Ehrenreich's "Nickel and Dimed". This is a memoir of her experiment as a low wage worker. Ehrenreich is a well-educated scientist, feminist, journalist, and successful writer, so this was a dive from very comfortable upper middle class circumstances into the depths both of the low-end housing market and the minimum wage economy. While she brings a great deal of humor to the story, it is fundamentally appalling, an affront to basic decency. Our treatment of the poor should be a civil rights issue.

The first question is why we have a minimum wage at all. What is the lowest wage that natural economic conditions would bear, and what economic and social principles bear on this bottom economic rung? In ancient times, slavery was common, which meant a wage of zero. This was replicated in the ante-bellum American South- minimum wage of zero. So as far as natural capitalism is concerned, there is no minimum wage needed and people can rather easily be coerced by various social and violent means to work for the barest subsistence. The minimum wage is entirely a political and social concept, designed to express a society's ideas of minimal economic, civic, and social decency. Maybe that is why, as with so many other things, the US reached a high point in its real minimum wage in the late 1960's, 66% higher than what it is now.

Real minimum wage in the US, vs nominal.

The whole economy of low wage work is very unusual. One would think that supply and demand would operate here, and that difficult work would be rewarded by higher pay. But it is precisely the most difficult work- the most grinding, alienating, dispiriting work that is paid least. There is certainly an education effect on pay, but the social structure of low end work is mostly one of power relations, where desperate people are faced with endlessly greedy employers, who know that the less they pay, the more desperate their workers will be to get even that little amount. It is remarkable what we have allowed this sector to do in the name of "free" capitalism- the drug tests, the uniforms, the life-destroying scheduling chaos, the wage theft, the self-serving corporate propaganda, the surveillance.

Is it a population issue, that there is always an excess of low-wage workers? I think it is really the other way around, that there is a highly flexible supply of low-wage work, thanks to the petty-tyrannical spirit of "entrepreneurs". No one needs the eighth fast food restaurant, the fifteenth nail salon, or the third maid cleaning service. We use and abuse low wage labor because it is there, not because these are essential jobs. If a shortage of low-wage workers really starts to crimp an important industry, it has recourse to far more effective avenues of redress, such as importing workers from abroad, outsourcing the work, or if all else fails, automating it. What people are paid is largely a social construct in the minds of us, the society of employers who couldn't imagine paying decently for the work / servitude of others. To show an exception that illustrates the rule, nurses during the pandemic did in some cases, if they were willing to travel and negotiate, make out like bandits. But nurses who stayed put, played by the rules, and truly cared for those around them, were routinely abused, forced into extra work and bad conditions by employers who did not care about them and had... no choice. In exceptional cases where true need exists, supply and demand can move the needle. But social power plays a very large role.

Some states have raised their minimum wage, such as California, to $15. This is a more realistic wage, though the state has astronomic housing and other costs as well. Has our economy collapsed here? No. It has had zero discernible effect on the provision of local services, and the low wage economy sails on at a new, and presumably more humane, level. When I first envisioned this essay, I thought that a much more substantial increase in the minimum wage would be the proper answer. But then I found that $15 per hour provides an annual income that is almost at the US level of median income, 34k annually for an individual. The average income in the US is only 53k. So there is not a lot of wiggle room there. We are a nation of the poorly paid, on average living practically hand-to-mouth. On the household level, things may look better if one has the luck to have two or more solid incomes.


My own individual incomes analysis, drawn from reported Social Security data.

At any rate, a livable wage is not much different from the median wage, and even that is too low in many economically hot areas where real estate is unbearably expensive. This is, incidentally, another large dimension of US poverty: the stand-pat, NIMBY, no-growth zoning practices of what is now a majority of the country have sentenced the poor and the young to an even lower standard of living than what the income statistics would indicate, as they fork over their precious earnings to the older, richer, and socially settled landlords among us.

So what is the answer? I would advocate for a mix of deep policy changes. First is a minimum wage that is livable, which means $15 nationwide, indexed for inflation, and higher as needed in more high-cost states. It should be a basic contract with the citizenry and workers of all types that working should pay decently, and not send you to a food pantry. All those jobs and businesses that cannot survive without poorly paid workers... we don't need them. Second would be a government employer-of-last-resort system that would offer a job to anyone who wants one. This would be paid at the minimum wage, and put people to work doing projects of public significance- cleaning up roadways, building schools, offering medical care, checkups, crossing guards, etc. We can, as a society and as civil governments, do a better job employing the poor in a useful way than can the much-vaunted entrepreneurs. Instead of endless strip malls of bottom-feeding commerce, let local governments sweep up available labor for cleaning the environment, instead of fouling it. Welfare should be, instead of a demeaning odyssey through DMV-like bureaucracies, a straight payment to anyone not employed, at half the minimum wage.

Third, we need more public services. Transit should be totally free. Medical care should be completely free. Education should be free. And incidentally, secondary education should be all public, with private schools up to 12th grade banned. When we wonder why our country and politics have become so polarized, a big reason is the physical and spiritual separation between the rich and poor. While the speaker in the video linked below advocates for free housing as well, that would be perhaps a bridge too far, though housing needs to be addressed urgently by forcing governments to zone for their actual population and taking homelessness as a policy-directing index of the need to zone and build more housing.

Fourth, the rich need to be taxed more. The corrosion of our social system is not only evident at the bottom, where misery and quasi-slavery is the rule, but at the top, where the rich contribute less and less to positive social values. The recent Twitter drama showed in an almost mythical way the incredible narcissism and callous ethics that pervade the upper echelons (... if the last administration hadn't shown this already). The profusion of philanthropies is mere performative narcissism and white-washing, while the real damage is being done by the flood of money that flows from the rich into anti-democratic and anti-government projects across the land.

And what is all this social division accomplishing? It is not having any positive eugenic effect, if one takes that view of things. Reproduction is not noticeably affected, despite the richness at the top or the abject poverty at the bottom. It is not having positive social effects, as the rich wall themselves off with increasingly hermetic locations and technologies. They thought, apparently, that cryptocurrencies would be the next step of unshackling the Galtian entrepreneurs of the world from the oppression of national governments. Sadly, that did not work out very well. The rich cannot be rich without a society to sponge off. The very idea of saving money presupposes an ongoing social and economic system from which that money can be redeemed by a future self. Making that future society (not to mention the future environment) healthy and cohesive should be our most fervent goal.


Sunday, January 22, 2023

One Tough Molecule: Cholesterol

In praise of cholesterol.

Membranes are an underappreciated aspect of biology. The recent pandemic was caused by a virus that has a very sophisticated system to commandeer many aspects of our cellular apparatus, including our membrane systems, creating complicated vesicular bodies in which to develop and hide. Membranes may not have participated in the very origin of life, (which seems to have involved energy-rich mineral systems), but were essential at the origin of cells, as all cells are surrounded by a classic bilayer membrane, composed of two-faced molecules with water-soluble heads and fatty tails, the latter of which make up the middle of the bilayer.

Membranes everywhere. Eukaryotic cells are filled with membrane-bound compartments. Here, Covid-causing virus (black arrows) hides out in vesicles enclosed within additional membranes. These are post-mortem samples, examined by electron microscopy. In E, from lung cells, asterisks mark the presence of viral particles, while the number sign marks another lamellar structure of membranes involved in lung surfactant synthesis and secretion.

Membranes were also central to the next greatest innovation in life, the eukaryotic cell. Not only are eukaryotes full of membrane-bound compartments, like mitochondria, endosomes, lysosomes, endoplasmic reticulum, golgi apparatus, and others, but their membrane composition changed as well, with the advent of sterol-related molecules. Plants use phytosterols, while animals use cholesterol as an additive to their membranes. Cholesterol has gotten decades of bad press due to its association with atherosclerosis and the whole bad/good HDL story, about the particles that carry cholesterol around the body. But cholesterol is an essential and amazing molecule, painstakingly developed through evolution to strengthen our membranes and provide special nano-localization services.

Cholesterol (right) compared with a normal phospholipid that makes up the bulk of most membranes. Hydrophilic areas are in red/purple/blue, while hydrophobic areas are gray. The phospholipid is sphingomyelin, which appears to be fully saturated, meaning it has no double bonds or kinks in its hydrophobic tails. Phospholipids on their own tend to be highly floppy, while cholesterol is far more structurally stable.

Cholesterol is a shockingly complex and expensive molecule to make. Its synthesis requires 37 steps, lots of molecular oxygen, and a hundred molecules of ATP. No wonder few bacteria make anything like it in such vast amounts. At the same time, there must be simpler chemicals that could afford similar functions- cholesterol is probably a relic from a lengthy exploration of membrane additives, to find one that is empirically ideal. Historically, cholesterol seems to have arisen after the great oxygenation event, enabling its peculiar synthesis, the symbiosis with mitochondria, and the evolution of eukaryotes generally. Our cells can still all make their own cholesterol, and our bodies have extensive means to regulate amounts, though evidently these mechanisms don't always work optimally for modern, aging humans.

At any rate, it is now realized that dietary cholesterol has relatively little impact on internal levels or health outcomes. In our cells, cholesterol concentrations are rigorously controlled and highly diverse, being as high as ~40% of all lipids on the external face of the plasma membrane, while only 5% in the mitochondrial membranes. The reasons for this distribution are not entirely understood, but our genomes encode numerous proteins devoted to transferring cholesterol and phospholipids to various places and sides of membranes. A recent paper discussed the fact that cholesterol significantly strengthens membranes, allowing eukaryotes to attain the amoeboid lifestyle, rather than having to grow exoskeletons (i.e. cell walls) as bacteria generally do. 

Cholesterol makes membranes significantly stronger, less bendable, more viscous, and yet does not impair lateral fluidity.

The surface area per lipid goes down drastically (and strength and stiffness go up) as cholesterol is added to a regular phospholipid membrane. This is less meaningful than portrayed in the paper, however, since cholesterol counts as a lipid in this calculation, and with only one fat tail vs two slender tails, it is likely that the reduction in surface area arises as much from cholesterol's smaller cross-section (see cartoon above) as from its organizing / ordering effects on the neighboring phospholipids. 
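The dilution point above can be checked with simple mixing arithmetic. The per-molecule areas below are round illustrative values I have assumed for the sketch, not measurements from the paper:

```python
AREA_PHOSPHOLIPID = 0.65  # nm^2 per molecule (assumed, illustrative)
AREA_CHOLESTEROL = 0.40   # nm^2 per molecule (assumed; one slim body vs two tails)

def mean_area_per_lipid(chol_fraction: float) -> float:
    """Average area per 'lipid' when cholesterol is counted as a lipid,
    assuming NO ordering effect at all on neighboring phospholipids --
    pure mixing arithmetic."""
    return (chol_fraction * AREA_CHOLESTEROL
            + (1.0 - chol_fraction) * AREA_PHOSPHOLIPID)

pure = mean_area_per_lipid(0.0)    # 0.65
mixed = mean_area_per_lipid(0.4)   # 0.55: area drops from mixing alone
```

Even with zero condensing effect, the average area falls substantially, which is why the raw area-per-lipid curve overstates cholesterol's ordering contribution.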

Not only does cholesterol make membranes tougher, but it alters their thickness (by straightening up the phospholipid tails) and selectively prefers to bind certain partner phospholipids (sphingolipids), thereby creating nano-domains. These domains are called "lipid rafts" and at 50 nanometers across, they are exceedingly small, given that membranes are about 5 nanometers thick. These rafts are the preferred places for many hormone and immune system receptors to operate, which, when bound to their partners, lead to greater raft agglomerations that facilitate signaling and particularly the separation of some signals from others. This is just one example of the many roles that cholesterol has gained in cell and molecular biology.

Some reviewers note that while we often imagine nano-tech and nano-bots to be machines of metal, essentially miniaturized versions of our macro-tech, with tiny gears, etc., real nano tech may more properly lie in soft materials that are resilient at this scale, adapted to its challenges of constant thermal motion and mutable structure. Reeds that bend in the wind, not rocks that slowly break down in it. Membranes are being used in the form of liposomes as drug and vaccine delivery vehicles, and deserve a greater appreciation from both biological and technical perspectives.

This video, produced by detailed atomic computer simulation, illustrates how frenetic Brownian motion is. The membrane molecules (teal) are in constant motion, fending off the water molecules (red/white). The adoption of a second membrane component that intercalates, strengthens, and imposes some order here is a highly significant advancement.


  • Maybe giving in to nuclear bluffing and blackmail is not a good idea.

Saturday, January 14, 2023

Evolution of Dogs, and Dog Brains

Deeper genetic studies of the history of dogs reveal causal genes and pathways.

Do traits run in families? Are mental and behavioral attributes heritable? Of course they are, though well-intentioned liberals tend to argue otherwise, that everyone is the same by nature, and education, social services, and perhaps psychotherapy are the only things holding anyone back from limitless potential. Well, there is a place for both nurture and nature, but plain observation and mountains of science, such as twin studies, show that nature plays a dominant role, especially in relatively stable societies where nurture is not grossly deficient. While plenty of evidence exists for this in humans, it is particularly evident in model animals, such as those we have bred to have certain dispositions, like dogs. 

A recent landmark study on the genetics of dogs delves into some of the genetic and molecular detail of these traits. The authors find clear lineage differences between groups of dogs bred for different purposes, and dredge up telling details about where those differences lie in the dog genome. First off, they have a wealth of data to draw from- full genomes sequenced for hundreds of dogs, and mutation variation panels for many more. They claim data from 4,261 individual dogs and 226 breeds, running the gamut from purebred to village mutts. Wild dogs, wolves and coyotes were also added as outgroup references.

The second big advance was to use a highly refined method of data reduction. The scale of this data is huge- how to pull the needles of meaningful, breed- or trait-correlated variation from the haystack of background variation? Most of the variation they find was already present in wolves, meaning that while some new mutations occurred during domestication, humans mostly spent their time selecting desirable combinations out of a very rich trove of natural variation present from the start. The traditional way to do this is principal component analysis (PCA), which treats the data as points in a high-dimensional space, finds the orthogonal axes that capture the greatest variance among those points, and projects the data onto the top two axes for visualization.
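As a rough illustration of the idea (not the paper's actual pipeline), PCA can be sketched in a few lines of numpy, here run on made-up data standing in for a genotype matrix:

```python
import numpy as np

def pca_2d(X):
    """Project rows of X onto the two directions of greatest variance."""
    Xc = X - X.mean(axis=0)                       # center each feature
    cov = np.cov(Xc, rowvar=False)                # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)        # ascending eigenvalues
    top2 = eigvecs[:, np.argsort(eigvals)[::-1][:2]]  # two largest axes
    return Xc @ top2                              # N x 2 embedding

# toy stand-in for a genotype matrix: 100 "dogs" x 20 "variants"
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
embedding = pca_2d(X)
print(embedding.shape)   # (100, 2)
```

Each dog becomes a single point in the plane, and points that share variant combinations land near each other- which is exactly why breeds form visible clusters in such plots.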

That is simple, even crude, and a recent paper showed that a more sensitive method (named PHATE) for exploring high-dimensional data can uncover far more structure from it. It is just the kind of thing these genomic scientists needed to wring more meaning from their huge data set.
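The core trick in PHATE is diffusion: local affinities between points are turned into a random-walk operator, the walk is run for several steps, and the resulting "potential" coordinates are embedded. The toy numpy sketch below conveys that spirit only- the real method adds adaptive kernels and metric MDS on potential distances, and the simple PCA step at the end is my stand-in:

```python
import numpy as np

def diffusion_embed(X, t=8, eps=1.0):
    """Greatly simplified diffusion embedding in the spirit of PHATE."""
    # pairwise squared distances -> Gaussian affinities
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)
    P = K / K.sum(axis=1, keepdims=True)     # row-stochastic walk operator
    Pt = np.linalg.matrix_power(P, t)        # diffuse for t steps
    U = -np.log(Pt + 1e-12)                  # "potential" coordinates per point
    # crude final embedding: PCA on the potential coordinates
    Uc = U - U.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(Uc, rowvar=False))
    top2 = vecs[:, np.argsort(vals)[::-1][:2]]
    return Uc @ top2

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 5))
emb = diffusion_embed(X)
print(emb.shape)   # (60, 2)
```

Because the diffusion step follows the data's own connectivity rather than straight-line variance, branching structure (like breed lineages radiating from an ancestral center) survives the squeeze to two dimensions far better than under plain PCA.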

Comparison of different dimensional reduction methods on the same data set, in this case gene expression from embryonic cell types. One can easily see that PCA is far less effective in revealing structure than the newer PHATE technique.

This method, applied to the dog data, yielded extremely clear differentiation between the major lineages, such as herding dogs vs. retrievers vs. scent hounds vs. pointing dogs. As expected, the mutts, village dogs, and wolves clustered near the middle, not having traveled very far from the ancestral condition (except for one ramification, shared with Middle Eastern village dogs, along with "sight hounds" like greyhounds and other hunters). Conversely, lineages like terriers formed a clearly separated path from the ancestral condition to more exquisitely bred extremes at the ends of the distribution. Incidentally, the geographic view of this data showed that the ends of the distributions were consistently occupied by dogs bred in Britain, stemming from the virtual mania for animal husbandry and breeding (not to say eugenics) prevalent in Victorian times. Darwin was fascinated by this as well, devoting much of his "Origin" to the variation and breeding of pigeons.

Structured differences found in the genomic and other variation data gathered from thousands of dogs, of hundreds of breeds and geographic origins. The genomic data naturally fall into the breeds and types of dogs we are familiar with, while wild and feral dogs tend more to the central, ancestral areas.

This data treatment was not just done for visual clarity, but provided the clean classification that these authors could then use to search for the differentiating mutations in genomes separated by these breeding histories. They also do a bit of psychoanalysis, correlating the various lineages with major trait dimensions, such as trainability, aggressiveness, predatory drive, fear, and energy. This helped to give some rationale to aspects that various lineages might share, despite their separation in the main axes. For example, terriers had high levels of predatory chasing, while herders showed high levels of fear. This buttresses the conclusion that the dimensional reduction analysis (done on genomes) uncovered real dimensions of dog mentality, characterized not just by conventional breed labels, but also by correlation with imputed general traits. What was the headline of this lineage analysis?

"Lineage-associated variants are largely non-coding regions implicated in neurodevelopment"

There are two very interesting aspects to unpack here. First is that the vast majority of the mutations (aka variants) were non-coding. They state that of 16,250 variants that passed some threshold of statistical significance with regard to lineage divergences, only 76 were protein coding changes with any significant impact. So instead of changing proteins being made in the body, the story is one of control- the regulation over where, when, and how much of these proteins gets made. This is significant, as many genetic tests for humans are still focused on what is called the "exome", which is to say, the protein-coding parts of our genomes, where certainly many devastating mutations exist.  But it isn't where the vast majority of interesting variations occur, either for disease or particularly for normal trait variation. Those happen in the far larger and murkier regions around each gene that are strung with regulatory control sites. Mutations there can have very subtle effects.
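A quick back-of-the-envelope check, using the counts quoted above, shows just how small a sliver the protein-coding changes are:

```python
# counts quoted from the study: significant variants, and those coding
total_variants = 16250
coding_variants = 76

print(f"{coding_variants / total_variants:.2%} of significant variants "
      "altered a protein")   # 0.47% of significant variants altered a protein
```

In other words, well over 99% of the lineage-defining signal sits in regulatory territory.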

Secondly, of course, is that they found brain and neural development genes to dominate the analysis. This only makes sense for our breeding efforts, which first had to tame what was once a wolf, and then develop its talents in very particular, and sometimes peculiar, directions. For instance, they note that scent / blood hounds have relatively low trainability, since they were bred to lead the way and follow their noses, not so much their humans. While the official dog shows focus on looks, coats, and colors, the much harder, and more significant, job has clearly been to remake the mind of the dog to serve us. Nothing shows this more clearly than the border collie and related herders, whose ability to work with experienced handlers on difficult tasks is legendary.

The figure below gives an overview of what they found. At the top is the dog genome, with scoring of differential herding-dog variants on the Y axis. Highlighted in green are genes that are mentioned below (panel C) as being densely involved in neural development and maintenance. Many of these are indeed very highly scoring in the genome graph, but others are less so. The authors are evidently being quite selective in calling out genes of interest, and there are many genes at least equally significant that are not discussed. For instance, while by my count about 50 genes rise to the "10" level in the graph, only seven or eight of them were called out for presentation in this neural-pathways collection. And easily hundreds, if not a thousand, reach the "5" level, making the selection of genes like SRGAP3, which scores in this range, somewhat willful.

Distinctive variants of sheepdogs are heavily involved in brain development, with a selection illustrated at bottom. At top is a graph of dispersion scores vs. genomic location, with some genes involved in neural function called out (green). In the middle, a few of these genes are blown up to show that the variants generally occur not in the coding regions of these genes, but in the surrounding regulatory areas. At bottom is shown an overlay of the genes called out above, laid over an independently curated/assembled diagram depicting molecular details of neuronal guidance, from KEGG.

At any rate, the middle panel of this diagram provides a few magnified examples of where the variations are relative to the coding regions of their respective genes. The coding regions are depicted at top with an arrow showing the start of transcription, and tiny vertical lines showing each "protein-coding" exon fragment, interspersed with large non-coding introns. Clearly the variations are clustered in the regulatory regions near, but not in, these genes.

And at bottom is a curated pathway, assembled from huge amounts of work from many labs, of some molecular aspects of axon guidance- the process by which neurons send axons out from where they start in embryogenesis to their targets, sometimes very far away in the brain, where they synapse with other neurons to make up our (or here the dog's) brain anatomy. The concentration of relevant variations in such genes speaks volumes about what has been going on in this process of rather rapid, directed evolution. The domestication of dogs is thought to have begun, very roughly, about 30 thousand years ago. The speed of this process and its resulting variety suggest (as they did to Darwin, and countless others) that evolution by natural selection has had plenty of time to work the biological wonders we see around us.


  • Somewhat boring lecture on axon guidance mechanisms that allow organized brain development and maintenance.
  • Social capital and social climbing.
  • Eugenics, Israeli-style.
  • Brothers at arms.
  • Yes, genes can arise from junk DNA. And they are important genes.