Sunday, January 24, 2021

Tale of an Oncogene

Research on a key oncogene of melanoma, MITF, moves from seeing it as a rheostat to seeing it as a supercomputer.

The war on cancer was declared fifty years ago, yet effective therapies are only now trickling in. And very few of them can be characterized as cures. What has been going on, and why is the fight so slow? Here I discuss one example, of melanoma and one of its drivers and central players, the gene MITF.

Melanocytes are not really skin cells, but neural crest cells, i.e. originating in the embryonic neural tube and giving rise to various peripheral neural structures in the spine, gut, and head. One sub-population migrates off into the epidermis to become melanocytes, which generate skin pigment in melanosome packets that they distribute to local keratinocytes. Evolutionarily, these cells appear to be afterthoughts, having originally developed as part of photoreceptor systems. This history- unusual evolution, extensive developmental migration, and eventual invasion of foreign tissues- has obvious implications for their capacity to form cancers later in life, if mutations re-activate their youthful propensities.

 

Above is shown a sketch of some genes known to play roles in melanoma, and key pathways in which they act. In red are oncogenes known to suffer activating mutations that promote cancer progression. In grey are tumor suppressors- cancer-promoting genes whose mutations are loss-of-function rather than gain-of-function events. Green marks ancillary proteins in these pathways that have not (yet) been implicated in cancer of any sort. MITF is a transcription regulator that drives many genes needed for melanocyte development and melanosome formation. It also influences cell cycle control and cytoskeletal and cell surface features relevant to migration and invasion of other tissues. This post is based mostly on reviews of the molecules active in melanoma, and the more focused story of MITF.

MITF binds to DNA near target genes, often in concert with other proteins, and activates transcription of the local gene (in most cases, though it represses some targets as well). The evidence linking MITF with melanoma and melanocytes is mostly genetic. It is an essential gene, so complete deletions are lethal. But a wide variety of "mi" mutations in mice and in humans lead to unusual phenotypes like white hair color, loss of hearing, large head formation, small blue eyes, osteopetrosis, and much else. Originally researchers thought there were several different genes involved, but they all resolved down to one complex locus, now called MITF, for mi transcription factor. Certain hereditary mutations also predispose to melanoma, as do some spontaneous mutations. The fact that MITF dosage correlates with how active and aggressive a melanoma is further supports the recognition that MITF is central to melanocyte fate and behavior, and one of the most central players in the disease of melanoma.



The MITF gene spreads over 229,000 base pairs, though it codes for a protein of only 419 amino acids. The gene contains nine alternate transcription start sites, eighteen exons (coding regions), and five alternate translation start sites, as sketched above. This structure allows dozens of different forms of the protein to be produced in different tissues and settings, via alternative splicing. The 1M form (above, bottom) is the main one made in melanocytes. Since the gene is essential, mutations with the phenotypes mentioned above tend to be very small, affecting one amino acid or one splice site, or perhaps truncating translation near the end of the protein. Upstream of the MITF gene and in some of its introns are dozens of DNA sites that bind other regulators, which either activate or repress MITF transcription in response to developmental or environmental cues. For example, a LEF1/TCF site binds the protein LEF1, which receives signals from WNT1, a central developmental regulator driving proliferation and differentiation of melanocytes from neural crest stem cells.

That is just the beginning of MITF's complexity, however. The protein sequence contains sites for a wide array of modifications, added by regulatory protein kinases (which attach phosphate groups) and by other modifiers that perform SUMOylation and ubiquitination. Key cellular regulators like the GSK3, AKT, RSK, ERK2, and TAK kinases each attach phosphates that affect MITF's activity. Additionally, MITF interacts with at least a dozen proteins, some of which also bind DNA and alter its target gene specificity, and others that cooperate to activate or repress transcription. One of the better-known signaling inputs comes indirectly from the kinase BRAF1, the target of the first precision melanoma-fighting drugs. BRAF1 is mutated to a hyper-active form in half of melanoma cases. It is a kinase responsive to growth factors generally, and activates a core growth-inducing (MAP) kinase cascade (as shown above), among other pathways. BRAF1 has several effects on MITF through these pathways, but the dominant one seems to be the phosphorylation and activation of PAX3, a DNA-binding regulator that activates the MITF gene (and is, notably, absent from the summary figure above, showing how dynamic this field remains). Thus inhibition of BRAF1, which these precision drugs accomplish, effectively reduces MITF expression, most of the time.

Then there are the gene targets of MITF, of which there are thousands, including dozens known to have significant developmental, cell cycle, pigment synthesis, cytoskeletal, and metabolic effects. All this is to say that this one gene participates in a bewilderingly complex network of activities only some of which are recognized to date, and none of which are understood at the kind of quantitative level that would allow for critical modeling and computation of the system. What has been found to date has led to a "switch", or rheostat hypothesis. One of the maddening aspects of melanoma is its resistance to therapy. This is thought in part to be due to this dynamic rheostat, which allows levels of MITF to vary widely and send individual cancer cells reversibly into several different states. At high levels of MITF, cancer cells are pigmented and proliferative (and sensitive to BRAF1 inhibition). But at medium levels of MITF, they revert more to their early migratory behavior, and become metastatic and invasive. So melanoma benefits from a diversity of cell types and states, dynamically switching between states that are both variable in their susceptibility to therapies like anti-BRAF1, and also maximally damaging in their proliferation and ranging activities (diagrammed below).
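The rheostat model described above can be caricatured in a few lines of code. This is purely an illustrative sketch: the 0-to-1 expression scale, the thresholds, and the label for the low-MITF state are my own assumptions, not measured values; only the high-level proliferative and mid-level invasive mappings come from the model as described.

```python
# Toy sketch of the MITF "rheostat" model of melanoma cell states.
# Thresholds and the 0..1 expression scale are illustrative assumptions.
def cell_state(mitf_level):
    """Map a relative MITF expression level to a caricatured cell state."""
    if mitf_level > 0.7:
        # high MITF: pigmented, proliferative, sensitive to BRAF1 inhibition
        return "proliferative"
    elif mitf_level > 0.3:
        # intermediate MITF: reverts to migratory, invasive, metastatic behavior
        return "invasive"
    else:
        # low MITF: other, less active states (label is a placeholder)
        return "low-MITF"

# a population with varied MITF levels occupies a mix of states
states = [cell_state(x / 10) for x in range(11)]
```

The point of the model is that these transitions are reversible, so a single tumor can dynamically hedge its bets across states with different drug sensitivities.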




The theme that comes out of all this is enormous complexity, a complexity that only deepens the more one studies this field. It is a typical example in biology, however, and can be explained by the fact that we are a product of 4 billion years of evolution. The resulting design is far from intelligent- rather, it is a compendium of messy contraptions, historical compromises, and accreted mechanisms. We are very far from having the data to construct proper models that would critically analyze these systems and provide accurate predictions of their behavior. It is not really a computational issue, but a data issue, given the vast complexity we are faced with. Scientists in these fields are still thinking in cartoons, not in equations. 

But there are shortcuts of various kinds. One promising method is to analyze those patients who respond unusually well to one of the new precision treatments. They typically carry some hereditary alteration in some other pathway that in most people generates resistance or backup activity to the one that was drug-treated. If their genomes are fully sequenced and analyzed in depth, they can provide insight into what other pathway(s) may need to be targeted to achieve effective combination treatment. This is a lesson from the HIV and tuberculosis treatment experiences- that the redundancy and responsiveness of biological systems calls for multiple targets and multiple treatments to meet complex disease challenges.

Saturday, January 16, 2021

Hunting for Lost Height

Progress in sequencing technologies and genetic analysis nails down the genetic sources of variability in the trait of human height.

PBS has an excellent program about eugenics- the push by some scientists and social reformers in the early 1900's to fix social problems by fixing problematic people. Both the science and the social ethics fell into disrepute, however, and were completely done in by the Nazis' version. While the stigma and ethical futility of eugenics remain, human genetics has advanced immeasurably, putting the science on much firmer footing. One example is a recent announcement that one research group has found all the sources of genetic variation that relate to human height.

Height is obviously genetic, and twin studies show that it is 80% heritable. There has been an interesting literature on environmental effects on height, showing, for example, that when whole populations of malnourished immigrants move to the US, their children grow substantially taller. So genetic influences are only fully apparent (as indicated by the 80% figure) in the absence of overriding environmental constraints.
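The 80% figure comes from comparing identical and fraternal twins. Falconer's classic formula estimates heritability as twice the difference between the trait correlations of monozygotic and dizygotic twins; the correlations below are illustrative numbers chosen to reproduce the ~0.8 estimate, not data from any particular study.

```python
# Falconer's formula: heritability estimated from twin correlations.
# h2 = 2 * (r_MZ - r_DZ), since MZ twins share ~100% of their genes, DZ ~50%.
def falconer_h2(r_mz, r_dz):
    """r_mz, r_dz: trait correlations within MZ and DZ twin pairs."""
    return 2 * (r_mz - r_dz)

h2 = falconer_h2(r_mz=0.90, r_dz=0.50)  # illustrative values -> 0.8
```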

The first attempts to find the genetic loci associated with height took off after the human genome was sequenced, in the form of genome-wide association studies (GWAS). In that era it was easier to probe short oligonucleotide sequences against sampled genomic DNA than to sequence the whole genomes of many people. So a typical GWAS took a large sample of about 500,000 variant locations through the human genome and tested which variants each person in a study population carried. A massive correlation analysis was then done against the traits of those people- say their height, weight, or health- to see which markers (i.e. variants) correlated with the trait of interest.
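In code, the core of such a scan is just a per-marker correlation against the trait. Below is a minimal, self-contained sketch on simulated data: the marker count, allele frequency, effect size, and noise level are all made-up numbers, and real GWAS practice adds covariates, population-structure corrections, and stringent multiple-testing thresholds that are omitted here.

```python
import random

random.seed(1)
N, M, CAUSAL = 1000, 50, 7   # individuals, markers, index of the planted causal marker

def pearson(xs, ys):
    """Plain Pearson correlation, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Genotypes as allele dosages (0, 1, or 2) at each marker, at roughly
# Hardy-Weinberg proportions for a 30% minor allele.
geno = [[random.choices([0, 1, 2], weights=[49, 42, 9])[0] for _ in range(M)]
        for _ in range(N)]

# Simulated height: baseline + effect of one causal marker + noise (cm).
height = [170 + 2.0 * person[CAUSAL] + random.gauss(0, 5) for person in geno]

# The association scan: correlate each marker's dosages with the trait.
assoc = [abs(pearson([person[m] for person in geno], height)) for m in range(M)]
top_hit = max(range(M), key=lambda m: assoc[m])
```

With this seed and effect size the scan recovers the planted marker; in a real study the per-marker signal is far weaker, which is why sample sizes run into the hundreds of thousands.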

Such studies only found about 5% to 25% of the heritability of height, perplexing researchers. They were sampling the entire genome, if sparsely. The 500,000 markers corresponded to about one every 6,000 base pairs, which should be near enough to most genes to pick up significant effects on the trait of interest. And since most human genome regions are inherited as relatively large blocks (haplotypes), due to our near-clonal genetic history, the idea was that sampling a sparse set of markers was sufficient to detect any significant effect from any gene. Later work could then focus on particular regions to find the actual genes and variants responsible for the trait in question.

But there was a big problem, which was that the variants selected to go into the marker pool came from a very small population of a few hundred people. Recall that sequencing whole genomes was very expensive at this time, so researchers were trying to wring as much analysis as possible out of as little data as possible. By 2018, GWAS-type studies were still finding genetic causes for only about 25% of the variability of height, clearly short of what was known from simple genetic analysis of the trait. Not only that, but the number of genes implicated was rising into the thousands, each with an infinitesimal effect. The first 40 genes found in these studies accounted for only about 5% of the variation in height.

The large effect of rare alleles. MAF (minor allele frequency) in the human population, plotted against the trait variance each variant accounts for. The color code (LD, or linkage disequilibrium) indicates selection against the locus (if high), along with other predicted characteristics of the variants. It is the very rare protein-altering variants (blue) that have the strongest individual effects.

The current work (review, review) takes a new approach, by virtue of new technologies. The researchers sequenced the full genomes of over 20,000 people, finding a plethora of rare alleles that had not been included in the original marker sets- alleles that have significant effects on height. They find variants that account for 79% of height heritability, which is to say, essentially all of it. It turns out that the whole premise of GWAS- that common markers are sufficient to analyze diverse populations- is incorrect. The common markers are not as widely distributed, or as well-linked to rare variants, as was originally assumed. The new technologies allow vastly more depth of analysis (full genome sequencing) and broader sampling (20,000 people vs a few hundred) to find rare and influential variants. We had previously learned that using common variants confines GWAS analysis to uninteresting variants- those that are not being selected against. This may not be an enormous issue for the height trait, (though these researchers find that many of their new, rare loci are being selected against), but it was a big issue in the analysis of disease-linked loci, like those for diabetes or alcoholism. While these traits may be common, the most influential genetic variants that cause them are not, for good reason.
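The trade-off shown in the figure above- rare alleles with large effects vs common alleles with tiny ones- follows from the standard additive model of quantitative genetics, where a variant's contribution to trait variance is 2p(1-p)β², with p the allele frequency and β the per-allele effect. The frequencies and effect sizes below are illustrative numbers, not estimates from the study.

```python
# Per-variant trait variance under the standard additive model.
def variance_explained(p, beta):
    """p: allele frequency; beta: per-allele effect on the trait."""
    return 2 * p * (1 - p) * beta ** 2

common_small = variance_explained(p=0.40, beta=0.1)   # common allele, tiny effect
rare_large   = variance_explained(p=0.001, beta=2.0)  # rare allele, large effect
```

Even at a frequency of one in a thousand, the large-effect variant accounts for more trait variance than the common one- yet a sparse panel of common markers will usually miss it entirely.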

One can imagine that over time, everyone will have their genome sequenced, and that this data will lead to a fuller, if not complete, understanding of trait genetics. But what are the genes responsible for the traits? All this is still an abstract mapping of locations of variability (what used to be called mutation) correlated with variations of a trait. This newest data identifies thousands of influential variants covering one third of the genome. This means that, like most interesting traits, the genetics of human height are dispersed- a genetic fog. All sorts of defects or changes can influence this trait to infinitesimal degrees, making it a fool's errand to look for a gene for height.


  • Guns are a key element of this volatile moment.
  • Stories, data, and emotion.
  • God, guns, and lunacy ... a match made in heaven.

Sunday, January 10, 2021

Viruses Have Always Been With Us

Some researchers argue that viruses form their own kingdom of life, and originated prior to the last common cellular ancestor.

Viruses are all around us- there are even more of them than there are bacteria. The pandemic has focused our attention on one of them, but they are truly astronomical in diversity and numbers. Where did they come from? This has historically been thought a pointless question, since, even if one concedes that they are life forms of a sort, they mutate and evolve quite a bit faster than cells and organisms do, erasing most of their history. Additionally, they have been thought to exchange genes at a high rate with their hosts, further erasing whatever history they retain. But an article published back in 2015 fought back against all this pessimism, and made the case that virus histories can be reconstructed on a global scale, and have some very interesting things to tell us.

Their first point is that gene exchange between viruses and hosts is less confusing than thought. Cells certainly have adopted viral genes at a high rate. Our own genomes are chock full of retroviral remnants, for instance. But functional genes are a different story. Relatively few seem to have gone either way (though see Koonin et al., arguing that many viral capsid and coat proteins were adopted from cellular genomes). The core viral replication proteins, such as the SARS-CoV-2 RNA polymerase, are not related to cellular enzymes, and seem to be very ancient. The authors suggest that such key components originated even before the last common cellular ancestor- the point of divergence between archaea, bacteria, and eukaryotes.

To overcome the main technical hurdle of rapid evolution, the authors use protein fold analysis. Instead of studying DNA sequences, (which evolve quite rapidly), or protein sequences, (which evolve more slowly), this uses the shape of the protein, which tends to persist even after sequence similarity is completely lost. This is one way to get at very deep phylogenies, and they claim that it points to a substantial set of protein folds that are specific to viruses and widespread within viral families. They point out additionally that these proteins tend also to be confined to families of viruses, one more indication that virus evolution has not been promiscuous, but rather remarkably traceable through time. Viruses are classified into major families by their mode of replication. Thus RNA viruses and DNA viruses, for instance, appear to have distinct and ancient lineages.

One way to make sense of these observations and claims is that viruses were actually cells at very early times. It is common for parasites to progressively lose functions that are needed in the free-living state but become unnecessary when living off one's parents, er, some other fully competent cell. The closer the symbiotic or parasitic association, the fewer functions the parasite needs. If the parasite is intracellular, then a huge amount of cellular overhead can be dispensed with. Mitochondria evolved this way, from free-living bacteria to organelles now with only about 37 genes (in humans).

Viruses come in all sorts of sizes, from nearly cell size, encoding a thousand genes, down to specks of RNA only 250 nucleotides long. This diversity suggests the plausibility of their origination as cells, and subsequent down-scaling through a parasitic lifestyle.

But what were those cells, and whom did they parasitize? The distinct and peculiar gene complements and mechanisms of viruses, particularly the RNA viruses, suggest that they originated prior to the major split of the existing cellular kingdoms. It stands to reason that cellular life has been saddled with parasites and viruses almost since the advent of cells, so some of these virus families may predate the advent of DNA- hence the prevalence of RNA viruses. The authors analyze the ages and distribution of the protein folds they find, and suggest that, among the folds shared across all domains of life (viruses, archaea, bacteria, and eukaryotes), those found in RNA viruses are significantly older than those found in DNA viruses. Such universal protein folds would be the most ancient, so finding differentiation among which viruses have them suggests that the major virus lineages come from different epochs of this most ancient era of cellular evolution. Interestingly, the pattern they do not find is one reflecting the cellular domains of life, which would be expected if viruses arose continuously, or in relatively modern times, from their cellular milieu.

Phylogenetic tree of protein folds from all domains of life, including viruses. Note the close clustering of RNA viruses near the root, and the early distribution of other viruses, compared to the later divergence of cellular domains. This kind of stretched phylogenetic tree is unfortunately symptomatic of an unusually high evolutionary rate, which is also a viral property. So it is not clear whether these authors have fully resolved this issue with their protein fold-based methods.
 

The upshot is that these authors promote the idea that viruses should constitute their own superkingdom of life, in parallel with the major cellular superkingdoms- archaea, bacteria, and eukaryotes. The rooting/ordering of the cellular tree remains quite controversial, but viruses are clearly something else again. They exchange a fair amount of genetic material with cells, but retain noticeable traces of early protein and RNA evolution. The idea that they arose from primitive or proto-cells also makes sense as a general proposition, for otherwise it is difficult to imagine their origin, such as from naked nucleic acids. This whole view remains quite controversial in the field, however, given the difficulties of the molecular analysis and the general prejudice against viruses as proper forms of life. But I think time will bear out this view and add a significant feature to early, as well as current, evolution.

Saturday, January 2, 2021

The Parables of Octavia Butler

Review of Parable of the Sower and Parable of the Talents, about eerily familiar dystopias and the religions they call forth.

Octavia Butler is having a moment. The late science fiction author published the Parable books in 1993 and 1998, not even knowing of the coming G. W. Bush administration, let alone that of Donald Trump. But her evangelical-supported right wing presidential candidate issues a call to "Make America great again". Her insight and prescience are head-spinning, in books that portray an America much farther gone into division, inequality, corporate power, and chaos (all owing to climate change(!)) than we in actual reality are- yet only by degrees. That is only the window dressing and frame, however. Her real subjects are religion and human purpose. I will try to not give away too much, since these make dramatic and interesting reading.

The books introduce heroine Lauren Olamina, who is totally together and possessed of a mission in life. She grows up in a neighborhood compound walled off from the chaos outside, but is quite aware of the desperate conditions there. Her father is a pastor, and both she and her brother become preachers as well over the course of the books- the brother in a conventional Christian mode, while Lauren founds a new religion, one perhaps tailored for the generally skeptical science fiction audience. God is change. That is it. Lauren emphasizes empathy, usefulness, education, and the shaping of change, but there is no god as traditionally conceived. It is a sort of Buddhistic philosophy and educational / communal program rather than a supernaturalist conjuring, and love (or fear), of imaginary beings.


One question is whether such a philosophy would actually gain adherents, form communities and function as a religion. I get the sense that Butler would have dearly loved for her ideas to gain a following, to actually ripen, as did those of fellow science fiction writer L. Ron Hubbard, into an actual religion (however horrible his escapade actually turned out to be!). But their difference is instructive. Hubbard's Dianetics/Scientology is a floridly imagined narrative of super-beings, secret spiritual powers, and crazy salvation. Absolute catnip to imaginative seekers wanting to feel special and purposeful. On the other hand, Olamina's system is quite arid, with most of the motive force supplied, as the book relates, by her own determination and charisma. Her philosophy is true, and therein lies a big, big problem. Truth does not supply purpose- we already knew that scientifically. Natural selection is all about change, and makes us want to live, flourish, and propagate. Change is ever-present, and while it might be healthy to embrace it and work with it, that is hardly an inspiring and purpose-filling prospect, psychologically. As the books relate in their narrative of Lauren's life, change is also often quite terrible, and to be feared.

But the more important question is what role people such as Lauren play, and why people like her followers exist. People need purpose. Life is intrinsically purposeless, and while we have immediate needs and wants, our intelligence and high consciousness demands more- some reason for it all, some reason for existence, collectively and individually. An extra motive force beyond our basic needs. We naturally shape our lives into a narrative, and find it far easier and more compelling if that narrative is dramatic, with significance beyond just the humdrum day-to-day. But such narratives are not always easy to make or find. Classic epics typically revolve around war and heroic deeds, which continue to make up the grist of Hollywood blockbusters. Religion offers something different- a multi-level drama, wrapped up in collective archetypes and usually offering salvation in some form, frequently a hero, if not a militaristic one. Last week's post mentioned the life of Che Guevara, who found purpose in Marxism, and was so fully seized by it that he bent many others, possibly the whole nation of Cuba, to his will / ideology. Lauren Olamina is a similar, special person who has, through her own development and talents, discovered a strong purpose to her life and the world at large that she feels compelled to share, pulling others along on her visionary journey. Are such people "strong"? Are their followers "weak"?

Human social life is very competitive, with the currency being ability to make others think what you want them to think, and do what you want them to do. Our ideology of freedom was built by a founding class of dominant, slave-holding rich white men who wanted only to come to a reasonable accommodation for political power within their class, not extend freedom to women, blacks, or the poor. This ideology was highly successful as a sort of civic religion, coming down to us in two traditions- the "winning" tradition of native American extermination, ruthless capitalism, and growing international empire- all set within a reasonably stable elitist political system. And the second "freedom" tradition, which gave us abolitionism, the civil rights movement, and the modern Democratic party, which takes Jefferson's ideals at their word, however little he actually meant them.

Religion is a particularly powerful engine of political and social ideology, making people go through ridiculous rituals and abasements to keep on the safe side of whatever the powerful tell them. So yes, domineering social personalities like Lauren and Che, (and Trump), are very powerful, deservedly treated as larger-than-life, charismatic figures. Their powers are archetypal and dangerous, so it falls to skeptics and free-thinkers to offer antidotes, if their charisma goes off the rails. Butler offers a hero who is relentlessly good and positive, as well as charismatic and strong, so the only competition comes from ignorance, from conventional wisdom, and from rival religious powers like traditional Christianity. But the power of artificial purposes, and of the charismatic figures who propound them, is almost uniformly corrupting, so the opposition Lauren faces is, in the end, far more realistic as a portrayal of what we are facing, now and in the future.


  • "China is about to bring 21 gigawatts of coal fired power online."
  • Stocks are euphoric, headed for a fall.
  • Obstruction of justice, in a continuing saga of impeachable offenses.

Saturday, December 26, 2020

Domineering Freeloader Decides Communism is the Answer

General, executioner, economic development czar, and head of the national bank of the Cuban revolution: the biography of Che Guevara, by Jon Lee Anderson.

Ernesto Guevara began life as a reckless, adventurous, and very intelligent kid. His first inspiration was medicine, indeed medical research on leprosy and other diseases common in South America, and he got a medical degree. But toiling away on small problems in the lab didn't fit his temperament, and he decided to bum around South America instead, living off the generosity of others, running up debts, fast-talking his way out of jams, and building up an implacable hatred of the US. A common thread through his travels from Argentina through Chile, Bolivia, Peru, and points north was the overwhelming influence of the US, usually corrupting the local political system for the benefit of mining interests in the south, and for the benefit of agricultural interests in Central America. Eventually he got caught up in the liberal quasi-socialist reforms of Jacobo Arbenz of Guatemala, later fleeing to Mexico after a US-supported right wing coup.

It was there that he fell under the spell of Fidel Castro, eventually becoming, despite his evident non-Cuban origins, Castro's right-hand man at the head of the communist revolution in Cuba. Not that it started as communist. No, Fidel was a master politician, and started as an anti-communist, currying favor with the Cuban population and the US. But both his brother Raul and Che were dedicated communists by that point, in thrall to Stalin and Mao, and their influence, combined with the logic of perpetual one-party / one-person power, brought Fidel around to gradually revealing their new (red) colors- after the revolution had already gained power, and after Che had executed resistant elements of the army and police. Then came feelers to Moscow and the rest of the eastern bloc, the Cuban missile crisis, and that is pretty much where things still stand today.

Che and Fidel, when times were good.

Anderson's biography is definitive- fully researched, well written, and judiciously argued. He portrays Che as a seeker- a youth on the prowl for good times, but also for a purpose, which he ultimately found in full-on socialism. He found himself most fully during the early fight in the hills of Cuba- a trial by privation, exhaustion, and blood- where he put revolutionary principles to work organizing his men, making alliances with the local peasants, and executing deserters and traitors. Che's socialism was a pan-Latin-American Bolivarian ideal, in which all the countries of Central and South America would band together- possibly even unite- under state socialism as inspired by the peasant revolutions of Russia and especially China. It was both austere and visionary- a whole continent escaping from under the yoke of the great oppressor, the US.

It is clearly a religious conversion- the epiphany of a wholly captivating ideal. Che became Castro's second in command by his great intellectual and leadership talents, but even more by his absolute dedication to the cause- the cause of liberation from oppression. Unfortunately, after cleansing the army and securing Fidel's rule, Che was assigned to make the economy run, and here he came up against the immovable obstacle- reality. Socialism is healthy in small doses, but communism has not, in Cuba as elsewhere, been able to run an economy. Motivation to work needs to be supplied somehow, and if it is not by the lash of money and its lack, then terror will have to do the job, and poorly at that. Che did what he could, but the system he had fought so hard to establish was impossible to operate, and his thoughts turned back to his first love- revolution.

It is here that we see most clearly the religious nature of Che's motivations, and of communism generally. Had he been a rational researcher in the mold of medical or other research, he would have sat back and realized that communism was not working in economic and social terms, let alone in terms of personal individual liberation. And then he would have adapted intellectually and tried to figure out a middle way to preserve Cuba's independence while running a realistic economic system. Possibly even elections. Unfortunately, by this time, Cuba had settled into a dependent relationship with Russia, which bought its sugar and gave aid, preventing either economic or political independence. Cuba today is still relatively poor, in the middle to lower ranks of GDP. Not as poor as Haiti, however, (or North Korea), and therein lies a message: the Cuban revolution remains relatively humane, despite its many debilities and lack of political, social, and economic freedom. The collapse of the Soviet Union shocked the communist government into slight openings for private business and a heavy dose of tourism from Europe, which sustain it today.

But instead of recognizing the errors and failures of his dream, Che fomented more revolutionary cells all over Latin America and Africa, paying special attention to one sent to infiltrate his native Argentina; he ultimately joined the campaign in Bolivia himself, dying in its service in 1967. One cannot fault his dedication or consistency, but one can question the intellect that took him, and so many other idealistic freedom fighters over the twentieth century, into communism, only to author monumental disasters of political and economic mismanagement. To think that dictatorship would resolve the class struggle, and produce washing machines and military might ... it had to be a religious movement, which unfortunately, once in power, became incredibly difficult to dislodge.

The motive force obviously was the US. We, through our callous and greedy treatment of our backyard over the nineteenth and twentieth centuries, and our betrayal of the paternalistic impulse of the Monroe Doctrine, not to mention similar failures of principle in the Middle East and Vietnam, motivated the intense anti-Yankee hatred of idealistic men such as Che Guevara, and the peasant resistance that, at least in Cuba, gave him and Castro support. It is a fascinating history of what the US has wrought, and how our failure to hold to our own ideals has come back to haunt us over and over again.

  • It has been abusive, unnecessary, toxic, and we will need some time to work it out of our system.

Saturday, December 19, 2020

Fair and Balanced

Momentary virality is not the best way to construct and distribute news. Nor is fear-based button-pushing. But what can we do about it?

Our political system almost ran off the rails over the last few months, and the ultimate cause was the media, which on the right-wing side has shaped an alternate reality of breathtaking extremism. Outlets and figures like Rush Limbaugh, FOX news, NewsMax, and Sinclair Broadcasting have fundamentally reshaped our political discourse, from a place fifty years ago where facts and problems were generally agreed upon, and policy discussions founded on those facts were conducted- if not in a civil manner, then in a functional manner- in legislative bodies like the US Senate. Now Limbaugh is broaching secession.

Rush, in his lair.

Even the Reagan era, conservative as it was, hewed to basic democratic principles and a centrist media environment. But then came Bill Clinton, and in response, Newt Gingrich, blazing a scorched-earth trail through the House of Representatives, followed soon by the establishment of FOX news as a relentless and shameless propaganda organ for the right. Now, even FOX is reviled by true believers as not extreme enough, as the end of the Trumpian epoch comes shudderingly into view. Which is worse- the internet melee of Russian disinformation and viral Q-conspiracies, or the regimented lying brought to us by corporate right-wing media? It is hard to tell sometimes, and both have been disastrous, but I think the latter has been substantially worse, forming a long-running environment of cultivated lies, normalized idiocy, and emotional trauma. Why anyone watches it or listens to it is beyond me personally, but clearly many people like to have their buttons pushed and participate in a crudely plausible vision of a black, white, and bloviatingly Christian (or un-Christian, depending on your theological ethics) world. 

Government censorship is probably not going to happen in this case. Even if we changed our legal system to allow it, the right wing would manage to suborn those regulatory bodies, as they have the Supreme Court, Senate, and the White House. These media outlets don't breathe oxygen, however; they breathe money- money that comes from advertisers who appreciate their ability to reach a uniquely gullible demographic. But those advertisers are not political. They are fomenting our divisions and destroying our political system for purely transactional reasons. It is, we can note in passing, another classic and ironic breakdown of the free market.

The rational response, then, is to boycott the sponsors in systematic fashion, publicizing who advertises with which outlets, for how much. Several such sites and petitions already exist. But it is clear that they have not gained enough traction to have much effect. Only when the most egregious and appalling violations of decency occur does any attention rain down on the channels and scare away sponsors. The tracking, petition, and boycotting system needs better centralization. Perhaps, like the eco-friendly food labels, we need truth-friendly labeling of companies at the point of consumption, marking those (MyPillow! SmileDirect! Nutrisystem! Geico!) who are pouring money into these cesspools of psychological manipulation and political destruction.

Sure, this kind of accountability would heighten political divisions, causing a polarization of the business world, which has (supposedly) tried to keep itself out of the fray, and invite counter-boycotts of, say, NPR or MSNBC. But business has not been unbiased at all; rather, through every organ, from chambers of commerce to K Street lobbies and Ayn Randian talk shops, it has pushed the right-wing agenda in tandem with the propaganda organs that broadcast relentless pro-business and anti-public interest messages. It is high time to hold the whole ecosystem to account for the state of our country, directly and financially.


  • Should federal office holders be held to their oaths?
  • The business of the kidney dialysis business.
  • Apparently, Trump supporters put their money where their minds were.

Saturday, December 12, 2020

Where has Inflation Gone?

Inflation stays low amid vast public deficits and good employment and economic growth trends. What happened?

Those of us who grew up in the 1970's are permanently scarred by its economic malaise- inflation. As earlier generations were affected by their reigning economic conditions, from depression to prosperous expansion, we had a hard time shaking our syndrome, and keep looking for inflation around every corner.

But inflation is nowhere in sight, and no matter how much the government spends, it doesn't seem to matter. Prices simply haven't budged, past a very staid 1-3% inflation rate, for the last decade and more. It has put the lie to a generation of Republican scare-mongering, and is overall quite perplexing. Granted, the 2008 recession was extremely severe, was insufficiently addressed by fiscal spending, and is still affecting us in terms of lost economic capacity and lost employment. The current pandemic has been even more disastrous for employment and small business conditions. Yet, I think the question still stands- why are we now spending so much time fighting deflation, where we used to fight inflation?

I think the usual answers of global trade, declining worker power and unionization, and automation are significant, but there is one more that should be added, which is income inequality. Inflation is measured not in yachts and space ships, but in normal goods like groceries and appliances. The economy of these goods is largely ruled by lower income segments, and reflects pretty directly their income. At the higher end of the income and wealth scale, people tend to save rather than spend, which is why they are wealthy to start with. And what the rich do spend money on tends not to hit the basic inflation metrics very hard, like exclusive real estate, vanity philanthropy, and vulture capitalism. Their money does not, in any proportionate way, enter into the inflation-causing real economy.

State of inequality in the US, which has been growing much further during the Trump administration and the pandemic.

So one can imagine that, instead of looking for causality from inflation to income inequality- which may be a fool's errand- it actually works the other way around. High inequality is a societal condition where wages stay persistently low, at a subsistence level, (or even below), which saps both the cultural capital and the inflation-causing capacity of the working class and poorer sectors. At the same time, it funnels vast amounts of wealth to the already wealthy- money that mostly just gets squirreled away into the stock market, foreign bank accounts, government bonds, and other investments that are more or less unproductive, especially of inflation. It also causes a race for yield, which we see resulting in very low market interest rates- another result of inequality.

Sunday, December 6, 2020

Computer Science Meets Neurobiology in the Hippocampus

Review of Whittington, et al. - a theoretical paper on the generalized mapping and learning capabilities of the entorhinal/hippocampal complex that separates memories from a graph-mapping grid.

These are exciting times in neurobiology, as a grand convergence is in the offing with computational artificial intelligence. AI has been gaining powers at a rapid clip, in large part due to the technological evolution of neural networks. But computational theory has also been advancing, on questions of how concepts and relations can be gleaned from data- very basic questions of interest to both data scientists and neuroscientists. On the other hand, neurobiology has benefited from technical advancements as well, if far more modestly, and from the relentless accumulation of experimental and clinical observations. Which is to say, normal science. 

One of the hottest areas of neuroscience has been the hippocampus and the closely connected entorhinal cortex, seat of at least recent memory and of navigation maps and other relational knowledge. A recent paper extends this role to a general theory of relational computation in the brain. The basic ingredients of thought are objects and relations. Computer scientists typically represent these as a graph, where the objects are nodes, and the relations are the connecting lines, or edges. Nodes can have a rich set of descriptors (or relations to property nodes that express these descriptions). A key element to get all this off the ground is the ability to chunk, (or abstract, or generalize, or factorize) observations into discrete entities, which then serve as the objects of the relational graph. The ability to say that what you are seeing, in its whirling and colorful reality, is a dog .. is a very significant opening step to conceptualization, and the manipulation of those concepts in useful ways, such as understanding past events and predicting future ones.
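The node-and-edge representation described above can be sketched minimally in Python. This is a toy illustration of the data structure only- the particular objects, properties, and relations are invented, not taken from the paper:

```python
# A tiny relational graph: nodes carry properties, edges are labeled relations.
# All names here ("dog", "chases", etc.) are hypothetical examples.
nodes = {
    "dog":  {"kind": "animal", "legs": 4},
    "ball": {"kind": "object", "color": "red"},
    "park": {"kind": "place"},
}

edges = [  # (subject, relation, object) triples
    ("dog", "chases", "ball"),
    ("dog", "located_in", "park"),
]

def relations_of(node):
    """Return all (relation, target) pairs leaving a node."""
    return [(rel, obj) for (subj, rel, obj) in edges if subj == node]
```

Real systems, of course, must learn such graphs from raw experience rather than have them hard-coded; the point is only the object/relation shape that conceptual chunking makes possible.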

Gross anatomy of the hippocampus and associated entorhinal cortex, which function together in conceptual binding and memory.

A particular function of the entorhinal/hippocampal complex is spatial navigation. Researchers have found place cells, grid cells, and boundary cells (describing when these cells fire) as clear elements of spatial consciousness, which even replay in dreams as the rats re-run their daytime activities. It is evident that these cells are part of an abstraction mechanism that dissociates particular aspects of conceptualized sensory processing from the total scene and puts them back together again in useful ways, i.e. as various maps.

This paper is written at a rather abstruse level, so there is little that I can say about it in detail. Yet it and the field it contributes to are so extremely interesting that some extra effort is warranted. By the time the hippocampus is reached, visual (and other sensory data) has already been processed to the conceptual stage. Dogs have been identified, landmarks noted, people recognized. Memories are composed of fully conceptualized, if also sensorily colored, conceptual chunks. The basic idea the authors present is that key areas of the entorhinal cortex provide general and modular mapping services that allow the entorhinal/hippocampal complex to deal with all kinds of relational information and memories, not just physical navigation. Social relations, for example, are mapped similarly.

It is important to note tangentially that conceptualization is an emergent process in the brain, not dictated by pre-existing lists of entities or god-given databases of what exists in the world and beyond. No, all this arises naturally from experience in the world, and it has been of intense interest to computer scientists to figure out how to do this efficiently and accurately, on a computer. Some recent work was cited here and is interesting for its broad implications as well. It is evident that we will in due time be faced with fully conceptualizing, learning, and thinking machines.

"Structural sparsity also brings a new perspective to an old debate in cognitive science between symbolic versus emergent approaches to knowledge representation. The symbolic tradition uses classic knowledge structures including graphs, grammars, and logic, viewing these representations as the most natural route towards the richness of thought. The competing emergent tradition views these structures as epiphenomena: they are approximate characterizations that do not play an active cognitive role. Instead, cognition emerges as the cooperant consequence of simpler processes, often operating over vector spaces and distributed representations. This debate has been particularly lively with regards to conceptual organization, the domain studied here. The structural forms model has been criticized by the emergent camp for lacking the necessary flexibility for many real domains, which often stray from pristine forms. The importance of flexibility has motivated emergent alternatives, such as a connectionist network that maps animals and relations on the input side to attributes on the output side. As this model learns, an implicit tree structure emerges in its distributed representations. But those favoring explicit structure have pointed to difficulties: it becomes hard to incorporate data with direct structural implications like 'A dolphin is not a fish although it looks like one', and latent objects in the structure support the acquisition of superordinate classes such as 'primate' or 'mammal'. Structural sparsity shows how these seemingly incompatible desiderata could be satisfied within a single approach, and how rich and flexible structure can emerge from a preference for sparsity." - from Lake et al., 2017


Getting back to the hippocampus paper, the authors develop a computer model, which they dub the Tolman-Eichenbaum machine [TEM] after key workers in the field. This model implements a three-part system modeled on the physiological situation, plus their theory of how relational processing works. Medial entorhinal cells carry generalized mapping functions (grids, borders, vectors), which can be re-used for any kind of object/concept, supplying relations as originally deduced from sensory processing or possibly other abstract thought. Lateral entorhinal cells carry specific concepts or objects as abstracted from sensory processing, such as landmarks, smells, personal identities, etc. It is then the crossing of these "what" and "where" streams that allows navigation, both in reality and in imagination. This binding is proposed to happen in the hippocampus, when firing from the two separate entorhinal regions happens to synchronize, signaling that a part of the conceptual grid or other map and an identified object have been detected in the same place, generating a bound sensory experience, which can be made into a memory, or arise from a memory, or an imaginative event, etc. This is characteristic of "place cells", hippocampal cells that fire when the organism is at a particular place, and not at other times.

"We propose TEM’s [the computational model they call a Tolman-Eichenbaum Machine] abstract location representations (g) as medial entorhinal cells, TEM’s grounded variables (p) as hippocampal cells, and TEM’s sensory input x as lateral entorhinal cells. In other words, TEM’s sensory data (the experience of a state) comes from the ‘what stream’ via lateral entorhinal cortex, and TEM’s abstract location representations are the ‘where stream’ coming from medial entorhinal cortex. TEM’s (hippocampal) conjunctive memory links ‘what’ to ‘where’, such that when we revisit ‘where’ we remember ‘what’."


Given the abstract mapping and a network of relations between each of the components, reasoning or imagining about possible events also becomes feasible, since the system can solve for any of the missing components. If a landmark is seen, a memory can be retrieved that binds the previously known location. If a location is surmised or imagined, then a landmark can be dredged up from memory to predict how that location looks. And if an unfamiliar combination of location and landmark is detected, then either a new memory can be made, or a queasy sense of unreality or hallucination would ensue if one of the two is well-known enough to make the disagreement disorienting.
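This "solve for the missing component" logic can be caricatured as a toy associative memory- a crude sketch, not the paper's actual TEM model, with all names invented for illustration:

```python
# Toy "what x where" conjunctive memory, in the spirit of the hippocampal
# binding described above. All identifiers here are illustrative inventions.
memory = set()

def bind(where, what):
    """Store a conjunctive (location, object) memory."""
    memory.add((where, what))

def recall_what(where):
    """Given a location, retrieve the objects remembered there."""
    return {what for (w, what) in memory if w == where}

def recall_where(what):
    """Given an object (e.g. a landmark), retrieve locations it was seen at."""
    return {w for (w, obj) in memory if obj == what}

def is_novel(where, what):
    """An unfamiliar pairing: candidate for a new memory, or disorientation."""
    return (where, what) not in memory

bind("grid_cell_3", "oak_tree")
bind("grid_cell_7", "red_door")
```

A real hippocampal analogue would bind through synchronized firing and tolerate noisy, partial cues, but the pattern- recall in either direction, plus novelty detection- is the same.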

As one can tell, this allows not only the experience of place, but the imagination of other places, as the generic mapping can be traversed imaginatively, even by paths that the organism has never directly experienced, to figure out what would happen if one, for instance, took a short-cut. 

The combination of conceptual abstraction / categorization with generic mapping onto relational graph forms that can model any conceptual scale provides some of the most basic apparatus for cognitive thought. While the system discussed in this paper is mostly demonstrated for spatial navigation, based on the proverbial rat maze, it is claimed, and quite plausible, that the segregation of the mapping from the object identification and binding allows crucial generalization of cognition- the tools we, and someday AI as well, rely on to make sense of the world.


  • A database of superspreader events. It suggests indoor, poorly ventilated spread of small aerosols. And being in a cold locker helps as well.
  • The curious implications of pardons.
  • It is going to be a long four years.
  • How tires kill salmon.
  • Ever think that there is more to life?

Saturday, November 28, 2020

Evolution of the Larynx

Primates already had bigger and more diverse larynxes, before humans came on the scene.

While oxygen was originally a photosynthetic waste product and toxic to all life forms, we gradually got used to it. Some, especially the eukaryotes, made a virtue of oxygen's great electronegativity to adopt a new and more efficient metabolism with oxygen as the final electron acceptor. This allowed the evolution of large animals, which breathe oxygen. All this happened in the oceans. But it turns out that it is far easier to get oxygen from air than from water, leading air breathing to evolve independently dozens of times among fishes of all sorts of lineages. Lungs developed from many tissues, but rarely from the swim bladder, which had critical roles revolving around constant and controllable pressure, pretty much the opposite of what one needs in a lung. So in our lineage, the lung developed as an adjunct to the digestive system, where the fish could gulp air when gill breathing didn't cut it. 


Overview of the atmosphere of earth. Lungs were only possible when the level of oxygen in air rose sufficiently, and respiration of any kind only when oxygen had vanquished the originally reducing chemical environment.

This in turn naturally led to the need to keep food from going into the nascent lung, (air going into the stomach is less of a problem.. we still do that part), thus the primitive larynx, which was just a bit of muscle constricting the passage to the lung. As this breathing system became more important, regulating access to the lung became more important as well, and the larynx developed more muscles to open as well as close the air passage, then a progressively more stable and complex surrounding structure made of cartilage to anchor all these muscles.

But that was not the end of the story, since animals decided that they wanted to express themselves. Birds developed an entirely different organ, the syrinx, separate from the larynx and positioned at the first tracheal branch, which allows them to sing with great power and sometimes two different notes at once. But mammals developed vocal cords right in the center of the larynx, making use of the cartilaginous structure and the passing air to send one fluttering note out to the world. The tension with which these cords are held, the air velocity going past them, and the shapes used in the upper amplifying structures of the throat, mouth, and sinuses all affect the final sound, giving mammals their expressive range, such as it is.

So why are humans the only animals to fully develop these capacities, into music and speech? Virtually all other mammals have communicative abilities, often quite rich, like purrs, barks, squealing, mewling, rasping, and the like. But none of this approaches the facility we have evolved. A lot can be laid to the evolution of our brains and cognitive capacities, but some involves evolution of the larynx itself. A recent paper discussed its trajectory in the primate lineages.

Comparison of representative larynxes, showing a typical size difference.

The authors accumulate a large database of larynx anatomy and function- sizes, frequency patterns, evolutionary divergence times- and use this to show that on average, the primate lineage has larger larynxes than other mammals, and has experienced faster larynx evolution, to a larger spread of sizes and functions, than other mammals. The largest larynxes of all belong to black howler monkeys, who are notorious for their loudness. "The call can be heard up to 5 km away." They also claim that among primates, larynx size is less closely related to body size than it is among other mammals, suggesting again that there has been more direct selection for larynx characteristics in this lineage.

Primates (blue) show greater larynx size and variability than carnivores.

This all indicates that in the runup to human evolution, there had already been a lot of evolutionary development of the larynx among primates, probably due to their social complexity and tendency to live in dense forested areas where communication is difficult by other means. Yet do primates have vocal languages, in any respect? No- their vocalizations are hardly more complex than those of birds. Their increased evolutionary flexibility at most laid the physical groundwork for the rapid development of human speech, which included a permanently descended larynx and more importantly, cognitive and motor changes to enable fine voluntary control in line with a more powerful conceptual apparatus.

  • Is there a liar in your life?
  • Against climate change, we need much more... action.
  • 2020, auto-tuned.
  • Turns out, there was a war on Christmas.

Saturday, November 21, 2020

Stem Cell Asymmetry Originates at the Centrosome

At least sometimes.. how replication of the centrosome creates asymmetry of cell division as a whole, and what makes that asymmetry happen.

Cell fates can hinge on very simple distinctions. The orientation of dividing cells in a tight compartment may force one out of the cozy home, and into a different environment, which induces differentiation. Stem cells, those notorious objects of awe and research, are progenitor cells that stay undifferentiated themselves, but divide to produce progeny that differentiate into one, or many, different cell types. At the root of this capacity of stem cells is some kind of asymmetric cell division, whether enforced physically by an environmental micro-niche, or internally by molecular means. And a dominant way for cells to have intrinsic asymmetry is for their spindle apparatus to lead the way. Our previous overview of the centrosome (or spindle pole body) described its physical structure and ability to organize the microtubules of the cell, particularly during cell division. A recent paper discussed how the centrosome itself divides and originates a basic asymmetry of all eukaryotic cells.

The centrosome is a complicated structure that replicates in tandem with the rest of the cell cycle. Centrosomes do not divide in the middle or by fission. Rather, the daughter develops off to the side of the mother. In yeast, centrosomes (there called spindle pole bodies) are embedded in the nuclear envelope, and the mother develops a short extension, called a bridge or half-bridge, off its side, off of which the daughter develops, also anchored in the nuclear envelope. Though there are hundreds of associated proteins, the key components in this story are NUD1, which forms part of the core of the centrosome, and SPC72, which binds to NUD1 and also binds to the microtubules (made of the protein tubulin) which it is the job of the centrosome to organize. In yeast cells, which divide into very distinct mother and daughter (bud) cells, the mother centrosome leads the way into division and always goes into the daughter cell, while the daughter centrosome stays in the mother cell.

The deduced structure of some members of the centrosome/spindle pole in yeast cells. Everything below the nuclear envelope is inside the nucleus, while everything above is in the cytoplasm. The proteins most significant in this study are gamma tubulin (yTC), Spc72, and Nud1. OP stands for outer plaque, CP central plaque, IP inner plaque, as these structures look like separate dense layers in electron microscopy. To the right side of the central plaque is a dark bit called the half-bridge, on the other side of which the daughter centrosome develops, during cell division.

The authors asked why this difference exists- why do mother centrosomes act first to go to the outside of the cell where the bud forms? Is it simply a matter of immaturity, that the daughter centrosome is not complete at this point, (and if so, why), or is there more specific regulation involved that enforces this behavior? They used an approach in yeast cells combining advanced fluorescence microscopy with genetics to find the connection between the cell cycle and the progressive development of the daughter centrosome.

Yeast cells with three centrosome proteins, each engineered as a fusion to a fluorescent protein of a different color, were used to show the relative positions of Kar1 (green), which lies in the half-bridge between the mother and daughter centrosomes. Three successive cell cycle states are shown. Spc42 (blue) lies at the core of the centrosome, while gamma tubulin (red; Tub4, or alternately Spc72, which lies just inside Tub4) is at the outside and mediates between the centrosome and the tubulin-containing microtubules. Note that the addition of gamma tubulin is a late event, after Spc42 appears in the daughter. The bottom series is oriented essentially upside down vs the top two series.

What they find, looking at cells going through all stages of cell division, is that the assembly of the daughter centrosome is stepwise, with inner components added before outer ones. Particularly, the final structural elements of Spc72 and gamma tubulin wait till the start of anaphase, when the cells are just about to divide, to be added to the daughter centrosome. The authors then bring in key cell cycle mutants to show that the central controller of the cell cycle, the cyclin-dependent kinase CDK, is what is causing the hold-up. This kinase (a protein that phosphorylates other proteins, as a means of regulation) orchestrates much of the yeast cell cycle, as it does in all eukaryotic cells, subject to a blizzard of other regulatory influences. They observed that temperature-sensitive mutations of CDK (versions of the protein that shut off at elevated temperature) would stop this spindle assembly process, suggesting that some component was being phosphorylated by CDK at the key time of the cell cycle. Then, after systematically mutating possible CDK target phosphorylation sites on likely proteins of the centrosome, they came up with Nud1 as the probable target of CDK control. This makes complete sense, since Spc72 assembles on top of Nud1 in the structure, as diagrammed at top. They go on to show the direct phosphorylation of Nud1 by CDK, as well as direct binding between Nud1 and Spc72.

Final model from the article shows how the mechanics they revealed relate to the cell cycle. A daughter centrosome slowly develops off the side of the mother centrosome, but its "licensing" by CDK to nucleate microtubules (black rods anchored by the blue cones) only comes later on in M phase, just as the final steps of division need to take place. This gives the mother centrosome the jump, allowing it to migrate to the bud (daughter cell) and nucleate the microtubules needed to drive half of the replicated DNA/chromosomes into the bud. GammaTC is nucleating gamma tubulin, "P" stands for activating phosphorylation sites on Nud1.

This is a nice example of the power of a model system like yeast, whose rich set of mutants, ease of genetic and physical manipulation, complete genome sequence and associated bioinformatics, and many other technologies make it a gold mine of basic research. The only hard part was the microscopy, since yeast cells are substantially smaller than human cells, making that part of the study a tour de force.

Saturday, November 14, 2020

Are Attention and Consciousness the Same?

Not really, though what consciousness is in physical terms remains obscure.

A little like fusion power, the quest for a physical explanation of consciousness has been frustratingly unrewarding. The definition of consciousness is fraught to start with, and since it is by all reasonable hypotheses a chemical process well-hidden in the murky, messy, and mysterious processes of the brain, it is also maddeningly fraught in every technical sense. A couple of recent papers provide some views of just how far away the prospect of a solution is, based on analyses of the visual system, one in humans, the other in monkeys.

Vision provides both the most vivid form of consciousness, and a particularly well-analyzed system of neural processing, from retinal input through lower level computation at the back of the brain and onwards through two visual "streams" of processing to conscious perception (the ventral stream in the inferior temporal lobe) and action-oriented processing (in the posterior parietal lobe). It is at the top of this hierarchy that things get a bit vague. Consciousness has not yet been isolated, and how it could be remains unclear. Is attention the same as consciousness, or different? How can related activities like unconscious high-level vision processing, conscious reporting, pressing buttons, etc. be separated from pure consciousness? They all happen in the brain, after all. Or do those activities compose consciousness?

A few landmarks in the streams of visual processing. V1 is the first level of visual processing, after pre-processing by the retina and lateral geniculate nucleus. Processing then divides into two streams: the ventral stream ends up in the inferotemporal lobe, where consciousness and memory seem to be fed, while the dorsal stream to the inferior parietal lobule and nearby areas feeds action guidance in the vicinity of the motor cortex.

In the first paper, the authors jammed a matrix of electrodes into the brains of macaques, near the "face cells" of the inferotemporal cortex of the ventral stream. The macaques were presented with a classic binocular rivalry test, with a face shown to one eye, and something else shown to the other eye. Nothing was changed on the screen, nor the head orientation of the macaque, but their conscious perception alternated (as would ours) between one image and the other. It is thought to be a clever way to isolate perceptual distinctions from lower-level visual processing, which stays largely constant- each eye processes each scene fully, before higher levels make the choice of which one to focus on consciously. (But see here). It has been thought that by the time processing reaches the very high level of the face cells, they only activate when a face is being consciously perceived. But that was not the case here. The authors find that these cells, when tested more densely than has been possible before, show activity corresponding to both images. The face could be read out from these neurons using one linear filter, while a large fraction of the cells (1/4 to 1/3) could be read by another filter to represent the non-face image. So by this work, this level of visual processing in the inferotemporal cortex is biased by conscious perception to concentrate on the conscious image, but that is not exclusive- the cells are not entirely representative of consciousness. This suggests that whatever consciousness is takes place somewhere else, or at a selective ensemble level of particular oscillations or other spike coding schemes.

"We trained a linear decoder to distinguish between trial types (A,B) and (A,C). Remarkably, the decoding accuracy for distinguishing the two trial types was 74%. For comparison, the decoding accuracy for distinguishing (A, B) versus (A, C) from the same cell population was 88%. Thus, while the conscious percept can be decoded better than the suppressed stimulus, face cells do encode significant information about the latter. ... This finding challenges the widely-held notion that in IT cortex almost all neurons respond only to the consciously perceived stimulus."
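The "linear decoder" in the quote is simply a linear readout trained on population spike counts. As an illustrative sketch only (the data below are simulated, and the least-squares readout is my stand-in for whatever classifier the authors actually used), this shows how two conditions can both be decoded from the overlapping activity of the same cell population:

```python
import numpy as np
from numpy.random import default_rng

rng = default_rng(0)

# Hypothetical toy setup: 100 "face cells", 200 trials per condition.
# Condition 0: face consciously perceived; condition 1: face suppressed.
# Each neuron's mean firing rate shifts slightly between conditions.
n_neurons, n_trials = 100, 200
means = rng.uniform(5, 15, n_neurons)   # baseline firing rates (Hz)
shift = rng.normal(0, 1.5, n_neurons)   # condition-dependent rate shift

X0 = rng.poisson(means, (n_trials, n_neurons))
X1 = rng.poisson(np.clip(means + shift, 0.1, None), (n_trials, n_neurons))
X = np.vstack([X0, X1]).astype(float)
y = np.r_[np.zeros(n_trials), np.ones(n_trials)]

# A "linear decoder" is just a linear classifier; least-squares
# regression onto the labels gives a serviceable one without sklearn.
X_c = X - X.mean(axis=0)
w = np.linalg.lstsq(X_c, y - y.mean(), rcond=None)[0]
pred = (X_c @ w + y.mean()) > 0.5
accuracy = (pred == y).mean()           # in-sample accuracy, for illustration
print(f"decoding accuracy: {accuracy:.2f}")
```

Because each neuron carries only a weak signal, no single cell distinguishes the conditions, yet a weighted sum across the population does- the same logic by which both the perceived and the suppressed stimulus could be read out from the same recorded cells.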

 

The second paper used EEG on human subjects to test their visual and perceptual responses to disappearing images and filled-in zones. We each have an area of the visual field where we are physically blind (the blind spot, where the optic nerve exits the retina), and where higher levels of the visual system "fill in" parts of the visual scene to make our conscious perception seem smooth and continuous. The experimenters came up with a forbiddingly complex visual presentation system of calibrated dots and high-frequency snow whose purpose was to pit visual attention against conscious perception. When attention is directed to the blind spot, that is precisely when the absence of an image there becomes apparent. This allowed the experimenters to ask whether a typical neural signature of high-level visual processing (the steady-state visually evoked potential, or SSVEP) reflects conscious perception, as believed, or attention or other phenomena. They presented and removed image features all over the scene, including blind spot areas. What they found was that the SSVEP signal was heightened as attention was directed to the invisible areas- exactly the opposite of what would be expected if the signal were tied to actual conscious visual perception. This suggested that this particular signal is not a neural correlate of consciousness, but one of attention, and perhaps surprise / contrast, instead.

So where are the elusive neural correlates of consciousness? Papers like these refine what and where it might not be. It seems increasingly unlikely that "where" is even the right question to ask. Consciousness is graded, episodic, extinguishable in sleep, heightened and lowered by various experiences and drugs. So it seems more like a dynamic but persistent pattern of activity than a locus, let alone an homunculus. And what exactly that activity is... a Nobel prize surely awaits someone on that quest.


  • Unions are not a good thing ... sometimes.
  • Just another debt con.
  • Incompetent hacks and bullies. An administration ends in character.
  • Covid and the superspreader event.
  • Outgoing Secretary of State is also a deluded and pathetic loser.
  • But others are getting on board.
  • Bill Mitchell on social capital, third-way-ism, "empowerment", dogs, bones, etc.
  • Chart of the week: just how divided can we be?

Saturday, November 7, 2020

Why we Have Sex

Eukaryotes had to raise their game in the sex department.

Sex is very costly. On a biological level, not only does one have to put up with all the searching, displaying, courting, sharing, commitment, etc., but one gives up the ability to have children alone, by simple division or parthenogenesis. Sex seems to be a fundamental development in the earliest stages of eukaryotic evolution, along with so many other innovative features that set us apart from bacteria. But sex is one of the oddest innovations, and demands its own explanation.

Sex and other forms of mating confer one enormous genetic benefit, which is to allow good and bad mutations to be mixed up, separated, and redistributed, so that offspring with a high proportion of bad genes die off, while offspring with a better collection can flourish. Organisms with no form of sex (that is, clonal organisms) cannot get rid of bad mutations. Whatever mutations they have are passed to their offspring, and since most mutations are bad, not good, this leads to a downward spiral of genetic decline, called Muller's ratchet.
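Muller's ratchet is easy to watch in a toy simulation. The sketch below (illustrative parameters of my own choosing, not from any paper) tracks each clonal genome only by its count of deleterious mutations; without recombination, the least-loaded class of genomes is eventually lost to drift and can never be rebuilt, so the minimum mutation count ratchets upward:

```python
import random

random.seed(1)

# Toy Muller's ratchet: a clonal population, each genome tracked only by
# its count of deleterious mutations. Each generation: mutate, then
# select (fitness falls with mutation count). With no recombination, the
# least-loaded class can be lost by drift and never recovered.
N, U, s, generations = 200, 0.5, 0.02, 300  # pop size, mutation prob, cost

pop = [0] * N
least_loaded_over_time = []
for _ in range(generations):
    # mutation: each genome gains a mutation with probability U
    pop = [k + (1 if random.random() < U else 0) for k in pop]
    # selection: sample parents weighted by fitness (1 - s)^k
    weights = [(1 - s) ** k for k in pop]
    pop = random.choices(pop, weights=weights, k=N)
    least_loaded_over_time.append(min(pop))

print("best class at start:", least_loaded_over_time[0])
print("best class at end:  ", least_loaded_over_time[-1])
```

Even though selection constantly favors the least-mutated genomes, the best class in the final population carries far more mutations than the best class at the start- the ratchet has clicked many times, with no way back.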

It turns out that non-eukaryotes like bacteria do mate and recombine their genomes, and thus escape this fate. But their process is not nearly as organized or comprehensive as the whole-genome re-shuffling and mating that eukaryotes practice. What bacteria do is called lateral gene transfer, (LGT), because it typically involves short regions of their genomes, (a few genes), and can accept DNA from any sort of partner- they do not have to be the same species, though specific surface structures can promote mating within a species. Thus bacteria have frequently picked up genes from other species- the major limitation comes when the DNA arrives in the recipient cell, and needs to find a homologous region of the recipient's DNA. If it is too dissimilar, then no genetic recombination happens. (An exception is for small autonomous DNA elements like plasmids, which can be transferred wholesale without needing an homologous target in the recipient's genome. Antibiotic resistance genes are frequently passed around this way, for emergency selective adaptation!) This practice has a built-in virtue, in that the most populous bacteria locally will be contributing most of the donor DNA, so if a recipient bacterium wants to adapt to local conditions, it could do worse than to try out some local DNA. On the other hand, there is also no going back. Once a foreign piece of DNA replaces the recipient's copy, there are no other copies to return to. If that DNA is bad, death ensues.

Two bacteria, connected by a sex pilus, which can conduct small amounts of DNA. This method is generally used to transfer autonomous genetic elements like plasmids, whereas environmental DNA is typically taken up during stress.

A recent paper modeled why this haphazard process was so thoroughly transformed by eukaryotes into the far more involved process we know and love. The authors argue that fundamentally it was a question of genome size- that as eukaryotes transcended the energetic and size constraints of bacteria, their genomes grew as well, to a size that made the catch-as-catch-can mating strategy unable to keep up with the mutation rate. Greater size had another effect, of making populations smaller. Even with our modern billions, we are nothing in population terms compared to any respectable bacterium. This means that the value of positive mutations is higher, and the cost of negative mutations more severe, since each one counts for more of the whole population. Finding a way to reshuffle genes to preserve the best and discard the worst becomes imperative as populations get smaller.

Sex does several related things. First, the genes of the two parental genomes recombine during meiosis, at a rate of a few crossover events per chromosome, thereby shuffling the homologous chromosomes that were received from each parent. Second, each chromosome pair assorts randomly at meiosis, again shuffling the parental genomes. Lastly, mating combines the genomes of two different partners (though inbreeding happens as well). All this results in a moderately thorough mixing of the genetic material at each generation. The resulting offspring are then a sampling of the two parental (and four grand-parental) genomes, succeeding if they get mostly the better genes, and failing (frequently dying in utero) if they do not.
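The first two shuffling steps can be sketched in a few lines- crossover within each chromosome pair, then an independent random pick of one recombinant per pair. The two-chromosome genomes and allele labels here are made up for illustration:

```python
import random

random.seed(2)

# Toy meiosis for one diploid parent: each chromosome pair undergoes a
# few crossovers, and one recombinant strand per pair goes into the
# gamete. Chromosomes are strings of single-letter allele labels.
def make_gamete(maternal, paternal, crossovers=2):
    gamete = []
    for m, p in zip(maternal, paternal):   # one (m, p) homolog pair each
        cuts = sorted(random.sample(range(1, len(m)), crossovers))
        strands = [list(m), list(p)]
        current = random.randrange(2)      # independent assortment per pair
        out, prev = [], 0
        for cut in cuts + [len(m)]:
            out.extend(strands[current][prev:cut])
            current = 1 - current          # switch strands at each crossover
            prev = cut
        gamete.append("".join(out))
    return gamete

mom = ["AAAAAAAAAA", "BBBBBBBBBB"]   # alleles inherited from one grandparent
dad = ["aaaaaaaaaa", "bbbbbbbbbb"]   # alleles inherited from the other
gamete = make_gamete(mom, dad)
print(gamete)
```

Each resulting chromosome is a patchwork of the two grand-parental copies, and each pair was assorted independently- exactly the mixing that lets selection see new combinations of alleles every generation.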

Additionally, eukaryotic sex gave rise to the diploid organism, with two copies of each gene, rather than the one copy that bacteria have. While some eukaryotes spend most of their lives in the haploid phase, and only briefly go through a diploid mated state, (yeasts are a good example of this lifestyle), most spend the bulk of their time as diploids, generating haploid gametes for an extremely brief haploid existence. The diploid provides the advantage of being able to ignore many deleterious genes, being a "carrier" for all those bad (recessive) mutations that are covered by a good allele. Mutations do not need to be eliminated immediately, taking a substantial load off the mating system to bring in replacements. (Indeed, some bacteria respond to stress by increasing promiscuity, taking in more DNA in case a genetic correction is needed, in addition to increasing their internal mutation rate.) A large fund of defective alleles can even become grist for evolutionary innovation. Still, for the species to persist, truly bad alleles need to be culled eventually- at a rate faster than that with which they appear.

The authors do a series of simulations with different genome sizes, mutation rates, lengths of transferred DNA, and rates of lateral gene transfer. Unfortunately, their figures are not very informative, but the logic is clear enough. The larger the genome, the higher the mutation load, assuming a constant mutation rate. But LGT is a sporadic process, so correcting those mutations takes not just a linearly higher rate of LGT, but an exponentially higher one- a rate that remains insufficient to address all the mutations, yet is high enough to be impractical and to call into question what it means to be an individual of such a species. In their models, only when the length of LGT segments is a fair fraction of the whole genome size (20%), and the rate quite high, like 10% of all individuals experiencing LGT once in their lifetimes, do organisms have a chance of escaping the ratchet of deleterious mutations.
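The flavor of such a simulation can be sketched as follows. To be clear, this is my own drastic simplification, not the authors' actual model: genomes are bit vectors of deleterious mutations, and LGT occasionally overwrites a random segment with the corresponding segment from a random donor, allowing selection to reassemble low-load genotypes:

```python
import random

random.seed(3)

# Toy LGT model (illustrative only): each genome is a list of 0/1 loci,
# where 1 marks a deleterious mutation. Per generation: mutation, then
# optional LGT (replace a random segment of length L with a donor's
# copy), then fitness-weighted reproduction.
def generation(pop, U, s, p_lgt, L):
    N, g = len(pop), len(pop[0])
    for genome in pop:                     # mutation
        if random.random() < U:
            genome[random.randrange(g)] = 1
    for genome in pop:                     # lateral gene transfer
        if random.random() < p_lgt:
            donor = random.choice(pop)
            start = random.randrange(g - L + 1)
            genome[start:start + L] = donor[start:start + L]
    weights = [(1 - s) ** sum(genome) for genome in pop]
    return [list(parent) for parent in random.choices(pop, weights=weights, k=N)]

def mean_load(N=100, g=50, U=0.8, s=0.02, p_lgt=0.0, L=10, gens=200):
    pop = [[0] * g for _ in range(N)]
    for _ in range(gens):
        pop = generation(pop, U, s, p_lgt, L)
    return sum(map(sum, pop)) / N          # mean mutations per genome

load_no = mean_load(p_lgt=0.0)
load_lgt = mean_load(p_lgt=0.1, L=10)      # 10% of individuals, 20% of genome
print("mean load, no LGT:  ", load_no)
print("mean load, with LGT:", load_lgt)
```

Scaling this up is the crux of the paper's argument: as the genome length g grows while the transferred segment stays a few genes long, the LGT rate needed to keep the load in check becomes unrealistically high for real bacteria.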

" We considered a recombination length L = 0.2g [genome size], which is equivalent to 500 genes for a species with genome size of 2,500 genes – two orders of magnitude above the average estimated eDNA length in extant bacteria (Croucher et al., 2012). Recombination events of this magnitude are unknown among prokaryotes, possibly because of physical constraints on eDNA [environmental DNA] acquisition. ... In short, we show that LGT as actually practised by bacteria cannot prevent the degeneration of larger genomes. ... We suggest that systematic recombination across the entire bacterial genomes was a necessary development to preserve the integrity of the larger genomes that arose with the emergence of eukaryotes, giving a compelling explanation for the origin of meiotic sex."

But the authors argue that this scale of DNA length and frequency of uptake is quite unrealistic for actual bacteria. Bacterial LGT is constrained by the available DNA in the environment, and typically takes up only a few genes' worth of DNA. So as far as we know, this is not a process that would or could have scaled up to genomes of ten- or one-hundred-fold larger size. Unfortunately, this is pretty much where the authors leave the work, without analyzing how meiotic recombination and re-assortment perform, in these same terms, at forestalling the accumulation of deleterious mutations. They promise such insights in future work! But it is obvious that eukaryotic sex is in these terms an entirely different affair from bacterial LGT. Quite apart from featuring exchange and recombination across the entire length of the expanded genomes, it also ensures that only viable partners engage in genetic exchange, and simultaneously insulates them from any damage to their own genomes, instead placing the risk on their (presumably profuse) offspring. It buffers the effect of mutations by establishing a diploid state, and most importantly shuffles loci all over these recombined genomes, so that deleterious mutations can be concentrated and eliminated in some offspring while others benefit from more fortunate combinations.