Saturday, December 26, 2020

Domineering Freeloader Decides Communism is the Answer

General, executioner, economic development czar, and head of the national bank of the Cuban revolution: the biography of Che Guevara, by Jon Lee Anderson.

Ernesto Guevara began life as a reckless, adventurous, and very intelligent kid. His first inspiration was medicine, indeed medical research on leprosy and other diseases common in South America, and he got a medical degree. But toiling away on small problems in the lab didn't fit his temperament, and he decided to bum around South America instead, living off the generosity of others, running up debts, fast-talking his way out of jams, and building up an implacable hatred of the US. A common thread through his travels from Argentina through Chile, Bolivia, Peru, and points north was the overwhelming influence of the US, usually corrupting the local political system for the benefit of mining interests in the south, and for the benefit of agricultural interests in Central America. Eventually he got caught up in the liberal quasi-socialist reforms of Jacobo Arbenz of Guatemala, later fleeing to Mexico after a US-supported right wing coup.

It was there that he fell under the spell of Fidel Castro, eventually becoming, despite his evident non-Cuban origins, Castro's right-hand man at the head of the communist revolution in Cuba. Not that it started as communist. No, Fidel was a master politician, and started as an anti-communist, currying favor with the Cuban population and the US. But both his brother Raul and Che were dedicated communists by that point, in thrall to Stalin and Mao, and their influence, combined with the logic of perpetual, one-party / one-person power, gradually brought Fidel around to revealing, after the revolution had already gained power and Che had executed resistant elements of the army and police, their new (red) colors. Then came feelers to Moscow and the rest of the eastern bloc, the Cuban missile crisis, and that is pretty much where things still stand today.

Che and Fidel, when times were good.

Anderson's biography is definitive- fully researched, well written, and judiciously argued. He portrays Che as a seeker- a youth on the prowl for good times, but also for a purpose, which he ultimately found in full-on socialism. He found himself most fully during the early fight in the hills of Cuba- a trial by privation, exhaustion, and blood- where he put revolutionary principles to work organizing his men, making alliances with the local peasants, and executing deserters and traitors. Che's socialism was a pan-Latin American Bolivarian ideal, where all the countries of Central and South America would band together- possibly even unite- under state socialism as inspired by the peasant revolutions of Russia and especially China. It was both austere and visionary- a whole continent escaping from under the yoke of the great oppressor- the US.

It is clearly a religious conversion- the epiphany of a wholly captivating ideal. Che became Castro's second in command by his great intellectual and leadership talents, but even more by his absolute dedication to the cause- the cause of liberation from oppression. Unfortunately, after cleansing the army and securing Fidel's rule, Che was assigned to make the economy run, and here he came up against the immovable obstacle- reality. Socialism is healthy in small doses, but communism has not, in Cuba as elsewhere, been able to run an economy. Motivation to work needs to be supplied somehow, and if it is not by the lash of money and its lack, then terror will have to do the job, and poorly at that. Che did what he could, but the system he had fought so hard to establish was impossible to operate, and his thoughts turned back to his first love- revolution.

It is here that we see most clearly the religious nature of Che's motivations and of communism generally. If he were a rational researcher in the mold of medical or other research, he would have sat back and realized that communism was not working in economic and social terms, let alone in terms of personal individual liberation. And then he would have adapted intellectually and tried to figure out a middle way to preserve Cuba's independence while running a realistic economic system. Possibly even elections. Unfortunately, by this time, Cuba had settled into a dependent relationship with Russia, which bought its sugar and gave aid, preventing either economic or political independence. Cuba is today still relatively poor, in the middle to lower ranks of GDP. Not as poor as Haiti, however, (or North Korea), and therein lies a message, which is that the Cuban revolution remains relatively humane, despite its many debilities and lack of political, social, and economic freedom. The collapse of the Soviet Union shocked the communist government into slight openings for private business and a heavy dose of tourism from Europe, which sustain it today.

But instead of recognizing the errors and failures of his dream, Che fomented more revolutionary cells all over Latin America and Africa, paying special attention to one sent to infiltrate his native Argentina; he eventually joined the guerrilla campaign in Bolivia himself, and died serving it in 1967. One cannot fault his dedication or consistency, but one can question the intellect that took him and so many other idealistic freedom fighters over the twentieth century into communism only to author monumental disasters of political and economic mismanagement. To think that dictatorship would resolve the class struggle, and produce washing machines and military might ... it had to be a religious movement, which unfortunately, once in power, became incredibly difficult to dislodge.

The motive force obviously was the US. We, through our callous and greedy treatment of our backyard over the nineteenth and twentieth centuries, and our betrayal of the paternalistic impulse of the Monroe Doctrine, not to mention similar failures of principle in the Middle East and Vietnam, motivated the intense anti-Yankee hatred of idealistic men such as Che Guevara, and the peasant resistance that, at least in Cuba, gave him and Castro support. It is a fascinating history of what the US has wrought, and how our failure to hold to our own ideals has come back to haunt us over and over again.

  • It has been abusive, unnecessary, toxic, and we will need some time to work it out of our system.

Saturday, December 19, 2020

Fair and Balanced

Momentary virality is not the best way to construct and distribute news. Nor is fear-based button-pushing. But what can we do about it?

Our political system almost ran off the rails over the last few months, and the ultimate cause was the media, which on the right-wing side has shaped an alternate reality of breathtaking extremism and divergence from reality. Outlets and voices like Rush Limbaugh, FOX news, NewsMax, and Sinclair Broadcasting have fundamentally reshaped our political discourse, from a place fifty years ago where facts and problems were generally agreed upon, and policy discussions founded on those facts conducted- if not in a civil manner, then in a functional manner in legislative bodies like the US Senate. Now Limbaugh is broaching secession.

Rush, in his lair.

Even the Reagan era, conservative as it was, hewed to basic democratic principles and a centrist media environment. But then came Bill Clinton, and in response, Newt Gingrich, blazing a scorched-earth trail through the House of Representatives, followed soon by the establishment of FOX news as a relentless and shameless propaganda organ for the right. Now, even FOX is reviled by true believers as not extreme enough, as the end of the Trumpian epoch comes shudderingly into view. Which is worse- the internet melee of Russian disinformation and viral Q-conspiracies, or the regimented lying brought to us by corporate right-wing media? It is hard to tell sometimes, and both have been disastrous, but I think the latter has been substantially worse, forming a long-running environment of cultivated lies, normalized idiocy, and emotional trauma. Why anyone watches it or listens to it is beyond me personally, but clearly many people like to have their buttons pushed and participate in a crudely plausible vision of a black, white, and bloviatingly Christian (or un-Christian, depending on your theological ethics) world. 

Government censorship is probably not going to happen in this case. Even if we changed our legal system to allow it, the right wing would manage to suborn those regulatory bodies, as they have the Supreme Court, Senate, and the White House. These media outlets don't breathe oxygen, however; they breathe money- money that comes from advertisers who appreciate their ability to reach a uniquely gullible demographic. But those advertisers are not political. They are fomenting our divisions and destroying our political system for purely transactional reasons. It is, we can note in passing, another classic and ironic breakdown of the free market. 

The rational response, then, is to boycott the sponsors in systematic fashion, publicizing who advertises with which outlets, for how much. Several such sites and petitions already exist. But it is clear that they have not gained enough traction to have much effect. Only when the most egregious and appalling violations of decency occur does any attention rain on the channels and scare away sponsors. The tracking, petition, and boycotting system needs better centralization. Perhaps, like the eco-friendly food labels, we need truth-friendly labeling of companies at the point of consumption, marking those (MyPillow! SmileDirect! Nutrisystem! Geico!) who are pouring money into these cesspools of psychological manipulation and political destruction.

Sure, this kind of accountability would heighten political divisions, causing a polarization of the business world, which has (supposedly) tried to keep itself out of the fray, and invite counter-boycotts of, say, NPR or MSNBC. But business has not been unbiased at all; rather, through every organ, from chambers of commerce to K-street lobbies and Ayn Randian talk shops, it has pushed the right-wing agenda in tandem with the propaganda organs that broadcast relentless pro-business and anti-public interest messages. It is high time to hold the whole ecosystem to account for the state of our country, directly and financially.


  • Should federal office holders be held to their oaths?
  • The business of the kidney dialysis business.
  • Apparently, Trump supporters put their money where their minds were.

Saturday, December 12, 2020

Where has Inflation Gone?

Inflation stays low amid vast public deficits and good employment and economic growth trends. What happened?

Those of us who grew up in the 1970's are permanently scarred by its economic malaise- inflation. As earlier generations were affected by their reigning economic conditions, from depression to prosperous expansion, we had a hard time shaking our syndrome, and keep looking for inflation around every corner.

But inflation is nowhere in sight, and no matter how much the government spends, it doesn't seem to matter. Prices simply haven't budged, past a very staid 1-3% inflation rate, for the last decade and more. It has put the lie to a generation of Republican scare-mongering, and is overall quite perplexing. Granted, the 2008 recession was extremely severe, was insufficiently addressed by fiscal spending, and is still affecting us in terms of lost economic capacity and lost employment. The current pandemic has been even more disastrous for employment and small business conditions. Yet, I think the question still stands- why are we now spending so much time fighting deflation, where we used to fight inflation?

I think the usual answers of global trade, declining worker power and unionization, and automation are significant, but there is one more that should be added, which is income inequality. Inflation is measured not in yachts and space ships, but in normal goods like groceries and appliances. The economy of these goods is largely ruled by lower income segments, and reflects pretty directly their income. At the higher end of the income and wealth scale, people tend to save rather than spend, which is why they are wealthy to start with. And what the rich do spend money on tends not to hit the basic inflation metrics very hard, like exclusive real estate, vanity philanthropy, and vulture capitalism. Their money does not, in any proportionate way, enter into the inflation-causing real economy.

State of inequality in the US, which has been growing much further during the Trump administration and the pandemic.

So one can imagine that, instead of looking for causality from inflation to income inequality- which may be a fool's errand- it actually works the other way around. High inequality is a societal condition where wages stay persistently low, at a subsistence level, (or even below), which saps both the cultural capital and the inflation-causing capacity of the working class and poorer sectors. At the same time, it funnels vast amounts of wealth to the already wealthy- money that mostly just gets squirreled away into the stock market, foreign bank accounts, government bonds, and other investments that are more or less unproductive, especially of inflation. It also causes a race for yield, which we see resulting in very low market interest rates- another result of inequality.
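The marginal-propensity-to-consume logic behind this argument can be put in numbers. Here is a toy sketch, not from the post and with entirely made-up spending fractions, of how the same national income generates less consumption demand (and so less demand-pull inflation pressure) when more of it flows to high savers:

```python
# Toy illustration (assumed numbers): shifting income toward groups
# that save most of it damps consumption demand in the real economy.

def consumption_demand(income_shares, total_income=100.0):
    """Aggregate consumption when each group spends a fixed fraction
    (its marginal propensity to consume) of its income share."""
    mpc = {"working": 0.95, "middle": 0.80, "wealthy": 0.40}  # assumptions
    return sum(total_income * share * mpc[group]
               for group, share in income_shares.items())

# Same total income, two different distributions:
equal   = {"working": 0.40, "middle": 0.40, "wealthy": 0.20}
unequal = {"working": 0.25, "middle": 0.35, "wealthy": 0.40}

print(consumption_demand(equal))    # more spending hitting normal goods
print(consumption_demand(unequal))  # more income parked in savings
```

The exact fractions are invented, but the direction of the effect is the point: with an identical total income, the more unequal distribution produces measurably less spending in the goods whose prices the inflation indexes actually track.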

Sunday, December 6, 2020

Computer Science Meets Neurobiology in the Hippocampus

Review of Whittington, et al. - a theoretical paper on the generalized mapping and learning capabilities of the entorhinal/hippocampal complex that separates memories from a graph-mapping grid.

These are exciting times in neurobiology, as a grand convergence is in the offing with computational artificial intelligence. AI has been gaining powers at a rapid clip, in large part due to the technological evolution of neural networks. But computational theory has also been advancing, on questions of how concepts and relations can be gleaned from data- very basic questions of interest to both data scientists and neuroscientists. On the other hand, neurobiology has benefited from technical advancements as well, if far more modestly, and from the relentless accumulation of experimental and clinical observations. Which is to say, normal science. 

One of the hottest areas of neuroscience has been the hippocampus and the closely connected entorhinal cortex, seat of at least recent memory and of navigation maps and other relational knowledge. A recent paper extends this role to a general theory of relational computation in the brain. The basic ingredients of thought are objects and relations. Computer scientists typically represent these as a graph, where the objects are nodes, and the relations are the connecting lines, or edges. Nodes can have a rich set of descriptors (or relations to property nodes that express these descriptions). A key element to get all this off the ground is the ability to chunk, (or abstract, or generalize, or factorize) observations into discrete entities, which then serve as the objects of the relational graph. The ability to say that what you are seeing, in its whirling and colorful reality, is a dog ... is a very significant opening step to conceptualization, and the manipulation of those concepts in useful ways, such as understanding past events and predicting future ones.
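The nodes-and-edges representation described above can be sketched in a few lines. This is a minimal, hypothetical data structure (all names invented for illustration), not anything from the paper:

```python
# Minimal sketch of the relational-graph idea: chunked observations
# become nodes, and relations become labeled edges between them.

class RelationalGraph:
    def __init__(self):
        self.edges = {}  # (subject, relation) -> set of related objects

    def add(self, subject, relation, obj):
        self.edges.setdefault((subject, relation), set()).add(obj)

    def query(self, subject, relation):
        return self.edges.get((subject, relation), set())

g = RelationalGraph()
g.add("rex", "is_a", "dog")            # the chunking step: "that blur is a dog"
g.add("dog", "is_a", "mammal")         # property/class relations
g.add("rex", "north_of", "old_oak")    # spatial relations use the same machinery

print(g.query("rex", "is_a"))          # {'dog'}
```

The interesting part, for both data scientists and neuroscientists, is that spatial relations, class membership, and property attribution all fit the same subject-relation-object scheme, which is exactly the generality the paper is after.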

Gross anatomy of the hippocampus and associated entorhinal cortex, which function together in conceptual binding and memory.

A particular function of the entorhinal/hippocampal complex is spatial navigation. Researchers have found place cells, grid cells, and boundary cells (describing when these cells fire) as clear elements of spatial consciousness, which even replay in dreams as the rats re-run their daytime activities. It is evident that these cells are part of an abstraction mechanism that dissociates particular aspects of conceptualized sensory processing from the total scene and puts them back together again in useful ways, i.e. as various maps.

This paper is written at a rather abstruse level, so there is little that I can say about it in detail. Yet it and the field it contributes to are so extremely interesting that some extra effort is warranted. By the time the hippocampus is reached, visual (and other sensory) data has already been processed to the conceptual stage. Dogs have been identified, landmarks noted, people recognized. Memories are composed of fully conceptualized, if also sensorily colored, conceptual chunks. The basic idea the authors present is that key areas of the entorhinal cortex provide general and modular mapping services that allow the entorhinal/hippocampal complex to deal with all kinds of relational information and memories, not just physical navigation. Social relations, for example, are mapped similarly.

It is important to note tangentially that conceptualization is an emergent process in the brain, not dictated by pre-existing lists of entities or god-given databases of what exists in the world and beyond. No, all this arises naturally from experience in the world, and it has been of intense interest to computer scientists to figure out how to do this efficiently and accurately, on a computer. Some recent work was cited here and is interesting for its broad implications as well. It is evident that we will in due time be faced with fully conceptualizing, learning, and thinking machines.

"Structural sparsity also brings a new perspective to an old debate in cognitive science between symbolic versus emergent approaches to knowledge representation. The symbolic tradition uses classic knowledge structures including graphs, grammars, and logic, viewing these representations as the most natural route towards the richness of thought. The competing emergent tradition views these structures as epiphenomena: they are approximate characterizations that do not play an active cognitive role. Instead, cognition emerges as the cooperant consequence of simpler processes, often operating over vector spaces and distributed representations. This debate has been particularly lively with regards to conceptual organization, the domain studied here. The structural forms model has been criticized by the emergent camp for lacking the necessary flexibility for many real domains, which often stray from pristine forms. The importance of flexibility has motivated emergent alternatives, such as a connectionist network that maps animals and relations on the input side to attributes on the output side. As this model learns, an implicit tree structure emerges in its distributed representations. But those favoring explicit structure have pointed to difficulties: it becomes hard to incorporate data with direct structural implications like 'A dolphin is not a fish although it looks like one', and latent objects in the structure support the acquisition of superordinate classes such as 'primate' or 'mammal'. Structural sparsity shows how these seemingly incompatible desiderata could be satisfied within a single approach, and how rich and flexible structure can emerge from a preference for sparsity." - from Lake et al., 2017


Getting back to the hippocampus paper, the authors develop a computer model, which they dub the Tolman-Eichenbaum machine [TEM] after key workers in the field. This model implements a three-part system modeled on the physiological situation, plus their theory of how relational processing works. Medial entorhinal cells carry generalized mapping functions (grids, borders, vectors), which can be re-used for any kind of object/concept, supplying relations as originally deduced from sensory processing or possibly other abstract thought. Lateral entorhinal cells carry specific concepts or objects as abstracted from sensory processing, such as landmarks, smells, personal identities, etc. It is then the crossing of these "what" and "where" streams that allows navigation, both in reality and in imagination. This binding is proposed to happen in the hippocampus, as firing that occurs when inputs from the two separate entorhinal regions happen to synchronize, signaling that a part of the conceptual grid (or other map) and an identified object have been detected in the same place, generating a bound sensory experience, which can be made into a memory, or arise from a memory, or an imaginative event, etc. This is characteristic of "place cells", hippocampal cells that fire when the organism is at a particular place, and not at other times.

"We propose TEM’s [the computational model they call a Tolman-Eichenbaum Machine] abstract location representations (g) as medial entorhinal cells, TEM’s grounded variables (p) as hippocampal cells, and TEM’s sensory input x as lateral entorhinal cells. In other words, TEM’s sensory data (the experience of a state) comes from the ‘what stream’ via lateral entorhinal cortex, and TEM’s abstract location representations are the ‘where stream’ coming from medial entorhinal cortex. TEM’s (hippocampal) conjunctive memory links ‘what’ to ‘where’, such that when we revisit ‘where’ we remember ‘what’."


Given the abstract mapping and a network of relations between each of the components, reasoning or imagining about possible events also becomes feasible, since the system can solve for any of the missing components. If a landmark is seen, a memory can be retrieved that binds the previously known location. If a location is surmised or imagined, then a landmark can be dredged up from memory to predict how that location looks. And if an unfamiliar combination of location and landmark is detected, then either a new memory can be made, or a queasy sense of unreality or hallucination would ensue if one of the two is well-known enough to make the disagreement disorienting.
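The solve-for-the-missing-component logic can be made concrete with a toy sketch. To be clear, the actual TEM is a trained neural network; this is just the bare binding logic it implements, with invented names throughout:

```python
# Toy sketch of the conjunctive "what x where" memory described above:
# binding links a location code to a landmark, so either one can cue
# the other, and a mismatch between them flags novelty.

class ConjunctiveMemory:
    def __init__(self):
        self.where_to_what = {}
        self.what_to_where = {}

    def bind(self, where, what):         # synchronized firing -> new memory
        self.where_to_what[where] = what
        self.what_to_where[what] = where

    def recall_what(self, where):        # imagine a place, retrieve its landmark
        return self.where_to_what.get(where)

    def recall_where(self, what):        # see a landmark, retrieve its place
        return self.what_to_where.get(what)

    def check(self, where, what):
        known = self.where_to_what.get(where)
        if known is None:
            return "novel"               # unfamiliar pairing: make a memory
        return "familiar" if known == what else "mismatch"

m = ConjunctiveMemory()
m.bind("grid_cell_A", "old_oak")
print(m.recall_what("grid_cell_A"))      # 'old_oak'
print(m.check("grid_cell_A", "fountain"))  # 'mismatch': the disorienting case
```

The dictionary lookups stand in for what the hippocampus does with coincident firing; the point is only that one stored conjunction supports recall in either direction plus novelty detection.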

As one can tell, this allows not only the experience of place, but the imagination of other places, as the generic mapping can be traversed imaginatively, even by paths that the organism has never directly experienced, to figure out what would happen if one, for instance, took a short-cut. 

The combination of conceptual abstraction / categorization with generic mapping onto relational graph forms that can model any conceptual scale provides some of the most basic apparatus for cognitive thought. While the system discussed in this paper is mostly demonstrated for spatial navigation, based on the proverbial rat maze, it is claimed, and quite plausible, that the segregation of the mapping from the object identification and binding allows crucial generalization of cognition- the tools we, and someday AI as well, rely on to make sense of the world.


  • A database of superspreader events. It suggests indoor, poorly ventilated spread of small aerosols. And being in a cold locker helps as well.
  • The curious implications of pardons.
  • It is going to be a long four years.
  • How tires kill salmon.
  • Ever think that there is more to life?

Saturday, November 28, 2020

Evolution of the Larynx

Primates already had bigger and more diverse larynxes, before humans came on the scene.

While oxygen was originally a photosynthetic waste product and toxic to all life forms, we gradually got used to it. Some, especially the eukaryotes, made a virtue of oxygen's great electronegativity to adopt a new and more efficient metabolism with oxygen as the final electron acceptor. This allowed the evolution of large animals, which breathe oxygen. All this happened in the oceans. But it turns out that it is far easier to get oxygen from air than from water, leading air breathing to evolve independently dozens of times among fishes of all sorts of lineages. Lungs developed from many tissues, but rarely from the swim bladder, which had critical roles revolving around constant and controllable pressure, pretty much the opposite of what one needs in a lung. So in our lineage, the lung developed as an adjunct to the digestive system, where the fish could gulp air when gill breathing didn't cut it. 


Overview of the atmosphere of earth. Lungs were only possible when the level of oxygen in air rose sufficiently, and respiration of any kind only when oxygen had vanquished the originally reducing chemical environment.

This in turn naturally led to the need to keep food from going into the nascent lung, (air going into the stomach is less of a problem ... we still do that part), thus the primitive larynx, which was just a bit of muscle constricting the passage to the lung. As this breathing system became more important, regulating access to the lung became more important as well, and the larynx developed more muscles to open as well as close the air passage, then a progressively more stable and complex surrounding structure made of cartilage to anchor all these muscles. 

But that was not the end of the story, since animals decided that they wanted to express themselves. Birds developed an entirely different organ, the syrinx, separate from the larynx and positioned at the first tracheal branch, which allows them to sing with great power and sometimes two different notes at once. But mammals developed vocal cords right in the center of the larynx, making use of the cartilaginous structure and the passing air to send one fluttering note out to the world. The tension with which these cords are held, the air velocity going past them, and the shapes used in the upper amplifying structures of the throat, mouth, and sinuses all affect the final sound, giving mammals their expressive range, such as it is.
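As a rough analogy only (vocal folds are layered tissue, not ideal strings), the taut-string relation f = (1/2L)·√(T/μ) captures the qualitative dependence described above: pitch rises with tension and falls with length and mass. The numbers below are illustrative assumptions, not measured values:

```python
# Rough string-vibration analogy for vocal fold pitch. Real folds are
# more complex, but the direction of each dependence is the point.
import math

def fundamental_hz(length_m, tension_n, mass_per_length_kg_m):
    """Fundamental frequency of an ideal string: f = (1/2L) * sqrt(T/mu)."""
    return (1.0 / (2.0 * length_m)) * math.sqrt(tension_n / mass_per_length_kg_m)

low  = fundamental_hz(0.016, 0.15, 0.01)  # longer, laxer fold: lower pitch
high = fundamental_hz(0.010, 0.60, 0.01)  # shorter, tenser fold: higher pitch
print(round(low), round(high))  # 121 387
```

Halving the length and quadrupling the tension, as above, raises the note by more than an octave and a half, which is the kind of range the laryngeal muscles modulate continuously during speech and song.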

So why are humans the only animals to fully develop these capacities, into music and speech? Virtually all other mammals have communicative abilities, often quite rich, like purrs, barks, squealing, mewling, rasping, and the like. But none of this approaches the facility we have evolved. A lot can be laid to the evolution of our brains and cognitive capacities, but some involves evolution of the larynx itself. A recent paper discussed its trajectory in the primate lineages.

Comparison of representative larynxes, showing a typical size difference.

The authors accumulate a large database of larynx anatomy and function- sizes, frequency patterns, evolutionary divergence times- and use this to show that on average, the primate lineage has larger larynxes than other mammals, and has experienced faster larynx evolution, to a larger spread of sizes and functions, than other mammals. The largest larynxes of all belong to black howler monkeys, who are notorious for their loudness. "The call can be heard up to 5 km away." They also claim that among primates, larynx size is less closely related to body size than it is among other mammals, suggesting again that there has been more direct selection for larynx characteristics in this lineage.

Primates (blue) show greater larynx size and variability than carnivores.

This all indicates that in the runup to human evolution, there had already been a lot of evolutionary development of the larynx among primates, probably due to their social complexity and tendency to live in dense forested areas where communication is difficult by other means. Yet do primates have vocal languages, in any respect? No- their vocalizations are hardly more complex than those of birds. Their increased evolutionary flexibility at most laid the physical groundwork for the rapid development of human speech, which included a permanently descended larynx and more importantly, cognitive and motor changes to enable fine voluntary control in line with a more powerful conceptual apparatus.

  • Is there a liar in your life?
  • Against climate change, we need much more... action.
  • 2020, auto-tuned.
  • Turns out, there was a war on Christmas.

Saturday, November 21, 2020

Stem Cell Asymmetry Originates at the Centrosome

At least sometimes.. how replication of the centrosome creates asymmetry of cell division as a whole, and what makes that asymmetry happen.

Cell fates can hinge on very simple distinctions. The orientation of dividing cells in a tight compartment may force one out of the cozy home, and into a different environment, which induces differentiation. Stem cells, those notorious objects of awe and research, are progenitor cells that stay undifferentiated themselves, but divide to produce progeny that differentiate into one, or many, different cell types. At the root of this capacity of stem cells is some kind of asymmetric cell division, whether enforced physically by an environmental micro-niche, or internally by molecular means. And a dominant way for cells to have intrinsic asymmetry is for their spindle apparatus to lead the way. Our previous overview of the centrosome (or spindle pole body) described its physical structure and ability to organize the microtubules of the cell, particularly during cell division. A recent paper discussed how the centrosome itself divides and originates a basic asymmetry of all eukaryotic cells.

The centrosome is a complicated structure that replicates in tandem with the rest of the cell cycle. Centrosomes do not divide in the middle or by fission. Rather, the daughter develops off to the side of the mother. Centrosomes are embedded in the nuclear envelope, and the mother develops a short extension, called a bridge or half-bridge, off its side, off of which the daughter develops, also anchored in the nuclear envelope. Though there are hundreds of associated proteins, the key components in this story are Nud1, which forms part of the core of the centrosome, and Spc72, which binds to Nud1 and also binds to the microtubules (made of the protein tubulin) that it is the centrosome's job to organize. In yeast cells, which divide into very distinct mother and daughter (bud) cells, the mother centrosome (called the spindle pole body) leads the way into division and always goes into the daughter cell, while the daughter centrosome stays in the mother cell.

The deduced structure of some members of the centrosome/spindle pole in yeast cells. Everything below the nuclear envelope is inside the nucleus, while everything above is in the cytoplasm. The proteins most significant in this study are gamma tubulin (yTC), Spc72, and Nud1. OP stands for outer plaque, CP central plaque, IP inner plaque, as these structures look like separate dense layers in electron microscopy. To the right side of the central plaque is a dark bit called the half-bridge, on the other side of which the daughter centrosome develops, during cell division.

The authors asked why this difference exists- why do mother centrosomes act first to go to the outside of the cell where the bud forms? Is it simply a matter of immaturity, that the daughter centrosome is not complete at this point, (and if so, why), or is there more specific regulation involved that enforces this behavior? They use a combined approach in yeast cells combining advanced fluorescence microscopy with genetics to find the connection between the cell cycle and the progressive development of the daughter centrosome.

Yeast cells with three tagged centrosome proteins, each engineered as a fusion to a fluorescent protein of a different color, were used to show the relative positions of Kar1 (green), which lies in the half-bridge between the mother and daughter centrosomes; Spc42 (blue), at the core of the centrosome; and gamma tubulin (red; Tub4, or alternately Spc72, which lies just inside Tub4), which is at the outside and mediates between the centrosome and the tubulin-containing microtubules. Three successive cell cycle states are shown. Note that the addition of gamma tubulin is a late event, after Spc42 appears in the daughter. The bottom series is oriented essentially upside down versus the top two series.

What they find, looking at cells going through all stages of cell division, is that the assembly of the daughter centrosome is stepwise, with inner components added before outer ones. In particular, the final structural elements of Spc72 and gamma tubulin wait till the start of anaphase, when the cells are just about to divide, to be added to the daughter centrosome. The authors then bring in key cell cycle mutants to show that the central controller of the cell cycle, cyclin-dependent kinase CDK, is what is causing the hold-up. This kinase (a protein that phosphorylates other proteins, as a means of regulation) orchestrates much of the yeast cell cycle, as it does in all eukaryotic cells, subject to a blizzard of other regulatory influences. They observed that special inducible mutations (temperature-sensitive versions of the protein that shut off at elevated temperature) of CDK would stop this spindle assembly process, suggesting that some component was being phosphorylated by CDK at the key time of the cell cycle. Then, after systematically mutating possible CDK target phosphorylation sites on likely proteins of the centrosome, they came up with Nud1 as the probable target of CDK control. This makes complete sense, since Spc72 assembles on top of Nud1 in the structure, as diagrammed at top. They go on to show the direct phosphorylation of Nud1 by CDK, as well as direct binding between Nud1 and Spc72.

Final model from the article, showing how the mechanics they revealed relate to the cell cycle. A daughter centrosome slowly develops off the side of the mother centrosome, but its "licensing" by CDK to nucleate microtubules (black rods anchored by the blue cones) only comes later, in M phase, just as the final steps of division need to take place. This gives the mother centrosome the jump, allowing it to migrate to the bud (daughter cell) and nucleate the microtubules needed to drive half of the replicated DNA/chromosomes into the bud. GammaTC is the microtubule-nucleating gamma tubulin complex; "P" stands for activating phosphorylation sites on Nud1.

This is a nice example of the power of a model system like yeast, whose rich set of mutants, ease of genetic and physical manipulation, complete genome sequence and associated bioinformatics, and many other technologies make it a gold mine of basic research. The only hard part was the microscopy, since yeast cells are substantially smaller than human cells, making that part of the study a tour de force.

Saturday, November 14, 2020

Are Attention and Consciousness the Same?

Not really, though what consciousness is in physical terms remains obscure.

A little like fusion power, the quest for a physical explanation of consciousness has been frustratingly unrewarding. The definition of consciousness is fraught to start with, and since it is by all reasonable hypotheses a chemical process well-hidden in the murky, messy, and mysterious processes of the brain, it is also maddeningly fraught in every technical sense. A couple of recent papers provide some views of just how far away the prospect of a solution is, based on analyses of the visual system, one in humans, the other in monkeys.

Vision provides both the most vivid form of consciousness, and a particularly well-analyzed system of neural processing, from retinal input through lower level computation at the back of the brain and onwards through two visual "streams" of processing to conscious perception (the ventral stream in the inferior temporal lobe) and action-oriented processing (in the posterior parietal lobe). It is at the top of this hierarchy that things get a bit vague. Consciousness has not yet been isolated, and how it could be remains unclear. Is attention the same as consciousness, or different? How can related activities like unconscious high-level vision processing, conscious reporting, pressing buttons, etc. be separated from pure consciousness? They all happen in the brain, after all. Or do those activities compose consciousness?

A few landmarks in the streams of visual processing. V1 is the first level of visual processing, after pre-processing by the retina and lateral geniculate nucleus. Processing then divides into two streams: the ventral stream ends up in the inferotemporal lobe, where consciousness and memory seem to be fed, while the dorsal stream to the inferior parietal lobule and nearby areas feeds action guidance in the vicinity of the motor cortex.

In the first paper, the authors jammed a matrix of electrodes into the brains of macaques, near the "face cells" of the inferotemporal cortex of the ventral stream. The macaques were presented with a classic binocular rivalry test, with a face shown to one eye, and something else shown to the other eye. Nothing was changed on the screen, nor the head orientation of the macaque, but their conscious perception alternated (as would ours) between one image and the other. This is thought to be a clever way to isolate perceptual distinctions from lower level visual processing, which stays largely constant- each eye processes each scene fully, before higher levels make the choice of which one to focus on consciously. (But see here.) It has been thought that by the time processing reaches the very high level of the face cells, they only activate when a face is being consciously perceived. But that was not the case here. The authors find that these cells, when tested more densely than has been possible before, show activity corresponding to both images. The face could be read out from these neurons using one linear filter, but a large fraction of them (1/4 to 1/3) could be read by another filter to represent the non-face image. So by this work, this level of visual processing in the inferotemporal cortex is biased by conscious perception to concentrate on the conscious image, but not exclusively- the cells are not entirely representative of consciousness. This suggests that whatever consciousness is takes place somewhere else, or at a selective ensemble level of particular oscillations or other spike coding schemes.

"We trained a linear decoder to distinguish between trial types (A,B) and (A,C). Remarkably, the decoding accuracy for distinguishing the two trial types was 74%. For comparison, the decoding accuracy for distinguishing (A, B) versus (A, C) from the same cell population was 88%. Thus, while the conscious percept can be decoded better than the suppressed stimulus, face cells do encode significant information about the latter. ... This finding challenges the widely-held notion that in IT cortex almost all neurons respond only to the consciously perceived stimulus."


The second paper used EEG on human subjects to test their visual and perceptual response to disappearing images and filled-in zones. We have an area in each eye's visual field where we are physically blind, (the blind spot, where the optic nerve exits the retina), and where higher levels of the visual system "fill in" parts of the visual scene to make our conscious perception seem smooth and continuous. The experimenters came up with a forbiddingly complex visual presentation system of calibrated dots and high-frequency snow whose purpose was to oppose visual attention against conscious perception. When attention is directed to the blind spot, that is precisely when the absence of an image there becomes apparent. This allowed the experimenters to ask whether the typical neural signatures of high-level visual processing (the steady-state visually evoked potential, or SSVEP) reflect conscious perception, as believed, or attention or other phenomena. They presented and removed image features all over the scene, including blind spot areas. What they found was that the EEG signal of SSVEP was heightened as attention was directed to the invisible areas, exactly the opposite of what they had hypothesized if the signal were tied to actual visual conscious perception. This suggested that this particular signal is not a neural correlate of consciousness, but one of attention and perhaps surprise / contrast instead.

So where are the elusive neural correlates of consciousness? Papers like these refine what and where it might not be. It seems increasingly unlikely that "where" is the right question to ask. Consciousness is graded, episodic, extinguishable in sleep, heightened and lowered by various experiences and drugs. So it seems more like a dynamic but persistent pattern of activity than a locus, let alone an homunculus. And what exactly that activity is... a Nobel prize surely awaits someone on that quest.


  • Unions are not a good thing ... sometimes.
  • Just another debt con.
  • Incompetent hacks and bullies. An administration ends in character.
  • Covid and the superspreader event.
  • Outgoing Secretary of State is also a deluded and pathetic loser.
  • But others are getting on board.
  • Bill Mitchell on social capital, third-way-ism, "empowerment", dogs, bones, etc.
  • Chart of the week: just how divided can we be?

Saturday, November 7, 2020

Why we Have Sex

Eukaryotes had to raise their game in the sex department.

Sex is very costly. On a biological level, not only does one have to put up with all the searching, displaying, courting, sharing, commitment, etc., but one gives up the ability to have children alone, by simple division or parthenogenesis. Sex seems to be a fundamental development in the earliest stages of eukaryotic evolution, along with so many other innovative features that set us apart from bacteria. But sex is one of the oddest innovations, and demands its own explanation.

Sex and other forms of mating confer one enormous genetic benefit, which is to allow good and bad mutations to be mixed up, separated, and redistributed, so that offspring with a high proportion of bad genes die off, while offspring with a better collection can flourish. Organisms with no form of sex (that are clonal) cannot get rid of bad mutations. Whatever mutations they have are passed to offspring, and since most mutations are bad, not good, this leads to a downward spiral of genetic decline, called Muller's ratchet.
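A toy simulation, with invented parameters, can make the ratchet concrete. In a clonal population no offspring can ever carry fewer mutations than its parent, so the least-loaded class of genomes can only be lost, never regenerated- the minimum mutation load over the population never decreases.

```python
import math
import random
import statistics

random.seed(2)

# Toy parameters, chosen for illustration only
POP = 100    # population size
LOCI = 1000  # loci that can suffer deleterious mutations
U = 1.0      # new deleterious mutations per genome per generation
S = 0.05     # fitness cost per mutation

def poisson(lam):
    """Knuth's algorithm: Poisson-distributed count of new mutations."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

def fitness(genome):
    return (1 - S) ** len(genome)  # each mutation multiplies fitness down

pop = [set() for _ in range(POP)]  # genomes start mutation-free
min_loads = []
for _ in range(100):
    # Selection: parents drawn in proportion to fitness
    parents = random.choices(pop, weights=[fitness(g) for g in pop], k=POP)
    pop = []
    for par in parents:
        child = set(par)  # clonal: the child inherits every mutation
        for _ in range(poisson(U)):
            child.add(random.randrange(LOCI))
        pop.append(child)
    min_loads.append(min(len(g) for g in pop))

# The ratchet: the best class is only ever lost, and the mean load climbs
print("min load:", min_loads[0], "->", min_loads[-1])
```

With these (invented) parameters the least-loaded class is quickly lost to drift, and the average mutation load climbs toward the mutation-selection balance.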

It turns out that non-eukaryotes like bacteria do mate and recombine their genomes, and thus escape this fate. But their process is not nearly as organized or comprehensive as the whole-genome re-shuffling and mating that eukaryotes practice. What bacteria do is called lateral gene transfer (LGT), because it typically involves short regions of their genomes (a few genes), and can accept DNA from any sort of partner- the donor does not have to be the same species, though specific surface structures can promote mating within a species. Thus bacteria have frequently picked up genes from other species- the major limitation comes when the DNA arrives in the recipient cell and needs to find a homologous region of the recipient's DNA. If it is too dissimilar, then no genetic recombination happens. (An exception is for small autonomous DNA elements like plasmids, which can be transferred wholesale without needing an homologous target in the recipient's genome. Antibiotic resistance genes are frequently passed around this way, for emergency selective adaptation!) This practice has a built-in virtue, in that the most populous bacteria locally will be contributing most of the donor DNA, so if a recipient bacterium wants to adapt to local conditions, it could do worse than to try out some local DNA. On the other hand, there is also no going back. Once a foreign piece of DNA replaces the recipient's copy, there are no other copies to return to. If that DNA is bad, death ensues.

Two bacteria, connected by a sex pilus, which can conduct small amounts of DNA. This method is generally used to transfer autonomous genetic elements like plasmids, whereas environmental DNA is typically taken up during stress.

A recent paper modeled why this haphazard process was so thoroughly transformed by eukaryotes into the far more involved process we know and love. The authors argue that fundamentally, it was a question of genome size- that as eukaryotes transcended the energetic and size constraints of bacteria, their genomes grew as well, to a size that made the catch-as-catch-can mating strategy unable to keep up with the mutation rate. Greater size had another effect, of making populations smaller. Even with our modern billions, we are nothing in population terms compared to any respectable bacterium. This means that the value of positive mutations is higher, and the cost of negative mutations more severe, since each one counts for more of the whole population. Finding a way to reshuffle genes to preserve the best and discard the worst becomes imperative as populations get smaller.

Sex does several related things. First, the chromosomes of each partner recombine during meiosis, at a rate of a few crossover events per chromosome, thereby shuffling the two haploid chromosome sets that each partner received from its own parents. Second, each chromosome pair assorts randomly at meiosis, again shuffling the parental genomes. Lastly, mating combines the genomes of two different partners (though inbreeding happens as well). All this results in a moderately thorough mixing of the genetic material at each generation. The resulting offspring are then a sampling of the two parental (and four grand-parental) genomes, succeeding if they get mostly the better genes, and not (frequently dying in utero) if they do not.
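A minimal sketch of why this shuffling matters, assuming free recombination between two hypothetical parents carrying disjoint sets of deleterious mutations: some recombinant offspring carry fewer mutations than either parent, giving selection something to favor (while others concentrate the bad alleles and can be culled).

```python
import random

random.seed(3)

# Two hypothetical parents with disjoint deleterious mutations
parent_a = set(range(0, 10))    # mutations at loci 0..9
parent_b = set(range(10, 20))   # mutations at loci 10..19

def recombine(p1, p2):
    """Free recombination: at each locus, inherit one randomly
    chosen parental copy; the mutation is kept only if that
    parent's copy carries it."""
    child = set()
    for locus in p1 | p2:
        donor = p1 if random.random() < 0.5 else p2
        if locus in donor:
            child.add(locus)
    return child

# Mutation loads across many recombinant offspring
loads = [len(recombine(parent_a, parent_b)) for _ in range(200)]
print("best offspring load:", min(loads), "worst:", max(loads))
```

Each offspring's load is effectively a coin-flip sample over the 20 parental mutations, so the brood spans a range from well below to well above either parent's load of 10- exactly the variance that selection needs to work on.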

Additionally, eukaryotic sex gave rise to the diploid organism, with two copies of each gene, rather than the one copy that bacteria have. While some eukaryotes spend most of their lives in the haploid phase, and only briefly go through a diploid mated state, (yeasts are a good example of this lifestyle), most spend the bulk of their time as diploids, generating haploid gametes for an extremely brief haploid existence. The diploid provides the advantage of being able to ignore many deleterious genes, being a "carrier" for all those bad (recessive) mutations that are covered by a good allele. Mutations do not need to be eliminated immediately, taking a substantial load off the mating system to bring in replacements. (Indeed, some bacteria respond to stress by increasing promiscuity, taking in more DNA in case a genetic correction is needed, in addition to increasing their internal mutation rate.) A large fund of defective alleles can even become grist for evolutionary innovation. Still, for the species to persist, truly bad alleles need to be culled eventually- at a rate faster than that with which they appear.

The authors do a series of simulations with different genome sizes, mutation rates, and lengths and rates of lateral gene transfer. Unfortunately, their figures are not very informative, but the logic is clear enough. The larger the genome, the higher the mutation load, assuming constant mutation rates. But LGT is a sporadic process, so correcting the mutations takes not just a linearly higher rate of LGT, but an exponentially higher rate- one that remains insufficient to address all the mutations, yet is high enough to be impractical and to call into question what it means to be an individual of such a species. In their models, only when the length of LGT segments is a fair fraction of the whole genome size (20%), and the rate quite high, like 10% of all individuals experiencing LGT once in their lifetimes, do organisms have a chance of escaping the ratchet of deleterious mutations.

" We considered a recombination length L = 0.2g [genome size], which is equivalent to 500 genes for a species with genome size of 2,500 genes – two orders of magnitude above the average estimated eDNA length in extant bacteria (Croucher et al., 2012). Recombination events of this magnitude are unknown among prokaryotes, possibly because of physical constraints on eDNA [environmental DNA] acquisition. ... In short, we show that LGT as actually practised by bacteria cannot prevent the degeneration of larger genomes. ... We suggest that systematic recombination across the entire bacterial genomes was a necessary development to preserve the integrity of the larger genomes that arose with the emergence of eukaryotes, giving a compelling explanation for the origin of meiotic sex."

But the authors argue that this scale of DNA length and frequency of uptake is quite unrealistic for actual bacteria. Bacterial LGT is constrained by the available DNA in the environment, and typically takes up only a few genes' worth of DNA. So as far as we know, this is not a process that would or could have scaled up to genomes of ten or one hundred fold larger size. Unfortunately, this is pretty much where the authors leave this work, without analyzing how meiotic recombination and re-assortment function, in these same terms, to forestall the accumulation of deleterious mutations. They promise such insights in future work! But it is obvious that eukaryotic sex is in these terms an entirely different affair from bacterial LGT. Quite apart from featuring exchange and recombination across the entire length of the expanded genomes, it also ensures that only viable partners engage in genetic exchange, and simultaneously insulates them from any damage to their own genomes, instead placing the risk on their (presumably profuse) offspring. It buffers the effect of mutations by establishing a diploid state, and most importantly shuffles loci all over these recombined genomes so that deleterious mutations can be concentrated and eliminated in some offspring while others benefit from more fortunate combinations.

Saturday, October 31, 2020

LncRNA: Goblins From the Genomic Junkyard

One more addition to the zoo of functional RNAs.

A major theme of the last couple of decades of molecular biology is the unanticipated occurrence of many sorts of small and not so small RNAs that do not code for proteins. In addition to buttressing the general proposition that RNA came early in the history of life and retains many roles beyond being merely the medium conveying code from DNA to protein, these novel RNAs illuminate some of the complexity that went missing when the human genome came in at under 20,000 protein-coding genes.

The current rough accounting of human genetic elements. While we have only 19,954 protein coding genes, we have a lot of other material, including almost as many pseudogenes, (dead copies of protein-coding genes), and even more RNA genes that do not code for proteins. The count of mRNA transcripts is high because splicing of a protein-coding gene is frequently variable and can generate numerous distinct mRNA messages from one gene. Similarly, the start and stop points of transcription can be variable for both mRNA and lncRNA genes.

RNAs perform many functions, such as the catalytic core of the ribosome, the codon-reading adaptor role of tRNAs that matches each codon to its amino acid, the catalytic core of the mRNA splicing apparatus, guides that edit and modify ribosomal RNAs, and miRNAs that repress expression of target genes. LncRNA stands for long non-coding RNA; these were discovered by global analyses of RNA expression. Long transcripts were found that did not have protein coding frames, did not clearly derive from degraded pseudogenes, and yet occasionally showed significant conservation, and thus evident selective constraint and function.

Another piece of background is that these expression analyses have found, as the technology advanced in sensitivity and comprehensiveness, that most of our genome is transcribed to RNA. Not only do we have a large amount of junk DNA, like transposons, repetitive elements, pseudogenes, intron and regulatory filler, etc., representing at least 90% of the genome, but most of this DNA is also transcribed at a low level. We have quality control mechanisms that dispose of most of this RNA, but there have been partisans of another perspective, particularly among those who first found all this transcription, that these transcription units are "functional", and thus should not be dismissed as "junk".

Eugene Koonin is not of that persuasion. His recent review of this field, and of lncRNAs in particular, with Alexander Palazzo, generates an extremely interesting model of why most of this is junk, and how such junk RNA can occasionally gain function. Some lncRNAs are important, typically helping nearby genes stay on. One of the most significant lncRNAs, however, represses its nearby genes, and is central to the process of X inactivation. XIST is 17,000 nucleotides long- very long for a non-coding RNA- and binds to dozens of proteins including chromatin remodeling enzymes and X-chromosome scaffold proteins, all in a byzantine process that shuts down the extra X chromosome that females have. This prevents the genes of that chromosome, which encompass many functions, not just sex-specific ones, from being expressed two-fold higher in females than in males.

How to make sense of all this? How can there be many thousands of lncRNAs, but only a few with function, and those functions rather miscellaneous, typically local, and centered on transcriptional regulation? The tale begins with one of the many quality control features of the transcription apparatus. When a gene is transcribed, the polymerase, as it goes by, deposits chromatin marks (on the local histones) that prevent other transcription complexes from initiating within the gene. This prevents extra initiation events that would produce truncated proteins, which can sometimes be very harmful, lacking key regulatory domains. So the theory posits that much of the stray transcription of junk DNA through the rest of the genome, especially in the form of long lncRNAs, has a similarly repressive effect, reducing local initiation within those "gene" bounds. This might be particularly helpful in preventing interference with regulatory events happening in those regions, controlling transcription through the region rather than allowing it to happen sporadically all over.

As a first step, this is innocent enough: transcription at a low level, under no strong selective constraint, and perhaps eventually responsive to some regulatory events, depending on the needs of the nearby coding gene. Nor would the lncRNA that is made have any function at all. It would be junk very literally- it would not get spliced or exported out of the nucleus, and would probably get degraded promptly. Its sequence is under no particular selection, and would drift in neutral fashion. The second step happens if this RNA gains some kind of function, such as binding some regulatory protein. There are many RNA and DNA binding proteins in the cell, so this is not difficult- Xist binds to over 80 different proteins. These proteins then might have local effects, as long as the lncRNA remains attached to its own transcription complex during its own synthesis. Such effects might be activating the nearby gene, or loosening or tightening nearby chromatin. Given the (arbitrarily large) size of the lncRNA, and the typically small size of the nucleic acid binding determinants that proteins recognize, there is little limit to how many such interactions could be accumulated over time, always subject to selection that likely centers on fine-tuning of effects on nearby genes. Indeed, this regulation could allow the relaxation or loss of more proximal regulators, making the lncRNA increasingly essential. After enough interactions accumulate, the lncRNA may remain tethered to local landmarks, and its activity may persist after its synthesis ends, prompting selection against its degradation.

In this way, increasingly elaborate mechanisms can be built up, out of very modest selective effects, combined with a lot of drift and exploration of neutral mutational space. This theory provides a rationale for what we are seeing in the lncRNA landscape- a huge number with little to no ascertainable function, but a few that have grown into significant regulators of their local or extended chromatin landscape. It also informs the mechanism by which they function, not as some new exciting mechanism of action by a discrete RNA species, as was found with miRNAs, for instance, but rather an agglomeration of adventitious interactions that will be different in each case, and highly variable in effect.
"Indeed, although complexity in biology is generally regarded as evidence of “fine tuning” or “sophistication,” large biological conglomerates might be better interpreted as the consequences of runaway bureaucracy—as biological parallels of nonsensically complex Rube Goldberg machines that are over-engineered to perform a single task"


Saturday, October 24, 2020

Rise and Fall of the US

What happened to our 20th century solidarity?

A recent issue of The New Republic carried an article by its publisher that discussed Robert Putnam's diagnosis of the decline of American civic community and solidarity. In the generational arcs of US history, we have had high solidarity, and consequent productive and progressive political eras, only a few times- the colonial era, the Republican interlude while the South had seceded, the progressive era around the turn of the 20th century, and the post-WW2 boom. Perhaps much of the 20th century could be classified that way, up to the 1970s. At any rate, we are obviously not in such an era now. We are, in contrast, floundering in an era of incredible political and social divisiveness, of unproductive public institutions, and of social atomization.

"Just as Putman and Garrett identify an upswing, they also trace a decline beginning in the 1970s. For this, too, they offer an explanation that departs from the standard historical narrative, suggesting that it was not Ronald Reagan who brought the long period of liberal rule to an abrupt halt, but rather the baby boomer of the 1960s who, turning from the communitarian idealism of the early part of the decade toward a more self-oriented direction, set off a chain reaction that ended up blowing the whole Progressive-liberal order to smithereens."

But the article does not really articulate what happened, other than to cite the many dramas of that time, and to propose that the US had a bit of a "nervous breakdown" in the transition from the conformist 50's, through the wide-open and tumultuous 60's, to the me-centered 70's. Perhaps this dates me, but I did live through some of those times, and I think I can offer a more specific analysis. I'd suggest that the principal elements of the downturn arose from fundamental violations of trust by the state. The US had conducted WW2 with great moral and logistical authority. The grunts always grumbled, and there were plenty of fiascos along the way, but overall, there was a consensus that the elites and people in charge knew what they were doing. They not only won the war, but fostered unprecedented prosperity in its wake.

All this turned around in the late 60's. I am also reading a history of the CIA, by Tim Weiner, "Legacy of Ashes". This is a deeply biased book, focusing on every failure of the CIA and pronouncing it an institution utterly and irredeemably incompetent. What is noticeable, however, is that the CIA's successes were generally far more costly than its failures. The coups it sponsored in Iran, Guatemala, and elsewhere came back to haunt us down to the present day. Eisenhower opened this Pandora's box of disastrous meddling (i.e. covert action), and Kennedy accelerated its use. One of its signature accomplishments was the slow process of getting us enmeshed in the Vietnam war. This was the single most influential disaster in discrediting the US government to its own citizens. While in principle we were doing a great thing- saving South Vietnam from communism and totalitarianism- in practice, we had no idea what we were doing, did not understand the nature of the civil war or the impossible corruption of our allied government, and conducted the war in a fog of lies and delusions. The daily body counts became a visceral focus of revulsion against the state.

But this kind of incompetence became a pattern in major events like Watergate, inflation, the oil crisis, and the Iran hostage crisis. Each one showed that our leaders did not know what they were doing- the best and brightest turned out unequal to the crises we faced. A succession of presidents fell victim to fundamental breaches of trust with the country. Inflation, for example, made us feel helpless- that the money itself was being eaten away by processes that were virtually occult in their mystery and darkness. Gerald Ford urged a kind of voodoo economics- that perhaps a public relations campaign urging personal savings and voluntary spending reductions could heal "the economy". But the solidarity he was counting on was evaporating, and the rationale was transparently absurd and unequal to the crisis, which had been brought on by the oil shock and by profligate government spending and interest-rate policy through the Vietnam era. It would not be until the advent of Paul Volcker that we would get a public servant with the courage and intellect to slay this beast, through an extremely costly campaign of squelching private investment.


So it was not Ronald Reagan who started the process of me-ism over patriotic solidarity. He was only expressing the sad consequence of a long series of failures and breaches of faith when he claimed that government is not the solution, government is the problem. So what was the alternative? The other major institution of common action was, and remains, the corporation, and this era saw the valorization of capitalism as the system that works. It had the Darwinian structure and motivations that enforced effectiveness, even excellence. It was the environment that unleashed entrepreneurial freedom, then harnessed it for the common good. We know now that all this was vastly oversold, and ignored all the reasons why we have states to start with. But the pendulum had swung decisively from the public sphere to the private.

An unfortunate consequence of such a swing is that the party and ideology of privatization has little interest in fostering effective governance. So the competence of the state erodes further with time, becoming increasingly unable to perform basic functions, and increasingly corrupt as private interests gain relative power. Our current administration, were it not in power, would be a parody of self-serving corruption and incompetence. It is the pinnacle of the Reagan revolution, and it is degrading, day by day, our ability to govern ourselves. This seems to be why these generational shifts take so long to correct. It is not only that we need to recognize the hole we have fallen into on an intellectual and scholarly level, but that enough voters (and enough extra to overcome the entrenched powers of capital in propaganda, lobbying, campaign finance, and other forms of corruption) have to have felt this in their bones to give an alternative ideology a chance to retake charge of the state and rebuild its capacity for effective action.

  • Where are the vaccines? What are the vaccines?
  • Not everyone likes Barrett.
  • Make the Apocalypse great again.

Saturday, October 17, 2020

Vision Bubbles up into Perception

Visual stimuli cause progressive activation of processing stages in the brain, and also slower recurrent, rebounding activation, tied to decision-making.

Perception is our constant companion, but at the same time a deep mystery. How are simple, if massively parallel, sensory inputs assembled into cognition? What is doing the perceiving? Where is the theater of the mind? Modern neuroscience is steadily chipping away at these questions, whether one deems them "hard", unscientific, or theological. The brain is a very physical, if inaccessible, place, forming both the questions and the answers.

A recent paper made use of MEG, magnetoencephalography of electrical events in the brain, to track decision-making as a stepwise process and make some conclusions about its nature, based on neural network modeling and analogies. The researchers used a simple ambiguous decision task, presenting subjects with images of numbers and letters, with variations in between. This setup had several virtues. First is that letters and numbers are recognized, at a high level, in different regions of the brain. So even before the subjects got to the button pressing response stage, the researchers could tell that they had cognitively categorized and perceived one or the other. Second, the possibility of ambiguity put a premium on categorization, and the more ambiguous the presented image, the longer that decision would take, which would then hypothetically be reflected in what could be observed here within the brain.

The test presented. An image was shown, and the subject had to judge whether it was a number or a letter, and which number/letter it was.

The researchers used MRI solely to obtain the brain structures of the subjects, on which they could map the dynamic MEG data. MEG is intrinsically much faster in time scale than, say, fMRI, allowing this work to see millisecond-scale events. They segmented the observed stages of processing, by some kind of statistical method, into 1- position and visibility of the stimulus; 2- what number or letter it is; 3- whether it is a number or a letter; 4- how uncertain these decisions are; 5- the motor action of pressing the button to report all these subjective judgments. In the authors' words, "We estimated, at each time sample separately, the ability of an l2-regularized regression to predict, from all MEG sensors, the five stimulus features of interest." These steps / features naturally happen at different times, perception being necessarily a stepwise process, with the scene being processed at first without bias in a highly parallel way, before features are picked out, categorized, and brought to consciousness, in a progressively less parallel and more diffuse process.
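The time-resolved decoding approach can be sketched on synthetic data. This is only an illustration, with invented dimensions and a planted encoding, not the authors' actual pipeline: at each "time sample" a closed-form l2-regularized (ridge) regression is fit from all sensors to a stimulus feature, and decoding succeeds only at time points where the feature is actually encoded in the signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for MEG data: trials x sensors x time samples
# (all sizes invented for illustration)
n_trials, n_sensors, n_times = 300, 30, 40
X = rng.normal(size=(n_trials, n_sensors, n_times))

# Plant a stimulus feature that the sensors encode only from "time" 20 on,
# mimicking a feature that becomes decodable at a certain latency
y = rng.normal(size=n_trials)
w_true = rng.normal(size=n_sensors)
X[:, :, 20:] += 0.5 * np.outer(y, w_true)[:, :, None]

def ridge_fit(X_t, y_t, lam=1.0):
    """Closed-form l2-regularized (ridge) regression weights."""
    n_feat = X_t.shape[1]
    return np.linalg.solve(X_t.T @ X_t + lam * np.eye(n_feat), X_t.T @ y_t)

# Decode the feature at each time sample separately, scoring with
# correlation between prediction and truth on held-out trials
train, test = slice(0, 200), slice(200, 300)
scores = []
for t in range(n_times):
    w = ridge_fit(X[train, :, t], y[train])
    pred = X[test, :, t] @ w
    scores.append(np.corrcoef(pred, y[test])[0, 1])

early = float(np.mean(scores[:20]))  # before the planted latency
late = float(np.mean(scores[20:]))   # after it
print("decoding score before/after latency:", round(early, 2), round(late, 2))
```

The decoding score hovers near chance before the planted latency and jumps afterward- the same logic by which the authors could put a millisecond timestamp on when each stimulus feature becomes available in the brain.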

Activation series in time. The authors categorized brain activity by the kind of processing done, based on how it was responding, and mapped these over time. First (A, bottom) comes the basic visual scene processing, followed by higher abstractions of visual perception. Finally (A, top) is activation in the motor area corresponding to pressing a response button. C and D show more detail on the timing of the various processes.

It is hard to tell how self-fulfilling these analyses are, since the researchers knew what they were looking for, binned the data to find it, and then obtained estimates for when & where these binned processes happened. But assuming that all that is valid, they came up with striking figures of cognitive progression, shown above. The initial visual processing signal is very strong and localized, which is understandable first because the early parts of the visual system (located in the rear of the brain) are well understood, and second because those early steps take in the whole scene and are massively parallelized, thus generating a great deal of activity and signal in these kinds of analysis. This processing is also linear with respect to the ambiguous nature of the image, rather than sigmoidal or categorical, which is how higher processing levels behave. That is because at this level, the processing components are just mindlessly dealing with their pixel or other micro-feature of the scene, regardless of its larger meaning. The authors state that 120 milliseconds is on average when the rough location (left or right of the screen) is decided by this low level of the visual system, for instance. By 500 milliseconds (ms), this activity has ceased, the basic visual analysis having been done, and processing having progressed to higher levels. The perception of what the letter is (225 ms) and whether it is a letter or number (370 ms) happens roughly around the same time, and varies substantially, presumably due to the varying ambiguities of what was presented.
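The linear-versus-categorical distinction above is easy to picture with a toy curve: a low-level response that scales directly with the letter/number morph level, versus a higher-level decision stage that snaps to one category or the other. This is purely schematic; the steepness parameter is invented.

```python
# Schematic contrast: linear low-level encoding vs sigmoidal categorical
# response to an ambiguous number-letter morph. Parameters are made up.
import numpy as np

morph = np.linspace(0.0, 1.0, 11)      # 0 = clear number, 1 = clear letter
low_level = morph                      # linear: tracks the stimulus itself
k = 12.0                               # hypothetical steepness of the decision
categorical = 1.0 / (1.0 + np.exp(-k * (morph - 0.5)))   # sigmoidal

# low_level changes smoothly across the morph; categorical stays near 0 or 1
# except close to the maximally ambiguous midpoint.
```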

At around 600 ms, the processing of just how uncertain the image is seems to peak, a sort of meta-process evaluating what has gone on at lower levels. And finally, the processing directing the motor event of pressing the button to specify what the subject has decided about the item identity or category (just a binary choice of left or right index finger) also comes in, on average, around 600 ms. Obviously, there is a great deal going on that these categorizations and bins do not capture; the MEG data, though fast, are very crude with respect to location, so only the broadest distinctions can be attempted.

The authors go on to press a further observation, which is that the higher level processing is relatively slow, and processing devoted to these separate aspects of the perceptual event takes longer and trails off more slowly than one might expect. Comparing all this to what is known from artificial neural networks, they conclude that it could not possibly be consistent with strictly feed-forward processing, where one level simply does its thing and communicates results up to the next level. Rather, the conclusion, in line with a great deal of other work and theorization, is that recurrent processing, making representations that remain stable for some time at some of these levels, is required to explain what is going on.
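The feed-forward versus recurrent contrast can be illustrated with two toy units: a purely feed-forward unit falls silent the instant its input stops, while a unit with a self-connection sustains its activity afterward, the persistence the authors invoke. The weights and time course here are invented for illustration, not drawn from the paper's models.

```python
# Toy illustration: a recurrent unit sustains a representation after the
# stimulus ends; a feed-forward unit does not. All parameters are made up.
import numpy as np

T = 40
inp = np.zeros(T)
inp[5:10] = 1.0                  # brief stimulus

ff = np.zeros(T)                 # feed-forward: output tracks input only
rec = np.zeros(T)                # recurrent: output also feeds back on itself
w_in, w_rec = 1.0, 0.8
for t in range(1, T):
    ff[t] = w_in * inp[t]
    rec[t] = w_in * inp[t] + w_rec * rec[t - 1]

# After the stimulus ends (t >= 10) the feed-forward trace vanishes at once,
# while the recurrent trace decays gradually, holding the representation.
```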

This is hardly pathbreaking, but this paper is notable for the clarity with which the processing sequence, from visual stimulus to motor response, is detected and presented. While working in very broad brushstrokes regarding details of the scene and of its perception, it lays out a clear program for what comes next- filling in the details to track the byways of our thoughts as we attend consciously to a red flower. This tracking extends not only to the motor event of a response, but also to whatever constitutes consciousness. This paper did not breathe the word "conscious" or "consciousness" at all, yet the video provided of the various activations in sequence shows substantial prefrontal activity in the 450 ms range after image presentation, constituting a sixth category in their scheme that deserves a bit more attention, so to speak.

  • A high-level talk about how blood supply in the brain responds to neural activity.
  • The Pope weighs in on climate change. But carries none of the weight himself.
  • On the appalling ineffectiveness of the flu vaccine. It makes one wonder why we are waiting for the phase III trials of the coronavirus vaccines.
  • US to Taliban: "Please?"
  • A day at the Full Circle Ranch.