Saturday, December 28, 2019

On the Origin of Facts

Bruno Latour tours the Salk Institute, finds science taking place, and has a hard time deconstructing it.

There is a production process in science, by which the educational background, institutional setting, funding decisions, social accidents, and happenstances that form directions of research, hunches, hypotheses, and insights are stripped away, intentionally and systematically, to produce "facts" in a form ready for publication. This de- / re-contextualization serves to obfuscate the process and shamanize the practitioners, but more importantly it serves to generalize the resulting fact and put it into its scientific rather than social context. And it is the scientific context- the fact's objective existence- not the social context, that makes it powerful and useful for further construction of other facts. The social context forms part of the essential background / input, but the produced facts and insights are not by nature social, nor should they be received as such.

Experiments are instrumental in this transformation process, which goes from a hunch, to a collegial suggestion, to a hypothesis, to a testable hypothesis, to the hunt for alternative hypotheses and thus for experiments designed to exclude them and to support the main hypothesis, (if true), followed by group presentations and critique, outside peer review that adduces more alternative hypotheses and possible experiments, and finally to publication and collegial acceptance (or rejection and refutation). When all this is done, the murky origins of the hunch necessarily fall away and become trivially unimportant, (other than in memoirs and reminiscences), and the fact stands alone as supported with all the armament that science can bring to bear, both in its technical testing capabilities and its social structure of critique. It thence, if lucky, becomes a sentence in a textbook. In Bruno Latour's words, it is "freed from the circumstances of its production."

The above was a recounting of the conventional (and scientist's) perspective on the evolution of scientific facts. Whether this is the case is contested by social constructivism, a movement in philosophy that adheres to antirealism, which is to say that all of what we regard as external "reality" is socially constructed, and thus science is likewise a social institution that generates conventions that by its social power it is able to foist onto a naive public, who in turn, like sheep, contribute their taxes to keep the scientific community wallowing in money and social power, cranking out yet more obscure and artificial "facts". Indeed, the very status of truth that is given to facts is fundamentally a social construct made up of a community of believing people, whatever their reasons and supposed evidence.

A few of Latour's works over the years. He declines to be post-modern, because he disagrees with the whole frame of modernity, as being somehow different from or non-continuous with the rest of history. And this stance feeds his dismissive attitude towards science and the enlightenment as being a break in kind from prior ways of understanding the world. It has just been fetishes all the way down.

Bruno Latour (along with his co-writer Steve Woolgar) waded into this controversy back in the 1970s with a French philosophical and anthropological background, to investigate what really goes on in a laboratory. He embedded himself into a leading laboratory and learned how it operated, informally and formally. This is recounted in the book "Laboratory Life" (1979; recent review), which, as usual for continental philosophers, is challenging to make sense of. The authors tend to straddle the two perspectives, both respecting and recounting the normal scientific activities and perspectives, (if rather laboriously), and then also persistently suggesting their contrary viewpoint and program that they bring to the project.
"Despite the fact that our scientists held the belief that the inscriptions could be representations or indicators of some entity with an independent existence 'out there', we have argued that such entities were constituted solely through the use of these inscriptions. ... By contrast, we do not conceive of scientists using various strategies as pulling back the curtain on pregiven, but hitherto concealed, truths. Rather, objects (in this case substances) are constituted through the artful creativity of scientists. Interestingly, attempts to avoid the use of terminology which implies the preexistence of objects subsequently revealed by scientists has led us into certain sylistic difficulties. This, we suggest, is precisely because of the prevalence of a certain form of discourse in the description of process. We have therefore found it extremely difficult to formulate descriptions of scientific activity which do not yield to the misleading impression that science is about discovery (rather than creativity and construction). It is not just that a change of emphasis is required; rather, the formulations which characterize historical descriptions of scientific practice require exorcism before the nature of this practice can best be understood."

Exorcism indeed! One might posit a simpler explanation- that science is, in fact, in the business of discovery, though with the caveat that what is to be dis-covered is never fully known beforehand, sometimes not even suspected, and thus there is a great deal of intuition, creativity, variation, and social construction involved in the process, and uneven and unpredictable results coming out. While the status of the resulting fact is never perfectly secure, and is supported by another social process of conventional agreement, that agreement is routinely granted once the preceding critical hurdles have been surmounted and leads generally to the vast pool of factual and "objective" information that finds its home in the academic literature, textbooks, college instruction, Wikipedia, etc. A pool that is further confirmed routinely by succeeding work and technical developments that depend on its objective factuality.

Latour does not, in the end, adhere to the hard program of social construction, for the simple fact that the object of the scientific story he recounts, the thyrotropin releasing factor (TRF), was found, and found to be a specific and real substance, and went on to a respected place in medical practice and the textbooks, not to mention later earning a Nobel prize. There is real comedy in the attempt, however, as the anthropologist takes on the scientists at their own game, analyzing and plumbing their depths for structures and paradigms that they themselves hardly suspect, complete with diagrams of the laboratory, pictures of the roof and other apparatus, graphs of publication trends, and verbatim interviews with protagonists, underlings, etc. It is a sort of depth-psychology of one laboratory.
"But it would be incorrect to conclude that the TRF story only exhibits the partial influence of sociological features. instead, we claim that TRF is a thoroughly social construction. By maintaining the sense in which we use social, we hope to be able to pursue the strong programme at a level apparently beyond traditional sociological grasp. In Knorr's terms, we want to demonstrate the idiosyncratic, local, heteregeneous, contextual, and multifaceted character of scientific practices. We suggest that the apparently logical character of reasoning is only part of a much more complex phenomenon that Auge calls 'practices of interpretation' and which comprises local, tacit negotiation, constantly changing evaluations, and unconscious or institutionalized gestures. ... In short, we observe how difference between the logic of scientific and non-scientific practices of interpretation are created and sustained within the laboratory."

Granted, most of this is overwrought, but the true worth of this work was that these observers came into an eminent lab and paid minute attention to what was going on, and emphasized that what comes out of the sausage machine in publication and other products is far different from the materials that go in. While the conventional approach would emphasize the preceding scientific observations and technical developments that led the leader of this lab to even contemplate that the purification of TRF from millions of dissected brains was possible and desirable, Latour emphasizes instead, and with some success, the social contingencies that surrounded the original uncertainties, the slow progress, the false leads and constantly discarded "bad results", the huge amount of money and effort required, and other nitty-gritty that forms the day-to-day of laboratory life. The latter emphasis is useful in accounting for how science gets done, but discards other crucial inputs, and is ultimately not at all convincing as a general theory of what science accomplishes or is.

I think the confusion arises fundamentally (apart from professional jealousy) from the fact that social constructivism is perfectly valid for some areas of our lives, such as arts, fashion, religion, morality, and to some extent, politics. Many problems do not have an objective criterion, and are socially constructed on an ongoing basis with criteria that boil down to what and who is thought good, whether for the individual, family, collective, etc. And the insistent denial of the total social construction of one's own field- as is understandably routine among scientists- is particularly vehement (and unfounded) in the case of religion and has lent the latter bizarre and extraordinary power through the centuries, which the deconstructivist project is entirely appropriate and well-prepared to investigate. And it should be said that many forms of primitive and pseudo-science partake of this form as well, if not of outright fraud. So the line is hardly stable or absolute. But when it comes to science as practiced in the enlightenment tradition, with a variety of safeguards and institutional practices that feature competition, peer review at multiple levels, and final public transparency, the approach falls flat.

  • A contemporary accounting of this scientific race, last of a 3-part series.
  • TRF is one of a series of "releasing hormones", operating between the hypothalamus and pituitary. Or should the word "is" be put in quotes?
  • A critique of the critique.
  • Mankiw takes on MMT, and obsesses about inflation, along mainstream lines.
  • MMT replies.
  • Limited liberty at Liberty University.
  • Notes from the Taliban.
  • Birds: who cares?
  • A cult is exposed.

Saturday, December 21, 2019

We Are All Special

A study in yeast finds that rare mutations have outsize influence on traits.

The word "mutation" is frowned upon in these politically correct days. While we may have a human reference genome sequence derived from some guy from Buffalo, New York, all genomes are equal, and thus differences between them and the reference are now termed "variations" rather than mutations.

After the first human genome was cranked out, the natural question was- How do we differ, and what do those differences say about our medical futures and our evolutionary past? This led to the 1000 genomes project, and much more, to the point that whole genome sequencing is creeping into normal medical practice, along with even more common panels of a smattering of genes analyzed for specific issues, principally cancers. Well, we differ a lot, and this data has been richly rewarding, especially in forensic and evolutionary terms, but only somewhat in medical terms. The ambition for medical studies has been extremely high- to interpret from our code why exactly we are the way we are, and what our medical fate will be. And this ambition remains unfulfilled. It will take decades to get there, and our code is far from controlling everything- even complete knowledge of our sequences and their impact on our development and medical issues will leave a great deal to accidents of fate.

The first approach to mining the new genomic information, especially variations among humans, for medically useful information was the genome-wide association study (GWAS). Such studies put the 1000 Genomes data (or some other laboriously accumulated set of sequences, which came tagged with medical information) into a blender and asked which variations between people's sequences correlated with variations in their diseases. Did diabetes correlate with particular genes being defective or altered? Despite huge resources and high hopes, these studies yielded very little.

The reason was that these researchers' grasp of variation (or mutation), and especially of the intricate field of evolutionary population genetics, was in a somewhat primitive state. They only accepted variations that occurred at least a few times in their samples, so that they could be sure they were not just random sequencing mistakes. In a population of, say, 1000, any variation that occurs a few times has a particular nature, which is to say that it must be somewhat stable in the population and have a long history, to allow it to rise to such a (modest, but significant) level of prevalence. This in turn means that it cannot have a severe effect, in evolutionary terms, which would otherwise have cut its history in the population rather short. So it turned out that these researchers were studying the variations least likely to have any effect, and for all the powerful statistics they brought to bear, little fruit turned up. It was a very frustrating experience for all concerned.
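To make that population-genetics intuition concrete, here is a toy simulation- a rough sketch only, assuming a haploid Wright-Fisher population; the population size, selection coefficients, and the 1% threshold are illustrative assumptions of mine, not parameters from any of the studies discussed. It asks how often a brand-new mutation ever reaches 1% frequency, depending on how strongly selection acts against it.

import numpy as np

rng = np.random.default_rng(0)

def peak_count(s, N=1000, generations=2000):
    """Follow a single new mutation in a haploid Wright-Fisher population
    of size N, where s is the selection coefficient against carriers
    (s = 0 means the variant is neutral). Returns the highest copy number
    the allele ever reaches before it is lost or fixed."""
    count, peak = 1, 1
    for _ in range(generations):
        p = count / N
        p_next = p * (1 - s) / (p * (1 - s) + (1 - p))  # selection
        count = rng.binomial(N, p_next)                  # genetic drift
        peak = max(peak, count)
        if count == 0 or count == N:
            break
    return peak

# How often does a new variant ever reach 1% frequency (10 copies out of 1000)?
for s in (0.0, 0.01, 0.1):
    peaks = [peak_count(s) for _ in range(2000)]
    print(f"s = {s:<5} fraction ever reaching 1%: {np.mean([pk >= 10 for pk in peaks]):.3f}")

Neutral and nearly neutral variants occasionally drift up to observable frequencies, while strongly deleterious ones almost never do- which is why a variant common enough to pass the old GWAS filters is, almost by definition, one with little effect.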

A recent paper recapitulated some of these arguments in the setting of yeast genetics. The topic remains difficult to approach in humans, because rare variations are, by definition, rare, and hard to link to diseases or traits. Doing so in a clinical study requires statistical power, which arises from the number of times the linkage is seen- a catch-22 unless one can find an obscure family pedigree or a Turkish village where such a trait is rampant. In yeast, one can generate lineages of infinite size at will, and the sequencing is a breeze, with a genome 1/250 the size of ours. The only problem is that the phenotypic range of yeast is slightly impoverished compared to ours(!) Yet what variety they can display is often quantifiable, via growth assays. The researchers used 16 yeast strains from diverse backgrounds as parents, (presumably containing a wide variety of distinctive variations), generated and sequenced 28,000 progeny, and subjected them to 38 growth conditions to elicit various phenotypes.

The major result, graphing the frequency of variations against their phenotypic effect. The effect goes up quite strongly as the frequency goes down.

These researchers claim that they can account for 73% of phenotypic variation from their genetic studies- far higher than the rate seen for any complex human trait. They see on average 120 loci affecting each trait across the study, and 12 loci affecting each trait in any one mating. Based on past work with libraries of yeast strains, they could also classify the mutations, er, variations they saw coming from these diverse parents as either common (similar to what was analyzed in the classic GWAS experiments in humans, occurring at 1% or more) or rare. Sure enough, the rarer the allele, the bigger its effect on phenotype, as shown below. In rough terms, the rare variants accounted for half the phenotypic variation, despite comprising only a quarter of the genetic variation.

In an additional analysis, they compared all these variants to the corresponding sequences in a closely related yeast species, in order to judge which allele (the variant / mutant or the reference / normal version) was ancestral, i.e. older. As expected, the rare variations that led to phenotypic effects were mostly of recent origin, and more so the stronger their effect.
"Strikingly, no ancient variant decreased fitness by more than 0.5 SD units, whereas 41 recent variants did."

The upshot is that to learn about the connection between genotype and phenotype, one needs significant (and typically deleterious) mutations, as geneticists have known since the time of Mendel and Morgan. Thus the use of common variants (with small effects) to analyze human syndromes and diseases has yielded very little, either medically or scientifically, while the study of rare variants has been a gold mine. And we all have numerous rare variants- they come up all the time, and are likewise selected out of existence all the time, due to their significant effects.

The scale of the experiments done here in yeast allows high-precision genetic mapping. Here, one trait (growth in caffeine) is mapped against correlating genomic variations. The correlations home in on variations in the TOR1 gene, a known target of caffeine and a master regulator of cell growth and metabolism.

  • Stiglitz on neoliberalism.
  • Thoughts about Britain's (and our) first past the post voting system.
  • Economists have no idea what they are talking about- Phillips curve edition.
  • Hayek and the veneration of prices.
  • Real trial or show trial?
  • The case for Justice Roberts.
  • Winning vs success.
  • Psychotic.
  • Lead from gasoline is being remobilized by wildfires.
  • Winter has been averted.

Saturday, December 14, 2019

Success is an Elixir

We are besotted by success. For very obvious evolutionary reasons, but with problematic consequences.

Why is the James Bond franchise so compelling? It got more cartoonish over the years, but the old Sean Connery embodied a heady archetype of the completely successful hero. A man as skilled in vetting wines as in flying planes, as debonair with the ladies as he was in fighting hand-to-hand, all while outwitting the most malevolent and brilliant criminal minds. Handsome, witty, and brutally effective in all he turned his hand to, there was little complexity, just relentless perfection, other than an inexplicable penchant for getting himself into dramatic situations, from which he then suavely extricated himself.

We worship success, for understandable reasons, but sometimes a little too much. As Reagan said, nothing succeeds like success. It is fundamental to our growth from childhood to adulthood, to demonstrate and be recognized for some kind of effectiveness- passing tests, graduating from school, becoming skilled in some art or profession, which is socially recognized as useful, maybe through the medium of money. The ancient rites of passage recognized this, by setting a key test, such as killing the bear, or withstanding some brutal austerity. Only through effectiveness in life can we justify that life to ourselves and to others. The role can take many forms- extroverts tend to focus on social power- the capability of bending others to their will, while introverts may focus more on other skills like making tools or interpreting the natural world.

The Darwinian case is clear enough- each life is a hero's quest to express one's inner gifts and capabilities, in order to succeed not only in thriving in the given environment, but in replicating, creating more successful versions of one's self which do so all over again. Women naturally fall for successful men, as James Bond so amply demonstrated, and as is seen in so many fields, from basketball to finance.


But all this creates some strong cognitive biases whose influences are not always positive. Junior high school is the most obvious realm where these play out. Children are getting used to the idea that life is not fair, and that they can communally form social standards and decisions about what constitutes success, which then victimize those on the losing end- what is cool, what is lame, who is a loser, etc. Popularity contests, like politics and the stock market, are notorious for following fashions that valorize what one generation may believe is success, only to have the next generation look back in horror and redefine success as something else. In these cases, success is little more than a commonly held opinion about success, which leads to the success of con men like our current president, who insists that everything he does is perfectly successful, and who inspires sufficient fear, or confidence, or suspension of disbelief, or is so ably assisted by the propaganda of his allies, that many take him seriously. Indeed, it is exactly the unaccountable support of his allies who surely know better that forces others in the wider circles of the society to take seriously what no rational or decent person would believe for a second.

The status of minorities is typically a "loser" status, since by definition their beliefs and practices, and perhaps their very existence, are not popular. While this may be a mark of true Darwinian lack of success, it is far more likely to be an accident of, or an even less innocent consequence of, history. In any case, our worship of success frequently blinds us to the value of minorities and minority perspectives, and is a large reason why such enormous effort has been expended over millennia, on religious, legal, constitutional, and cultural planes, to remedy this bias and promote such things as democracy, diversity, due process, and respect for contrasting perspectives.

We are victimized in many other ways by our mania for success- by advertisers, by the gambling industry, by war mongers, among many others, who peddle easy success while causing incalculable damage. While it is hard to insulate ourselves from these social influences and judgements- which are, after all, the soul of evaluating success- being on our guard, as with any other cognitive bias, is essential to avoiding cults, traps, and, ultimately, expensive failure.

Saturday, December 7, 2019

Links Only

Due to the press of business, only links this week.

  • Stiglitz on elites, neoliberalism, and the end of history.
  • Inspired by Ukraine.
  • Gaslighting on an epic scale.
  • Or maybe religion is the problem? It was always a factually challenged affair.
  • The gig workers of yesteryear- gyppo loggers in the Northwest.

Saturday, November 30, 2019

Metrics of Muscles

How do the microstructures of the muscle work, and how do they develop to uniform sizes?

Muscles are gaining stature of late, from being humble servants for getting around, to being a core metabolic organ and playing a part in mental health. They are one of the most interesting tissues from a cell biology and cytoarchitectural standpoint, with their electrically triggered contractions, and their complex and intensely regimented organization. How does that organization happen? There has been a lot of progress on this question, one example of which is a recent paper on the regulation of Z-disk formation, using flies as a model system.

A section of muscle, showing its regimented structure. Wedged in the left middle is a cell nucleus. The rest of these cells are given over to sarcomeres- the repeating structure of muscles, with dark myosin central zones, and the sharp Z-lines in the light regions that anchor actin and separate adjacent sarcomeres.

The basic repeating unit of muscle is the sarcomere, which occurs end-to-end within myofibrils, which are bundled together into muscle fibers, which constitute single muscle cells. Those cells are then in turn bundled into fascicles, bundles, and whole muscles. The sarcomere is capped at each end by plates called Z-disks, which anchor actin filaments that travel lengthwise into the sarcomere (to variable distances depending on contraction). In the center of the sarcomere, interdigitated with the actin filaments, are myosin filaments, which look much thicker in the microscope. Myosin contains the ATP-driven motor which pulls along the actin, causing the whole sarcomere to contract. The two assemblies contact each other like two combs with interdigitated teeth.

Some molecular details of the sarcomere. Myosin is in green, actin in red. Titin is in blue, and nebulin in teal. The Z-disks are in light blue at the sides, where actin and titin attach. Note how the titin molecules extend from the Z-disks right through the myosin bundles, and meet in the middle. Titin is highly elastic, unfolding like an accordion, and also has stress sensitivity, containing a protein kinase domain (located in the central M-band region) that can transmit mechanical stress signals. The diagram at bottom shows the domain structure of nebulin, which has the significant role of metering the length of the actin bundles. It is also typical in containing various domains that interact with numerous other proteins, in addition to repetitive elements that contribute to its length.

There are over a hundred other molecules involved in this structure, but some of the more notable ones are huge structural proteins, the biggest in the genome, which provide key guides for the sizes of some sarcomeric dimensions. Nebulin is a ~800 kDa protein that wraps around the actin filaments as they are assembled out from the Z-disk and sets the length of the actin polymer. The sizes of all the components of the sarcomere are critical, so that the actin filaments don't run into each other during contraction, the myosins don't run into the Z-disk wall, etc. Everything naturally has to be carefully engineered. Titin, for its part, is a protein of ~4,000 kDa (over 34,000 amino acids long) that is highly elastic and spans from the Z-disk, through the myosin bundles, and to a pairing site at the M-line. In addition to forming the core around which the myosin motors cluster, thus determining the length of the myosin region, it appears to set the size of the whole sarcomere, and forms a spring that stores elastic force, among much else.

Many of these proteins come together at the Z-disk. Actin attaches to alpha-actinin there, and to numerous other proteins. One of these is ZASP, the subject of the current paper. ZASP joins the Z-disk very early, and contains domains (PDZ) that bind to alpha-actinin, a key protein that anchors actin filaments, and other domains that bind to each other (ZM and LIM). To make things interesting, ZASP comes in several forms, from a couple of gene duplications and also from alternative splicing that includes or discards various exons during the processing of transcripts from these genes. In humans, ZASP has 14 exons and at least 12 differently spliced forms. Some of these forms include more or fewer of the self-interacting LIM domains. These authors figured that if the ZASP protein plays an early and guiding role in controlling Z-disk size, it may do so by arriving in its full-length, fully interlocking version early in development, and then later arriving in shorter "blocking" versions, lacking self-interacting domains, thereby terminating growth of the Z-disks.
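To see what this "blocking isoform" idea implies quantitatively, here is a cartoon simulation- entirely my own construction, not the authors' model: a Z-disk grows by adding subunits drawn from a pool in which a fraction p_full are full-length, self-interacting ZASP, and growth stops the first time a truncated, non-interacting form is incorporated.

import random

def grow_zdisk(p_full, rng):
    """Grow one Z-disk: keep adding subunits while full-length ZASP
    (probability p_full) is drawn; a truncated 'blocking' form ends growth."""
    size = 0
    while True:
        size += 1
        if rng.random() > p_full:   # truncated isoform caps the disk
            return size

rng = random.Random(42)
for p_full in (0.5, 0.9, 0.99):
    sizes = [grow_zdisk(p_full, rng) for _ in range(10_000)]
    mean = sum(sizes) / len(sizes)
    print(f"p_full = {p_full:<5} mean size = {mean:6.1f}  max = {max(sizes)}")

Shifting the isoform ratio does set the average size (roughly 1/(1 - p_full)), but simple stochastic capping yields a broad, geometric size distribution- quite unlike the strikingly uniform disks in the micrographs- which is in line with the reservations raised at the end of this post about whether isoform ratios alone could account for such regimentation.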

Overexpression of the ZASP protein (bottom panels) causes visibly larger, yet also somewhat disorganized, Z-disks in fly muscles. Note how beautifully regular the control muscle tissue is, top. Left sides show fluorescence labels for both actin and ZASP, while right sides show fluorescence only from ZASP for the same field.

The authors show (above) that overexpressing ZASP makes Z-disks grow larger and somewhat disorganized, while conversely, overexpressing truncated versions of ZASP leads to smaller Z-disks. They then show (below) that in the wild-type state, the truncated forms (from a couple of diverged gene duplicates) tend to reside at the outsides of the Z-disks, relative to the full length forms. They also show in connection with this that the truncated forms are also expressed later in development in flies, in concordance with the theory.

Images of Z-disks, end-on. These were not mutant, but are expressing fluorescently labelled ZASP proteins from the major full length form (Zasp52, c and d), or from endogenous gene duplicates that express "blocking" shortened forms (Zasp66 and Zasp67, panels in d). They claim by their merged image analysis (right) to find that full length ZASP resides with higher probability near the centers of the disks, while the shorter forms reside more towards the outsides.

Compared with what else is known, (and unknown), this is a tiny step. It also raises a lot of questions- could gene expression be so finely controlled as to create the extremely regimented Z-disk pattern? (Unlikely) And if so, what controls all this gene expression and alternative splicing, both in normal development, and in wound repair and other times when muscle needs to be rebuilt, which cannot be solely time-dependent, but appears, from the regularity of the pattern, to follow some independent metric of ideal Z-disk size? It is likely that there is far more to this story that will come out during further analysis.

It is notable that the Z-disk is a hotbed of genes that cause myopathies of various sorts when mutated. Thus the study of these structures, while fascinating in its own right and a window into the wonders of biology and our own bodies, is also informative in medical terms, and while unlikely to lead to significant treatments until the advent of gene therapy, may at least provide understanding of syndromes that might otherwise be thought of as acts of a cruel god.


Saturday, November 23, 2019

Redistribution is Not Optional, it is Essential

Physics-inspired economic models of inequality.

Thomas Piketty marveled at the way wealth concentrates under normal capitalist conditions, as if by magic. He chalked it up to the maddening persistence of positive interest rates, even under conditions where capital is in vast excess. Once you have a certain amount of wealth, and given even modest interest, money just breeds on its own, certainly without labor, and almost without thinking.

A recent Scientific American article offered a different explanation, cast in a more physics-style framework. It recounts what is called a "yard sale" model of a perfectly free economic exchange, where each transaction is voluntary and transfers net wealth in a random direction. Even under such conditions, wealth concentrates inexorably, till one agent owns everything. Why? The treatment is a bit like the statistical mechanics of gases, which follow random walks of individual particles. But where gases are subject to the constant balancing force of pressure that strongly discourages undue concentrations, the economic system contains the opposite- ratchets by which each agent greedily holds on to what it has. At the same time, poorer agents can only transact from what little they have, but stand to lose more (relatively) when they do. They thus have a stricter limit on how often they can play the game, and are driven to penury long before wealthier players. Even a small wealth advantage insulates that player against random adversity. Put that through a lengthy random walk, and the inevitable result is that all the wealth ends up in one place.
"In the absence of any kind of wealth redistribution, Boghosian et al. proved that all of the wealth in the system is eventually held by a single agent. This is due to a subtle but inexorable bias in favor of the wealthy in the rules of the YSM [yard sale model]: Because a fraction of the poorer agent’s wealth is traded, the wealthy do not stake as large a fraction of their wealth in any given transaction, and therefore can lose more frequently without risking their status. This is ultimately due to the multiplicative nature of the transactions on the agents’ wealth, as pointed out by Moukarzel." - Boghosian, Devitt-Lee, Wang, 2016
"If we begin at the point 1/2, the initial step size is 1/4. Suppose the first move is to the right, reaching the point 3/4. Now the step size is 1/8. If we turn back to the left, we do not return to our starting point but instead stop at 5/8. Where will we wind up after n steps? The probability distribution for this process has an intricate fractal structure, so there is no simple answer, but the likeliest landing places get steadily closer to the end points of the interval as n increases. This skewed probability distribution is the ratchetlike mechanism that drives the yard-sale model to states of extreme imbalance." ... "If some mechanism like that of the yard-sale model is truly at work, then markets might very well be free and fair, and the playing field perfectly level, and yet the outcome would almost surely be that the rich get richer and the poor get poorer." - Hayes, 2002

It is important to emphasize that the yard sale model is a libertarian's dream. It models perfect freedom and voluntary economic activity, if on a very simplistic level. But its implications are profound. It describes why most people in a free economic system own little more than their labor. The authors supplement this model with three more parameters, to align it better with reality. First is a wealth advantage factor. Our free economic system is not free or fair as a matter of fact, and the wealthy have many economic advantages, from lower interest rates on loans and better returns on investments, to better education and more political power. Obviously, this is hardly conducive to greater equality, but rather to sharper and faster inequality. Second is a redistribution factor, in recognition that taxes and other costs have a redistributing effect, however small. And third is an allowance for negative wealth, which characterizes a fair portion of most societies, given our addiction to debt. Using these extra factors, these researchers can easily model wealth distributions that match reality very closely.
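Two of these knobs can be bolted onto the same sketch. The forms below are my own simplifications (a crude win-probability bias for the wealth advantage, a flat periodic tax-and-share for redistribution, and negative wealth omitted), not the exact equations of Boghosian and colleagues, but they show the qualitative point: even a small redistribution rate keeps the Gini coefficient (defined with the Lorenz-curve figures below) well away from total oligarchy.

import numpy as np

rng = np.random.default_rng(2)

def biased_yard_sale(n_agents=1000, n_steps=500_000, stake=0.1,
                     advantage=0.02, redistribution=0.0):
    """Yard-sale exchange with two extra knobs (simplified forms):
    advantage      -- the richer party wins with probability 0.5 + advantage
    redistribution -- periodically, each agent pays this flat fraction of
                      wealth into a pot that is shared out equally"""
    w = np.ones(n_agents)
    for step in range(n_steps):
        i, j = rng.choice(n_agents, size=2, replace=False)
        amount = stake * min(w[i], w[j])
        rich, poor = (i, j) if w[i] >= w[j] else (j, i)
        if rng.random() < 0.5 + advantage:
            w[rich] += amount; w[poor] -= amount
        else:
            w[rich] -= amount; w[poor] += amount
        if redistribution and step % n_agents == 0:
            taxes = redistribution * w
            w = w - taxes + taxes.sum() / n_agents
    return w

def gini(w):
    """Gini coefficient: 0 is perfect equality, 1 is total concentration."""
    w = np.sort(w)
    n = len(w)
    return (2 * np.arange(1, n + 1) - n - 1) @ w / (n * w.sum())

for chi in (0.0, 0.01, 0.05):
    print(f"redistribution = {chi:<5} Gini = {gini(biased_yard_sale(redistribution=chi)):.2f}")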

Lorenz curves showing income inequality in the US, and its growth in recent decades. Higher-income families are at bottom right, and their cumulative share of income is dramatically higher than that of lower-income families. This graph gives rise to the Gini coefficient. Since this graph is binned in quintiles, it hides the even more dramatic acceleration of income at the highest 10%, 1%, and 0.1% levels.

An example of a model curve. The teal area (C) represents negative wealth, a fact of life for much of the population. The intersection of curve B with the right axis represents a result where one person or family has 40% of all wealth. We are not quite there in reality, but it is not an unrealistic outcome considering current trends. Gini coefficients are generally defined as the ratio of areas A/(A+B).

The article, and other work from this group, finds that the redistribution factor is absolutely critical to the fate of society. Sufficiently high, it can perpetually forestall collapse to total inequality, or even oligarchy, which is the common human condition. But if left below that threshold, it may delay, but cannot forestall, the inevitable.

What is that threshold? Obviously, it depends quite a bit on the nature of the society- on its settings of wealth advantage and redistribution. But rather small amounts of redistribution, on the order of 1 or 2 %, prevent complete concentration in one person's or oligarchy's hands. To make a just society, however, one that mitigates all this accidental unfairness of distribution, would take a great deal more.

There have traditionally been several social solutions to gross inequality, after humanity gained the capacity to account and accumulate wealth. One is public works and the dole, which the Romans were partial to. In their heyday, the rich vied to attain high offices and fund great works which benefitted Roman society. Another is a debt jubilee, where debts were forgiven at some interval or on special occasions. Another, of course, is revolution and forcible reforms of land and other forms of wealth. Karl Marx, along with many others, clearly sensed that something was deeply wrong with the capitalist system when allowed to run unfettered. And despite all the ameliorating regulations and corrective programs since, we are back in a gilded age today, with all-time highs of gross inequality. To make matters worse, we have been backsliding on the principle of inheritance taxes, which should prevent the transgenerational and wholly undeserved accumulation of wealth and power.

Redistribution turns out, on this analysis, to be essential to a sustainable and just society. It is not a pipe dream or violation of the natural order, or of "rights". Rather, it is the right of every member of a society to expect that society to function in a fair and sustainable way to provide the foundation for a flourishing life by building each member's talents and building the social and material structures that put them to effective use. Capitalism and free exchange is only one ingredient in this social system, not its purpose or its overriding mechanism. That is why the wealth tax that has been proposed by Elizabeth Warren is so significant and has generated such interest and support. It speaks directly and effectively to one of the central problems of our time- how to make a sustainable system out of capitalism.

Saturday, November 16, 2019

Gene Duplication and Ramification

Using yeast to study the implications of gene duplication.

Genes duplicate all the time. Much of our forensic DNA technology relies (or at least used to rely) on repetitive, duplicated DNA features that are not under much selection, and thus can vary rapidly in the human population due to segment duplication and recombination/elimination. Indeed, whole genomes duplicate with some (rare) frequency. Many plants are polyploid, having duplicated their genomes one, two, three or more times, attaining prodigious genome sizes. What are the consequences when a gene suddenly finds itself making products in competition with, or collaboration with, another copy?

A recent paper explored this issue to some small degree in the case of proteins that form dimeric protein complexes in yeast cells. Saccharomyces cerevisiae is known to have undergone a whole genome duplication in the distant past, which led to a large set of related proteins called paralogs, which is to say homologs (similar genes) that originated by gene duplication and have subsequently diverged. Even more specifically, they are termed ohnologs, since they arise from a known genome duplication event (this special class is interesting since for such organisms, it makes up a huge class of duplicates that all arose at the same time, making some aspects of evolutionary analysis easier). A question is whether that divergence is driven by neutral evolution, in which case their resemblance quickly degrades, or whether selection continues for one homodimer, for both homodimers, or even for the complex between the two partners, which is termed a heterodimer.

The authors go through simulations of several different selection regimes, applied at atomic scale to known protein paralogs, to ask what effect selection on one feature has on the retention or degradation of other features of the system. Another term for this genetic relationship is pleiotropy, which means the effects that one gene can have on multiple functions, in this case heterodimeric complexes in addition to homodimeric complexes, which often have different, even opposite, roles.
One example of a homodimeric protein (GPD1, glycerol-3-phosphate dehydrogenase) that has an ohnolog (GPD2) with which it can heterodimerize. At top is the structure- yellow marks the binding interface, while blue and pink mark the rest of each individual protein monomer. The X axis of each graph is time, as the simulation proceeds, adding mutations to the respective genes, and enforcing selection as the experimenters wish, based on binding energy of the protein-protein interface, as calculated from the chemistry. That binding energy is the Y-axis. Dark blue is one homodimer (GPD1-GPD1), pink is the other homodimer (GPD2-GPD2), and purple is the binding energy of the heterodimer (GPD1-GPD2).

In a neutral evolution regime lacking all selection, (top graph), obviously there is no maintenance of any function, and the ability of the molecules to form complexes of any kind steadily degrades with time- the binding energy of the dimers goes to zero, at the origin. But if selection is maintained for the ability of each gene product to form its own homodimer, then heterodimer formation is maintained as well, apparently for free (second graph). Similarly, if only selection for a heterodimer is maintained, the ability of each to form homodimers is also maintained for free. At bottom, if only one homodimer is under positive selection, then the formation of the other homodimer degrades most rapidly, and the heterodimer degrades a bit less rapidly.

All this is rather obvious from the fact that the binding interface (see the structure at the top of the figure for the example of GPD1) is the same for both proteins. Maintaining that interface through positive selection necessarily keeps it relatively unchanged, which keeps it functional for the other interactions that occur on exactly the same face- which is to say, the heterodimeric interaction is preserved when a homodimeric interaction is selected for, and vice versa.
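A stripped-down version of this logic fits in a few lines of code (a toy of my own, not the authors' atomistic simulations: binding is reduced to a single score per complex, mutation effects are drawn from a mildly destabilizing distribution, and selection is a simple threshold- all assumptions). Because a mutation in one paralog's interface necessarily hits every complex containing that paralog, selecting on any one complex constrains the shared interface and drags the others along.

import random

def simulate(selected, steps=2000, seed=0):
    """Evolve binding scores for the two homodimers (AA, BB) and the
    heterodimer (AB). Each step mutates one paralog's interface; the
    mutation is accepted only if every complex named in `selected`
    stays above a fitness threshold. Scores are clamped at 0 (no binding)."""
    rng = random.Random(seed)
    binding = {"AA": 10.0, "BB": 10.0, "AB": 10.0}
    threshold = 8.0
    for _ in range(steps):
        target = rng.choice("AB")            # which paralog mutates
        delta = rng.gauss(-0.05, 0.15)       # most interface mutations destabilize
        # a mutation in one paralog hits every complex containing it
        trial = {c: max(0.0, b + delta * c.count(target))
                 for c, b in binding.items()}
        if all(trial[c] >= threshold for c in selected):
            binding = trial
    return binding

for regime in ([], ["AA", "BB"], ["AB"], ["AA"]):
    final = simulate(regime)
    label = "+".join(regime) if regime else "neutral"
    print(f"selection on {label:>7}: " +
          "  ".join(f"{c}={v:5.1f}" for c, v in final.items()))

Under neutrality everything decays; selecting on either homodimer pair or on the heterodimer alone keeps all three complexes near their starting strength; selecting on just one homodimer lets the other complexes degrade- the same qualitative pattern as the published figure.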

So why keep these kinds of duplicates around? One reason is that, while preserving their binding interface with each other, they may diverge elsewhere in their sequence, adopting new functions over time. This kind of thing can lead to the formation of ever more elaborate complexes, which are quite common. Having two genes coding for related functions can also insulate the organism from mutational defects in either one, which would otherwise impair the homodimeric complex more fully. By the same token, this insulation can allow variational space for the development of novel functions, as in the first point.

So, nothing earthshaking in this paper, (which incidentally included a good bit of experimental work which I did not mention, to validate their computational findings), but it is nice to see yeast still serving as a key model system for basic questions in molecular biology. Its genomic history, which includes a whole genome duplication, and its exquisite genetic and molecular tool chest, make it ideal for this kind of study.


Saturday, November 9, 2019

Power

And lack of power.

The recent power shutdowns in California were maddening and disruptive. They also showed how utterly dependent we are on the oceans of fossil fuels we burn. With every convenience, gadget, trip, comfort, appliance, and delivery we get more enmeshed in this dependence, and become zombies when the juice is suddenly cut off. Not only is our society manifestly not robust, but every drop of fuel burned makes the problem still worse: the biosphere's decline to miserable uninhabitability. The children are right to be pissed off.

Do we have the power to kick this habit? This addiction makes opioids look like amateurs. It won't be a matter of checking into rehab and going through a few weeks of detox. No, it is going to take decades, maybe centuries, of global detox to kick this problem from hell. Living without our fix of CO2 is impossible on any level- personal, social, political, economic, military. And the pushers have been doing their part to lull us even further into complacency, peddling lies about the risks and hazards of their industry, about their own research into climate change, and about what our future looks like- not to mention our complicity in it.

Do we have the moral and political power to get off fossil fuels? Not when half of our political community is in denial, unwilling to take even one step along the 12-step path. I am studying the Civil War on the side, which exhibits a similar dynamic of one half of the US political system mired in, even reveling in, its moral turpitude. It took decades for the many compromises and denials to play themselves out, for the full horror to come clear enough that decent people had had enough, and were ready to stamp out the institution of slavery. Which was, somewhat like the fossil fuels of today, the muscular force behind the South's economy and wealth.

Do we have the technical and intellectual power to kick this habit? Absolutely. Solar and wind are already competitive with coal. The last remaining frontier is the storage problem- transforming intermittent and distributed forms of power into concentrated, dispatchable power. And that is largely a cost problem, with many possible solutions available, each at its price. So given a high enough price on fossil carbon, we could rapidly transition to other sources of power, for the majority of uses.

A 300 MW solar power plant in the Mojave.

Does the US have the power to affect climate change policy around the world? We don't have all the power, but have a great deal. If we were to switch from a regressive laggard to a leader in decarbonization, we would have a strong effect globally, both by our example and influence, and by the technical means and standards we would propagate. We could amplify those powers by making some of our trade policy and other relations more integrated with decarbonization policy.

Do individuals have the power to address these issues? The simple answer is no- all the virtuous recycling, biking, and light-bulb changing has little effect, and mostly liberates the unused fossil fuels for someone else to use at the currently criminally low prices. Individuals also have little power over the carbon intensity of the many products, services, and infrastructure they use. Maybe it is possible to eat less meat, and avoid fruit from Chile. But we cannot unplug fully from this system- we need to rewire the system. It is fundamental economics that dictates this situation, which is why a stiff carbon tax and related regulation, with the associated political and moral will, are so important.

Finally, does the State of California have the power to take responsibility for the PG&E mess? Absolutely, but probably not the will. The power shutdowns led to a common observation that the state should just buy PG&E at its bankrupt price and run it in the public interest. But keen observers have noted that the state's politicians would much rather have someone else to blame, than be saddled with a no-win institution that puts the blame on them. Power lines are going to cause fires in any case, unless we cough up the billions needed to put them underground. Customers will always complain about the price of utilities, so it is hard to see the state stepping up to this mess, or even reforming the public utilities commission, which has been so negligent as well.

  • Why did the GOP nominate, and the American people elect, a Russian asset to the White House?
  • Battle lines on health care.
  • Point to Bernie.
  • The church and psycho-social evolution.

Saturday, November 2, 2019

To Model is to Know

Getting signal out of the noise of living systems, by network modeling.

Biology is complex. That is as true on the molecular level as it is on the organismal and ecological levels. So despite all the physics envy, even something as elegant as the structure of DNA rapidly gets wound up in innumerable complexities as it meets the real world and needs reading, winding, cutting, packaging, synapsing, recombining, repairing, etc. This is particularly true of networks of interactions- the pathways of (typically) protein interactions that regulate what a cell is and does.

An article from several years ago discussed an interesting and influential way to learn about these interactions. The advent of "big data" in biology allowed us to do things like tabulate all the interactions of individual proteins in a cell, or sample the abundance of every transcript or protein in cells of a tissue. But it turned out that these alone did not lead directly to the elucidation of how things work. Where the genome was a parts-ordering list, offering one catalog number and description for each part, these experiments provided the actual parts list- how many screws, how many manifolds, how many fans, etc., and occasionally, what plugs into what else. These were all big steps ahead, but hardly enough to figure out how complicated machinery works. We still lack the blueprint, one that ideally is animated to show how the machine runs. But that is never going to happen unless we build it ourselves. We need to build a model, and we need more information to do so.

These authors added one more dimension to the equation- time, via engineered perturbations to the system. Geneticists and biochemists have been doing this (aka experiments- such as mutations, reconstitutions, and titrations) forever, including in gene expression panels and other omics data collections. But employing a perturbation method in a systematic and informative way on the big data level in biology remains a significant advance. The problem is called network inference- figuring out how a complex system works, (which is to say, making a model), now that we are given some, but not all, of the important information about its composition and activities. And the problem is difficult because biological networks are frequently very large and can be modeled in an astronomical number of ways, given that we have scanty information about key internal aspects. For instance, even if many individual interactions are known from experimental data, not all are known, many key conditions (like tissue, cell type, phase of the cell cycle, local nutrient conditions etc.) are unknown, and quantitative data is very rare.

One way to get around some of these specifics is to poke the system somewhere and track what happens thereafter. It is a lot like epistasis analysis in genetics, where if you, say, mutate a set of genes acting in a linear process, the ones towards the end can not be cured by supplying chemical intermediates that are made upstream- the later genes are "epistatic" to those earlier in the process. Such logic needs to be expanded exponentially to address inference over realistic biological networks, and gets the authors into some abstruse statistics and mathematics. Their goal is to simplify the modeling and search problem enough to make it computationally tractable, while still exploring the most promising parts of their parameter space to come up with reasonably accurate models. They also seek to iterate- to bring in new perturbation information and use it to update the model. One step is to discretize the parameters, rather than exploring continuous space. Second is to use preliminary calculations to get near optimal values for their model parameters, and thereafter explore those approximated local spaces, rather than all possible space.

Speed of this article's method (teal) compared with a conventional method for the same task (green).

All this is done over again for experimental cases, where some perturbation has been introduced into their real system, generating new data for input to the model. Their improved speed of calculation is critical here, enabling as many iterations and alterations as needed, to update and refine the model. If the model is correct, it will respond accurately to the alteration, giving the output that is also observed in real life. Such a model then makes it possible to perform virtual perturbations, such as simulating the effect of a drug on the model, which then predicts entirely in silico what the effects will be on the biological network.
"It is also useful, as an exercise, to evaluate the overall performance of the BP algorithm on data sets engineered from completely known networks. With such toy datasets we achieve the following: (i) demonstrate that BP converges quickly and correctly; (ii) compare BP network models to a known data-generating network; and (iii) evaluate performance in biologically realistic conditions of noisy data from sparse perturbations."
...
"Each model is then a set of differential equations describing the behavior of the system in response to perturbations."

The upshot of all this is models that are roughly correct, and influential on later work. The figure below shows (A) the smattering of false positive and missing (false negative) interactions, but (B) accounts for most of this error as shortcuts of various kinds- the inference of regulation that is globally correct, but may be missing a step here or there. So they suggest that the scoring is actually better than the roughly 50-70% correct rate that they report.

An example pair of interconnecting pathways inferred from experimental protein abundance data and perturbed abundance data, with the protein molecules as nodes and their interactions as arrows. Where would a drug have the most effect if it inhibited one of these proteins?

They offer one pathway as an example, with an inferred pattern of activity, (above), and a few predictions about what proteins would be good drug targets. For example, PLK1 in this diagram is a key node, and has dramatic effects if perturbed. This came up automatically from their analysis, but PLK1 happens to already be an anticancer drug target with two drugs under development. Any biologist in the field could have told them about this target, but they went ahead with proof-of-principle experiments to show that yes, indeed, treatment of their RAF-inhibitor drug-resistant melanoma cells, which were the subject of modeling, with an experimental anti-PLK1 drug results in dramatic cell killing at quite low concentrations. They had used other drugs as perturbation agents in the physical experiments to develop this model, but not this one, so at least in their terms, this is a novel finding, arrived at out of their modeling work.

Given that these authors were working from scratch, not starting with manually curated pathway models that incorporate a lot of known individual interactions, this is impressive work (and they note parenthetically that using such curated data would be a big help to their modeling). Having computationally tractable ways to generate and refine large molecular networks based on typical experimentation is a recipe for advancement in the field- I hope these tools become more popular.



  • Ruminations on PG&E. Minimally, the PUC needs to be publicly elected. Maximally, the state needs to take over PG&E entirely and take responsibility.
  • Study on the effect of automation on labor power ... which is minor.
  • What has happened to the Supreme Court?
  • Will bribery help?

Saturday, October 26, 2019

Meritocracy

Is meritocracy intrinsically bad, or good for some things, not so good for others?

A recent book review in the New Yorker ruminated on the progress and defects of the meritocracy, a word born in sarcasm, now become an ideology and platitude. I am not sure that the review really touched on the deeper issues involved, so am motivated to offer a followup. The term was coined by a British sociologist, which is significant, as it describes a fundamental shift from the preceding system, the class system, as a way of allocating educational opportunity, professional work, military grades, and social status in general. It would be natural for someone of the British upper class to decry such a change, though the coiner, Michael Young, was generally a socialist and egalitarian- one eventually made a Baron for his services ... ironically.

The book review focused mostly on the educational establishment, where the greatest sea change has occurred. Where elite schools used to lazily accept their students from elite prep academies, from certain rich families and class backgrounds, now they make a science of student selection, searching far and wide, high and low, for the most meritorious candidates. Are SAT scores useful? Not very, the new consensus has it, especially as such tests unconsciously reproduce various cultural biases, instead of rendering the true grail- a score of merit, whatever that really might be. But however the slicing is done, higher education is now an intense, mostly meritocratic sorting process, granting opportunities and education on the basis of qualifications, intent on funneling the most capable people into the higher rungs of the ladder of professional activities and status.

One question is whether all this laborious sorting of students has been a good thing, overall. Do we get better staffed hospitals, better filled jobs throughout the economic system by virtue of this exquisitely and remorselessly selective weeding system? Yes we do, perhaps at the cost of some social serendipity, of finding CEO material in the mailroom, and the like.

But the deeper question is whether all this selection has been good for our society at large. There the answer has to be more guarded. If economic efficiency is the only goal, then sure. But it isn't, and some of our social atomization, and creeping class-ism and despair in the lower rungs of society comes from the intensification of meritocratic selection, which spills over to many other areas of society, directly through income and wealth, and indirectly through many other mechanisms of status, particularly politics. Much of Trump's support comes from people sick of the "elites"- those selected by SAT scores, course grades, and the like to rule over the working class. It is not clear that grubbing for grades and mastering standardized exams have done such a good job at selecting a ruling political class. That class has not done a very good job, and that poor performance has sapped our social solidarity. The crisis is most glaring in the stark cost of losing out- homelessness and destitution- the appalling conditions that are the mirror of billionaires also produced by this Darwinian system.

The problem is that we need areas of our lives that are not plugged into the rat race, for both psychological and sociological reasons. Such areas are increasingly scarce as this new gilded age gobbles up all our social relations under the rubric of the market, particularly with its newly internet-extended capabilities. Religion has traditionally been a social locus where everyone is worth the same- many classes come together to share some profound feelings, and occasionally explicit anti-establishment messages, (though also often a message of exalted status vs some other sect, faith, or unbelievers). But religion is dying, for good reason.

A town meeting

Civic associations and volunteer life have in the US been a frequent antidote to class-ism, with people of all classes coming together to make each others' lives better. But modern transportation has enabled the definitive sorting of classes by socioeconomic level, rendering civic activity, even when it occurs, poor at social mixing. No longer does a geographic community have to include those of all professions and walks of life to be viable. We can have lily-white suburbs and gated communities, and have the tradespeople and retail employees commute in from far away. That is a problem, one caused ultimately by fossil fuels and the freedom that they bring. The civic sector has also been invaded by an army of vanity foundations sponsored by the rich- a patronizing and typically futile approach to social betterment. Volunteerism has also been sapped by lack of time and money, as employees throughout the economic system are lashed ever more tightly to their jobs, stores kept open at all hours, and wages for most stagnate. Unions are another form of civic association that have withered.

All this has frayed the local civic and social connections, which are the ultimate safety net and source of civic solidarity. While Republicans bray about how terrible government is at replacing these services with top-down programs, (with some justification), they have at the same time carried out a decades-long battle to weaken both government and civic life, leaving a smoldering ruin in the name of a new feudal overlordship of the "job-creators"- the business class. That is the ultimate problem with meritocracy, and while appreciating its role in spreading social justice in the distribution of educational and professional opportunity, (a promise that is far from fully realized), we need to realize its cost in other areas of our national culture, and work to restore community diversity, community institutions, and community solidarity.

Where love rules, there is no will to power; where power predominates, there love is lacking. The one is the shadow of the other. – Carl Jung

Saturday, October 19, 2019

The Participation Mystique

How we relate to others, things, environments.

We are all wrapped up in the impeachment drama now, wondering what could be going on with a White House full of people who have lost their moral compasses, their minds. Such drama is an exquisite example of participation mystique, on our part as we look on in horror as the not very bright officials change their stories by the day, rats start to leave the sinking ship, and the president twists in the wind. We might not sympathize, but we recognize, and voyeuristically participate in, the emotions running and the gears turning.

Carl Jung took the term, participation mystique, from the anthropologist Lucien Lévy-Bruhl. The original conception was rather derogatory, describing the animism common among primitive people, who project anthropomorphic and social characters onto objects in the landscape, thus setting up mystical connections with rocks, mountains, streams, etc. Are such involvements characteristic of children and primitive people, but not of us moderns? Hardly. Modern people have distancing and deadening mechanisms to manage our mental involvement with projected symbologies, foremost among which is the scientific mindset. But our most important and moving experiences partake of identification with another- thing or person- joining our mental projection with their charisma, whatever that might be.

Participation mystique remains difficult to define and use as a concept, despite books being written about it. But I would take it as any empathetic or identification feelings we have toward things and people, by which the boundaries in between become blurred. We have a tremendous mental power to enter into others' feelings, and we naturally extend such participation (or anthropomorphism) far beyond its proper remit, to clouds, weather events, ritual objects, etc. This is as true today with new age religions and the branding relationships that every company seeks to benefit from, as it is in the more natural setting of imputing healing powers to special pools of water, or standing in awe of a magnificent tree. Such feelings in relation to animals have had an interesting history, swinging from intense identification on the part of typical hunters and cave painters, to an absurd dismissal of any soul or feeling by scientistic philosophers like Descartes, and back to a rather enthusiastic nature worship, nature film-making, and a growing scientific and philosophical appreciation of the feelings and moral status of animals in the present day.




Participation mystique is most directly manipulated and experienced in the theater, where a drama is specifically constructed to draw our sympathetic feelings into its world, which may have nothing to do with our reality, or with any reality, but is drenched in the elements of social drama- tension, conflict, heroic motivations, obstacles. If you don't feel for and with Jane Eyre as she grows from abused child, to struggling adult, to lover, to lost soul, and finally to triumphant partner, your heart is made of stone. We lend our ears, but putting it psychologically, we lend a great deal more, with mirror neurons hard at work.

All this is involuntary and unconscious. Not that it does not affect our conscious experience, but the participation mystique arises as an automatic response from brain levels that we doubtless share with many other animals. Seeing squirrels chase each other around a tree gives an impression of mutual involvement and drama that is inescapable. Being a social animal requires this kind of participation in each other's feelings. So what of the psychopath? He seems to get these participatory insights, indeed quite sensitively, but seems unaffected- his own feelings don't mirror, but rather remain self-centered. He uses his capabilities not to sympathize with, but to manipulate, those around him. His version of participation mystique is a truncated half-experience, ultimately lonely and alienating.

And what of science, philosophy and other ways we systematically try to escape the psychology of subjective identification and participation? As mentioned above in the case of animal studies, a rigid attitude in this regard has significantly retarded scientific progress. Trying to re-establish objectively what is so obvious subjectively is terribly slow, painstaking work. Jane Goodall's work with chimpanzees stands as a landmark here, showing the productive balance of using both approaches at once. But then when it comes to physics and the wide variety of other exotic phenomena that cannot be plausibly anthropomorphized or participated in via our subjective identification, the policy of rigorously discarding all projections and identifications pays off handsomely, and it is logic alone that can tell us what reality is.

  • The Democratic candidates on worker rights.
  • Was it trade or automation? Now that everything is made in China, the answer should be pretty clear.
  • On science.
  • Turns out that Google is evil, after all.
  • Back when some Republicans had some principles.
  • If all else fails, how about some nice culture war?
  • What is the IMF for?
  • #DeleteFacebook
  • Graphic: who is going to tax the rich? Who is pushing a fairer tax system overall? Compare Biden with Warren carefully.

Saturday, October 12, 2019

Thinking Ahead in Waves

A computational model of brain activity following simple and realistic Bayesian methods of internal model development yields alpha waves.

Figuring out how the brain works remains an enormous and complicated project. It does not seem susceptible to grand simplifying theories, but has instead been a mountain climbed by thousands, in millions of steps. A great deal of interest revolves around brain waves, which are so tantalizingly accessible and reflective of brain activity, yet still not well understood. They are definitely not carrying information the way radio stations do, whether AM or FM. But they do seem to represent synchronization between brain regions that are exchanging detailed information through their anatomical, i.e. neuronal, connections. A recent paper and review discuss a theory-based approach to modeling brain computation, one that has the pleasant side effect of generating alpha waves- one of the strongest and most common of the brain wave types, around 10 Hz, or 10 cycles per second- automatically, and in ways that explain some heretofore curious features.

The model follows the modern view of sensory computation, as a Bayesian modeling system. Higher levels make models of what reality is expected to look/hear/feel like, and the lower level sensory systems, after processing their inputs in massively parallel fashion, send only error signals about what differs from the model (or expectation). This is highly efficient, such that boring scenery is hardly processed or noticed at all, while surprises form the grist of higher level attention. The model is then updated, and rebroadcast back down to the lower level, in a constant looping flow of data. This expectation/error cycle can happen at many levels, creating a cascade or network of recurrent data flows. So when such a scheme is modeled with realistic neuronal communication speeds, what happens?
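
To make this expectation/error loop concrete, here is a minimal sketch in Python- not the authors' code, with an update gain and variable names of my own choosing- showing a higher level that keeps a running prediction while the lower level passes up only the mismatch:

```python
import numpy as np

# Minimal sketch of the expectation/error loop described above (not the
# paper's model; the update gain and variable names are illustrative).
# The higher level holds a prediction; the lower level sends up only the
# error between what it senses and that prediction; the higher level then
# nudges its prediction toward the input and rebroadcasts it downward.
gain = 0.2                     # hypothetical update rate
prediction = 0.0               # higher level's current model of the input

boring = np.full(50, 1.0)                      # unchanging scenery
surprising = np.full(50, 3.0)                  # sudden change at t = 50
scene = np.concatenate([boring, surprising])

for t, sensed in enumerate(scene):
    error = sensed - prediction        # the only signal sent upward
    prediction += gain * error         # updated model sent back downward
    if t in (0, 49, 50, 99):
        print(f"t={t:3d}  input={sensed:.1f}  error={error:+.3f}  prediction={prediction:.3f}")
```

Run as written, the error shrinks toward zero while the scenery stays unchanged, and jumps again only when the input changes- the surprise is what gets passed up the hierarchy.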

A super-simple model of what this paper implements. The input node sends data, in the form of error reports, (x(t)), to the next, higher level node. In return, it gets some kind of data (y(t)), indicating what that next level is expecting, as its model of how things are at time t.

The key parameters are the communication delays in both directions, (set at 12 milliseconds), the processing time at each level, (set at 17 milliseconds), and a decay or damping factor accounting for how long neurons would take to return to their resting level in the absence of input, (set at 200 milliseconds). This last parameter seems most suspect, since the other parameters assume instantaneous activation and de-activation of the respective neurons involved. A damping behavior is surely realistic on general principles, but one wonders why a special term would be needed if one models the neurons in a realistic way, which is to say, mostly excitatory and responsive to inputs. Such a system would naturally fall to inactivity in the absence of input. On the other hand, a recurrent network is at risk of feedback amplification, which may necessitate a slight dampening bias.
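
To see how the delays enter, here is a toy discrete-time version of the two-node loop- again my own sketch rather than the published equations, with an arbitrary coupling strength, the 17 millisecond processing step omitted for brevity, and the quoted 12 millisecond conduction delay and 200 millisecond decay plugged in:

```python
import numpy as np

# Toy sketch of two nodes exchanging delayed signals (illustrative only,
# not the paper's equations). The lower node sends an error signal up,
# the higher node sends its prediction back down, each message arriving
# 12 ms later; both nodes also decay toward rest with a 200 ms constant.
dt = 1.0                        # time step, ms
delay = int(12 / dt)            # one-way conduction delay, in steps
tau = 200.0                     # decay / damping constant, ms
gain = 1.0                      # coupling strength (arbitrary assumption)
steps = 2000                    # simulate two seconds

rng = np.random.default_rng(0)
drive = rng.normal(size=steps)  # white-noise sensory drive to the lower node

err = np.zeros(steps)           # x(t): error sent upward
pred = np.zeros(steps)          # y(t): prediction sent downward

for t in range(1, steps):
    pred_in = pred[t - delay] if t >= delay else 0.0   # delayed prediction arriving below
    err_in = err[t - delay] if t >= delay else 0.0     # delayed error arriving above
    err[t] = err[t-1] + dt * (-err[t-1] / tau + gain * (drive[t] - pred_in))
    pred[t] = pred[t-1] + dt * (-pred[t-1] / tau + gain * err_in)

# The text's rule of thumb: a reverberation period of roughly 8x the one-way delay.
period = 8 * 12
print(f"expected reverberation period ~{period} ms, i.e. ~{1000 / period:.1f} Hz (alpha band)")
```

Plotting err and pred over time shows the delayed back-and-forth between the two nodes; in the full model, with the processing time and multiple levels included, that reverberation reportedly lands at roughly 100 milliseconds per cycle.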

The authors run numerical models of a white noise visual field being processed by a network with these parameters, and examine the resulting waves under two conditions. First is the normal case of change in the visual field, generating forward inputs, from lower to higher levels. Second is a lack of new visual input, generating stronger backward waves of model information. Both waves have a period of about 8 times the communication delay, or about 100 milliseconds, right in the alpha range. Not only did such waves appear in 2-layer models with just one pair of interacting units, but when multiple modules were modeled in a 2-dimensional field, traveling alpha waves appeared.

When modeled with multiple levels and two dimensions, the outcome, aside from data processing, is an alpha wave that travels in one direction when inputs predominate (forward), and in the opposite direction when inputs are low and backward signals predominate.

Turning to humans, the researchers looked more closely at actual alpha waves, and found the same thing happening. Alpha waves have been considered odd in that, while generally the strongest of all brain waves, and the first to be characterized, they are strongest while resting, (awake, not sleeping), and become less powerful when the brain is actively attending, watching, etc. Now it turns out that what had been considered the "idle" brain state is not idle at all, but a sign of existing models / expectations being propagated in the absence of input- the so-called default mode network. Tracking the direction of alpha waves, the researchers found them traveling up the visual hierarchy when subjects viewed screens of white noise. But when the subjects' eyes were closed, the alpha waves traveled in the opposite direction. The authors argue that standard measurements of alpha waves fail to properly account for the power of forward waves, which may be hidden in the more general abundance of backward, high-to-low level alpha waves.

Example of EEG from human subjects, showing the directionality of alpha wave travel, in this case forward from input (visual cortex in the back of the brain) to higher brain levels.

"Therefore, we conjecture that it may simply not be possible for a biological brain, in which communication delays are non-negligible, to implement predictive coding without also producing alpha-band reverberations. Moreover, a major characteristic of alpha-band oscillations—i.e., their propagation through cortex as a traveling wave—could also be explained by a hierarchical multilevel version of our predictive coding model. The waves predominantly traveled forward during stimulus processing and backward in the absence of inputs. ...  Importantly though, in our model none of the units is an oscillator or pacemaker per se, but oscillations arise from bidirectional communication with temporal delays."

Thus brain waves are increasingly understood as a side effect of functional synchronization and, in this case, as intrinsically associated with the normal back-and-forth of data processing, which looks nothing like the stream of data from a video camera, but is something far more efficient, using a painstakingly-built internal database of reality to keep tabs on, and detect deviations from, new sensations arriving. It remains to ask what exactly this model function is, which the authors term y(t)- the data sent from higher levels to lower levels. Are higher levels of the brain modeling their world in some explicit way and sending back a stream of imagery which the lower levels compare with their new data? No- the language must be far more localized. The rendition of full scenery would be reserved for conscious consideration. The interactions considered in this paper are small-scale and progressive, as data is built up over many small steps. Thus each step would contain a pattern of what is assumed to be the result of the prior step. Yet what exactly gets sent back, and what gets sent onwards, is not intuitively clear.