Saturday, July 9, 2016

Religion as Science Fiction

What if theology were regarded as SciFi?

How seriously do you take science fiction? Obviously, being called "fiction", it is neither science nor any other kind of truth. Yet it is full of "truthiness"- plausible-ish technologies, settings related to our own, typically in the future, and human dramas of varying richness. It also often offers sweeping, even eschatological-scale, plots. But how can science fiction deal in human meaning if it does not deal in theology, given that the religiously inclined naturally think that meaning is given to us, not by our own ideas or efforts, but by a theos?

Obviously, one can turn that around and claim that theologies are themselves made up, and, far from scientifically observing a meaning given from on high, are exercises in making meaning, all the more effective for denying their underlying fictionality. In any case, I think science fiction is clearly the closest genre to religion, and caters to readers/viewers who have basically religious needs and temperaments.
From Jesus and Mo.

It is the science fiction fans who expect philosophical ruminations on what it means to be human, tales of a far future when humanity will have escaped the bonds of earth, often magical events and capabilities, and unimaginably powerful alien beings. Subspace, mind-melds, apocalyptic wars ... it is just another word for the supernatural.

Likewise, our ancestors clearly had the same idea(s). How better to illustrate their dreams, both bad and good, than with inflated archetypal beings and conflicts? The Ramayana reads like a Hollywood SciFi blockbuster. Why are there two versions of the Garden of Eden? It isn't because each is scientifically accurate. It is a clear statement that both are science fictions- tales of an idyll, and of an archetype.

Rama, flying in his vimana.

Why our cultures should have harbored such humorless, spiritually dead people as to take these tales seriously is beyond me. It is probably a testament to the bureaucratic mindset- the organization-alized person who clutches at tradition and order, (and certainty/explanation), over imagination and play. And over time, the original imaginative, introspective impulse is so crusted over that even the most sensitive and insightful people have no choice but to take the truth-dogma seriously as an external or historical reality, and proceed to make nonsense of what began as a wonderful work of art.

  • Religion and big data.
  • Groups needing to own it...
  • Masons, and the convention of conventions.
  • Bill Mitchell on Brexit. "Labour was advocating continued membership of an arrangement that is now broadly seen as a vehicle of the elites to suppress wages, employment and push more people into compliant poverty."
  • More thoughts on Brexit.
  • The financial elites are not making good policy. And not providing economic growth.
  • South Korea, heading authoritarian.
  • Can atheists and chaplains interact usefully?
  • German economics: Schacht v Euken.
  • Our friends the Saudis.
  • Another theologian employed at a public university- heaven knows why.
  • Fox and Friends. Or frenemies.

Sunday, July 3, 2016

SPOC vs MEN: Mechanics of Cell Division

Some of the self-regulating mechanisms underlying the cell division process, including the spindle position checkpoint.

Cell division looks elegantly choreographed, and indeed, "alive". Yet we know in principle that it is also a mechanical contrivance, composed entirely of chemical reactions that through their slowly evolved complexity have achieved a highly reliable, self-checking mechanism of DNA and cytoplasmic segregation. Figuring out just what that mechanism is continues to fascinate many researchers.

Yeast cell spindle, combined fluorescence and DIC image. Microtubules (alpha tubulin) are green, pushing the respective DNA/nuclei to opposite ends of the incipient mother (large) and daughter (small) cells. Gamma tubulin, which is a special component of the core of the spindle pole body, is red, and the DNA is blue.
A couple of recent papers studied one of these homeostatic mechanisms- also called checkpoints- by which each side of the mitotic spindle knows that it has gotten to the right place in the cell, and can initiate disassembly. The spindle is the complex of microtubules in all eukaryotic cells that is nucleated from the centrioles/basal bodies/astral center/MTOC/spindle pole body and extends to the individual chromosomes, holding them in an organized array (the metaphase plate) before pulling each divided half apart into the two nascent cells, at which point it disassembles again, allowing nuclei to re-form around the DNA of each new cell. (Unlike in most other eukaryotes, yeast nuclei never break down, but are divided and dragged to their destinations intact.) Several points of this process have checkpoints to prevent further steps from taking place before the current one is complete. No mitosis can be going on, and the cell must have reached some new threshold of size, before a new round of DNA synthesis can be kicked off for a new cell division. All the DNA has to be replicated before microtubules can engage at the centromeres. All the chromosomes have to be captured before separation of the sister chromatids can be initiated. And so forth.
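
The logic of these checkpoints amounts to a chain of guarded transitions, which can be sketched in a few lines of code. The following toy Python (states and conditions are simplified paraphrases of the text, not any paper's model) shows the idea:

# Toy cell-cycle gatekeeper: each step is permitted only when its
# checkpoint condition (simplified from the text above) is satisfied.
CHECKPOINTS = {
    "start_dna_synthesis": lambda s: not s["mitosis_underway"] and s["cell_size"] >= 1.0,
    "attach_microtubules": lambda s: s["dna_fully_replicated"],
    "separate_chromatids": lambda s: s["all_chromosomes_captured"],
}

def allowed(step, state):
    return CHECKPOINTS[step](state)

state = {"mitosis_underway": False, "cell_size": 1.2,
         "dna_fully_replicated": False, "all_chromosomes_captured": False}
print(allowed("start_dna_synthesis", state))  # True: no mitosis, size threshold met
print(allowed("separate_chromatids", state))  # False: chromosome capture incomplete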

The particular checkpoint dealt with here is called the SPOC, or spindle position checkpoint. Each side of the spindle, centered at its respective spindle pole body, needs to know somehow that it is at a site within the new cell, rather than just floating around in the old cell. In yeast, where this work was done, the new cell starts off as a little bud that fills with cytoplasm for a while before the DNA replication and segregation process happens. So there is a pre-prepared site for the new cell's spindle pole body, and that site is marked by a special molecule, called Lte1.

How one end of the spindle knows to get into the bud, and the other end to remain in the mother, is a different story we won't get into here. At any rate, as long as neither end has made it into the daughter bud, a complex of molecules enforces the SPOC. How this works is that the protein kinase Kin4 is also asymmetrically localized, in the mother cell, where it inhibits a key function at the spindle pole body. The protein Spc72 is a dock for the core tubulin (gamma) at the spindle pole body, which in turn attracts the major alpha tubulin. Spc72 is also the docking point for Kin4, allowing Kin4 to sustain the activity of Bfa1/Bub2 (by preventing their inactivation by Cdc5, one of the classic cell division cycle kinases). Bfa1/Bub2 are two proteins that in combination are the key inhibitor of Tem1, a GTPase that begins the molecular cascade of the mitotic exit network, or MEN.

Model of SPOC to MEN transition, where the spindle pole (gold) that gets into the daughter bud (green) triggers / undergoes the molecular steps that license entry into anaphase, or exit from mitosis. 

But as soon as one of the spindle ends has made it into the daughter bud, it escapes the influence of Kin4, and enters the zone of Lte1 activity. Lte1 inhibits the kinase activity of Kin4 directly, and also apparently activates Tem1, since it has the exact opposite activity (guanine nucleotide exchange factor, or GEF) from the Bfa1/Bub2 pair, which constitute a GTPase-activating complex (GAP). Tem1 then activates the exit from mitosis (MEN), which includes disassembly of the spindle and decondensation of the DNA, as well as the closing and abscission of the bud neck. Thus yeast cells have taken advantage of their unusual shape characteristics to create a clean, if in our terms still complicated, system to enforce the correct placement of the daughter's genetic material.
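
For those who like their pathways explicit, the SPOC-to-MEN logic described above reduces to a small decision circuit. Here is a minimal Python sketch (the protein names follow the text; collapsing graded kinase activities into True/False is my simplification, not the papers' quantitative model):

# Boolean sketch of the SPOC -> MEN logic described above.
def men_active(spindle_pole_in_bud):
    kin4_active = not spindle_pole_in_bud   # Kin4 acts only in the mother
    lte1_active = spindle_pole_in_bud       # Lte1 marks the bud cortex
    # Kin4 keeps the Bfa1/Bub2 GAP active by shielding it from Cdc5;
    # Lte1 inhibits Kin4 and promotes GTP loading on Tem1.
    bfa1_bub2_gap_active = kin4_active and not lte1_active
    # Tem1 fires when its GAP is off, launching the mitotic exit network.
    return not bfa1_bub2_gap_active

assert men_active(spindle_pole_in_bud=False) is False  # SPOC holds
assert men_active(spindle_pole_in_bud=True) is True    # mitotic exit licensed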

One of the recent papers offered the better functional analysis, performing some very intricate ablations of select microtubules in tiny dividing yeast cells to conclude that the SPOC is not so much a measure of spindle mis-alignment as a brake that holds while both spindle ends are still in the mother. The other paper looked at the molecular structure at the spindle pole body with a particularly interesting method.

It is difficult to get structural details about systems like this, unless one is willing to do a great deal of protein crystallization. But a few workarounds have been developed, one of which is fluorescence resonance energy transfer, or FRET. If you engineer a pair of molecular sites, such as two proteins, with emitting and absorbing fluorophores, such that one is close to the other (in the 10 to 100 ångstrom range), and such that the one emits at wavelengths that the other absorbs, then you can roughly measure the distance between them at angstrom scales simply by exciting the emitter and measuring the degree of local quenching by the absorber. And this can be done in live cells and in real time.
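
The distance readout rests on the steep dependence of transfer efficiency on separation. A back-of-the-envelope calculation using the standard Förster relation (the R0 of 50 ångstroms below is a typical dye-pair value, not one taken from these papers):

# Förster relation: E = 1 / (1 + (r/R0)**6), where R0 is the distance at
# which half the donor's energy transfers to the acceptor (~20-60 Å for
# common fluorophore pairs).
def fret_efficiency(r_angstrom, r0_angstrom=50.0):
    return 1.0 / (1.0 + (r_angstrom / r0_angstrom) ** 6)

for r in (30, 50, 70, 100):
    print(f"r = {r:>3} Å  ->  E = {fret_efficiency(r):.3f}")
# The sixth-power falloff is what makes FRET a molecular ruler:
# E goes from ~0.96 at 30 Å to ~0.015 at 100 Å with R0 = 50 Å.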

Model derived from FRET and other data, suggesting that Bfa1 under SPOC conditions is prevented from interacting with Spc72 by a Kin4 phosphorylation that permits interaction with Bmh1 and preserves the inhibitory activity of Bfa1/Bub2 over Tem1 and the MEN.

Using fluorophores positioned on Bfa1 and either Spc72, Cnm67, or Nud1 (all stably associated spindle pole body proteins), these authors find that Bfa1 is clearly closest to the Spc72 protein, and also near the Nud1 protein, but not detectably close to Cnm67, which serves here as a control. In addition, they found that while the association of Bfa1 with Nud1 is more or less stable through the SPOC and MEN process, its proximity to Spc72, which as noted above is the docking site for the SPOC-activating Kin4, is reduced in the active presence of Kin4, presumably due to the arrival of yet another protein, Bmh1, which can bind to Bfa1 once Bfa1 has been phosphorylated by Kin4.

In addition, they deploy yet another fluorescence technique, photobleaching (fluorescence recovery after photobleaching, or FRAP), to show that in the absence of Kin4, the association of Bfa1 with the whole spindle pole body, including Spc72 and Nud1, is loosened substantially. They bleached the region close to one spindle pole, and waited for natural exchange and diffusion to restore fluorescence, and especially FRET, at the spindle pole site. In settings where Kin4 is not active, they see six times the speed of exchange, indicating that phosphorylation of Bfa1 by Cdc5, even though it correlates with closer proximity to Spc72, also correlates with an overall loosening of Bfa1 attachment. This makes sense given that Bfa1's stable presence is key to promoting the SPOC and inhibiting the MEN via its inhibition of Tem1.
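
Recovery curves of this kind are conventionally fit to a single exponential, from which the exchange rate comes. A hedged sketch (the functional form is the standard FRAP fit; the absolute rates are invented, with only the six-fold ratio taken from the result above):

import math

# Standard single-exponential FRAP recovery: I(t) = I_max * (1 - exp(-k*t)).
# The recovery half-time is t_half = ln(2)/k, so exchange that is 6x
# faster shows a 6x shorter half-time.
def frap_recovery(t, k, i_max=1.0):
    return i_max * (1.0 - math.exp(-k * t))

k_kin4_active = 0.05                 # hypothetical rate (per second)
k_kin4_absent = 6 * k_kin4_active    # the ~6x faster exchange reported

for label, k in (("Kin4 active", k_kin4_active), ("Kin4 absent", k_kin4_absent)):
    print(f"{label}: t_half = {math.log(2) / k:.1f} s")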

The molecular system of the cell cycle was first worked out in the yeast model system, and it is gratifying to see continued, if slow, progress in this system on a variety of fronts to work out its details.

  • Migration and job loss is the key to Brexit.
  • Reflections on Brexit.
  • EU gone to pot. Will the US go there as well?
  • Meetings by officials are not official acts.
  • Rent is the enemy. But reducing real estate rents will be very difficult.
  • What happens when you have an authoritarian temperament.
  • Infrastructure builds future productivity.
  • The Taliban is gaining ground.
  • Has Turkey been supporting ISIS?
  • The pain in Puerto Rico. Either we bail them out, or they get put through the wringer.
  • Economic graph of the week. Income trends for various percentiles of the global population, over the last 20 years.

Saturday, June 25, 2016

Extremism is no Accident

Gun owners and Muslims have something in common.

One of the watershed events leading up to the American Civil War was the caning of Senator Charles Sumner by Representative Preston Brooks of South Carolina, in 1856. Sumner was an outright abolitionist from Massachusetts, and had just given a landmark speech on the fate of Kansas, upon which the future direction of the nation hinged. Sumner let it all hang out, portraying Kansas as a virgin raped by the slave powers which were working to corrupt its institutions and force it to come into the union as a slave state. Hours long and shooting insults in all directions, it was quite extreme for its time. Two days later Brooks attacked Sumner in the Senate chamber with a metal-topped cane and beat Sumner, who was trapped at his desk, about the head to within an inch of his life. Brooks was assisted by a colleague, congressman Henry Edmundson of Virginia, who held off Sumner's supporters with a pistol.

No event illustrated quite so markedly the depths to which the sectional divide had sunk, or the clash of cultures between the industrializing, striving North and the feudal South. The action was extremist, but clearly arose from the respective tribal affiliations and ideologies at work. Southerners apparently welcomed it as honor vindicated, while to the North, it exemplified the barbarity inherent in the slave system, embodied not only in the abject subjugation of slaves, but in the moral coarsening of their masters.

Turning to today's ideological battles and extremist events, the rote responses by the various communities implicated in the Orlando shootings bring up the issue of collective responsibility. Does being Muslim implicate you and your beliefs in such an event? Does being a member of the NRA with its opposition to controlling military-grade weapons? If a group's ideology is exemplified by such an event, what is the degree of individual responsibility and culpability?

One can turn to the German question as another, closer, historical example. Germans today still bear a stigma and responsibility deriving from the disastrous world wars of the last century. That is as it should be. Germany and Germans are not pacifist, but they have taken on an extra measure of responsibility for restitution vis-à-vis Jews and Israel, and for peace in Europe. Their economic policies may be tending in the opposite direction, but on the whole, they have meant well over the post-war decades, forging a close relationship with France and various structures of European cooperation.

People are part of groups for a reason, and bear both the costs and benefits of their groups. Homosexuals may be so stigmatized by their membership (which is, unlike some other groups under discussion, not a voluntary membership) that they deny that membership and stay in the closet. Southerners dedicated to the perpetuation of slavery were rarely extremist themselves, but derived benefits from extremist acts. The brutality at the margins- against escaped slaves, exhibitions of black pride, and unsympathetic whites- kept the system intact, for the benefit of all those who didn't want to get their hands dirty.

Membership in Islam is less directly culpable, given the large size of the group compared to the proportionately small numbers of extremists. For example, polls show that a majority of Muslims do not support ISIS, yet the remainder amounts to “roughly 50 million people [who] express sympathy”. That is quite a large pool of people to draw on, and more importantly a significant problem for the community of Islam as a whole, not to mention the rest of us. The scriptures of Islam are a well from which countless fundamentalists have learned to hate, and to see violence as both beneficial and sacred. While some forms of Islam, notably Sufism, have renounced these aspects of their tradition in wholesale fashion, the dominant forms of Islam today have not. The strain coming from Saudi Arabia is particularly filled with negativity- towards women, towards apostates, towards Shia, towards infidels.

Extremists are not opposed to their group. As a rule, they believe that most members are insufficiently dedicated to the group’s overall goals and ideology. Fundamentalism, be it gun nuttery or Islamism, is usually couched in original scriptures, and in a dismissal of the wishy-washy-ness of the mass of members who value civility and moderation over principle. The ideology of the group leads directly to the occurrence of extremists, if only in a minority of cases. It is the ideology that is at fault if, taken seriously, it leads to what the rest of us characterize as extremism. And historically, extremism in Islam has been highly successful, providing the military motivation and success that makes so many people Muslim today. Moderates benefit from extremism.

Ideologies such as religions tout themselves as truth in both moral and scientific senses. The latter is naturally absurd, while the former is just as wrong, since morality is never a closed book. However, given the truths that the Koran provides, it is only natural to hate infidels and apostates, and to love God, his messenger, and the jihad that will convert the world. Only an attitude of skepticism about these truths would prompt someone to take a more moderate stance, such as accepting the legitimacy of non-Muslim narratives and approaches to life. Why would a community want to foster such skepticism? That would be counter-productive. Thus extremists are countenanced as over-enthusiastic brethren- perhaps a political problem with regard to the outside world, but far from an ideological or theoretical one. This makes communities unable to successfully reprove their extremists, be they anti-abortion zealots, Islamic terrorists, or gun-rightists.

This is why we need to pay attention to the tenor of the entire community from which extremists spring. It is not Islamophobia to see a fundamental ideological problem within Islam which fosters bigotry, regressive institutions, and violence so pervasively in the Middle East. Turning back to our Civil War, the extremist violence that occurred in the Senate in 1856 was not an isolated incident, but part of a pattern of discontent and insurrection that had been building in the South, especially South Carolina, for decades. Slavery was at the core of a culture that had diverged to a startling degree from that in the North, and elsewhere in the Western world- a relic of feudalism or worse. Its desperation to persist in a fundamentally immoral institution, and spread it to other sections of the frontier, was inimical to modernity and to decency. The ideologies of the North and the South were incompatible, and the sparks of extremist acts such as the caning of Sumner, and the raid on Harper’s Ferry, were harbingers of the cataclysm to come, as well as cultural and ideological divides we still struggle with today.

Postscript: As for gun ideology, it is not as directly militant as fundamentalist Islam. In fact, it entertains a sort of apple pie fantasy, where all gun owners are rock-ribbed patriots standing with their exquisite training and dedication to the CONSTITUTION athwart the forces of chaos and tyranny. The problem here is not (mostly) what the ideology includes, but what it leaves out- which is reality. Every empirical study shows that the more guns there are, the more deaths, from a variety of causes like suicide, domestic violence, and accidents, not to mention mass murders, where in every recent case the guns were recently purchased for the purpose. Guns are extremely dangerous, and not everyone who has them makes a fetish of personal firearms training and lock-down security. Clutching their pearls when some extremist uses a legally purchased semi-automatic to mow down innocent people is a clearly insufficient response to the problem by the gun community. A fantasy of responsibility cannot hide a reality of frequent irresponsibility and utterly unnecessary danger.

Saturday, June 18, 2016

Perception is Not a One-Way Street

Perceptions happen in the brain as a reality-modeling process that uses input from external senses, but does so gradually in a looping (i.e. Bayesian) refinement process using motor activity to drive attention and sensory perturbation.

The fact that perceptions come via our sensory organs, and stop once those organs are impaired, strongly suggests a simple camera-type model of one-way perceptual flow. Yet recent research points in the other direction: perception is a more active process wherein the sense organs are central, but are also directed by attention and by pre-existing models of the available perceptual field, in a top-down way. Thus we end up with a cognitive loop where the mind holds models of reality which are incrementally updated by the senses, but not wiped and replaced as if they were simple video feeds. The model is the perception, and is more valuable than the input.

One small example of a top-down element in this cognitive loop is visual attention. Our eyes are little outposts of the brain, and are told where to point. Even if something surprising happens in the visual field, the brain has to do a little processing before shifting attention, and thus the eyeballs, to that event. Our eyes are shifting all the time, being pointed to areas of interest, following movie scenes, words on a page, etc. None of this is directed by the eyes themselves, (including the jittery saccade system), but by higher levels of cognition.

The paper for this week notes ironically that visual perception studies have worked very hard to eliminate eye and other motion from their studies, to provide a consistent mapping from what the experimenters present to where the perceptions show up in the visual fields of the brain. Yet motion and directed attention are fundamental to complex sensation.

Other sensory systems vary substantially in their dependence on motion. Hearing is perhaps least dependent, as one can analyze a scene from a stationary position, though movement of either the sound or the subject, in time and space, is extremely helpful in enriching perception. Touch, through our hairs and skin, is intrinsically dependent on movement and action. Taste and smell are also, though in a subtler way. Any constant smell will fade pretty rapidly, subjectively, as we get used to it. It is the bloom of fresh tastes with each mouthful, or of new aromas, that creates sensation, as implied by the expression "clearing the palate". Aside from the issues of the brain's top-down construction of these perceptions through its choices and modeling, there is also the direct input of motor components, and dynamic time elements, that enrich and enliven perception multi-modally, beyond a simple input-stream model.

The many loops from sensory (left) to motor (right) parts of the perceptual network. This figure is focused on whisker perception by mice.

The current paper discusses these issues and makes the point that since our senses have always been embodied and in motion, they are naturally optimized for dynamic learning. And the brain circuits mediating between sensation and action are pervasive and very difficult to separate in practice. The authors hypothesize very generally that perception consists of a cognitive quasi-steady state where motor cues are consistent with tactile and other sensory cues (assuming a cognitive model within which this consistency is defined), which is then perturbed by changes in any part of the system, especially sensory organ input, upon which the network seeks a new steady state. They term the core of the network the motor-sensory-motor (MSM) loop, thus emphasizing the motor aspects, and somewhat unfairly de-emphasizing the sensory aspects, which after all are specialized for a higher abundance and diversity of data than the motor system. But we can grant that they are an integrated system. They also add that much perception is not conscious, so the fixation of a great deal of research on conscious reports, while understandable, is limiting.

"A crucial aspect of such an attractor is that the dynamics leading to it encompass the entire relevant MSM-loop and thus depend on the function transferring sensor motion into receptors activation; this transfer function describes the perceived object or feature via its physical interactions with sensor motion. Thus, ‘memories’ stored in such perceptual attractors are stored in brain-world interactions, rather than in brain internal representations."

A simple experiment. A camera is set up to watch a video screen, which shows light and dark half-screens that can move side-to-side. The software creates a sensory-motor loop, driving pan motors on the camera to enable it to track the visual edge, as shown in E. It is evident that there is not much learning involved; this is simply a demonstration of an algorithm's effective integration of motor and sensory elements in pursuit of a simple feature.

Eventually, the researchers present some results, from a mathematical model and a robot that they have constructed. The robot has a camera and motors to move around with, plus a computer and algorithm. The camera only sends change data, as does the retina, not entire visual scenes, and the visual field is extremely simple- a screen with a dark and a light side, which can move right or left. The motorized camera system, using equations approximating an MSM loop, can relatively easily home in on and track the visual right/left divider, and thus demonstrate dynamic perception driven by both motor and sensory elements. The cognitive model was naturally implicit in the computer code that ran the system, which was expecting to track just such a light/dark line. One must say that this was not a particularly difficult or novel task, so the heart of the paper is its introductory material.
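
To make the MSM loop concrete, here is a toy re-creation of that tracking task (a minimal sketch under stated assumptions: a one-dimensional "screen", a change-only "retina", and a simple proportional motor rule; it is not the authors' code or equations):

import random

# Toy motor-sensory-motor (MSM) loop: a camera pans to center itself
# on a light/dark edge that drifts along a one-dimensional screen.
edge = 50.0     # true position of the light/dark boundary
camera = 10.0   # where the camera currently points
GAIN = 0.3      # proportional motor gain (assumed, not from the paper)

for step in range(40):
    edge += random.uniform(-1.0, 1.0)  # the edge drifts a little
    # Sensory half: only the signed offset (change) is reported, not the scene.
    offset = edge - camera
    # Motor half: move a fraction of the way toward the reported edge.
    camera += GAIN * offset
    if step % 10 == 0:
        print(f"step {step:2d}: edge={edge:6.1f}  camera={camera:6.1f}  error={offset:6.1f}")
# The loop settles where motor position and sensory input agree- the
# paper's 'perceptual attractor', in miniature.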


  • The US has long been a wealthy country.
  • If we want to control carbon emissions, we can't wait for carbon supplies to run out.
  • Market failure, marketing, and fraud, umpteenth edition. Trump wasn't the only one getting into the scam-school business.
  • Finance is eating away at your retirement.
  • Why is the House of Representatives in hostile hands?
  • UBI- utopian in a good way, or a bad way?
  • Trump transitions from stupid to deranged.
  • The fight against corruption and crony Keynesianism, in India.
  • Whom do policymakers talk to? Hint- not you.
  • Whom are you talking to at the call center? Someone playing hot potato.
  • Those nutty gun nuts.
  • Is Islam a special case, in how it interacts with political and psychological instability?
  • Graph of the week- a brief history of inequality from 1725.

Saturday, June 11, 2016

This is Progress?

We are eating ourselves out of house and home.

Werner Herzog made a documentary about Chauvet cave, the repository of spectacular cave art from circa 31,000 years ago. One striking aspect is that virtually all the animals pictured there, and whose remains are found there, are extinct. The aurochs, cave bears, steppe bison, northern rhinoceri, cave lions, cave hyenas- all gone. These are animals that had taken hundreds of thousands, if not millions, of years to evolve, yet a few tens of thousands of years later, they, along with the mammoths and other denizens of countless prior ice ages, are gone. What happened to them? We killed and ate them.

We then proceeded to raise human populations through agriculture, and carve up the Earth's surface for farming. We have been clearing competitors continuously, from wolves and lions, down to insects. After a false start with overly destructive DDT, agriculture has now settled on neonicotinoids, which, while less persistent in the food chain, have created a silent holocaust of insects, resulting in dead zones throughout agricultural areas, the not-so-mysterious collapse of bees, and declines in all kinds of once-common insects.

Similarly, the oceans have been vacuumed of fish, with numerous collapsed and collapsing populations. And topping it all off is climate change and ocean acidification, which is gradually finishing the job of killing off Australia's Great Barrier Reef, many other reefs around the world, as well as terrestrial species at high latitudes and altitudes.

Have humans made progress? We have, in technical, organizational, and even moral terms. But while we pat ourselves on the back for our space age, smart phones, and hyper-connected intelligence, we also live on an ever-more impoverished planet, due mostly to overpopulation plus the very same development we value so much. Institutions and ideologies like the Catholic church, which continue to see nothing wrong with infinite population increase in a competitive quest for domination by sheer, miserable numbers, are, in this limited and declining world, fundamentally immoral.

The US, after its destruction and displacement of Native Americans, has grown up on an ideology of open frontiers and endless space. But now the political and social ramifications of overpopulation and overdevelopment are beginning to be felt. Trumpism is one reaction- the visceral feeling that, given our unwillingness to develop the requisite infrastructure, and our evident environmental degradation even in a relatively sparsely populated country, we just do not have the room any more for millions of further immigrants.

Economic inequality is not directly associated with this deep underlying Malthusian trend, since humans can degrade their environment under any economic regime- socialist, capitalist, or Keynesian. But it does provide a metaphor, with us humans lording it over our fellow creatures on the planet. Creatures whom we frequently invoke in our art and spiritual rhetoric and claim to regard with caring stewardship, even humane-ness. But then we keep killing and mistreating them anyhow.

We need to take sustainability seriously, both in terms of human populations and stewardship of the planet generally. E. O. Wilson has advocated for returning half our land to the wild, for the creatures that need it so desperately. This would be a supreme act of generosity and abstention. Though not sufficient on its own in this age of global warming, it is part of the answer on the way to true sustainability.


Saturday, June 4, 2016

Modeling Gene Regulatory Circuitry

The difficult transition from cartoons to quantitative analysis of gene regulation

As noted a few weeks ago, gene regulation is a complicated field, typically with cartoonish views developed from small amounts of data. Mapping out the basic parameters is one thing, but creating quantitative models of how regulation happens in a dynamic environment is something quite different- something still extremely rare. A recent paper uses yeast genetics to develop a more thorough way to model gene regulation, and to decide among and refine such models.

A cartoon of glutamine (nitrogen) source regulation in yeast cells. Glutamine is a good food, and if available outside, turns off the various genes needed to synthesize it. Solid lines are known interactions, and dashed lines are marginal or hypothesized interactions. Dal80 and Gzf3 both form dimers, which act more strongly (as inhibitor and activator, respectively) than single proteins.
When times are good for yeast cells, in nitrogen terms, an upstream signaling system inhibits the gene activators Gat1 and Gln3, leaving the repressors Dal80 and Gzf3 present and active to repress the various target genes that contribute to the synthesis of the key nitrogen-containing molecule glutamine, since it is available as food. All these regulators bind similar sequences, the GATA motif, near their target genes, (which number about 90), so presence of the repressors can block the activity of the activators as well as shutting off gene expression directly. Conversely, when times are bad and no glutamine is coming in as food, then the suite of glutamine synthesis genes is turned on by Gat1 and Gln3.

Binding site preferences for each regulatory protein discussed. One can tell that they are not always very well-defined.
But things are not so simple, since, evolution being free to come up with any old system and always tinkering with its inheritance, there are feedback loops in several places, which exist, at least in part, to provide a robust on/off switch out of this analog logic. In fact, the GAT1, DAL80, and GZF3 genes each have the GATA motif in their own regulatory regions. Even with such a small system, arrows are going every which way, and soon it is very difficult to come up with a defensible, intuitive understanding of how the network behaves.

Edging towards a model. Individual aspects of the known or hypothesized interactions are encoded in computable form.
The data behind the work are a collection of mRNA abundance (i.e. gene expression) studies run under various conditions, especially in mutants of the various genes, and under nitrogen-rich or -poor conditions. Panels of the abundance of all mRNAs of interest can be easily run- the problem really is interpretation, and the generation or design of the various mutants and environmental conditions to be informative perturbations.

This is where modelling comes into play. The authors set up the known and hypothesized interactions, each into its own equation, whose parameters could vary. Though the number of elements is small, the large number of interactions / equations meant that the models (with 5 interactions, 13 states, and 41 parameters), given a partial set of data, could not be solved analytically, but were rather approximated by Monte Carlo methods, which is to say, by repeated sampling of guessed parameter values. Models with various hypothesized interactions were compared with each other in performance over perturbations, where the model is given a change in conditions, such as a switch to low-nitrogen medium, or an inactivating mutation in one component. The model comparison method was Bayesian, because it was iterative and took into account well-known data, such as the established interactions and their key parameter levels, wherever known.

Given a model, its ability to match the experimental data from the mRNA expression profiles under various conditions can be measured, adjusted, and re-iterated. Many models can be compared, and eventually a competitive process reveals which models work better. This is informative if the models are sufficiently detailed, and there is enough detailed data to measure them on, which is one of the strong points of this well-studied regulatory system. Whether this method can be extended to other systems with far less data is questionable.
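
In outline, the fitting loop looks something like the following sketch: a generic Monte Carlo parameter search scored against expression data. The two-parameter toy model and all numbers here are stand-ins for the paper's 41-parameter system, purely to show the shape of the procedure:

import random

observed = [0.1, 0.4, 0.9]   # made-up expression levels in three conditions

def simulate(basal, induction):
    # Stand-in model: expression = basal + induction * signal, capped at 1.
    return [min(1.0, basal + induction * s) for s in (0.0, 0.5, 1.0)]

def error(params):
    # Sum of squared differences between simulation and data (lower is better).
    return sum((a - b) ** 2 for a, b in zip(simulate(*params), observed))

best = (random.random(), random.random())
for _ in range(10000):
    trial = (random.random(), random.random())
    if error(trial) < error(best):
        best = trial

print(f"best basal={best[0]:.2f}, induction={best[1]:.2f}, error={error(best):.4f}")
# Competing network hypotheses are then ranked by how low an error each
# can reach- the spirit of the paper's model comparison.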

In this case, one hypothesized interaction stood out as always contributing to more successful models. That was the inhibition of Gzf3 by Dal80, its close relative. Also, in further selections, hypothesis 2 was strongly supported, which is the auto-activation of Gat1, probably by binding to its own promoter. On the other hand, models missing the hypothesized interactions 1, 3, and 5 were the top performers, indicating that these (auto-inhibition of Dal80, inhibition of Dal80 by Gzf3, and cooperative binding by Gln3 and Gat1) are probably not real, or at least not significant under the measured conditions.

Lastly, the authors do a bit of model validation by creating new experiments against which to measure model predictions. Using their best model, the expression of Dal80 (Y-axis) under various perturbations is reasonably well-fit.

New experiments support model predictions reasonably well. In this case, the perturbation (a, b) was shifting from a poor to a rich (glutamine) food source, thereby inducing repressor regulators such as Dal80, and repressing the glutamine synthetic genes. In c, d, the perturbation was the reverse, moving cells from a rich source to a drug which directly shuts off the signaling of rich conditions, thereby releasing repression.
And given a model, one can isolate individual aspects of interest, such as the predicted occupancy of target promoters/binding sites by the regulatory factors, which they do in great detail. In the end, the authors complain that much remains unknown about this system (give us more funding!). But the far more pressing question is what to do about the thousands of other networks and species with far more complication and less data. How can they be modelled usefully, and what is the minimal amount of data needed to do so?

  • More on regulatory logic.
  • The state can work effectively.
  • A little pacifism: "Our government has roughly eight hundred foreign military bases."
  • While we have been stagnating, the rest of the world has been catching up and doing better.
  • ECB and helicopter money, but not for Greece.
  • Pakistan is not the only one playing a double game in Afghanistan.
  • Fed, on the wrong track.
  • Every day is opposite day. Do gun nuts know anything about Christianity? "Collectivism: humanity's oldest disease."
  • Methods of a con artist.
  • Abenomics looks a lot more like austerity.

Saturday, May 28, 2016

The Housing Crisis- or is it a Transportation Crisis?

Bustling areas of the US are in the grips of a housing, transportation, and homelessness crisis.

While tracts of empty houses remain from the recent near-depression in areas like Florida, Nevada, and Detroit, other areas suffer from the opposite problem. Average detached house prices are at a million dollars in the San Francisco Bay Area. While this number is proudly trumpeted by the local papers for their satisfied home-owning constituents, the news for others is not so good. Houses are priced far above construction cost, clearly unaffordable for average workers, and rents are rising to unaffordable levels as well. How did we get here?

One of the great ironies is that environmentalism has allied with other status quo forces to stall development for decades. Existing homeowners have little interest in transforming their sprawly neighborhoods into denser, more efficient urban centers. Then they pat themselves on the back for preserving open space and small-town ambiance, along with inflated property values. Public officials have been stymied by Proposition 13 and other low-tax movements from funding infrastructure to keep up with population growth. Local roads are now frequently at a standstill, making zoning for more housing essentially unthinkable. Add in a drought, and the policy response to growth is to hide one's head in the sand.

Then ... a scene from Dark Passage.

There is a basic public policy failure to connect population and business growth with the necessary supporting structures- a failure of planning. No new highway has been built for decades, even as the population of the Bay Area has increased by 10% since just 2000, the number of cars increased even more, and the population of the state has doubled since 1970. How was that supposed to work?

Now ... at the bay bridge.

An alternative approach would have been to limit population growth directly, perhaps via national immigration restrictions or encouragement for industry to move elsewhere. But that doesn't seem attractive to our public officials either, nor is it very practical. In a tragedy of the commons, people flock to an attractive area, but eventually end up being driven away by how crowded and unbearable the area becomes. A Malthusian situation, not from lack of food, but of other necessities. But with modern urban design & planning, it doesn't have to be that way- just look at Singapore, Hong Kong, New York, and other metropolises.

In the post-war era, the US, and California in particular, built infrastructure ahead of growth, inviting businesses to a beautiful and well maintained state. But once one set of roads was built, and a great deal of settled activity accumulated around them, expansion became increasingly difficult. Now that a critical mass of talent and commercial energy is entrenched and growing by network forces, the contribution from the state has descended to negligible, even negative levels, as maintenance is given short shrift, let alone construction of new capacity, for roads, housing development, water, and sewer infrastructure. Prop 13 was, in retrospect, the turning point.

It is in miniature the story of the rise, decline, and fall of civilization. For all the tech innovation, the Bay Area is showing sclerosis at the level of public policy- an inability to deal with its most basic problems. The major reason is that the status quo has all the power. Homeowners have a direct financial interest in preventing further development, at least until the difficulties become so extreme as to result in mass exodus. One hears frequently of trends of people getting out of the area, but it never seems to have much effect, due to the area's basic attractiveness. Those who can afford to be here are also the ones investing in and founding businesses that keep others coming in their wake.

The post-war era was characterized by far more business influence on government, (especially by developers, the ultimate bogey-men for the environmentalists and other suburban status-quo activists), even while the government taxed businesses and the wealthy at far higher levels. Would returning to that system be desirable? Only if our government bodies can't get their own policy acts together. The various bodies that plan our infrastructure (given that the price signal has been cancelled by public controls on development) have been far too underfunded and hemmed in by short-sighted status quo interests- to whom the business class, which is typically interested in growth and labor availability more than holding on to their precious property values, are important counter-weights.

The problem is that we desperately need more housing to keep up with population, to keep housing affordable, and ultimately also to resolve the large fraction of homelessness that can be addressed by basic housing affordability. But housing alone, without a full package of more transportation and other services, makes no sense on its own. So local planning in areas like the Bay Area needs a fundamental reset, offering residents better services first (more transit, cleared-up roads) before allowing more housing. Can we build more roads? Or a new transit system? We desperately need another bridge across the bay, for example, and a vastly expanded BART system, itself a child of the post-war building boom, now fifty years old.

BART system map, stuck in time.

Incidentally, one can wonder why telecommuting hasn't become more popular, but the fact that a region like the Bay Area has built up a concentration of talent that is so enduring and growing despite all the problems of cost, housing, and transportation speaks directly to the benefits of (or at least the corporate desire for) corporeal commuting. Indeed, it is common for multinational companies to set up branches in the area to take advantage of the labor pool willing to appear in person, rather than trying to lure talent to less-connected areas cybernetically or otherwise.

One countervailing argument to more transit and road development is that the housing crisis and existing road network has motivated commuters to live in ever farther-flung outlying areas, even to Stockton. Thus building more housing first, in dense, central areas, might actually reduce traffic, by bringing those commuters back to their work places. This does not seem realistic, unfortunately. One has to assume that any housing increment will lead to more people, cars, and traffic, not less. There is no way to channel housing units to only those people who will walk to work, or take transit, etc., especially in light of the poor options currently available. The only way to relieve the transportation gridlock is to make using it dramatically more costly, or to provide more transportation- especially, more attractive transit options.

Another argument is that building more roads just leads to more usage and sprawl. This is true to some extent, but the solution is not to make the entire system dysfunctional in hopes of pushing marginal drivers off the road, or out of the area in despair. A better solution, if building more capacity is out of the question, is to take aim directly at driving by raising its price. The gas tax is far too low, and the California carbon tax (we have one, thankfully!) is also too low. There is already talk of making electric vehicle drivers pay some kind of higher registration or per-mile fee to offset their lack of gas purchases, but that seems rather premature and counter-productive from a global warming perspective. To address local problems, tolls could be instituted, not just at bridges as they are now, but in other areas where congestion is a problem, to impose costs across the board on users, as well as to fund improvements. This would also address the coming wave of driverless cars, which threatens to multiply road usage yet further.

In the end, housing and transportation are clearly interlinked, on every level. Each of us lives on a street, after all. Solving one problem, such as homelessness and the stratospheric cost of housing, requires taking a step back, looking at the whole system, and addressing root causes, which come down to zoning, transportation, money, and the quality and ambition of our planning.



  • Hey- how about those objective, absolutely true values?
  • Bernie has some mojo, and doing some good with it.
  • We know nothing ... at the State department.
  • The Fed is getting ready to make another mistake.
  • For the umpteenth time, we need more fiscal policy.
  • Cheating on taxes, the Trump way.
  • Yes, Trump is this stupid, and horrible.
  • Another disaster from Hillary Clinton's career.
  • Corporations are doing well the old-fashioned way, through corruption.
  • What happens when labor is too cheap, and how trade is not so peaceful after all.

Saturday, May 21, 2016

Tinier and Tinier- Advances in Electron Microscopy

Phase contrast and phase plates for electrons; getting to near-atomic resolution.

Taken for granted today in labs around the world, phase contrast light microscopy won a Nobel prize in 1953. It is a fascinating manipulation of light to enhance the visibility of objects that may be colorless, but have a refractive index different from the medium. This allowed biologists especially to see features of cells while they were still alive, rather than having to kill and stain them. But it has been useful for mineralogy and other fields as well.

Optical phase contrast apparatus. The bottom ring blocks all but that ring of light from coming into the specimen from below, while the upper ring captures that light, dimming it and shifting its phase.

Refraction of light by a sample has very minor effects in normal bright field microscopy, but does two important things for phase contrast microscopy. It bends the light slightly, like a drink bends the image of a straw, and secondly, it alters the wave phase of the light as well, retarding it slightly relative to the unaffected light. Ultimately, these are both effects of slowing light down in a denser material.

The phase contrast microscope takes advantage of both properties. Rings are strategically placed both before and after the sample, so that the direct light is channeled in a cone that is intercepted, after hitting the sample, by the phase plate. This plate both dims the direct light, so that it does not compete as heavily with the scarcer refracted light, and, more importantly, phase-retards the direct light by 90 degrees.

Light rotational phase relationships in phase contrast. The phase plate shifts the direct (bright) light from -u- to -u1-. Light that has gone through the sample and been refracted is -p-, which interferes far more effectively with -u1- (or -u2-, an alternate method) than with the original -u-, generating -p1- or -p2-, respectively.

The diagram above shows the phase relationships of light in phase contrast. The direct light is u on the right diagram, and p is the refracted and phase-shifted light from the specimen; d is the vector difference between them. Interference between the two light sources, given their slight phase difference, is also slight, and gives very little contrast. But if the direct light is phase-shifted by 90 degrees, either in the negative (original method, left side, u1) or positive direction (right, u2), then adding the d vector via interference with the refracted light has much more dramatic effects, resulting in the phase contrast effect. Phase shifting is done with special materials, such as specifically oriented quartz.
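
The same vector picture can be written down with complex amplitudes, which makes the arithmetic plain. A minimal numerical sketch (the 1 + iφ weak-phase approximation is standard optics; the specific numbers are illustrative):

import cmath

# A weak phase object transmits roughly 1 + i*phi: a unit direct wave
# plus a small scattered wave 90 degrees out of phase with it.
phi = 0.1                 # small phase shift imparted by the specimen
direct = 1 + 0j
scattered = 1j * phi

def intensity(wave):
    return abs(wave) ** 2

# Without a phase plate, the 90-degree offset means almost no contrast.
plain = intensity(direct + scattered)             # 1 + phi**2 ~ 1.01

# Retarding the direct wave by a further 90 degrees brings the two waves
# into head-on interference, making the contrast first-order in phi.
shifted = direct * cmath.exp(-1j * cmath.pi / 2)  # = -i
contrast = intensity(shifted + scattered)         # (1 - phi)**2 ~ 0.81

print(f"no plate: {plain:.3f}   with plate: {contrast:.3f}")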

Example of the dramatic enhancement possible with optical phase contrast.

A recent paper reviews methods for generating phase contrast for electron microscopy, which, with its far smaller wavelength, is able to resolve much finer details, and which also revolutionized biology when it was invented, sixty years ago. But transmission electron microscopy is bedeviled, just as light microscopy was, by poor contrast in many specimens, particularly biological ones, where the atomic composition is all very light-weight: carbons, oxygens, hydrogens, etc., with little difference from the water medium or the various cellular or protein constituents. Elaborate staining procedures using heavy metals have been used, but it would be preferable to image flash-frozen and sectioned samples more directly. Thus a decades-long quest to develop an electron analogue of phase contrast imaging, and a practical electron phase plate in particular.

Electrons have waves just as light does, but they are far smaller and somewhat harder to manipulate. It turns out that a thin plate of randomly deposited carbon, with a hole in the middle, plus electrodes to bleed off absorbed electrons and even bias the voltage to manipulate them, is enough to do the trick. Why the hole? This is where the un-shifted electrons come through, (and these mostly also do not interact significantly with the specimen), after which they interfere with the refracted and phase-shifted electrons coming through the surrounding carbon plate. This has the effect of emphasizing those electrons phase-shifted by the specimen that escape the destructive interference.
"A cosine-type phase-contrast transfer function emerges when the phase-shifted scattered waves interfere with the non-phase-shifted unscattered waves, which passed through the center hole before incidence onto the specimen."

The upshot is that one can go from the image on the right to the one on the left- an amazing difference.
Transmission electron microscopy of a bacterium. Normal is right, phase contrast is left.

At a more molecular scale, one can see individual proteins better, here the GroEL protein chaperone complex, which is a barrel-shaped structure inside of which other proteins are encouraged to fold properly.
Transmission electron microscopy of individual GroEL complexes, normal on left, phase contrast on right. 



Saturday, May 14, 2016

Dissection of an Enhancer

Enhancers provide complex, combinatorial control of gene expression in eukaryotes. Can we get past the cartoons?

How can humans get away with having no more genes than a nematode or a potato? It isn't about size, but how you use what you've got. And eukaryotes use their genes with exquisite subtlety, controlling them from DNA sequences called enhancers that can be up to a million base pairs away. Over the eons, countless levels of regulatory complexity have piled onto the gene expression system, more elements of which come to light every year. But the most powerful control over genes comes from modular cassettes (called enhancers) peppered over the local DNA, to which regulatory proteins bind to form complexes that can either activate or repress expression. These proteins are themselves expressed from yet other genes and regulatory processes that form a complex network or cascade of control.

When genome sequencing progressed to the question of what makes people different, and especially what accounts for differences in disease susceptibility, researchers quickly came up with a large number of mutations from GWAS, or genome-wide association studies, in data from large populations. But these mutations gave little insight into the diseases of interest, because the effect of each mutation was very weak. Otherwise the surveyed populations would not have been normal, as they typically were, but afflicted. A slight change in disease susceptibility coming from a mutation somewhere in the genome is not likely to be informative until we have a much more thorough understanding of the biological pathway of that disease.

This is one reason why biology is still going on, a decade and a half after the human genome was sequenced. The weak-effect mutations noted above are often far away from any gene, and figuring out what they do is rather difficult, because of their weakness, their perhaps uninformative position, and the complexity of disease pathways and the relevant environmental effects.

Part of the problem comes down to a need to understand enhancers better, since they play such an important role in gene expression. Many sequencing projects study the exome, which comprises the protein-coding bits of the genome, and thus ignore regulatory regions completely. But even if the entire genome is studied, enhancers are maddening subjects, since they are so darned degenerate- which is a technical term for being under-specified, with lots of noise in the data. DNA-binding proteins tend to bind to short sites, typically of seven to ten nucleotides, with quite variable/noisy composition. But if helped by a neighbor, they may bind to a quite different site... who knows? Such short sequences are naturally very common around the genome, so which ones are real, and which are decoys, among the tens or hundreds of thousands of basepairs around a gene? Again, who knows?
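
That degeneracy can be made concrete with a position weight matrix, the standard encoding of such binding preferences. A toy scan in Python (the four-base motif and its scores are invented for illustration; real GATA-factor matrices are built from measured binding data):

# Toy position weight matrix (PWM) scan. Each column scores each base at
# that motif position; a window's score is the sum over columns.
PWM = [
    {"G": 2.0, "A": -1.0, "T": -1.0, "C": -1.0},
    {"G": -1.0, "A": 2.0, "T": 0.5, "C": -1.0},   # a degenerate A/T column
    {"G": -1.0, "A": -1.0, "T": 2.0, "C": -1.0},
    {"G": -1.0, "A": 2.0, "T": -1.0, "C": -1.0},
]

def scan(dna, threshold=5.0):
    for i in range(len(dna) - len(PWM) + 1):
        window = dna[i:i + len(PWM)]
        score = sum(col[base] for col, base in zip(PWM, window))
        if score >= threshold:
            yield i, window, score

for pos, site, score in scan("CCGATAGGTTGTTAGCCC"):
    print(f"hit at {pos}: {site} (score {score:.1f})")
# Both GATA (8.0) and the weaker GTTA (6.5) pass threshold- and short,
# noisy motifs like this match often by chance across a whole genome.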

Thus molecular biologists have been content to do very crude analyses, deleting pieces of DNA around a specific gene, measuring a target gene's expression, and marking off sites of repression and enhancement using those results. Then they present a cartoon:

Drosophila Runt locus, with its various control regions (enhancers) mapped out at top on the genomic locus, and the proto-segmental stripes in the embryo within which each enhancer contributes to activate expression below. The locus spans 80,000 basepairs, of which the coding region is the tiny set of exons marked at top in blue with "Run".

This is a huge leap of knowledge, but it is hardly the kind of quantitative data that allows computational prediction and modeling of biology throughout the relevant regulatory pathways, let alone for other genes to which some of the same regulatory proteins bind. That would require a whole other level of data about protein-DNA binding propensities, effects from other interacting proteins, and the like, put on a quantitative basis. Which is what a recent paper begins to do.
"The rhomboid (rho) enhancer directs gene expression in the presumptive neuroectoderm under the control of the activator Dorsal, a homolog of NF-κB. The Twist activator and Snail repressor provide additional essential inputs"
A Drosophila early embryo, stained for gene expression of Rhomboid, in red. The expression patterns of the regulators Even-skipped (stripes) and Snail (ventral, or left) are both stained in green. The dorsal (back) direction is right, ventral (belly) is left, and the graph is of Rhomboid expression over the ventral->dorsal axis. The enhancer of the Rhomboid gene shown at top has its individual regulator sites colored as green (Dorsal), red (Snail) and yellow (Twist). 

Their analysis focused on one enhancer of one gene, the Rhomboid gene of the fruit fly, which directs embryonic gene expression just dorsal to the midline, shown above in red. The Snail regulator is a repressor of transcription, while Dorsal and Twist are both activators. A few examples of deleting some of these sites are shown below, along with plots of Rhomboid expression along the ventral/dorsal axis.

Individual regulator binding sites within the Rhomboid enhancer (B, boxes), featuring different site models (A) for each regulator. The fact that one regulator such as Dorsal can bind to widely divergent sites, such as DL1 and DL2/3, suggests the difficulty of finding such sites computationally in the genome. B shows how well the models match the actual sequence at sites known to be bound by the respective regulators.

Plots of ventral-> dorsal expression of Rhomboid after various mutations of its Dorsal / Twist/ Snail enhancer. Black is the wild-type case, blue is the mutant data, and red is the standard error.

It is evident that the Snail sites, especially the middle one, play an important role in restricting Rhomboid expression to the dorsal side of the embryo. This makes sense from the region of Snail expression shown previously, which is restricted to the ventral side, and from Snail's activity, which is repression of transcription.
"Mutation of any single Dorsal or Twist activator binding site resulted in a measurable reduction of peak intensity and retraction of the rho stripe from the dorsal region, where activators Dorsal and Twist are present in limiting concentrations. Strikingly, despite the differences in predicted binding affinities and relative positions of the motifs, the elimination of any site individually had similar quantitative effects, reducing gene expression to approximately 60% of the peak wild-type level"

However, when they removed pairs of sites and other combinations, the effects became dramatically non-linear, necessitating more complex modelling. In all, they tested 38 variations of this one enhancer by taking out various sites, and generated 120 hypothetical models (using a machine learning system) of how the sites might cooperate in various non-linear ways.
"Best overall fits were observed using a model with cooperativity values parameterized in three 'bins' of 60 bp (scheme C14) and quenching in four small 25 or 35 bp bins (schemes Q5 and Q6)."
Example of data from some models (Y-axis) run on each of the 38 mutated enhancer data (X-axis). Blue is better fit between the model and the data.

What they found was that each factor needed to be modelled a bit differently. The cooperativity of the Snail repressor was quite small. While the (four) different sites differ in their effect on expression, they seem to act independently. In contrast, the activators were quite cooperative, an effect that was essentially unlimited in distance, at least over the local enhancer. Whether cooperation can extend to other enhancer modules, of which there can be many, is an interesting question.
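
The flavor of such models can be seen in a stripped-down fractional-occupancy calculation, where activator cooperativity multiplies the statistical weight of bound states while the repressor acts independently. This is a hand-rolled toy to show why losing one cooperative site is disproportionately costly- not the authors' machine-learned model, and all parameter values are invented:

# Toy thermodynamic model of an enhancer: output follows the fractional
# occupancy of the activator-bound state, boosted by pairwise
# cooperativity, and quenched independently by a bound repressor.
def expression(n_activators, coop, repressor_bound, w_act=2.0, quench=0.2):
    pairs = n_activators * (n_activators - 1) // 2
    w_on = (w_act ** n_activators) * (coop ** pairs)   # weight of "on" state
    p_on = w_on / (1.0 + w_on)                         # occupancy vs empty
    return p_on * (quench if repressor_bound else 1.0)

# With cooperativity (coop > 1), deleting sites has sharply non-linear
# effects, echoing the paired-site deletion results described above.
for n in (3, 2, 1):
    print(f"{n} activator sites: independent={expression(n, 1.0, False):.2f}, "
          f"cooperative={expression(n, 3.0, False):.2f}")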

Proof of their pudding was in the extension of their models to other enhancers, using the best models they came up with in a general form to predict expression from other enhancers that share the same regulators.

Four other enhancers (Ventral nervous system defective [vnd], Twist, and Rhomboid from two other species of Drosophila) are scored for the modeled expression (red) over the dorsal-ventral axis, with actual expression in black.

The modeling turns out pretty decent, though half the cases are the same Rhomboid gene enhancer from related Drosophila species, which do not present a very difficult test. Could this model be extended to other regulators? Can their conclusion about the cooperativity of repressors vs activators be generalized? Probably not, or not very strongly. It is likely that similar studies would need to be carried out for most major classes of regulators to accumulate the basic data that would allow more general and useful prediction.

And that leaves the problem of finding the sites themselves, which this paper didn't deal with, but which is increasingly addressable with modern genomic technologies. There is a great deal yet to do! This work is a small example of the increasing use of modeling in biology, and the field's tip-toeing progress towards computability.

  • Seminar on the genetics of Parkinson's.
  • Whence conservatism?
  • Krugman on the phony problem of the debt.
  • Did the medievals have more monetary flexibility?
  • A man for our time: Hume, who spent his own time in "theological lying".
  • Jefferson's moral economics.
  • Trump may be an idiot, just not a complete idiot.
  • Obama and Wall Street, cont...
  • The deal is working.. a little progress in Iran.
  • More annals of pay for performance.
  • Corruption at the core of national security.
  • China's investment boom came from captive savings, i.e. state financial control.