Saturday, June 18, 2016

Perception is Not a One-Way Street

Perception happens in the brain as a reality-modeling process that uses input from the external senses, but does so gradually, in a looping (i.e. Bayesian) refinement process that uses motor activity to drive attention and sensory perturbation.

The fact that perceptions come via our sensory organs, and stop once those organs are impaired, strongly suggests a simple camera-type model of one-way perceptual flow. Yet, recent research all points in the other direction, that perception is a more active process wherein the sense organs are central, but are also directed by attention and by pre-existing models of the available perceptual field in a top-down way. Thus we end up with a cognitive loop where the mind holds models of reality which are incrementally updated by the senses, but not wiped and replaced as if they were simple video feeds. The model is the perception and is more valuable than the input.
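
To make "incrementally updated" concrete, here is a minimal sketch in Python of a belief about one scene feature being nudged by successive noisy glances rather than overwritten by them. The Gaussian form, the numbers, and the single-feature framing are illustrative assumptions of mine, not anything from the paper.

# A one-dimensional, Kalman-style update: the current model (a Gaussian belief)
# absorbs each new sensory sample in proportion to how uncertain the model is.

def update_belief(mean, var, sample, sensor_var):
    """Combine belief N(mean, var) with a noisy sample assumed to have variance sensor_var."""
    gain = var / (var + sensor_var)           # how much to trust the new sample
    new_mean = mean + gain * (sample - mean)  # belief shifts toward the sample
    new_var = (1.0 - gain) * var              # uncertainty shrinks with each look
    return new_mean, new_var

belief = (0.0, 10.0)                          # vague prior about the feature
for sample in [2.1, 1.8, 2.3, 2.0]:           # successive fixations / glances
    belief = update_belief(*belief, sample, sensor_var=1.0)
    print(belief)

Each glance moves the estimate a little and firms it up; nothing is "wiped and replaced".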

One small example of a top-down element in this cognitive loop is visual attention. Our eyes are little outposts of the brain, and are told where to point. Even if something surprising happens in the visual field, the brain has to do a little processing before shifting attention, and thus the eyeballs, to that event. Our eyes are shifting all the time, being pointed to areas of interest, following movie scenes, words on a page, etc. None of this (including the jittery saccade system) is directed by the eyes themselves, but by higher levels of cognition.

The paper for this week notes ironically that visual perception studies have worked very hard to eliminate eye and other motion from their studies, to provide a consistent mapping from what the experimenters present to where the perceptions show up in the visual fields of the brain. Yet motion and directed attention are fundamental to complex sensation.

Other sensory systems vary substantially in their dependence on motion. Hearing is perhaps least dependent, as one can analyze a scene from a stationary position, though movement of either the sound or the subject, in time and space, is extremely helpful in enriching perception. Touch, through our hairs and skin, is intrinsically dependent on movement and action. Taste and smell are as well, though in a subtler way. Any constant smell will die pretty rapidly, subjectively, as we get used to it. It is the bloom of fresh tastes with each mouthful, or of new aromas, that creates sensation, as implied by the expression "clearing the palate". Aside from the issues of the brain's top-down construction of these perceptions through its choices and modeling, there is also the direct input of motor components, and of dynamic time elements, that enrich / enliven perception multi-modally, beyond a simple input stream model.

The many loops from sensory (left) to motor (right) parts of the perceptual network. This figure is focused on whisker perception by mice.

The current paper discusses these issues and makes the point that since our senses have always been embodied and in motion, they are naturally optimized for dynamic learning. And that the brain circuits mediating between sensation and action are pervasive and very difficult to separate in practice. The authors hypothesize very generally that perception consists of a cognitive quasi-steady state where motor cues are consistent with tactile and other sensory cues (assuming a cognitive model within which this consistency is defined), which is then perturbed by changes in any part of the system, especially sensory organ input, upon which the network seeks a new steady state. They term the core of the network the motor-sensory-motor (MSM) loop, thus emphasizing the motor aspects, and somewhat unfairly de-emphasizing the sensory aspects, which after all are specialized for higher abundance and diversity of data than the motor system. But we can grant that they are an integrated system. They also add that much perception is not conscious, so the fixation of a great deal of research on conscious reports, while understandable, is limiting.

"A crucial aspect of such an attractor is that the dynamics leading to it encompass the entire relevant MSM-loop and thus depend on the function transferring sensor motion into receptors activation; this transfer function describes the perceived object or feature via its physical interactions with sensor motion. Thus, ‘memories’ stored in such perceptual attractors are stored in brain-world interactions, rather than in brain internal representations."

A simple experiment. A camera is set up to watch a video screen, which shows light and dark half-screens that can move side to side. The software creates a sensory-motor loop, driving pan motors on the camera so that it tracks the visual edge, as shown in E. It is evident that there is not much learning involved, but simply a demonstration of an algorithm's effective integration of motor and sensory elements in pursuit of a simple feature.

Eventually, the researchers present some results, from a mathematical model and robot that they have constructed. The robot has a camera and motors to move around with, plus computer and algorithm. The camera only sends change data, as does the retina, not entire visual scenes, and the visual field is extremely simple- a screen with a dark and light side, which can move right or left. The motorized camera system, using equations approximating a MSM loop, can relatively easily home in on and track the visual right/left divider, and thus demonstrate dynamic perception driven by both motor and sensory elements. The cognitive model was naturally implicit in the computer code that ran the system, which was expecting to track just such a light/dark line. One must say that this was not a particularly difficult or novel task, so the heart of the paper is its introductory material.
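
A toy version of such a loop is easy to write down. The Python sketch below is only loosely inspired by the setup just described: the scene function, the two-sample "retina", and the proportional control law are all invented for illustration, not the authors' equations.

# Toy motor-sensory-motor loop: the sensor samples brightness just left and right of
# the current gaze point, and the motor command nudges gaze until it straddles the edge.

def scene(x, edge):
    """Brightness of the screen at position x: dark left of the edge, light to the right."""
    return 1.0 if x >= edge else 0.0

def msm_track(edge_positions, gaze=0.0, gain=0.5, offset=0.1):
    history = []
    for edge in edge_positions:                 # the edge may jump or drift over time
        left = scene(gaze - offset, edge)
        right = scene(gaze + offset, edge)
        # Both samples light: edge is to the left; both dark: edge is to the right.
        error = (left + right) - 1.0            # zero when gaze straddles the edge
        gaze -= gain * error                    # motor step driven by the sensory signal
        history.append(gaze)
    return history

# The edge sits at 0, then jumps to 1; gaze settles on it again within a few steps.
print(msm_track([0.0] * 5 + [1.0] * 15))

The "memory" of where the edge is lives nowhere except in the running interaction between gaze position and what the sensor reports, which is the paper's point about attractors being stored in brain-world interactions.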


  • The US has long been a wealthy country.
  • If we want to control carbon emissions, we can't wait for carbon supplies to run out.
  • Market failure, marketing, and fraud, umpteenth edition. Trump wasn't the only one getting into the scam-school business.
  • Finance is eating away at your retirement.
  • Why is the House of Representatives in hostile hands?
  • UBI- utopian in a good way, or a bad way?
  • Trump transitions from stupid to deranged.
  • The fight against corruption and crony Keynesianism, in India.
  • Whom do policymakers talk to? Hint- not you.
  • Whom are you talking to at the call center? Someone playing hot potato.
  • Those nutty gun nuts.
  • Is Islam a special case, in how it interacts with political and psychological instability?
  • Graph of the week- a brief history of inequality from 1725.

Saturday, June 11, 2016

This is Progress?

We are eating ourselves out of house and home.

Werner Herzog made a documentary about Chauvet cave, the repository of spectacular cave art from circa 31,000 years ago. One striking aspect is that virtually all the animals pictured there, and whose remains are found there, are extinct. The aurochs, cave bears, steppe bison, northern rhinoceroses, cave lions, cave hyenas- all gone. These are animals that had taken hundreds of thousands, if not millions, of years to evolve, yet a few tens of thousands of years later, they, along with the mammoths and other denizens of countless prior ice ages, are gone. What happened to them? We killed and ate them.

We then proceeded to raise human populations through agriculture, and to carve up the Earth's surface for farming. We have been clearing out competitors continuously, from wolves and lions down to insects. After a false start with overly destructive DDT, agriculture has now settled on neonicotinoids, which, while less persistent in the food chain, have created a silent holocaust of insects, resulting in dead zones throughout agricultural areas, the not-so-mysterious collapse of bees, and declines in all kinds of once-common insects.

Similarly, the oceans have been vacuumed of fish, with numerous collapsed and collapsing populations. And topping it all off are climate change and ocean acidification, which are gradually finishing the job of killing off Australia's Great Barrier Reef, many other reefs around the world, as well as terrestrial species at high latitudes and altitudes.

Have humans made progress? We have, in technical, organizational, and even moral terms. But while we pat ourselves on the back for our space age, smart phones, and hyper-connected intelligence, we also live on an ever-more impoverished planet, due mostly to overpopulation plus the very same development we value so much. Institutions and ideologies like the Catholic church that continue to see nothing wrong with infinite population increase, in a competitive quest for domination by sheer, miserable numbers, are, in this limited and declining world, fundamentally immoral.

The US, after its destruction and displacement of Native Americans, has grown up on an ideology of open frontiers and endless space. But now the political and social ramifications of overpopulation and overdevelopment are beginning to be felt. Trumpism is one reaction- the visceral feeling that, given our unwillingness to develop the requisite infrastructure and our evident environmental degradation, we just do not have the room any more for millions of further immigrants, even in a relatively sparsely populated country.

Economic inequality is not directly associated with this deep underlying Malthusian trend, since humans can degrade their environment under any economic regime- socialist, capitalist, or Keynesian. But it does provide a metaphor, with us humans lording it over our fellow creatures on the planet. Creatures whom we frequently invoke in our art and spiritual rhetoric and claim to regard with caring stewardship, even humane-ness. But then we keep killing and mistreating them anyhow.

We need to take sustainability seriously, both in terms of human populations and stewardship of the planet generally. E. O. Wilson has advocated for returning half our land to the wild, for the creatures that need it so desperately. This would be a supreme act of generosity and abstention. Though not even enough, in this age of global warming, it is part of the answer towards true sustainability.


Saturday, June 4, 2016

Modeling Gene Regulatory Circuitry

The difficult transition from cartoons to quantitative analysis of gene regulation

As noted a few weeks ago, gene regulation is a complicated field, typically with cartoonish views developed from small amounts of data. Mapping out the basic parameters is one thing, but creating quantitative models of how regulation happens in a dynamic environment is something quite different- something still extremely rare. A recent paper uses yeast genetics to develop a more thorough way to model gene regulation, and to decide among and refine such models.

A cartoon of glutamine (nitrogen) source regulation in yeast cells. Glutamine is a good food, and, if available outside, turns off the various genes needed to synthesize it. Solid lines are known interactions, and dashed lines are marginal or hypothesized interactions. Dal80 and Gzf3 both form dimers, which act more strongly (as inhibitor and activator, respectively) than single proteins.
When times are good for yeast cells, in nitrogen terms, an upstream signaling system inhibits the gene activators Gat1 and Gln3, leaving the repressors Dal80 and Gzf3 present and active to repress the various target genes that contribute to the synthesis of the key nitrogen-containing molecule glutamine, since it is available as food. All these regulators bind similar sequences, the GATA motif, near their target genes (which number about 90), so the presence of the repressors can block the activity of the activators as well as shutting off gene expression directly. Conversely, when times are bad and no glutamine is coming in as food, the suite of glutamine synthesis genes is turned on by Gat1 and Gln3.

Binding site preferences for each regulatory protein discussed. One can tell that they are not always very well-defined.
But things are not so simple, since, evolution being free to come up with any old system and always tinkering with its inheritance, there are feedback loops in several places, which exist at least in part to provide a robust on/off switch out of this analog logic. In fact, the GAT1, DAL80, and GZF3 genes each have the GATA motif in their own regulatory regions. Even with such a small system, arrows are going every which way, and soon it is very difficult to come up with a defensible, intuitive understanding of how the network behaves.
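
To make the stated logic explicit, here is a deliberately crude boolean caricature in Python. The rules are mine, written only to restate the cartoon (the nitrogen signal gates the activators, the repressors hold GATA promoters off in rich conditions, and the regulators' own genes carry the same GATA motifs); it says nothing about the real dynamics, which is precisely what the modeling below is for.

# Boolean caricature of the GATA network cartoon; rules simplified and partly assumed.

def gata_promoter_output(activators_present, repressors_present):
    """Net output of a GATA-motif promoter: on only if activators bind unopposed."""
    return activators_present and not repressors_present

def network(glutamine_rich):
    activators = not glutamine_rich    # upstream signaling inhibits Gat1/Gln3 when fed
    repressors = glutamine_rich        # per the cartoon, Dal80/Gzf3 dominate in rich conditions
    return {
        "glutamine synthesis genes": gata_promoter_output(activators, repressors),
        "GAT1 transcription": gata_promoter_output(activators, repressors),   # feedback
        "DAL80 transcription": gata_promoter_output(activators, repressors),  # feedback
    }

print(network(glutamine_rich=True))    # everything off
print(network(glutamine_rich=False))   # synthesis genes (and the regulators) on

Even this caricature shows the problem: the regulators' outputs feed back into their own inputs, so anything quantitative (delays, strengths, dimerization) immediately matters.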

Edging towards a model. Individual aspects of the known or hypothesized interactions are encoded in computable form.
The data behind the work is a collection of mRNA abundance (i.e. gene expression) studies run under various conditions, especially in mutants of the various genes, and under nitrogen-rich or nitrogen-poor conditions. Panels of the abundance of all mRNAs of interest can be easily run- the problem really is interpretation, and the generation or design of the various mutants and environmental conditions that make for informative perturbations.

This is where modelling comes into play. The authors set up the known and hypothesized interactions, each in its own equation, whose parameters could vary. Though the number of elements is small, the large number of interactions / equations meant that the models (with 5 interactions, 13 states, and 41 parameters), given only a partial set of data, could not be solved analytically, but were instead approximated by Monte Carlo methods, which is to say, by repeated guided sampling. Models with various hypothesized interactions were compared with each other in performance over perturbations, where the model is given a change in conditions, such as a switch to low-nitrogen medium or an inactivating mutation in one component. The model comparison method was Bayesian, in that it was iterative and took prior knowledge into account, such as the established interactions and their key parameter levels, wherever known.

Given a model, its fit to the experimental mRNA expression profiles under various conditions can be measured, its parameters adjusted, and the process iterated. Many models can be compared, and eventually a competitive process reveals which models work better. This is informative if the models are sufficiently detailed, and there is enough detailed data to measure them against, which is one of the strong points of this well-studied regulatory system. Whether this method can be extended to other systems with far less data is questionable.
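
In outline it is a guess-score-compare loop. The Python sketch below is a stand-in for that loop with invented pieces: a "model" is just a function from parameters and a perturbation to predicted mRNA levels, the score is squared error, and the search is a blind random draw rather than the proper iterative Bayesian sampling the authors used over their 13-state, 41-parameter models.

import random

def score(model, params, experiments):
    """Sum of squared differences between predicted and measured mRNA levels."""
    err = 0.0
    for perturbation, measured in experiments:
        predicted = model(params, perturbation)
        err += sum((p - m) ** 2 for p, m in zip(predicted, measured))
    return err

def fit_monte_carlo(model, n_params, experiments, n_samples=10000):
    """Crude Monte Carlo parameter search: draw random parameter sets, keep the best."""
    best_params, best_err = None, float("inf")
    for _ in range(n_samples):
        params = [random.uniform(0.0, 10.0) for _ in range(n_params)]
        err = score(model, params, experiments)
        if err < best_err:
            best_params, best_err = params, err
    return best_params, best_err

def compare(candidate_models, experiments):
    """Rank candidate interaction schemes by the best fit each can achieve."""
    fits = {name: fit_monte_carlo(model, n_params, experiments)[1]
            for name, (model, n_params) in candidate_models.items()}
    return sorted(fits.items(), key=lambda item: item[1])

The point of ranking whole model variants, rather than just fitting one, is that the presence or absence of a hypothesized interaction (an extra equation term) is itself the thing being tested.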

In this case, one hypothesized interaction stood out as always contributing to more successful models. That was the inhibition of Gzf3 by Dal80, its close relative. Also, in further selections, hypothesis 2 was strongly supported, which is the auto-activation of Gat1, probably by binding to its own promoter. On the other hand, models that were missing the hypothesized interactions 1, 3, and 5 were the top performers, indicating that these (auto-inhibition of Dal80, inhibition of Dal80 by Gzf3, and cooperative binding by Gln3 and Gat1) are probably not real, or at least not significant under the measured conditions.

Lastly, the authors do a bit of model validation by creating new experiments against which to measure model predictions. Using their best model, the expression of Dal80 (Y-axis) under various perturbations is reasonably well-fit.

New experiments support model predictions reasonably well. In this case, the perturbation (a, b) was shifting from a poor to a rich (glutamine) food source, thereby inducing the repressor regulators such as Dal80, and repressing the glutamine synthetic genes. In c, d, the perturbation was the reverse, moving cells from a rich source to a drug which directly shuts off the signaling of rich conditions, thereby releasing repression.
And given a model, one can isolate individual aspects of interest, such as the predicted occupancy of target promoters/binding sites by the regulatory factors, which they do in great detail. In the end, the authors complain that much remains unknown about this system (give us more funding!). But the far more pressing question is what to do about the thousands of other networks and species with far more complication and less data. How can they be modelled usefully, and what is the minimal amount of data needed to do so?

  • More on regulatory logic.
  • The state can work effectively.
  • A little pacifism: "Our government has roughly eight hundred foreign military bases."
  • While we have been stagnating, the rest of the world has been catching up and doing better.
  • ECB and helicopter money, but not for Greece.
  • Pakistan is not the only one playing a double game in Afghanistan.
  • Fed, on the wrong track.
  • Every day is opposite day. Do gun nuts know anything about Christianity? "Collectivism: humanity's oldest disease."
  • Methods of a con artist.
  • Abenomics looks a lot more like austerity.

Saturday, May 28, 2016

The Housing Crisis- or is it a Transportation Crisis?

Bustling areas of the US are in the grips of a housing, transportation, and homelessness crisis.

While tracts of empty houses remain from the recent near-depression in areas like Florida, Nevada, and Detroit, other areas suffer from the opposite problem. Average detached house prices are at a million dollars in the San Francisco Bay Area. While this number is proudly trumpeted by the local papers for their satisfied home-owning constituents, the news for others is not so good. Houses are priced far above construction cost, clearly unaffordable for average workers, and rents are rising to unaffordable levels as well. How did we get here?

One of the great ironies is that environmentalism has allied with other status quo forces to stall development for decades. Existing homeowners have little interest in transforming their sprawly neighborhoods into denser, more efficient urban centers. Then they pat themselves on the back for preserving open space and small-town ambiance, along with inflated property values. Public officials have been stymied by Proposition 13 and other low-tax movements from funding infrastructure to keep up with population growth. Local roads are now frequently at a standstill, making zoning for more housing essentially unthinkable. Add in a drought, and the policy response to growth is to hide one's head in the sand.

Then ... a scene from Dark Passage.

There is a basic public policy failure to connect population and business growth with the necessary supporting structures- a failure of planning. No new highway has been built for decades, even as the population of the Bay Area has increased by 10% since just 2000, the number of cars increased even more, and the population of the state has doubled since 1970. How was that supposed to work?

Now ... at the bay bridge.

An alternative approach would have been to limit population growth directly, perhaps via national immigration restrictions or encouragement for industry to move elsewhere. But that doesn't seem attractive to our public officials either, nor is it very practical. In a tragedy of common action, people flock to an attractive area, but eventually end up being driven away by how crowded and unbearable the area becomes. A Malthusian situation, not from lack of food, but of other necessities. But with modern urban design & planning, it doesn't have to be that way- just look at Singapore, Hong Kong, New York, and other metropolises.

In the post-war era, the US, and California in particular, built infrastructure ahead of growth, inviting businesses to a beautiful and well maintained state. But once one set of roads was built, and a great deal of settled activity accumulated around them, expansion became increasingly difficult. Now that a critical mass of talent and commercial energy is entrenched and growing by network forces, the contribution from the state has descended to negligible, even negative levels, as maintenance is given short shrift, let alone construction of new capacity for roads, housing development, water, and sewer infrastructure. Prop 13 was, in retrospect, the turning point.

It is in miniature the story of the rise, decline, and fall of civilization. For all the tech innovation, the Bay Area is showing sclerosis at the level of public policy- an inability to deal with its most basic problems. The major reason is that the status quo has all the power. Homeowners have a direct financial interest in preventing further development, at least until the difficulties become so extreme as to result in mass exodus. One hears frequently of trends of people getting out of the area, but it never seems to have much effect, due to the area's basic attractiveness. Those who can afford to be here are also the ones investing in and founding businesses that keep others coming in their wake.

The post-war era was characterized by far more business influence on government (especially by developers, the ultimate bogey-men for the environmentalists and other suburban status-quo activists), even while the government taxed businesses and the wealthy at far higher levels. Would returning to that system be desirable? Only if our government bodies can't get their own policy acts together. The various bodies that plan our infrastructure (given that the price signal has been cancelled by public controls on development) have been far too underfunded and hemmed in by short-sighted status quo interests- to whom the business class, typically interested in growth and labor availability more than in holding on to precious property values, is an important counter-weight.

The problem is that we desperately need more housing to keep up with population, to keep housing affordable, and ultimately also to resolve the large fraction of homelessness that can be addressed by basic housing affordability. But housing alone, without a full package of more transportation and other services, makes no sense on its own. So local planning in areas like the Bay Area needs a fundamental reset, offering residents better services first (more transit, cleared-up roads) before allowing more housing. Can we build more roads? Or a new transit system? We desperately need another bridge across the bay, for example, and a vastly expanded BART system, itself a child of the post-war building boom, now fifty years old.

BART system map, stuck in time.

Incidentally, one can wonder why telecommuting hasn't become more popular, but the fact that a region like the Bay Area has built up a concentration of talent that is so enduring and growing despite all the problems of cost, housing, and transportation speaks directly to the benefits of (or at least the corporate desire for) corporeal commuting. Indeed, it is common for multinational companies to set up branches in the area to take advantage of the labor pool willing to appear in person, rather than trying to lure talent to less-connected areas cybernetically or otherwise.

One countervailing argument to more transit and road development is that the housing crisis and existing road network has motivated commuters to live in ever farther-flung outlying areas, even to Stockton. Thus building more housing first, in dense, central areas, might actually reduce traffic, by bringing those commuters back to their work places. This does not seem realistic, unfortunately. One has to assume that any housing increment will lead to more people, cars, and traffic, not less. There is no way to channel housing units to only those people who will walk to work, or take transit, etc., especially in light of the poor options currently available. The only way to relieve the transportation gridlock is to make using it dramatically more costly, or to provide more transportation- especially, more attractive transit options.

Another argument is that building more roads just leads to more usage and sprawl. This is true to some extent, but the solution is not to make the entire system dysfunctional in hopes of pushing marginal drivers off the road, or out of the area in despair. A better solution, if building more capacity is out of the question, is to take aim directly at driving by raising its price. The gas tax is far too low, and the California carbon tax (we have one, thankfully!) is also too low. There is already talk of making electric vehicle drivers pay some kind of higher registration or per-mile fee to offset their lack of gas purchases, but that seems rather premature and counter-productive from a global warming perspective. To address local problems, tolls could be instituted, not just at bridges as they are now, but in other areas where congestion is a problem, to impose costs across the board on users, as well as to fund improvements. This would also address the coming wave of driverless cars, which threatens to multiply road usage yet further.

In the end, housing and transportation are clearly interlinked, on every level. Each of us lives on a street, after all. Solving one problem, such as homelessness and the stratospheric cost of housing, requires taking a step back, looking at the whole system, and addressing root causes, which come down to zoning, transportation, money, and the quality and ambition of our planning.



  • Hey- how about those objective, absolutely true values?
  • Bernie has some mojo, and doing some good with it.
  • We know nothing ... at the State department.
  • The Fed is getting ready to make another mistake.
  • For the umpteenth time, we need more fiscal policy.
  • Cheating on taxes, the Trump way.
  • Yes, Trump is this stupid, and horrible.
  • Another disaster from Hillary Clinton's career.
  • Corporations are doing well the old-fashioned way, through corruption.
  • What happens when labor is too cheap, and how trade is not so peaceful after all.

Saturday, May 21, 2016

Tinier and Tinier- Advances in Electron Microscopy

Phase contrast and phase plates for electrons; getting to near-atomic resolution.

Taken for granted today in labs around the world, phase contrast light microscopy won a Nobel prize in 1953. It is a fascinating manipulation of light to enhance the visibility of objects that may be colorless, but have a refractive index different from the medium. This allowed biologists especially to see features of cells while they were still alive, rather than having to kill and stain them. But it has been useful for mineralogy and other fields as well.

Optical phase contrast apparatus. The bottom ring blocks all but that ring of light from coming into the specimen from below, while the upper ring captures that light, dimming it and shifting its phase.

Refraction of light by a sample has very minor effects in normal bright field microscopy, but does two important things for phase contrast microscopy. It bends the light slightly, like a drink bends the image of a straw, and secondly, it alters the wave phase of the light as well, retarding it slightly relative to the unaffected light. Ultimately, these are both effects of slowing light down in a denser material.

The phase contrast microscope takes advantage of both properties. Rings are strategically placed both before and after the sample so that the direct light is channeled in a cone that is then intercepted, after hitting the sample, by the phase plate. This plate both dims the direct light, so that it does not compete as heavily with the scarcer refracted light, and, more importantly, phase-retards it by 90 degrees.

Light rotational phase relationships in phase contrast. The phase plate shifts the direct (bright) light from -u- to -u1-. Light that has gone through the sample and been refracted is -p-, which interferes far more effectively with -u1- (or -u2-, an alternate method) than with the original -u-, generating -p1- or -p2-, respectively.

The diagram above shows the phase relationships of light in phase contrast. The direct light is u on the right diagram, and p is the refracted and phase-shifted light from the specimen; d is the radial difference in phasing. Interference between the two light sources, given their slight phase difference, is also slight and gives very little contrast. But if the direct light is phase shifted by 90 degrees, either in the negative (original method, left side, u1) or positive direction (right, u2), then adding the d vector via interference with the refracted light has much more dramatic effects, resulting in the phase contrast effect. Phase shifting is done with special materials, such as specifically oriented quartz.
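
The effect can also be put in a couple of lines of algebra; this is the textbook small-phase argument, not something taken from the figure. For a weak phase object the wave leaving the specimen is approximately

$u e^{i\phi} \approx u(1 + i\phi)$

so the diffracted part $p \approx i u \phi$ sits 90 degrees ahead of the direct wave $u$, and the recorded intensity $|u + i u \phi|^2 = u^2(1 + \phi^2) \approx u^2$ barely depends on the specimen at all: no contrast. If the phase plate retards the direct wave by 90 degrees ($u \rightarrow -iu$) and dims it by a factor $a < 1$, the intensity becomes

$|-i a u + i u \phi|^2 = u^2 (a - \phi)^2 \approx u^2 (a^2 - 2 a \phi)$

which now varies linearly with the specimen's phase retardation $\phi$: the invisible phase difference has been converted into visible brightness, and dimming the direct beam (small $a$) makes the relative modulation larger.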

Example of the dramatic enhancement possible with optical phase contrast.

A recent paper reviews methods for generating phase contrast for electron microscopy, which, with its far smaller wavelength, is able to resolve much finer details, and which also revolutionized biology when it was invented, sixty years ago. But transmission electron microscopy is bedeviled, just as light microscopy was, by poor contrast in many specimens, particularly biological ones, where the atomic composition is all very light-weight: carbons, oxygens, hydrogens, etc., with little difference from the water medium or the various cellular or protein constituents. Elaborate staining procedures using heavy metals have been used, but it would be preferable to image flash-frozen and sectioned samples more directly. Thus a decades-long quest to develop an electron analogue of phase contrast imaging, and a practical electron phase plate in particular.

Electrons have waves just as light does, but they are far smaller and somewhat harder to manipulate. It turns out that a thin plate of randomly deposited carbon, with a hole in the middle, plus electrodes to bleed off absorbed electrons and even bias the voltage to manipulate them, is enough to do the trick. Why the hole? This is where the un-shifted electrons (which mostly also do not interact significantly with the specimen) come through; they then interfere with the refracted and phase-shifted electrons coming through the surrounding carbon plate, which has the effect of emphasizing those electrons phase-shifted by the specimen that escape destructive interference.
"A cosine-type phase-contrast transfer function emerges when the phase-shifted scattered waves interfere with the non-phase-shifted unscattered waves, which passed through the center hole before incidence onto the specimen."

The upshot is that one can go from the image on the right to the one on the left- an amazing difference.
Transmission electron microscopy of a bacterium. Normal is right, phase contrast is left.

At a more molecular scale, one can see individual proteins better, here the GroEL protein chaperone complex, which is a barrel-shaped structure inside of which other proteins are encouraged to fold properly.
Transmission electron microscopy of individual GroEL complexes, normal on left, phase contrast on right. 



Saturday, May 14, 2016

Dissection of an Enhancer

Enhancers provide complex, combinatorial control of gene expression in eukaryotes. Can we get past the cartoons?

How can humans get away with having no more genes than a nematode or a potato? It isn't about size, but how you use what you've got. And eukaryotes use their genes with exquisite subtlety, controlling them from DNA sequences called enhancers that can be up to a million base pairs away. Over the eons, countless levels of regulatory complexity have piled onto the gene expression system, more elements of which come to light every year. But the most powerful control over genes comes from these modular cassettes peppered over the local DNA, to which regulatory proteins bind to form complexes that can either activate or repress expression. These proteins themselves are expressed from yet other genes and regulatory processes that form a complex network or cascade of control.

When genome sequencing progressed to the question of what makes people different, and especially what accounts for differences in disease susceptibility, researchers quickly came up with a large number of mutations from GWAS, or genome-wide association studies, in data from large populations. But these mutations gave little insight into the diseases of interest, because the effect of each mutation was very weak. Otherwise the populations studied would not have been normal, as they typically were, but afflicted. A slight change in disease susceptibility coming from a mutation somewhere in the genome is not likely to be informative until we have a much more thorough understanding of the biological pathway of that disease.

This is one reason why biology is still going on, a decade and a half after the human genome was sequenced. The weak-effect mutations noted above are often far away from any gene, and figuring out what they do is rather difficult, because of their weakness, their often uninformative position, and the complexity of disease pathways and the relevant environmental effects.

Part of the problem comes down to a need to understand enhancers better, since they play such an important role in gene expression. Many sequencing projects study the exome, which comprises the protein-coding bits of the genome, and thus ignore regulatory regions completely. But even if the entire genome is studied, enhancers are maddening subjects, since they are so darned degenerate. Which is a technical term for being under-specified, with lots of noise in the data. DNA-binding proteins tend to bind to short sites, typically of seven to ten nucleotides, with quite variable/noisy composition. But if helped by a neighbor, they may bind to a quite different site... who knows? Such short sequences are naturally very common around the genome, so which ones are real, and which are decoys, among the tens or hundreds of thousands of basepairs around a gene? Again, who knows?
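
To see how easily such short, degenerate motifs turn up by chance, here is a small Python sketch that scores a toy position weight matrix along random sequence. The matrix values, the motif length, and the score cutoff are all invented (this is not the GATA or Dorsal model), but the outcome is the generic one: even a fairly strict cutoff leaves hundreds of candidate "sites" per hundred kilobases of random DNA.

import math, random

# Toy 6-position weight matrix: probability of each base at each motif position.
pwm = [
    {"G": 0.7, "A": 0.1, "T": 0.1, "C": 0.1},
    {"G": 0.1, "A": 0.7, "T": 0.1, "C": 0.1},
    {"G": 0.1, "A": 0.1, "T": 0.7, "C": 0.1},
    {"G": 0.1, "A": 0.7, "T": 0.1, "C": 0.1},
    {"G": 0.1, "A": 0.7, "T": 0.1, "C": 0.1},
    {"G": 0.7, "A": 0.1, "T": 0.1, "C": 0.1},
]

def log_odds(site, background=0.25):
    """Log-odds score of a candidate site against a uniform background."""
    return sum(math.log2(pwm[i][base] / background) for i, base in enumerate(site))

random.seed(0)
genome = "".join(random.choice("GATC") for _ in range(100000))
hits = [i for i in range(len(genome) - 6) if log_odds(genome[i:i + 6]) > 5.0]
print(len(hits), "candidate sites above threshold in 100 kb of random sequence")

And that is for random sequence with a single motif; real enhancer calling has to contend with many regulators at once, plus the cooperative binding that lets a protein sit on a site its own matrix would score poorly.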

Thus molecular biologists have been content to do very crude analyses, deleting pieces of DNA around a specific gene, measuring a target gene's expression, and marking off sites of repression and enhancement using those results. Then they present a cartoon:

Drosophila Runt locus, with its various control regions (enhancers) mapped out at top on the genomic locus, and the proto-segmental stripes in the embryo within which each enhancer contributes to activate expression below. The locus spans 80,000 basepairs, of which the coding region is the tiny set of exons marked at top in blue with "Run".

This is a huge leap of knowledge, but it is hardly the kind of quantitative data that allows computational prediction and modeling of biology throughout the relevant regulatory pathways, let alone for other genes to which some of the same regulatory proteins bind. That would require a whole other level of data about protein-DNA binding propensities, effects from other interacting proteins, and the like, put on a quantitative basis. Which is what a recent paper begins to do.
"The rhomboid (rho) enhancer directs gene expression in the presumptive neuroectoderm under the control of the activator Dorsal, a homolog of NF-κB. The Twist activator and Snail repressor provide additional essential inputs"
A Drosophila early embryo, stained for gene expression of Rhomboid, in red. The expression patterns of the regulators Even-skipped (stripes) and Snail (ventral, or left) are both stained in green. The dorsal (back) direction is right, ventral (belly) is left, and the graph is of Rhomboid expression over the ventral->dorsal axis. The enhancer of the Rhomboid gene shown at top has its individual regulator sites colored as green (Dorsal), red (Snail) and yellow (Twist). 

Their analysis focused on one enhancer of one gene, the Rhomboid gene of the fruit fly, which directs embryonic gene expression just dorsal to the midline, shown above in red. The Snail regulator is a repressor of transcription, while Dorsal and Twist are both activators. A few examples of deleting some of these sites are shown below, along with plots of Rhomboid expression along the ventral/dorsal axis.

Individual regulator binding sites within the Rhomboid enhancer (B, boxes), featuring different site models (A) for each regulator. The fact that one regulator such as Dorsal can bind to widely divergent sites, such as DL1 and DL2/3, suggests the difficulty of finding such sites computationally in the genome. B shows how well the models match the actual sequence at sites known to be bound by the respective regulators.

Plots of ventral-> dorsal expression of Rhomboid after various mutations of its Dorsal / Twist/ Snail enhancer. Black is the wild-type case, blue is the mutant data, and red is the standard error.

It is evident that the Snail sites, especially the middle one, play an important role in restricting Rhomboid expression to the dorsal side of the embryo. This makes sense from the region of Snail expression shown previously, which is restricted to the ventral side, and from Snail's activity, which is repression of transcription.
"Mutation of any single Dorsal or Twist activator binding site resulted in a measurable reduction of peak intensity and retraction of the rho stripe from the dorsal region, where activators Dorsal and Twist are present in limiting concentrations. Strikingly, despite the differences in predicted binding affinities and relative positions of the motifs, the elimination of any site individually had similar quantitative effects, reducing gene expression to approximately 60% of the peak wild-type level"

However, when they removed pairs of sites and other combinations, the effects became dramatically non-linear, necessitating more complex modelling. In all they tested 38 variations of this one enhancer by taking out various sites, and generated 120 hypothetical models (using a machine learning system) of how they might cooperate in various non-linear ways.
"Best overall fits were observed using a model with cooperativity values parameterized in three 'bins' of 60 bp (scheme C14) and quenching in four small 25 or 35 bp bins (schemes Q5 and Q6)."
Example of data from some models (Y-axis) run against each of the 38 mutated enhancers (X-axis). Blue is a better fit between model and data.

What they found was that each factor needed to be modelled a bit differently. The cooperativity of the Snail repressor was quite small. While the (four) different sites differ in their effect on expression, they seem to act independently. In contrast, the activators were quite cooperative, an effect that was essentially unlimited in distance, at least over the local enhancer. Whether cooperation can extend to other enhancer modules, of which there can be many, is an interesting question.
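
For a feel of what such a model looks like, here is a minimal fractional-occupancy sketch in Python, in the general spirit of the models being compared. The affinities, the cooperativity bonus, the quenching strength, and the regulator profiles along the axis are all invented, so it only illustrates the structure (cooperating activators, a short-range quenching repressor), not the fitted result.

# Toy fractional-occupancy model of a Rhomboid-like enhancer; all numbers invented.

def occupancy(conc, K):
    """Simple equilibrium binding: fraction of time a site is bound."""
    return K * conc / (1.0 + K * conc)

def rho_expression(dorsal, twist, snail):
    # Three activator sites (two Dorsal, one Twist), with a bonus for joint occupancy.
    acts = [occupancy(dorsal, 2.0), occupancy(dorsal, 1.0), occupancy(twist, 1.5)]
    coop = 1.0 + 0.5 * sum(a * b for i, a in enumerate(acts) for b in acts[i + 1:])
    activation = coop * sum(acts) / len(acts)
    # One Snail site quenches whatever activation is present.
    quench = 1.0 - 0.9 * occupancy(snail, 3.0)
    return min(1.0, activation * quench)

# Crude ventral-to-dorsal profiles: Dorsal and Twist fall off dorsally, Snail is ventral-only.
for pos in range(11):                               # 0 = ventral, 10 = dorsal
    x = pos / 10.0
    print(pos, round(rho_expression(1.0 - x, 1.0 - x, max(0.0, 0.6 - 2.0 * x)), 2))

Run as is, it produces the expected lateral stripe: output suppressed where Snail is present, peaking just dorsal of the Snail domain, and fading where the activators run out. The paper's question is which cooperativity and quenching terms, and over what distances, actually earn their keep against the 38 mutant enhancers.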

The proof of their pudding was in extending their best models, in a general form, to predict expression from other enhancers that share the same regulators.

Four other enhancers (Ventral nervous system defective [vnd], Twist, and Rhomboid from two other species of Drosophila) are scored for the modeled expression (red) over the dorsal-ventral axis, with actual expression in black.

The modeling turns out pretty decently, though half the cases are the same Rhomboid gene enhancer from related Drosophila species, which do not present a very difficult test. Could this model be extended to other regulators? Can their conclusion about the cooperativity of repressors vs activators be generalized? Probably not, or not very strongly. It is likely that similar studies would need to be carried out for most major classes of regulators to accumulate the basic data that would allow more general and useful prediction.

And that leaves the problem of finding the sites themselves, which this paper didn't deal with, but which is increasingly addressable with modern genomic technologies. There is a great deal yet to do! This work is a small example of the increasing use of modeling in biology, and the field's tip-toeing progress towards computability.

  • Seminar on the genetics of Parkinson's.
  • Whence conservatism?
  • Krugman on the phony problem of the debt.
  • Did the medievals have more monetary flexibility?
  • A man for our time: Hume, who spent his own time in "theological lying".
  • Jefferson's moral economics.
  • Trump may be an idiot, just not a complete idiot.
  • Obama and Wall Street, cont...
  • The deal is working.. a little progress in Iran.
  • More annals of pay for performance.
  • Corruption at the core of national security.
  • China's investment boom came from captive savings, i.e. state financial control.

Saturday, May 7, 2016

A Son of Hamas Turns His Back

Review of the documentary The Green Prince. Spoiler alert.

In one of the more bizarre twists of the Palestinian drama, the son of a Hamas leader turned into a tireless worker for the Shin Bet from about 1997 to 2007. Now he lives in the US, at undisclosed locations. This film is essentially a memoir of this story, with two people talking to the camera: Mosab Hassan Yousef, the son, and Gonen Ben Yitzhak, his Israeli intelligence handler.

The format was oddly compelling, because the people are compelling- intelligent and dedicated. But to what? Yousef was raised in the West Bank, the eldest son in a leading family, and became his father's right hand. His father was one of the main people you would hear screaming on the news, preaching publicly about the evils of Israel, the righteousness of Islam and the Intifada, and the need for Hamas to run things in the West Bank as well as Gaza. As Hamas goes, he was not the most extreme, but nor was he a member of the Palestinian Authority- the Palestinian patsies.

Father Hassan Yousef at a Hamas rally.

So turning to the Shin Bet was unthinkable in tribal terms. But when Yousef had his first experience in prison, courtesy of an Israeli checkpoint where he was found with some guns, he had a chance to compare tribes. While the Israelis were harsh, they had limits and operated under some kind of lawful system.

The Hamas cell in the prison, however, was brutally sadistic. Yousef describes the killing of scores of putative spies and informants in horrific fashion, with scant evidence. For an idealistic youth, it presented a problem, especially in contrast to the idealized version of the Palestinian cause that he had grown up with. Where at first he didn't take the offer from the Shin Bet seriously, now he had second thoughts. What if his idealism was more about non-violence, peace, and saving lives than about tribal competition?

There follows a lengthy career relaying information from his position at the center of Hamas with his father to the core of Shin Bet, preventing attacks, preventing assassinations, and also, in essence, dictating his father's fate. A central conundrum of intelligence work like this is how to use the informant's information without giving away his or her identity. To maintain Yousef's cover for a decade bespeaks very careful work on all sides.

But the larger issue remains untouched. While Yousef comes off as heroic and idealistic, the Israeli occupation of the West Bank is no more justified by Israel's lawful and partial restraint (or by its relentless stealing of land) than it is by the bottomless resentment and madness of Hamas. Treat people like prisoners and animals, and they often act that way. Moreover, Israel holds total control. They need no "partners" to resolve their doomed and immoral occupation. They only need to get out, and get their settlers out.


  • Muslims are screwing up the Netherlands and Europe generally.
  • Obama and Wall Street. Next showing: Hillary and Wall Street.
  • Do Republicans know anything about growth?
  • The Saudis are hurting.
  • Another business that "cares" for its customers.
  • Another case of pay for performance.
  • Non-competes run amok. "The Treasury Department has found that one in seven Americans earning less than $40,000 a year is subject to a non-compete. This is astonishing, and shows how easily businesses abuse their power over employees."
  • Our medical system is so dysfunctional and complex that error is the third leading cause of death.
  • It almost makes you nostalgic for Richard Nixon.
  • Feel the heart, and the Bern.
  • Deflation and below-target monetary growth is a policy mistake.
  • Will extreme Christians let go of politics, at long last?
  • A little brilliant parenting.

Sunday, May 1, 2016

Audio Perception and Oscillation

Brains are reality modeling machines, which isolate surprising events for our protection and delectation. Does music have to be perpetually surprising, to be heard?

Imagine the most boring thing imaginable. Is it sensory deprivation? More likely it will be something more active, like a droning lecturer, a chattering relative, or driving in jammed traffic. Meditation can actually be very exciting (just think of Proust!), and sensory deprivation generates fascinating thought patterns and ideas. LSD and similar drugs heighten such internal experiences to the point that they can become life-altering. Which indicates an interesting thing about the nature of attention- that it is a precious resource that feels abused not when it is let loose, but when it is confined to some task we are not interested in, and particularly, that we are learning nothing from.

Music exists, obviously, not to bore us but to engage us on many levels, from the physical to the meditative and profound. Yet it is fundamentally based on the beat, which would seem a potentially boring structure. Beats alone can be music, hypnotically engaging, but typically the real business of music is to weave around the beat fascinating patterns whose charm lies in a tension between surprise and musical sense, such as orderly key shifts and coherent melody.

Why is all this attractive? Our brains are always looking ahead, forecasting what comes next. Their first rule is ... be prepared! Perception is a blend of getting new data from the environment and fitting it into models of what should be there. This has several virtues. First, it provides understanding, since only by mapping to structured models of reality are new data understandable. Secondly, it reduces the amount of data processing, since only changes need to be attended to. And thirdly, it focuses effort on changing or potentially changing data, which are naturally what we need to be paying attention to anyhow ... the stuff about the world that is not boring.

"Predictive coding is a popular account of perception, in which internal representations generate predictions about upcoming sensory input, characterized by their mean and precision (inverse variance). Sensory information is processed hierarchically, with backward connections conveying predictions, and forward connections conveying violations of these predictions, namely prediction errors." 
"It is thus hypothesised that superficial cell populations calculate prediction errors, manifest as gamma-band oscillations (>30 Hz), and pass these to higher brain areas, while deep cell populations [of cortical columns] encode predictions, which manifest as beta band oscillations (12–30 Hz) and pass these to lower brain areas." 
"In the present study, we sought to dissociate and expose the neural signatures of four key variables in predictive coding and other generative accounts of perception, namely surprise, prediction error, prediction change and prediction precision. Here, prediction error refers to absolute deviation of a sensory event from the mean of the prior prediction (which does not take into account the precision of the prediction). We hypothesised that surprise (over and above prediction error) would correlate with gamma oscillations, and prediction change with beta oscillations."

A recent paper (and review) looked at how the brain perceives sound, particularly how it computes the novelty of a sound relative to an internal prediction. Prediction in the brain is known to resemble a Bayesian process where new information is constantly added to adjust an evolving model.

The researchers circumvented the problems of low-resolution fMRI imaging by using volunteers undergoing brain surgery for epilepsy, who allowed these researchers to study a particular part of their brains- the auditory cortex- for purposes completely unrelated to their medical needs. They allowed the researchers not only to record from the surfaces of their brains, but to insert electrodes into their auditory cortices to sample the cortical layers at various depths. It is well-known that the large sheet of the cortex does significantly different things in its different layers.

Frequencies of tones (dots) given to experimental subjects, over time.

The three subjects were played a series of tones at different frequencies, and had to do nothing in return- no task at all. The experiment was merely to record the brain's own responses at different positions and levels of the auditory cortex, paying attention to the various frequencies of oscillating electrical activity. The point of the study was to compare the data coming out with statistical models that they generated separately from the same stimuli- ideal models of Bayesian inference for what one would expect to hear next, given the sequence so far.

Electrode positions within the auditory areas of the subjects' brains.

Unfortunately, their stimulus was not quite musical, but followed a rather dull algorithm: "For each successive segment, there is a 7/8 chance that that segment’s f [frequency] value will be randomly drawn from the present population, and a 1/8 chance that the present population will be replaced, with new μ [mean frequency] and σ [standard deviation of the frequency] values drawn from uniform distributions."
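
Dull or not, the rule is easy to restate in code, along with the distinction the authors care about between prediction error and surprise. The Python sketch below generates tones by the quoted rule and runs a toy observer over them; the frequency ranges, the leaky updating, and the observer itself are my stand-ins, not the paper's ideal Bayesian model.

import math, random

random.seed(1)

def generate_tones(n):
    """7/8 of the time draw from the current population; 1/8 of the time replace it."""
    mu, sigma = random.uniform(200, 2000), random.uniform(20, 200)   # ranges invented
    for _ in range(n):
        if random.random() < 1 / 8:
            mu, sigma = random.uniform(200, 2000), random.uniform(20, 200)
        yield random.gauss(mu, sigma)

def observe(tones):
    est_mu, est_var = 1000.0, 500.0 ** 2          # vague initial belief about the tones
    for f in tones:
        err = abs(f - est_mu)                     # prediction error: deviation from the mean
        # Surprise: negative log-likelihood under the prediction, so it also depends on
        # how precise (confident) the current prediction is, not just on the raw deviation.
        surprise = 0.5 * math.log(2 * math.pi * est_var) + (f - est_mu) ** 2 / (2 * est_var)
        print(f"{f:7.1f}  error = {err:7.1f}  surprise = {surprise:6.2f}")
        est_mu += 0.3 * (f - est_mu)              # leaky running update of the belief
        est_var += 0.3 * ((f - est_mu) ** 2 - est_var)

observe(generate_tones(20))

The two columns track each other most of the time, which is why the paper needs careful statistics to argue that gamma power follows surprise (which knows about the prediction's spread) rather than raw prediction error.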

Correlations were calculated between the observed and predicted signals, giving data like the following:

Prediction error and surprise are closely correlated, but the experimenters claim that surprise is better correlated with the gamma-band brain waves observed (B).

The difference between observation and prediction, and between surprise and prediction error. Surprise apparently takes into account the spread of the data, i.e. if uncertainty has changed as well as the mean predicted value.

What they found was that, as others have observed, the highest frequency oscillations in the brain correlate with novelty- surprise about how perceptions are lining up with expectations. The experimenters' surprise (S) measure and prediction error (Xi) are very closely related, so they both correlate with each other and with the gamma wave signal. The surprise measure is slightly better correlated, however.

On the other hand, they observed that beta oscillations (~20 Hz) were correlated with changes in the predicted values. They hypothesized that beta oscillations are directed downward in the processing system, to shape and update the predictions being used at the prior levels.

Lastly, they find that the ~10 Hz alpha oscillations (and related bands) correlate with the uncertainty or precision of the predicted values. And theta oscillations at ~6 Hz were entrained to the sound stimulus itself, hitting when the next sound was expected, rather than encoding a derived form of the stimulus.

It is all a bit neat, and the conclusions are dredged out of very small signals, as far as is shown. But the idea that key variables of cognition and data processing are separated into different oscillatory bands in the auditory cortex is very attractive, has quite a bit of precedent, and is certainly an hypothesis that can and should be pursued by others in greater depth. The computational apparatus of the brain is very slowly coming clear.
"These are exciting times for researchers working on neural oscillations because a framework that describes their specific contributions to perception is finally emerging. In short, the idea is that comparatively slow neural oscillations, known as “alpha” and “beta” oscillations, encode the predictions made by the nervous system. Therefore, alpha and beta oscillations do not communicate sensory information per se; rather, they modulate the sensory information that is relayed to the brain. Faster “gamma” oscillations, on the other hand, are thought to convey the degree of surprise triggered by a given sound."

  • Bill Mitchell on the Juncker regime.
  • Who exactly is corrupt in Brazil, and how much?
  • There are too many people.
  • But not enough debt.
  • The fiscal "multiplier" is not constant.
  • Population has outstripped our willingness to build and develop.
  • What's going on in the doctor's strike?
  • Schiller on lying in business, Gresham's dynamics, and marketing.
  • Lying in religion.
  • Stiglitz on economics: "The strange thing about the economics profession over the last 35 year is that there has been two strands: One very strongly focusing on the limitations of the market, and then another saying how wonderful markets were."
  • Should banks be public institutions?
  • Does democratic socialism have a future in Russia?
  • A Sandersian / Keynesian stimulus is only effective if the Fed plays along.
  • Science yearns to be free.
  • Trump's brush with bankruptcy and friends in high places.

Saturday, April 23, 2016

Locating Abstractions in the Brain

The most human part of the brain is also the murkiest and least understood. Visualization studies of what is going on in the frontal cortex.

While it was in vogue, the lobotomy operation was used to treat in the neighborhood of 100,000 people in the mid twentieth century, rendering them more manageable- something that has since been more easily achieved with drugs. From the Wiki page:
"The purpose of the operation was to reduce the symptoms of mental disorder, and it was recognized that this was accomplished at the expense of a person's personality and intellect. British psychiatrist Maurice Partridge, who conducted a follow-up study of 300 patients, said that the treatment achieved its effects by 'reducing the complexity of psychic life'. Following the operation, spontaneity, responsiveness, self-awareness and self-control were reduced. Activity was replaced by inertia, and people were left emotionally blunted and restricted in their intellectual range."

What is odd is that for such a massive disruption to the brain, the effects were diffuse and hard to understand (though in fairness, the methods used were hardly uniform). "The first bilateral lobectomy of a human subject was performed by the American neurosurgeon Walter Dandy in 1930. The neurologist Richard Brickner reported on this case in 1932, relating that the recipient, known as 'Patient A', while experiencing a flattening of affect, had suffered no apparent decrease in intellectual function and seemed, at least to the casual observer, perfectly normal."

Some effects were that subjects no longer dreamed; they also lost their theory of mind, or the ability to empathize with others. Some entered a stupor or started suffering seizures. There were various intellectual and personality deficits- one became a "smiling, lazy and satisfactory patient with the personality of an oyster". Five percent died. One subject mentioned:
"It took a great deal of effort to keep an abstraction in mind. For example, in talking with the speech therapist I would begin to give a definition of an abstract concern, but as I held it in mind it would sort of fade, and chances were that I'd end up giving a simplified version rather than one at the original level of conception. It was as though giving an abstraction required so much of my addled intelligence that halfway through the definition I would run out of the energy available to me and regress to a more concrete answer. Something like this happened again and again."

An irony is that the Soviet Union took the lead in banning the procedure, "Doctors in the Soviet Union concluded that the procedure was 'contrary to the principles of humanity' and 'through lobotomy an insane person is changed into an idiot.'"

Modern brain scanning allows researchers to peer into the frontal lobes and start figuring out what is going on there. A recent paper described some early work in that direction, devising simple tasks to differentiate levels of abstract thought and mapping where they happen, using fMRI. They manage to map separate zones in the frontal cortex that handle temporal / time shifting abstractions, category switching abstractions, and feature attention control.

The subjects were presented with points that, over several frames, added up to a diagram (C): a star with letters on the outside, with a color applied. There were several rules imposed, such as: if the color setting was purple, the letters were supposed to be assembled to form a word across the star (TABLET, in this case); if the color was orange, the subject was supposed to just trace the points of the star with her eyes. Then delay rules were added, asking whether the trial was the same type or a different type than the one before. Or the subject was given a new diagram but asked to maintain their place in the old diagram, to be recalled later. Then distraction periods were added in between to test for memory retention. It all begins to look like an intelligence test of the subject's ability to keep ideas and rules in mind successfully.

Test design, in part. C shows the basic image presented to the subject, which would have included color as well, and varied in the shape and text presented. The points of the star were not presented at once, but fed out one point at a time. B shows the combined tests that were devised. For instance, the restart test asked the subject not to delay their analysis, but simply presented a new diagram and asked them to resolve its color and text by the agreed rules.

The tests were designed to separate three topics of thought, and were added together in various combinations to allow the researchers to run combinatorial tests. The upshot was that they were able to map the three tasks to different parts of the frontal cortex:

Distinct mappings of each task to its region. Handling time delay and abstraction occupies the very front of the brain (rostral), while simpler abstractions, keeping track of the local context of a task or attending to selected features of an image/task, occupy precincts farther back (caudal). This is in addition to separate zones in the mid-brain.
"Regressing these measures onto activation revealed a clear gradient such that caudal LPFC [lateral prefrontal cortex] was related to current, but not future processing, while rostral LPFC was related to future, but not current processing, with mid LPFC showing activity related to both current and future processing "


They end up with a beautiful depiction of the regions of the brain where their various tasks took place. Unfortunately, fMRI imaging technology remains very crude, in time and space, so their task breakdown was similarly crude to suit. It will probably take new technology to go to deeper detail on what is going on in the human frontal cortex- the part of the brain most responsible for making us human, but also, since it handles abstractions farthest from detailed concrete processing, the most nebulous and hard to define.

  • Inequality isn't just a bleeding heart issue, but an investment and prosperity issue.
  • Solow on labor power and inequality.
  • Tax complexity isn't entirely the government's fault, but another dividend of corruption.
  • Retirement is another big front in the inequality debate.
  • Utopia now and then.
  • Globalization is a problem.
  • Some problems with supply side theory. Perhaps taxes make people work harder.
  • Pay is a complicated construct.
  • We need more debt.
  • But perhaps less bail.