Saturday, May 28, 2016

The Housing Crisis- or is it a Transportation Crisis?

Bustling areas of the US are in the grips of a housing, transportation, and homelessness crisis.

While tracts of empty houses remain from the recent near-depression in areas like Florida, Nevada, and Detroit, other areas suffer from the opposite problem. Average detached house prices are at a million dollars in the San Francisco Bay Area. While this number is proudly trumpeted by the local papers for their satisfied home-owning constituents, the news for others is not so good. Houses are priced far above construction cost, clearly unaffordable for average workers, and rents are rising to unaffordable levels as well. How did we get here?

One of the great ironies is that environmentalism has allied with other status quo forces to stall development for decades. Existing homeowners have little interest in transforming their sprawly neighborhoods into denser, more efficient urban centers. Then they pat themselves on the back for preserving open space and small-town ambiance, along with inflated property values. Public officials have been stymied by Proposition 13 and other low-tax movements from funding infrastructure to keep up with population growth. Local roads are now frequently at a standstill, making zoning for more housing essentially unthinkable. Add in a drought, and the policy response to growth is to hide one's head in the sand.

Then ... a scene from Dark Passage.

There is a basic public policy failure to connect population and business growth with the necessary supporting structures- a failure of planning. No new highway has been built for decades, even as the population of the Bay Area has increased by 10% since just 2000, the number of cars increased even more, and the population of the state has doubled since 1970. How was that supposed to work?

Now ... at the Bay Bridge.

An alternative approach would have been to limit population growth directly, perhaps via national immigration restrictions or encouragement for industry to move elsewhere. But that doesn't seem attractive to our public officials either, nor is it very practical. In a tragedy of common action, people flock to an attractive area, but eventually end up being driven away by how crowded and unbearable the area becomes. A Malthusian situation, not from lack of food, but of other necessities. But with modern urban design & planning, it doesn't have to be that way- just look at Singapore, Hong Kong, New York, and other metropolises.

In the post-war era, the US, and California in particular, built infrastructure ahead of growth, inviting businesses to a beautiful and well maintained state. But once one set of roads was built, and a great deal of settled activity accumulated around them, expansion became increasingly difficult. Now that a critical mass of talent and commercial energy is entrenched and growing by network forces, the contribution from the state has descended to negligible, even negative levels, as maintenance is given short shrift, let alone construction of new capacity, for roads, housing development, water, and sewer infrastructure. Prop 13 was, in retrospect, the turning point.

It is in miniature the story of the rise, decline, and fall of civilization. For all the tech innovation, the Bay Area is showing sclerosis at the level of public policy- an inability to deal with its most basic problems. The major reason is that the status quo has all the power. Homeowners have a direct financial interest in preventing further development, at least until the difficulties become so extreme as to result in mass exodus. One hears frequently of trends of people getting out of the area, but it never seems to have much effect, due to the area's basic attractiveness. Those who can afford to be here are also the ones investing in and founding businesses that keep others coming in their wake.

The post-war era was characterized by far more business influence on government (especially by developers, the ultimate bogey-men for environmentalists and other suburban status-quo activists), even while government taxed businesses and the wealthy at far higher levels. Would returning to that system be desirable? Only if our government bodies can't get their own policy acts together. The various bodies that plan our infrastructure (given that the price signal has been cancelled by public controls on development) have been far too underfunded and hemmed in by short-sighted status quo interests- to which the business class, typically interested in growth and labor availability more than in holding on to precious property values, is an important counter-weight.

The problem is that we desperately need more housing to keep up with population, to keep housing affordable, and ultimately also to resolve the large fraction of homelessness that can be addressed by basic housing affordability. But housing alone, without a full package of more transportation and other services, makes no sense. So local planning in areas like the Bay Area needs a fundamental reset, offering residents better services first (more transit, cleared-up roads) before allowing more housing. Can we build more roads? Or a new transit system? We desperately need another bridge across the bay, for example, and a vastly expanded BART system, itself a child of the post-war building boom, now fifty years old.

BART system map, stuck in time.

Incidentally, one can wonder why telecommuting hasn't become more popular, but the fact that a region like the Bay Area has built up a concentration of talent that is so enduring and growing despite all the problems of cost, housing, and transportation speaks directly to the benefits of (or at least the corporate desire for) corporeal commuting. Indeed, it is common for multinational companies to set up branches in the area to take advantage of the labor pool willing to appear in person, rather than trying to lure talent to less-connected areas cybernetically or otherwise.

One countervailing argument to more transit and road development is that the housing crisis and existing road network have motivated commuters to live in ever farther-flung outlying areas, even to Stockton. Thus building more housing first, in dense, central areas, might actually reduce traffic, by bringing those commuters back to their work places. This does not seem realistic, unfortunately. One has to assume that any housing increment will lead to more people, cars, and traffic, not less. There is no way to channel housing units to only those people who will walk to work, or take transit, etc., especially in light of the poor options currently available. The only way to relieve the transportation gridlock is to make using the roads dramatically more costly, or to provide more transportation- especially, more attractive transit options.

Another argument is that building more roads just leads to more usage and sprawl. This is true to some extent, but the solution is not to make the entire system dysfunctional in hopes of pushing marginal drivers off the road or out of the area in despair. A better solution, if building more capacity is out of the question, is to take aim directly at driving by raising its price. The gas tax is far too low, and the California carbon tax (we have one, thankfully!) is also too low. There is already talk of making electric vehicle drivers pay some kind of higher registration or per-mile fee to offset their lack of gas purchases, but that seems rather premature and counter-productive from a global warming perspective. To address local problems, tolls could be instituted, not just at bridges as they are now, but at other areas where congestion is a problem, to impose costs across the board on users, as well as to fund improvements. This would also address the coming wave of driverless cars, which threatens to multiply road usage yet further.

In the end, housing and transportation are clearly interlinked, on every level. Each of us lives on a street, after all. Solving one problem, such as homelessness and the stratospheric cost of housing, requires taking a step back, looking at the whole system, and addressing root causes, which come down to zoning, transportation, money, and the quality and ambition of our planning.



  • Hey- how about those objective, absolutely true values?
  • Bernie has some mojo, and doing some good with it.
  • We know nothing ... at the State department.
  • The Fed is getting ready to make another mistake.
  • For the umpteenth time, we need more fiscal policy.
  • Cheating on taxes, the Trump way.
  • Yes, Trump is this stupid, and horrible.
  • Another disaster from Hillary Clinton's career.
  • Corporations are doing well the old-fashioned way, through corruption.
  • What happens when labor is too cheap, and how trade is not so peaceful after all.

Saturday, May 21, 2016

Tinier and Tinier- Advances in Electron Microscopy

Phase contrast and phase plates for electrons;  getting to near-atomic resolution.

Taken for granted today in labs around the world, phase contrast light microscopy won a Nobel prize in 1953. It is a fascinating manipulation of light to enhance the visibility of objects that may be colorless, but have a refractive index different from that of the medium. This allowed biologists especially to see features of cells while they were still alive, rather than having to kill and stain them. But it has been useful for mineralogy and other fields as well.

Optical phase contrast apparatus. The bottom ring blocks all but that ring of light from coming into the specimen from below, while the upper ring captures that light, dimming it and shifting its phase.

Refraction of light by a sample has very minor effects in normal bright field microscopy, but does two important things for phase contrast microscopy. It bends the light slightly, like a drink bends the image of a straw, and secondly, it alters the wave phase of the light as well, retarding it slightly relative to the unaffected light. Ultimately, these are both effects of slowing light down in a denser material.

The phase contrast microscope takes advantage of both properties. Rings are strategically placed both before and after the sample, so that the direct light is channeled into a cone that is intercepted, after passing the sample, by the phase plate. This plate both dims the direct light, so that it does not compete as heavily with the scarcer refracted light, and, more importantly, phase-retards the direct light by 90 degrees.

Light rotational phase relationships in phase contrast. The phase plate shifts the direct (bright) light from -u- to -u1-. Light that has gone through the sample and been refracted is -p-, which interferes far more effectively with -u1- (or -u2-, an alternate method) than with the original -u-, generating -p1- or -p2-, respectively.

The diagram above shows the phase relationships of light in phase contrast. The direct light is u in the right diagram, and p is the refracted and phase-shifted light from the specimen; d is the vector difference between them. Since the phase difference between u and p is slight, their interference is also slight and gives very little contrast. But if the direct light is phase-shifted by 90 degrees, either in the negative (original method, left side, u1) or positive direction (right, u2), then adding the d vector via interference with the refracted light has much more dramatic effects, resulting in the phase contrast effect. Phase shifting is done with special materials, such as specifically oriented quartz.
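
In symbols, this is the standard weak-phase-object sketch, with the small specimen phase shift φ playing the role of the d vector above:

```latex
% A weak phase object multiplies the wave by e^{i\varphi}, with \varphi small,
% so e^{i\varphi} \approx 1 + i\varphi: the scattered part i\varphi is 90
% degrees out of phase with the direct wave.
\begin{align*}
\text{no phase plate:} \quad I &= |1 + i\varphi|^2 = 1 + \varphi^2 \approx 1
  && \text{(no first-order contrast)} \\
\text{direct wave shifted } 90^\circ: \quad I &= |i + i\varphi|^2 = (1+\varphi)^2
  \approx 1 + 2\varphi
  && \text{(contrast linear in } \varphi\text{)}
\end{align*}
```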

Example of the dramatic enhancement possible with optical phase contrast.

A recent paper reviews methods for generating phase contrast for electron microscopy, which, with its far smaller wavelength, is able to resolve much finer details, and which also revolutionized biology when it was invented, sixty years ago. But transmission electron microscopy is bedeviled, just as light microscopy was, by poor contrast in many specimens, particularly biological ones, where the atomic composition is all very light-weight: carbons, oxygens, hydrogens, etc., with little difference between the water medium and the various cellular or protein constituents. Elaborate staining procedures using heavy metals have been used, but it would be preferable to image flash-frozen and sectioned samples more directly. Thus a decades-long quest to develop an electron analogue of phase contrast imaging, and a practical electron phase plate in particular.

Electrons have wave properties just as light does, but their wavelengths are far smaller and somewhat harder to manipulate. It turns out that a thin plate of randomly deposited carbon, with a hole in the middle, plus electrodes to bleed off absorbed electrons and even to bias the voltage, is enough to do the trick. Why the hole? This is where the un-shifted electrons (which mostly have not interacted significantly with the specimen) come through; they then interfere with the refracted and phase-shifted electrons coming through the surrounding carbon plate, which has the effect of emphasizing those electrons phase-shifted by the specimen that escape the destructive interference.
"A cosine-type phase-contrast transfer function emerges when the phase-shifted scattered waves interfere with the non-phase-shifted unscattered waves, which passed through the center hole before incidence onto the specimen."

The upshot is that one can go from the image on the right to the one on the left- an amazing difference.
Transmission electron microscopy of a bacterium. Normal is right, phase contrast is left.

At a more molecular scale, one can see individual proteins better, here the GroEL protein chaperone complex, which is a barrel-shaped structure inside of which other proteins are encouraged to fold properly.
Transmission electron microscopy of individual GroEL complexes, normal on left, phase contrast on right. 



Saturday, May 14, 2016

Dissection of an Enhancer

Enhancers provide complex, combinatorial control of gene expression in eukaryotes. Can we get past the cartoons?

How can humans get away with having no more genes than a nematode or a potato? It isn't about size, but how you use what you've got. And eukaryotes use their genes with exquisite subtlety, controlling them from DNA sequences called enhancers that can be up to a million base pairs away. Over the eons, countless levels of regulatory complexity have piled onto the gene expression system, more elements of which come to light every year. But the most powerful control over genes comes from modular cassettes (called enhancers) peppered over the local DNA to which regulatory proteins bind to form complexes that can either activate or repress expression. These proteins themselves are expressed from yet other genes and regulatory processes that form a complex network or cascade of control.

When genome sequencing progressed to the question of what makes people different, and especially what accounts for differences in disease susceptibility, researchers quickly came up with a large number of mutations from GWAS, or genome-wide association studies, in data from large populations. But these mutations gave little insight into the diseases of interest, because the effect of each mutation was very weak- otherwise the carriers would not have been normal, as these study populations typically were, but afflicted. A slight change in disease susceptibility coming from a mutation somewhere in the genome is not likely to be informative until we have a much more thorough understanding of the biological pathway of that disease.

This is one reason why biology is still going on, a decade and a half after the human genome was sequenced. The weak-effect mutations noted above are often far away from any gene, and figuring out what they do is difficult, because of their weakness, their often uninformative positions, and the complexity of disease pathways and the relevant environmental effects.

Part of the problem comes down to a need to understand enhancers better, since they play such an important role in gene expression. Many sequencing projects study the exome, which comprises the protein-coding bits of the genome, and thus ignore regulatory regions completely. But even if the entire genome is studied, enhancers are maddening subjects, since they are so darned degenerate- a technical term for being under-specified, with lots of noise in the data. DNA-binding proteins tend to bind to short sites, typically of seven to ten nucleotides, with quite variable/noisy composition. And if helped by a neighbor, they may bind to a quite different site ... who knows? Such short sequences are naturally very common around the genome, so which ones are real, and which are decoys, among the tens or hundreds of thousands of basepairs around a gene? Again, who knows?
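
To make the decoy problem concrete, here is a minimal sketch (in Python, with an entirely hypothetical 7 bp motif) of scanning random sequence against a position weight matrix, the usual model for such degenerate sites:

```python
import math
import random

# A hypothetical position weight matrix (PWM) for a 7 bp binding site:
# the probability of each base at each position. Real motifs are learned
# from known bound sites; this one is invented for illustration.
PWM = [
    {'A': .60, 'C': .10, 'G': .20, 'T': .10},
    {'A': .10, 'C': .10, 'G': .70, 'T': .10},
    {'A': .10, 'C': .10, 'G': .70, 'T': .10},
    {'A': .60, 'C': .20, 'G': .10, 'T': .10},
    {'A': .10, 'C': .10, 'G': .10, 'T': .70},
    {'A': .25, 'C': .25, 'G': .25, 'T': .25},  # a fully degenerate position
    {'A': .10, 'C': .60, 'G': .20, 'T': .10},
]

def log_odds(site):
    """Score a candidate site in bits, relative to a uniform background."""
    return sum(math.log2(col[base] / 0.25) for col, base in zip(PWM, site))

random.seed(1)
genome = ''.join(random.choice('ACGT') for _ in range(100_000))  # random "genome"

threshold = 5.0  # an arbitrary cutoff for calling a "hit"
hits = [i for i in range(len(genome) - 6)
        if log_odds(genome[i:i + 7]) >= threshold]
print(len(hits), "chance hits in 100 kb of purely random sequence")
```

Run as-is, this typically turns up several hundred "hits" in purely random DNA- the needle-in-a-haystack problem in a nutshell.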

Thus molecular biologists have been content to do very crude analyses, deleting pieces of DNA around a specific gene, measuring a target gene's expression, and marking off sites of repression and enhancement using those results. Then they present a cartoon:

Drosophila Runt locus, with its various control regions (enhancers) mapped out at top on the genomic locus, and, below, the proto-segmental stripes in the embryo within which each enhancer contributes to activating expression. The locus spans 80,000 basepairs, of which the coding region is the tiny set of exons marked at top in blue as "Run".

This is a huge leap of knowledge, but it is hardly the kind of quantitative data that allows computational prediction and modeling of biology throughout the relevant regulatory pathways, let alone for other genes to which some of the same regulatory proteins bind. That would require a whole other level of data about protein-DNA binding propensities, effects from other interacting proteins, and the like, put on a quantitative basis. Which is what a recent paper begins to do.
"The rhomboid (rho) enhancer directs gene expression in the presumptive neuroectoderm under the control of the activator Dorsal, a homolog of NF-κB. The Twist activator and Snail repressor provide additional essential inputs"
A Drosophila early embryo, stained for gene expression of Rhomboid, in red. The expression patterns of the regulators Even-skipped (stripes) and Snail (ventral, or left) are both stained in green. The dorsal (back) direction is right, ventral (belly) is left, and the graph is of Rhomboid expression over the ventral->dorsal axis. The enhancer of the Rhomboid gene shown at top has its individual regulator sites colored as green (Dorsal), red (Snail) and yellow (Twist). 

Their analysis focused on one enhancer of one gene, the Rhomboid gene of the fruit fly, which directs embryonic gene expression just dorsal to the midline, shown above in red. The Snail regulator is a repressor of transcription, while Dorsal and Twist are both activators. A few examples of deleting some of these sites are shown below, along with plots of Rhomboid expression along the ventral/dorsal axis.

Individual regulator binding sites within the Rhomboid enhancer (B, boxes), featuring different site models (A) for each regulator. The fact that one regulator such as Dorsal can bind to widely divergent sites, such as DL1 and DL2/3, suggests the difficulty of finding such sites computationally in the genome. B shows how well the models match the actual sequence at sites known to be bound by the respective regulators.

Plots of ventral-> dorsal expression of Rhomboid after various mutations of its Dorsal / Twist/ Snail enhancer. Black is the wild-type case, blue is the mutant data, and red is the standard error.

It is evident that the Snail sites, especially the middle one, play an important role in restricting Rhomboid expression to the dorsal side of the embryo. This makes sense from the region of Snail expression shown previously, which is restricted to the ventral side, and from Snail's activity, which is repression of transcription.
"Mutation of any single Dorsal or Twist activator binding site resulted in a measurable reduction of peak intensity and retraction of the rho stripe from the dorsal region, where activators Dorsal and Twist are present in limiting concentrations. Strikingly, despite the differences in predicted binding affinities and relative positions of the motifs, the elimination of any site individually had similar quantitative effects, reducing gene expression to approximately 60% of the peak wild-type level"

However, when they removed pairs of sites and other combinations, the effects became dramatically non-linear, necessitating more complex modelling. In all they tested 38 variations of this one enhancer by taking out various sites, and generated 120 hypothetical models (using a machine learning system) of how they might cooperate in various non-linear ways.
"Best overall fits were observed using a model with cooperativity values parameterized in three 'bins' of 60 bp (scheme C14) and quenching in four small 25 or 35 bp bins (schemes Q5 and Q6)."
Example of fits from some models (Y-axis) run against the data for each of the 38 mutated enhancers (X-axis). Blue is a better fit between model and data.

What they found was that each factor needed to be modelled a bit differently. The cooperativity of the Snail repressor was quite small. While the (four) different sites differ in their effect on expression, they seem to act independently. In contrast, the activators were quite cooperative, an effect that was essentially unlimited in distance, at least over the local enhancer. Whether cooperation can extend to other enhancer modules, of which there can be many, is an interesting question.
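
To illustrate what such models compute, here is a toy thermodynamic sketch (in Python; every site, position, affinity, and parameter below is invented, not taken from the paper) with distance-independent cooperativity between activators and short-range quenching by the repressor, echoing the findings above:

```python
from itertools import product

# Toy enhancer: (name, role, position in bp, binding weight K*[TF]).
# All values are invented for illustration.
SITES = [
    ('DL1', 'activator', 40,  2.0),
    ('TW1', 'activator', 90,  1.5),
    ('SN1', 'repressor', 110, 3.0),
    ('DL2', 'activator', 150, 1.0),
]

COOP = 3.0         # boost per co-bound activator pair, at any distance
QUENCH_RANGE = 35  # bp within which a bound repressor quenches an activator
QUENCH = 0.2       # fraction of a quenched activator's input that survives

def expression(sites):
    """Enumerate all bound/unbound configurations, weight each one, and
    return the weighted-average activator input."""
    Z, total = 0.0, 0.0
    for occupied in product([0, 1], repeat=len(sites)):
        bound = [s for occ, s in zip(occupied, sites) if occ]
        w = 1.0
        for _, _, _, K in bound:       # affinity of each bound site
            w *= K
        acts = [s for s in bound if s[1] == 'activator']
        reps = [s for s in bound if s[1] == 'repressor']
        w *= COOP ** (len(acts) * (len(acts) - 1) // 2)  # activator pairs
        inp = sum(QUENCH if any(abs(r[2] - a[2]) <= QUENCH_RANGE for r in reps)
                  else 1.0
                  for a in acts)       # quench activators near a bound repressor
        Z += w
        total += w * inp
    return total / Z

print(f"intact enhancer:    {expression(SITES):.2f}")
print(f"Snail site deleted: {expression([s for s in SITES if s[0] != 'SN1']):.2f}")
```

Deleting the repressor site in this toy raises the output, qualitatively mirroring the derepression seen in the real site deletions; fitting the handful of parameters against 38 mutant expression profiles is the quantitative version of the same game.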

The proof of their pudding was in extending their best models, in general form, to predict expression from other enhancers that share the same regulators.

Four other enhancers (Ventral nervous system defective [vnd], Twist, and Rhomboid from two other species of Drosophila) are scored for the modeled expression (red) over the dorsal-ventral axis, with actual expression in black.

The modeling turns out pretty decent, though half the cases are the same Rhomboid gene enhancer from related Drosophila species, which do not present a very difficult test. Could this model be extended to other regulators? Can their conclusion about the cooperativity of repressors vs activators be generalized? Probably not, or not very strongly. It is likely that similar studies would need to be carried out for most major classes of regulators to accumulate the basic data that would allow more general and useful prediction.

And that leaves the problem of finding the sites themselves, which this paper didn't deal with, but which is increasingly addressable with modern genomic technologies. There is a great deal yet to do! This work is a small example of the increasing use of modeling in biology, and the field's tip-toeing progress towards computability.

  • Seminar on the genetics of Parkinson's.
  • Whence conservatism?
  • Krugman on the phony problem of the debt.
  • Did the medievals have more monetary flexibility?
  • A man for our time: Hume, who spent his own time in "theological lying".
  • Jefferson's moral economics.
  • Trump may be an idiot, just not a complete idiot.
  • Obama and Wall Street, cont...
  • The deal is working.. a little progress in Iran.
  • More annals of pay for performance.
  • Corruption at the core of national security.
  • China's investment boom came from captive savings, i.e. state financial control.

Saturday, May 7, 2016

A Son of Hamas Turns His Back

Review of the documentary The Green Prince. Spoiler alert.

In one of the more bizarre twists of the Palestinian drama, the son of a Hamas leader turned into a tireless worker for the Shin Bet from about 1997 to 2007. Now he lives in the US, at undisclosed locations. The film is essentially a memoir of this story, with two people talking to the camera: Mosab Hassan Yousef, the son, and Gonen Ben Yitzhak, his Israeli intelligence handler.

The format was oddly compelling, because the people are compelling- intelligent and dedicated. But to what? Yousef was raised in the West Bank, the eldest son in a leading family, and became his father's right hand. His father was one of the main people you would hear screaming on the news, preaching publicly about the evils of Israel, the righteousness of Islam and the Intifada, and the need for Hamas to run things in the West Bank as well as Gaza. As Hamas goes, he was not the most extreme, but neither was he a member of the Palestinian Authority- the Palestinian patsies.

Father Hassan Yousef at a Hamas rally.

So turning to the Shin Bet was unthinkable in tribal terms. But when Yousef had his first experience in prison, courtesy of an Israeli checkpoint where he was found with some guns, he had a chance to compare tribes. While the Israelis were harsh, they had limits and operated under some kind of lawful system.

The Hamas cell in the prison, however, was brutally sadistic. Yousef describes the killing of scores of putative spies and informants in horrific fashion, with scant evidence. For an idealistic youth, it presented a problem, especially in contrast to the idealized version of the Palestinian cause that he had grown up with. Where at first he didn't take the offer from the Shin Bet seriously, now he had second thoughts. What if his idealism was more about non-violence, peace, and saving lives than about tribal competition?

There follows a lengthy career relaying information from his position at the center of Hamas with his father to the core of Shin Bet, preventing attacks, preventing assassinations, and also, in essence, dictating his father's fate. A central conundrum of intelligence work like this is how to use the informant's information without giving away his or her identity. To maintain Yousef's cover for a decade bespeaks very careful work on all sides.

But the larger issue remains untouched. While Yousef comes off as heroic and idealistic, the Israeli occupation of the West Bank is no more justified by Israel's lawful and partial restraint (or by its relentless stealing of land) than it is by the bottomless resentment and madness of Hamas. Treat people like prisoners and animals, and they often act that way. Moreover, Israel holds total control. They need no "partners" to resolve their doomed and immoral occupation. They only need to get out, and get their settlers out.


  • Muslims are screwing up the Netherlands and Europe generally.
  • Obama and Wall Street. Next showing: Hillary and Wall Street.
  • Do Republicans know anything about growth?
  • The Saudis are hurting.
  • Another business that "cares" for its customers.
  • Another case of pay for performance.
  • Non-competes run amok. "The Treasury Department has found that one in seven Americans earning less than $40,000 a year is subject to a non-compete. This is astonishing, and shows how easily businesses abuse their power over employees."
  • Our medical system is so dysfunctional and complex that error is the third leading cause of death.
  • It almost makes you nostalgic for Richard Nixon.
  • Feel the heart, and the Bern.
  • Deflation and below-target monetary growth is a policy mistake.
  • Will extreme Christians let go of politics, at long last?
  • A little brilliant parenting.

Sunday, May 1, 2016

Audio Perception and Oscillation

Brains are reality modeling machines, which isolate surprising events for our protection and delectation. Does music have to be perpetually surprising, to be heard?

Imagine the most boring thing imaginable. Is it sensory deprivation? More likely it will be something more active, like a droning lecturer, a chattering relative, or driving in jammed traffic. Meditation can actually be very exciting (just think of Proust!), and sensory deprivation generates fascinating thought patterns and ideas. LSD and similar drugs heighten such internal experiences to the point that they can become life-altering. Which indicates an interesting thing about the nature of attention- that it is a precious resource that feels abused not when it is let loose, but when it is confined to some task we are not interested in, and particularly, that we are learning nothing from.

Music exists, obviously, not to bore us but to engage us on many levels, from the physical to the meditative and profound. Yet it is fundamentally based on the beat, which would seem a potentially boring structure. Beats alone can be music, hypnotically engaging, but typically the real business of music is to weave around the beat fascinating patterns whose charm lies in a tension between surprise and musical sense, such as orderly key shifts and coherent melody.

Why is all this attractive? Our brains are always looking ahead, forecasting what comes next. Their first rule is ... be prepared! Perception is a blend of getting new data from the environment and fitting it into models of what should be there. This has several virtues. First, it provides understanding, since only by mapping to structured models of reality are new data understandable. Second, it reduces the amount of data processing, since only changes need to be attended to. And third, it focuses effort on changing or potentially changing data, which are naturally what we need to be paying attention to anyhow ... the stuff about the world that is not boring.

"Predictive coding is a popular account of perception, in which internal representations generate predictions about upcoming sensory input, characterized by their mean and precision (inverse variance). Sensory information is processed hierarchically, with backward connections conveying predictions, and forward connections conveying violations of these predictions, namely prediction errors." 
"It is thus hypothesised that superficial cell populations calculate prediction errors, manifest as gamma-band oscillations (>30 Hz), and pass these to higher brain areas, while deep cell populations [of cortical columns] encode predictions, which manifest as beta band oscillations (12–30 Hz) and pass these to lower brain areas." 
"In the present study, we sought to dissociate and expose the neural signatures of four key variables in predictive coding and other generative accounts of perception, namely surprise, prediction error, prediction change and prediction precision. Here, prediction error refers to absolute deviation of a sensory event from the mean of the prior prediction (which does not take into account the precision of the prediction). We hypothesised that surprise (over and above prediction error) would correlate with gamma oscillations, and prediction change with beta oscillations."

A recent paper (and review) looked at how the brain perceives sound, particularly how it computes the novelty of a sound relative to an internal prediction. Prediction in the brain is known to resemble a Bayesian process where new information is constantly added to adjust an evolving model.

The researchers circumvented the problems of low-resolution fMRI imaging by using volunteers undergoing brain surgery for epilepsy, who allowed these researchers to study separate parts of their brains- the auditory cortex- for purposes completely unrelated to their medical needs. They allowed the researchers not only to record from the surfaces of their brains, but to stick electrodes into their auditory cortexes to sample the cortical layers at various depths. It is well-known that the large sheet of the cortex does significantly different things in its different layers.

Frequencies of tones (dots) given to experimental subjects, over time.

The three subjects were played a series of tones at different frequencies, and had to do nothing in return- no task at all. The experiment was merely to record the brain's own responses at different positions and levels of the auditory cortex, paying attention to the various frequencies of oscillating electrical activity. The point of the study was to compare the data coming out with statistical models that they generated separately from the same stimuli- ideal models of Bayesian inference for what one would expect to hear next, given the sequence so far.

Electrode positions within the auditory areas of the subjects' brains.

Unfortunately, their stimulus was not quite musical, but followed a rather dull algorithm: "For each successive segment, there is a 7/8 chance that that segment’s f [frequency] value will be randomly drawn from the present population, and a 1/8 chance that the present population will be replaced, with new μ [mean frequency] and σ [standard deviation of the frequency] values drawn from uniform distributions."
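
For concreteness, the quoted generative process is easy to sketch (in Python; the uniform ranges for μ and σ are placeholders, since the quote does not give the actual bounds):

```python
import random

def tone_sequence(n, p_new=1/8, mu_range=(500, 2000), sigma_range=(10, 200)):
    """Yield n tone frequencies. Each is drawn from the current Gaussian
    "population"; with probability 1/8 the population is first replaced by
    one with fresh mu and sigma. Ranges here are assumed, not the paper's."""
    mu = random.uniform(*mu_range)
    sigma = random.uniform(*sigma_range)
    for _ in range(n):
        if random.random() < p_new:
            mu = random.uniform(*mu_range)
            sigma = random.uniform(*sigma_range)
        yield random.gauss(mu, sigma)

freqs = list(tone_sequence(200))  # a 200-tone stimulus, frequencies in Hz
```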

Correlations were then calculated between the observed and predicted signals, giving data like the following:

Prediction error and surprise are closely correlated, but the experimenters claim that surprise is better correlated with the gamma band brain waves observed (B).

The difference between observation and prediction, and between surprise and prediction error. Surprise apparently takes into account the spread of the data, i.e. if uncertainty has changed as well as the mean predicted value.

What they found was that, as others have observed, the highest frequency oscillations in the brain correlate with novelty- surprise about how perceptions are lining up with expectations. The experimenters' surprise (S) measure and prediction error (Xi) are very closely related, so they both correlate with each other and with the gamma wave signal. The surprise measure is slightly better correlated, however.
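
The distinction is easiest to see with a Gaussian predictive model (a sketch; the paper's exact formulation may differ): prediction error measures only the distance from the predicted mean, while surprise, the negative log probability, also weighs that miss by the prediction's precision:

```python
import math

def prediction_error(x, mu):
    """Absolute deviation from the predicted mean; ignores precision."""
    return abs(x - mu)

def surprise(x, mu, sigma):
    """Negative log probability of x under a Gaussian prediction N(mu, sigma)."""
    return 0.5 * math.log(2 * math.pi * sigma ** 2) + (x - mu) ** 2 / (2 * sigma ** 2)

# The same 200 Hz miss is far more surprising under a confident prediction:
print(prediction_error(1200, 1000))        # 200, regardless of confidence
print(surprise(1200, mu=1000, sigma=50))   # sharp prediction: ~12.8 nats
print(surprise(1200, mu=1000, sigma=300))  # vague prediction:  ~6.8 nats
```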

On the other hand, they observed that beta oscillations (~20 Hz) were correlated with changes in the predicted values. They hypothesized that beta oscillations are directed downward in the processing system, to shape and update the predictions being used at the prior levels.

Lastly, they find that the ~10 Hz alpha oscillations (and related bands) correlate with the uncertainty or precision of the predicted values. And theta oscillations at ~6 Hz were entrained to the sound stimulus itself, hitting when the next sound was expected, rather than encoding a derived form of the stimulus.

It is all a bit neat, and the conclusions are dredged out of very small signals, as far as is shown. But the idea that key variables of cognition and data processing are separated into different oscillatory bands in the auditory cortex is very attractive, has quite a bit of precedent, and is certainly an hypothesis that can and should be pursued by others in greater depth. The computational apparatus of the brain is very slowly coming clear.
"These are exciting times for researchers working on neural oscillations because a framework that describes their specific contributions to perception is finally emerging. In short, the idea is that comparatively slow neural oscillations, known as “alpha” and “beta” oscillations, encode the predictions made by the nervous system. Therefore, alpha and beta oscillations do not communicate sensory information per se; rather, they modulate the sensory information that is relayed to the brain. Faster “gamma” oscillations, on the other hand, are thought to convey the degree of surprise triggered by a given sound."

  • Bill Mitchell on the Juncker regime.
  • Who exactly is corrupt in Brazil, and how much?
  • There are too many people.
  • But not enough debt.
  • The fiscal "multiplier" is not constant.
  • Population has outstripped our willingness to build and develop.
  • What's going on in the doctor's strike?
  • Schiller on lying in business, Gresham's dynamics, and marketing.
  • Lying in religion.
  • Stiglitz on economics: "The strange thing about the economics profession over the last 35 year is that there has been two strands: One very strongly focusing on the limitations of the market, and then another saying how wonderful markets were."
  • Should banks be public institutions?
  • Does democratic socialism have a future in Russia?
  • A Sandersian / Keynesian stimulus is only effective if the Fed plays along.
  • Science yearns to be free.
  • Trump's brush with bankruptcy and friends in high places.