Saturday, August 22, 2015

Free Banking: How Good, and How Free?

A fixation of libertarians is banking without any government control, or free banking. Did it work? Can it work?

What if we could get the government off our necks and out of our pockets, reverting to a more blissful state of nature when human relations were voluntary and markets rendered, by their invisible hand, everything we needed? That is the dream of libertarians, especially monetary libertarians, who yearn to go back to gold and back to a time when anyone could issue any kind of money they pleased.

It is difficult to take all this seriously, but let's try. We would have to put aside the myriad problems of the gold standard. The gold (and silver) standard did one thing very well in its day, which was to keep money similarly valued over long periods of time. A ducat was worth something reasonably similar for hundreds of years, as was a Pound Sterling, i.e. a pound of silver. The intrinsic scarcity of these metals, and the nature of mining that added to stocks in very rough proportion to human economic activity (at least when economic growth was very slow), made them natural stores of value, virtually universal among pre-industrial human societies.

Free banking has actually happened, apparently most purely in Scotland in the 1700's and 1800's, till the evil British put the Bank of England in charge of monetary policy for good, in 1844. In this system, banks competed by issuing notes on their stocks of gold. These notes constituted their funding for loans, and acted as currency. This was truly a fractional reserve system, where on a certain stock of gold, they could issue several times the value in notes, and expect that redemptions back to gold would be rare enough that they would not, in normal circumstances, face a "run" on those reserves. Their motivation, therefore, was to have their notes be as trusted as possible, and thus as long in circulation as possible, so that loans could be made in large amounts. This style of banking goes back a long way, back to the Venetians and probably beyond. The early US in the 1800's had a similar system, though free banking enthusiasts look askance because it was heavily regulated (and corrupted!) by the states, not to mention that it ended up being extremely unstable, with numerous runs, collapses, and depressions.
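
To put rough numbers on the fractional reserve mechanics, here is a toy sketch in Python (the figures are invented for illustration, not historical):

```python
# Toy arithmetic of fractional-reserve note issue (invented figures).
gold_reserve = 100_000      # value of gold actually held in the vault
notes_issued = 400_000      # value of notes circulating against it

print("reserve ratio:", gold_reserve / notes_issued)    # 0.25, i.e. 4x leverage

# The bank is fine as long as redemptions stay below its gold stock...
calm_day = 0.10 * notes_issued
print("run on a calm day?", calm_day > gold_reserve)    # False

# ...but a panic that brings in a large share of the notes breaks it.
panic = 0.50 * notes_issued
print("run in a panic?", panic > gold_reserve)          # True
```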

Free banking is not really free, in my estimation, but highly restricted in several respects. First is the gold standard. The only commonly and universally recognized form of money at the time was precious metal, for which the notes served merely as an IOU. But were everyone to wish to convert their notes to gold for safe-keeping in their mattresses, (say, if war beckoned), the system would fall apart instantly. So it was a sort of Ponzi scheme from the start, as is, in fairness, most banking. The gold standard meant that many banks could operate simultaneously, recognize each other's notes, and clear each other's payments, (and have the motivation to do so), because ultimately, they knew they could settle in gold if the need arose. Crises arise, now as then, when one institution loses this trust, faces lack of credit from intermediaries and / or customers, and spirals immediately into liquidation.

Under what I would take as true libertarian principles, really free banking would be something quite different, where a bank could issue money based on any form of value at all, and compete in the marketplace of customer trust. Its reserves could be land, or timber, cows, or cockle shells. Perhaps this would reduce in practice to the old gold standard, or perhaps to the money of some well-run state far away, as many dollarized economies practice it today(!) In any case, the use of gold/silver would not be forced on the banks and their customers, but be their own choice of funding / backing.

The second restriction was the nature of the bank corporation, which had unlimited liability. Thus, unlike our own corporations, which can escape any adversity through bankruptcy, the owners (partners) of classical banks were on the hook if there was a run or over-lending and their gold stores ran out. The Scottish system had one spectacular failure, of the Ayr Bank, which reportedly ended up with virtually no losses to the note holders because the partners were cleaned out. Obviously, this rested on a legal system that was willing to hold the partners to literal account, something that seems to be oddly lacking in the current business climate. Likewise, the legal system had to underpin the gold standard, recognizing its central role of value in economic life.

Additionally, arch-libertarian Murray Rothbard brought out a rather tart paper about how unfree the Scottish system actually was, since its banks evaded their gold redemption requirements with great determination, and relied instead on Bank of England notes for most redemptions, treating it as their central bank. It is actually an excellent article about banking in general, graced with terror over proto-Keynesian "rank inflationists".
"Professor Sydney Checkland points out that Scottish banks expanded and contracted credit in a lengthy series of boom-bust cycles, in particular in the years surrounding the crises of the 1760s, 1772, 1778, 1793, 1797, 1802-03, 1809-10, 1810-11, 1818-19, 1825-26, 1836-1837, 1839, and 1845—47. Apparently, the Scottish banks escaped none of the destabilizing, cycle-generating behavior of their English cousins."
"The Scottish system was one of continuous partial suspension of specie payments. No one really expected to be able to enter a Scots bank . . . with a large holding of notes and receive the equivalent immediately in gold or silver. They expected, rather, an argument, or even a rebuff. At best they would get a little specie and perhaps bills on London. If they made serious trouble, the matter would be noted and they would find the obtaining of credit more difficult in the future."
"Bailey overlooked the fundamental Ricardian truth that there is never any social value in increasing the supply of money, as well as the insight that bank credit entails a fraudulent issue of warehouse receipts to nonexistent goods."

Anyhow, given these important restrictions and traditions, the Scottish system was quite competitive and reasonably robust for its day, with a few large and many small banks. The one thing it didn't do was serve the state, which by that time was becoming used, in England, to running the monetary system for purposes of both inflation control and its own funding, especially in war. As Rothbard notes, the Scottish system had led to credit gyrations and substantial inflation, issuing roughly fifty times as many notes as the underlying gold/silver. This allowed economic growth, but also was completely unsustainable in terms of a true gold standard.

In the end, we return to the questions that were current in the 1800's as modern banking began to take shape. Should fractional banking be allowed in any form, or should money be 100% backed by the precious metals of tradition? If fractional banking be allowed, how should the creation of money be controlled to prevent the incessant cycles of boom and bust which appeared inherent to the free system whereby banks printed paper money in response to business demand for credit? If fractional banking be not allowed, how could one accommodate economic growth when the supply of money failed to grow- that is, when one's mines (or Imperial thievery) failed to crank out as much new metal as proportionately required for growth in population and technology? Would everyone end up being "crucified on a cross of gold"?

The answer, obviously, is that neither a 100% gold standard, nor free, unregulated banking, is optimal. Given the world's current wealth of about $250 trillion, an ounce of gold would have to be worth $50,000 to back wealth at 100%, which seems not just unrealistic, but obscene and a threat to our natural environment, given the mining this price would entail. Historically, as economic growth far outstripped metal mining, central banks took over macroeconomic policy with regard to money creation, inflation & interest rate control, bank regulation, reserve requirements, and many other practices. Central banks have made their share of grievous errors, particularly in the Great Depression. But some lessons have been learned, and our latest brush with the pitfalls of Ponzi banking was significantly less bad than the Great Depression.
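
The gold-price figure is just division; a quick back-of-envelope check, assuming roughly five billion troy ounces of above-ground gold in the world (a rough, assumed figure):

```python
# Back-of-envelope check of the implied gold price (assumed gold stock).
world_wealth = 250e12      # ~$250 trillion, as above
gold_stock_oz = 5e9        # assumed above-ground gold, in troy ounces

print(f"implied price: ${world_wealth / gold_stock_oz:,.0f} per ounce")   # ~$50,000
```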

Nor is a stable store of value over long periods of time the only point of having a monetary system. Rather, it should exist to increase prosperity and human well-being. Gold is still available, after all, to all and sundry who wish for that imperishable store of value. For the rest, modest and controlled inflation using an elastic supply of money, as a spur to productive investment and labor over pure saving / mattress-stuffing, may be a more desirable feature of an optimal monetary system, combined with active state management to maintain value over booms and busts in the business cycle.


Saturday, August 15, 2015

Whatever Happened to Bangladesh?

Saudi funding sows a whirlwind of extremist Islam.

Bangladesh had a difficult birth, with war and something close to genocide. The irony is that the killers and rapists were Islamic co-religionists from West Pakistan, now called Pakistan. Bangladesh was saved by the Hindus of India, who drove Pakistan out of a country about which it cared evidently less than nothing. Yet Islam and Islamism survived, and seems to have come to the fore vs the other main cultural thread, which is Bengali culture.

Bangladesh has been in the news recently for the murders of four outspoken atheists, particularly the brutality and impunity with which they were carried out. Why? Why has the virus of Islamism found a home so far from the core Arab areas whose extremism we take for granted? Money seems to be the short answer. Money that funds innumerable mosques, madrassas, and a variety of terrorist groups. The private madrassa system in Bangladesh has grown enormously over the recent decades, now rivalling the state educational system, which has been forced to offer its own madrassas of a hybrid nature that combine modern and Islamic education.

The private madrassas couldn't exist without large infusions of money, (donations), since they are typically free to students. The government spends about 50 million a year on its own system of madrassas, which should give an idea of the money involved. A large amount for a poor country, but very little for a country like Saudi Arabia, whose petro-wealth makes supporting ultra-conservative Islamism all over the world relatively easy, both by way of state policy, and by private donations. Charity is a very important duty in Islam, and is prone to chauvinistic and political interpretation, funding jihadist groups, proselytizing, and fundamentalist educational activities.

There are two important native institutions of Islamism in South Asia. One is the Deobandi school or movement, and the other is the Jamaat-e-Islami political party. Deobandis are more conservative than the mainstream Barelvi Muslims, who preserve local and mystic Islamic traditions. The Deobandis are more by-the-book, holding to the Hanafi school mostly, and putting importance on following some major tradition. This contrasts with the even more extreme movements like the Salafists, who renounce all the traditions after the first three generations and thus base their interpretations on the Koran and Hadiths, without regard to current scholarship, mainstream schools, or famous Islamic universities.

So there are more or less Calvinist schools at work here, from the culturally traditional Barelvis, who are nonetheless quite divisive in their own right, having supported the Pakistan movement and countless instances of anti-Hindu violence, to the Islamically traditional Deobandis, to the Islamically fundamentalist Salafis and Wahhabis. If the latter had wanted to recapture the Islamic golden age, the roughly 500 years from 800 to 1250, with its cosmopolitanism and scholarship, that would be a worthwhile enterprise. But trying to recapture the days of the first generations, with their incessant bigotry and warfare, in the current age, is naturally a recipe for global disaster.


The Jamaat-e-Islami is analogous to the Muslim Brotherhood, (or the Christian Fellowship, for that matter), in working assiduously to propagandize for a religio-political fundamentalism and to place its acolytes in powerful positions. It has an interesting disdain for nationalism, viewing the modernizing movements of Nasser and others in the mid-20th century Islamic world as distractions from the world-wide revolution that would bring Islam to the head of affairs globally. It was outlawed for its violence in supporting West Pakistan at independence, but was later legalized by an Islamic dictator and now is a minor party with outsized power in the parliamentary system of Bangladesh.

The extent of funding flowing from the Gulf states to Islamist enterprises in Bangladesh is very hard to judge, as it travels by many routes and under many guises. There are several billions in remittances from individual Bangladeshis working in the Gulf states, (and subject there to Islamist propaganda, naturally), perhaps a billion in direct foreign aid, extensive business relationships including the curious and secretive practices of Islamic banking, and unknown billions in private charity. This is not to mention the Haj and its associated opportunities for propaganda. One example: "The Saudi Arabia-based al-Haramain Islamic Foundation, banned internationally by United Nations Security Council Committee 1267, has come under suspicion, along with other charities from the Middle East, for financing terrorism in Bangladesh."

All this is working against the indigenous, orthogonal culture of Bengal, which is Buddhist/Hindu as well as Islamic, literary, and rather non-violent. A recent podcast about Rabindranath Tagore tells some of the story. But Bangladesh was and remains torn between its Bengali and Islamic identities. The Islamic union with Pakistan certainly went sour, (as do all relationships with Pakistan, apparently), yet with the Gulf states putting their thumbs on the scale for decades, we end up with continuing schizophrenia, culture war, Islamic fundamentalism, and violence.

Will Islamism burn itself out anytime soon? It seems to be powered, not by its own logic or success (see the problems ISIS has in governing anyone in a popular manner), but as a bitter reaction to the long-term decline of Islamic culture, set against colonialism and continuing cultural dominance of the West. A reaction to modernity; to Muslim powerlessness in the modern worlds of technology, cultural soft power, military power, scholarship, creative arts, feminism, gay rights, political theory, ... the list could go on. There is a great deal to be bitter about. In Bangladesh, the majority remains moderate, democratic, and peaceful. Why outsiders like the Saudis and Pakistanis would want to stir up hatred is, as usual, not clear except on a psychoanalytic plane. As usual, a belief system seeks to bolster its tenuous believability by making as many people adhere to it as possible, by fair means or foul, since success is itself a form of truth in the Darwinian world of culture and ideas.



  • Another review of Islamism in Bangladesh, if a bit right wing. 
  • ISIS as (non-) governing organization- soft power, branding, governance. Who governs better?
  • Afghanistan is doing better than Iraq. Which is not saying a lot, but perhaps enough.
  • Must atheism be irritating, and irritable?
  • You go, Carly!
  • It would be hard for the GOP to be more 1%-friendly.
  • A steaming pile of BS from theologian John Milbank.
  • Monetary policy has real effects, only not on ideologues.
  • To wingnuts, running a monetary system is "socialist".
  • Lack of shame among economists.
  • Jeb! ... for a significantly creepier foreign policy.
  • Solow on rent.
  • Science fiction as prescient cultural critique.
  • Another organism with more genes than we have.

Saturday, August 8, 2015

The Humble and Nether Origins of New York

The story of New Amsterdam is surprisingly well-documented. And a little sad.

A delightful, if a little fusty, book from 1978, "A Sweet and Alien Land", tells the story of Dutch colonization of what we now call New York State, beginning with the purchase of Manhattan from the local Indians in 1626, and moving outward to Long Island and upstate to Albany. There is a surprising amount of documentation available, thanks to the scrupulous record keeping of the Dutch, including the cost of Manhattan itself, a steal if there ever was one. But the most interesting facet of the story is the subtext of human density and the role it plays in conflict. Whoever has more people wins the land, a process that may someday come back to haunt us.

The land was only sweet because it was underpeopled, (from a European perspective), partly through the technological backwardness of the resident Native Americans, partly from their decimation by exotic diseases. Nevertheless, like other colonists up and down the new world, the Dutch had a very difficult time getting a foothold and making their old-world methods work in the new. Maize was one technology they had to learn, for example. It didn't help that the Dutch colony was not really about colonization at all, but about "harvesting" the local beaver skins. It was founded by the Dutch West India Company, the ill-starred sibling of the Dutch East India Company, expressly and solely to make enormous profits from trade in the skins, which were brought down the Hudson River from the interior by Native Americans.

As one can imagine, the supply dwindled from overkill, and never came up to the company's expectations. So the colony was left with little investment, few defenses, and mediocre personnel. Its strong suits were its prime central position on the coast, outstanding harbor, and easy-going culture, compared to the fanatics up in Boston. The population was roughly 270 by 1640, and 4,000 by 1664. In the absence of competing claims early on, the Dutch expanded their network of outposts from Connecticut down to Delaware, but the outlying sites were doomed from lack of personnel and from encroachment, not by Indians, but by other Europeans, particularly disenchanted New Englanders who took over the Connecticut area, and then Long Island.

The book proceeds through the gory details of successive governorships, which I will leave aside, except to note that it was an excruciating position to occupy, faced with rude, obstreperous colonists, and months' travel from one's putative overseers in Holland. From there the governor was ruled by two bodies- the West India Company, and the Dutch government proper- neither of which appreciated the great difficulties of the new colony, and both of which provided as little guidance as they did resources.

The English had as little foresight in their settlement of the new world, it being a dumping ground for religious dissenters (New England) or for deluded economic opportunists (Jamestown and the South). But for them, the settlement took precedence over the economic plunder, and the people came. When enough had populated New England in the later 1600's, they cast their eyes southward upon the weak settlement in New Amsterdam. A few political machinations later, particularly in England where the king's brother, the Duke of York, was eager for a feather in his cap, and they presented an overwhelming show of force and defeated Governor Peter Stuyvesant on September 5, 1664, with very little firing or bloodshed.

Who should own the land? Those who currently possess it, or those who can most densely populate it and put it to its most productive economic (and martial) use? Or those who have the firepower to take it? As soon as Europeans took over land in the new world, they set up systems of property deeds and law that completely contravened their original mode of acquisition, which was by swindling, terror, killing, and stealing. Europeans continually overwhelmed and "opened" up lands during American settlement on the basis of both higher density and stronger force. If density and productivity are the rule, we may eventually be on the short end of such a competition from Asia. Even Mexico, at 64 people per square kilometer, has about twice the population density of the US.


  • What lack of competition and regulation is costing us in internet service.
  • Reich, on the ruling class candidates, whose outstanding characteristic is BAU corruption. As Trump so clearly mentioned in the debate.
  • Why did Turkey's war on ISIS turn into a war on the Kurds?
  • Guess what a "Top environmental official in the George W. Bush administration" was doing lately? Not much for the environment. It is also well past time to call out the Chamber of Commerce for anti-planetary and anti-human activities.
  • The sacred right to bear guns stops at the debate venue, apparently.
  • Stiglitz on social investing.
  • Finance as leech: there are no economies of scale here.
  • Just who, and what country, is Charles Schumer representing?

Saturday, August 1, 2015

Seeing Speech in the Brain

An fMRI study of brain dynamics during speech production.

fMRI is now a standard method to look at brain activity, though its resolution in both time and space is poor. It only detects changes in blood flow and oxygenation, which are an indirect, if regular, response to neuronal activity. Additionally, the (living) brain is always on, churning away at its own fantasies and unconscious levels, so the differences between resting and active states can be quite hard to detect, especially at this gross level of resolution.

A recent study took a stab at detecting speech production in the brain by fMRI, analyzing the data by correlation between regions. They broke up the brain into 212 regions, and measured the mutual correlation between each pair of them under separate tasks given to 14 subjects. The first task was the resting state, when the subjects' eyes were closed and they were told to have no specific thoughts. Not the greatest method, from my experience of meditation, actually(!) The other two tasks were (1) making simple eeee and iiii vowel sounds, and (2) repeating simple but complete English sentences. The respective brain correlation networks are shown below.

Correlation maps among the best-correlated nodes selected from a total of 212 nodes (brain volumes) of fMRI activity in an average of 14 subjects. Nodes are colored "hotter" the more connected they are to others, and the lines between them are colored "hotter" depending on the strength of correlation. The A images show the complete map. The B, C, D, and E graphs are sub-selections of the data by "hotness" of the nodes, with the most connected (red) in the B maps, etc.
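
Mechanically, a correlation map like the one above boils down to a matrix of pairwise correlations, thresholded into a graph. A toy sketch in Python, with random stand-in data rather than the authors' actual pipeline:

```python
import numpy as np

# Toy correlation-network construction (random stand-in data, not the paper's pipeline).
n_regions, n_timepoints = 212, 300
rng = np.random.default_rng(0)
signals = rng.standard_normal((n_regions, n_timepoints))   # stand-in for per-region fMRI series

corr = np.corrcoef(signals)          # 212 x 212 matrix of pairwise correlations
np.fill_diagonal(corr, 0.0)          # ignore each region's correlation with itself

threshold = 0.15                     # keep only the stronger correlations as graph edges
strong = np.abs(corr) > threshold
edges = np.argwhere(np.triu(strong, k=1))    # (region_i, region_j) pairs

degree = strong.sum(axis=1)          # "hotness" of a node = number of strong connections
print("edges kept:", len(edges))
print("hottest nodes:", np.argsort(degree)[-5:])
```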


While the complexity of these images resists interpretation, they show several notable things. First is the overall lower connectivity in the resting state, which is dominated by green-level correlations. There are only about 3 "hot" nodes, as shown in the B sub-graph, in red. In contrast, during speaking of either syllables or sentences, far more nodes (15, and 11, respectively) are highly connected, and the overall graph is heavily weighted towards higher connectivity. The identity of the "hot" nodes is difficult to pin down in functional terms, but they seem to be dominated by premotor and motor cortex regions, noted as 6 and 4 in the graphs.

Secondly, while the syllable graphs have even higher connectivity in some respects than the sentence graphs, they show much less involvement of the frontal cortex, which is entirely absent in the orange and green-level correlation graphs, compared to the sentence speech case. This makes, on the face of it, complete sense, and supports the idea that we are seeing aspects of speech processing going on through this method. The emphasis in the syllable case is on the motor processing, while the full sentence case adds in executive complexity as well as some cerebellar fine sequencing control. That the resting state shows some frontal cortex activity and connection also stands to reason, if one assumes that uttering plain syllables leads to utter boredom, while letting the mind run free with "no thoughts" is likely to lead, frequently, to some relatively complex thoughts.

Did they find anything interesting? Long lists of brain areas with slight differences were offered, but seem to deserve much deeper (future) analysis to come up with anything approaching a theory of how speech production circuitry might look and work in a comprehensive way. This is really only a first peek at the issue, using crude imaging and advanced analysis methods.

The researchers try to further differentiate the brain states by an analysis of their modularity. The claim is that the speech producing brain seems to have tighter and more intensively local activity than the resting state, whose weaker correlations seem more widely and loosely spread. The speaking brain also generates more active modules, such as one reaching into the cerebellum. Unfortunately, despite some extremely snazzy graphics, I found the data impossible to interpret, and not, on the face of it, compelling, though reasonable enough. It is going to take a great deal more work, and perhaps better technology, to map the speech circuitry in a useful way. Without much better spatial and especially temporal resolution, one can't capture such dynamic and nano-scale activities.
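
For what it is worth, the modularity comparison itself is simple to sketch: turn each thresholded correlation matrix into a graph and ask how cleanly it splits into modules. A minimal version with networkx on toy data (a sketch only, not the authors' actual method):

```python
import numpy as np
import networkx as nx
from networkx.algorithms import community

# Toy modularity comparison between two correlation-derived graphs (not the paper's method).
def graph_modularity(corr, threshold=0.15):
    strong = np.abs(corr) > threshold
    np.fill_diagonal(strong, False)
    G = nx.from_numpy_array(strong.astype(int))
    modules = community.greedy_modularity_communities(G)
    return community.modularity(G, modules), len(modules)

rng = np.random.default_rng(1)
rest   = np.corrcoef(rng.standard_normal((212, 300)))    # stand-in "resting" data
speech = np.corrcoef(rng.standard_normal((212, 300)))    # stand-in "speaking" data

print("rest   (modularity, n modules):", graph_modularity(rest))
print("speech (modularity, n modules):", graph_modularity(speech))
```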

An example quote points to possible functions underlying what was imaged. But the differentiation from the resting state was not very strong, since it showed plenty of connections / correlations as well, just slightly weaker.
"The SPN [speech production network] core sensorimotor hub network established connections with the brain regions responsible for sound perception and encoding (auditory cortex), phonological and semantic processing (parietal cortex), lexical decisions and narrative comprehension (middle/posterior cingulate cortex), motor planning and temporal processing of auditory stimuli (insula), control of learned voice production and suppression of unintended responses (basal ganglia), and modulation of vocal motor timing and sequence processing (cerebellum)."


  • A brief poster of the above work.
  • Speech on the A-train.
  • Institutions of fiction- money, heaven, companies, rights, states.
  • "Sneering at religion is juvenile."
  • Just what is addiction?
  • Race conscious, or race unconscious?
  • Echoes of the Ruhr.. Greece to run 3.5% surpluses forever? Don't bet on it- those debts are dead. Also, bonus podcast on Greece, which will need balance of trade discipline one way or another, inside or outside the Euro.
  • State of the GOP race.
  • Cringely on the OPM disaster and who was running their systems.
  • Friedman created not only his own economic fantasies, but fantasy versions of his opponents.
  • Coal is still the big problem for climate change.
  • Economic graph of the week: inflation is dead. So is US currency volatility.


Saturday, July 25, 2015

Spock and the Next Myth

From monomyth to polymyth. Double-header reviews of "The Origins of the World's Mythologies", by E.J. Michael Witzel, and "I am Spock", by Leonard Nimoy. 

Myths are essential. They organize our world with purposeful, dramatic meaning, and situate us in a cosmos that is otherwise utterly mystifying and inhuman. All cultures have them, and the weakness of a cultural myth, such as that of late Rome, indicates lack of confidence and can lead to general malaise and decline. Where are we on this spectrum? It is hard to say, but the bulldozing confidence of Indian extermination, manifest destiny, and saving-the-world-through-democracy seems to have slacked off in recent decades. We have settled the frontiers, won the cold war, and possess an unwieldy world-wide empire which is as ungrateful as it is costly.

Our myths / ideologies of progress and unlimited human potential are met at every turn with stark limits, whether in the form of stunningly regressive religious ideologies from the world of Islam, which have fired the imaginations of millions in revolt against our neocolonial domination, or in the form of CO2, which tells us that our profligate ways can not continue without turning Earth into a wasteland. What next?

Before we get to that, it is good to ask what has led to this point: the history of human myths. Eminent scholar Michael Witzel has written a tome on the subject, a vast attempt to put human myths world-wide into a system of lineal evolutionary relationships that go back 50,000, even 100,000 years, to the origins of modern humans, more or less. Quixotic? Quite. Turgid? Totally. In fact, this is a poorly written book that is chaotically disorganized, repetitive, and keeps putting the cart of theory ahead of the horse of evidence. The theory, basically, is that there are common threads of myth (a remote high god, a golden age in the past, and a flood that punished humanity) that trace back lineally to the beginnings of modern human consciousness. This collection of themes was substantially elaborated in all descendant cultures, and especially so in a subset of northern cultures that covered the Indoeuropean, East Asian, and North American regions, into a full story line from creation to apocalyptic destruction, which we know so well in the Bible and other sources.

The theory is obviously full of holes and exceptions at every turn, and I ended up siding with the much-disparaged Jungian counter-view that stories like these are more or less spontaneous and heavily anthropomorphic emanations from human psychology, uniting universal questions with archetypal answers. The completeness of one's story line may have more to do with the local cognitive and organizational gestalt than with thousands of years of lineal descent, notwithstanding the sometimes remarkable durability and accuracy of oral traditions.

It is interesting to note that the putatively more primitive (what Witzel names Gondwana) mythical themes seem more relevant to human meaning, as they tend to be more animistic, very landscape-focused, ancestor-focused, and transactional. The other lineage in Witzel's system (the Laurasian) is more hierarchical, filled with generations of gods, complex and colorful relations between them, plus the stories of Prometheus and original sin, but posits few interactions between contemporary humans and the cosmos. It seems, frankly, more concerned with supporting a temporal hierarchy of king and nobility than with filling the world at large with personally significant meaning.

At any rate, however ancient these myths are, they no longer function for most people in the developed world (putting aside for the moment the continuing social hold of organized religions on billions of people, who may not consciously realize or participate in the ancient and absurd nature of the implicit cosmologies, the fictional heroes, or the drama of human sacrifice in the chalice, etc). Our modern cosmos is definitely not that of the scriptures, nor is our spiritual or moral universe. Through the enlightenment, all this was gradually discarded in favor of true stories, and in return we gained the immense confidence that such revolutionary factuality bestowed, having, in essence, escaped from Plato's cave- from the murk of fantastical fictions into the sunshine of reality, and the immense technological powers that this reality turned out to harbor.

Does all that mean that myth is now dispensable? Not at all. While we have dispensed with the various fairy tales received as myth through the ages, (which, in fairness, many cultures, like the far northern Inuit, treat in very playful fashion rather than with the reified & doomed earnestness common among the reigning monotheisms), the function of myth goes well beyond a factual reporting of our past. That origin story has been replaced with a new, and durable reality. What we have subsisted on, ideologically, since the enlightenment, has been the myth / ideology of progress, because the reality we discovered was even more magical than the classical myths had foretold. The elegance and vastness of the real cosmos, from the tiniest particle to the big bang, is more astonishing. And the utility of fossil carbon, nuclear power, electricity, silicon circuits, and the countless other secrets that have been revealed have multiplied our powers, not to mention our populations, many, many times over.

Yet where is the meaning? If all our powers merely serve to satisfy greed, which turns out to be bottomless, what have we gained? Prosperity does seem to have some positive moral effects, making people more secure, less violent, and more capable of caring for others (up to a point). But if one looks closely at traditional cultures, one sees great and deep happiness there as well. It is not at all clear that our hugely wasteful, hive-like societies are optimal on ethical, social, or spiritual levels.

I think we are seeking a new myth, or myths. The last time there was such ferment and seeking was the axial age, which capped an epoch of great human progress to give us our current, if relic, organized religions. What will the current age provide? It remains very difficult to say, since one key property of a myth / ideology is that it is fictive. It is a construction that provides confidence and meaning without recourse to facts, though at the same time, it is hopefully not antagonistic to the appreciation of true stories about reality. Patriotism is a common example. Everyone can be patriotic and love their country, yet every country is not better than every other one.

On one level, we are bombarded with what might be called micro-myths, from books, movies and TV shows. Most are mere stories, not rising to the level of comprehensive narrative about our past, relations with the world, and most importantly, our future. The products out of Hollywood are becoming ever more simple and formulaic, with their comic book characters. Which might make them increasingly mythical, if they weren't so dedicated to only one facet of the cultural myth: the hero tale, reminiscent of works like the Ramayana.

A much-loved example of a more complete myth is that of Star Trek. The recent death of Leonard Nimoy provided an occasion to watch some episodes and read his (second) autobiography, "I am Spock". Which is a wonderful book, filled with warmth and insight. Nimoy not only portrayed Spock in the original series and the string of films, but directed two of the films, had a wide-ranging career in other acting and directing roles, and made countless appearances, among other more or less successful projects.

He speaks with great nostalgia and appreciation of the role. While Nimoy is surely more than just Spock, Spock is in turn far, far more than Nimoy was, created, or bargained for. Star Trek, and its science officer particularly, created a modern myth of continued human progress, with high ethics and integrity, intrinsic diversity, and (weekly) adventure devoted to searching through that complex reality that surrounds us, bringing peace and reason in equal measure. (Was Spock a Christ-like alien being brought to the Federation via his human mother to redeem mankind through logic? The mind reels!)

It spoke volumes to its own time, and just as strongly to ours. Exploration doesn't have to happen in the outer world of aliens and M-class planets. It can be questions of basic science or forays into the inner worlds of psychology, conducted scientifically or artistically. And it includes a dedication to solving the big problems with everything we can muster, particularly reason and logic: climate change, social justice, economic prosperity. The metaphor is quite general, and we can all be in on the adventure.

The one thing we can't do is travel to other star systems. The warp drive that the show is based on is physically impossible, so the myth remains firmly fictional in that critical respect. Whether there are dramatic and intelligent beings in other star systems may also remain unknown. In theory, there must be many other civilizations around the galaxy, let alone the universe. But detecting them seems only remotely plausible, and interacting with them, frankly impossible. Still, using some modest artistic license to reveal human ideals and possibilities is a far cry from the monotheistic myths which not only posit, but demand, belief in a vast conspiracy & hierarchy of spirits and other supernatural phenomena as clearly dredged from our psychological makeup as they are scientifically unbelievable.

This is a bridge that we crossed, intellectually, with the enlightenment. Gone are the days when everyone had to believe the same thing, and draw meaning from the same wholly crazy story. Because no myth fully answers each person's questions and perspective. The answer is that we live and will continue to live in a world of many myths, a polymythic culture, and should be quite wary of a single myth returning to dominate. America is particularly diverse, which is reflected in a wildly divergent zoo of cultural myths, from the die-hard son of the Confederacy to the rococo sexuality of hip-hop. Star Trek is only one myth of a great variety, one that resonates with many, with positive humanism at its core.


  • Ten feet of sea level rise? What shall we do?
  • Hilary Hahn, on her violinistic upbringing.
  • Samuelson back in the 50's: ... Fiscal policy, meaning changes in taxes and government spending, were the way to deal with the business cycle. The Bureau of the Budget could manage the economy to good effect.  He did not mention the Federal Reserve Board.
  • Krugman: "My guess is that euro exit will still prove necessary."
  • Policing in South Carolina. No cause for stop, no cause for arrest, no cause for death.
  • And what is a "lawful order"?
  • A carbon tax is needed: we can never rely on supplies becoming scarce. Or on new tech being cheaper than coal.
  • A transaction tax is finally on the table.
  • Trains are five to ten fold less carbon-emitting than planes.

Saturday, July 18, 2015

Consciousness and Unconsciousness Are Different

Further work on the mechanics of consciousness.

Of all the scientific problems, consciousness is among the most interesting. It is the study of ourselves, of our most intimate nature, whose very proximity makes investigation problematic, not to say perilous. Its intrinsic structure misdirects us from its physical basis, and it has thus been the subject of millennia of fruitless, indeed unsound and narcissistic, speculation. Not that there is any significant evidence that goes against the basic reductionist model that it all happens in our brains, but how it happens ... that remains largely unknown, due to the rather bizarre technology involved, which is miniaturized, rapidly dynamic, very wet, and case-hardened, in addition to being ethically protected from willful damage.

A recent paper took another step towards teasing out the dynamic differences between conscious and unconscious processing using EEG and MEG, that is, electrical and magnetic recordings from the scalps of human subjects while they are looking at various visual tests. Visual spots were very briefly presented, to register at the very limits of visual perception. Then the subjects were asked both whether they consciously saw the spot, and in either case, where they thought it was.

An interesting wrinkle in this work is that even if the subjects didn't consciously see the spot, their guesses were much more accurate than chance. This is because of the well-known phenomenon of blind-sight, by which we still process our visual perception to very high levels of analysis whether it enters consciousness or not. So the researchers ended up with three classes of brain state from their subjects- consciously perceived images, unconsciously perceived and correctly placed images, and unconsciously perceived but not correctly placed images. The last set is assumed to represent the most basic level of unconscious perception, which is interrupted before higher levels of visual analysis or transmission between the visual system and the conscious guessing system.

A technical advantage of this method is that in virtually all respects, the conscious, correctly guessed case is the same as the unconscious, correctly guessed case. The visual system found the object, and the guessing system was told where it was. Only the consciousness system is left in the dark, as it were, which these researchers try to exploit to find its electrical traces.

Real location plotted against reported location. Subjects perform far above chance, (on diagonal), whether they consciously think they saw the visual feature or not. A line was presented in one of eight radial locations, as indicated on right.

They spend the paper trying to differentiate these brain states by their EEG and MEG scanning methods, and find that they can detect slight differences. Indeed, they try to predict the visual spot location purely from the brain signals, independent of the subject's report, and find that they can do a pretty good job, as shown below. This was done by machine learning methods, where the various trials, with known results, are fed into a computer program, which picks out whatever salient parts of the signals might be informative. This learned pattern can then be deployed on unknown data, resulting in novel predictions, as they present.

Programmatic classifiers perform better than chance in predicting the visual location, even in cases (unseen, incorrectly reported) where the subject had no idea where the visual feature was, either consciously or unconsciously. The classifiers also follow the internal processing of perception in the brain over time, which all happens pretty much within a second.
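
The machine-learning step, in sketch form: for each time slice, fit a classifier on the sensor data of known trials and score it on held-out trials. This toy version uses scikit-learn and random stand-in data, so its "accuracy" hovers around chance; it is only meant to show the shape of the analysis, not the authors' actual code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy time-resolved decoding of stimulus location from sensor data (random stand-in data).
n_trials, n_sensors, n_times = 160, 64, 60
rng = np.random.default_rng(0)
X = rng.standard_normal((n_trials, n_sensors, n_times))   # stand-in EEG/MEG epochs
y = rng.integers(0, 8, size=n_trials)                     # one of 8 stimulus locations

accuracy = []
for t in range(n_times):                                  # one classifier per time slice
    clf = LogisticRegression(max_iter=1000)
    accuracy.append(cross_val_score(clf, X[:, :, t], y, cv=5).mean())

print("peak decoding accuracy:", round(max(accuracy), 3), "(chance = 0.125)")
```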

The researchers also show that even when unconscious, and even when wrongly reported by the subject, the brain data can pick up the correct visual location quite a bit of the time, and it is encoded out to quite late times, to about 0.6 seconds. This indicates that just because a percept is unconscious doesn't mean that it is much more brief or transient than a conscious one.

At this point they deepened their decoding of the brain signals, looking at the differences between the conscious + correct guess trials and the unconscious + correct guess trials. What they found supported the consensus interpretation that consciousness is an added event on top of all the unconscious processing that goes forth pretty much the same way in each case.

Further classifier developed to detect the difference between conscious and unconscious perception. Those trained on conscious data (dark line) see extra processing when given conscious data, while the reverse is not true.. those trained on unconscious data (gray line) fail to see anything different when exposed to conscious data.

"These analyses indicate that seen–correct trials contained the same decodable stimulus information as unseen–correct trials, plus additional information unique to conscious trials."

Great, so where are these differentiating signals coming from? When they tried to deconstruct the programmed decoding by location, they found that they could roughly locate the signals in the superior frontal and parietal lobes. But this was a pretty weak effect. The consciousness part of the signal was difficult to localize, and probably happens in wide-spread fashion.


Two examples of localized classifiers. The superior frontal (bottom) sees (slightly) different processing in the conscious data, while the rostral medial frontal location (top) sees nothing different at all. Localization was relatively weak, overall.

Finally, the researchers tried another analysis on their data, looking at whether the classifiers that were so successful above in modelling the subjects' perception could be time-shifted. The graphs above track the gradual development, peaking, and decay of the visual perception, all in under one second. Could the classifier that works at 300 milliseconds also work at 500 milliseconds? If so, that would imply that the pattern of activity in the brain was similar, rather than moving about dynamically from one processing center and type to another.
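
That question is what a "temporal generalization" analysis asks: train a classifier at one time point and test it at every other. A minimal sketch on toy data (my reconstruction, not the authors' code):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy temporal-generalization matrix: train at one time, test at every time (random data).
n_trials, n_sensors, n_times = 160, 64, 60
rng = np.random.default_rng(1)
X = rng.standard_normal((n_trials, n_sensors, n_times))
y = rng.integers(0, 8, size=n_trials)
train, test = train_test_split(np.arange(n_trials), random_state=0)

gen = np.zeros((n_times, n_times))       # rows: training time, columns: testing time
for t_train in range(n_times):
    clf = LogisticRegression(max_iter=1000).fit(X[train, :, t_train], y[train])
    for t_test in range(n_times):
        gen[t_train, t_test] = clf.score(X[test, :, t_test], y[test])

# A narrow bright diagonal means each processing stage has its own short-lived code;
# a broad square of good scores means one stable code persists across time.
off_diag = gen[~np.eye(n_times, dtype=bool)]
print("on-diagonal mean:", gen.diagonal().mean(), "| off-diagonal mean:", off_diag.mean())
```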

Rough model of the researchers' inferences about processing steps, consistent with time-shifting the classifiers over conscious (left) vs unconscious (right) data. They suggest that the conscious data points to successive and different processing steps not present in the unconscious case.

They found that, roughly speaking, their data supported a dramatic distinction between the two tested cases of consciously seen versus unconsciously seen and correctly located. Up to about 300 milliseconds, both were the same, characterized by a succession of visual processing steps where the classifiers for each time slice were useless for other time slices. Thereafter, for the next 500 milliseconds, the time-range of their models went up dramatically, but significantly less so for the conscious case than the unconscious case. This suggested to them the next model, where consciousness involves a series of further processing steps that are not just a long-term reverberation of the perception set up at 400 milliseconds (right side), but additional processes that look different in the EEG / MEG signals and require different classifiers to be picked up in data analysis (yellow, black, and blue curves on the left). These are not as tight as the earlier lock-step series of gestalts, but are detectably different.

 "The high level of decoding observed on seen trials, over a long time window of ∼270–800 ms, combined with a low level of off-diagonal temporal generalization beyond a fixed horizon of ∼160 ms, implies that a conscious location is successively encoded by a series of about four distinct stages, each terminating relatively sharply after a processing duration of ∼160 ms. Conversely, the lower decoding level and longer endurance observed for unconsciously perceived stimuli indicate that they do not enter into the same processing chain."

One has to conclude that these are mere hints of what is going on, though more accessible hints, via EEG, than those gained so painfully by fMRI and more invasive techniques. What these inferred extra processing steps are, where they take place, and how they relate to each other and to our subjective mind states, all remain shrouded in mystery, but more importantly, remain for future investigation.



  • Who should pay for recessions? The FIRE sector, obviously.
  • Pinkovskiy on Piketty. Another person seemingly fooled by the highly unusual nature of the 20th century, and the presumption of high growth as "normal".
  • Jeb! says workers just need to be screwed harder.
  • Has ISIS done what the US could not? Bring the Taliban begging?
  • GOP gearing up for WW3.
  • Lynyrd Skynyrd and all that ... the psychology of defeat and honor.
  • One fiscal union works, the other one does not.
  • A little nostalgia for early days of molecular biology.
  • Bill Mitchell: South Korea has its economic head screwed on right.

Saturday, July 11, 2015

Reading the Genetic Tea Leaves

How a cancer drug diagnostic / prognostic panel works.

Molecular medicine is coming, slowly, but surely. Drug companies have been forced to lead the way, because drugs are molecules, affect other molecules in our bodies, and can't be understood without learning about the molecular workings of the body.

Cancer has been a leading focus for this approach, because of its thousands of molecular manifestations which can cross the traditional organ boundaries, and its protean mechanism of molecular progression, accumulating mutations in many genes before turning dangerous. And because of its maddening self-on-self method of attack. One of the most significant cancer-related genes is TP53, which is mutated in about half of all cancers, and which in most of the rest is mis-regulated by way of other mutations. It encodes a protein (p53) that plays a central role in activating DNA repair processes in response to damage and general stress, in halting the cell replication cycle, and even in activating cellular suicide when repair is impossible. Getting rid of this protein is naturally a key step to keeping a wayward cell alive and allowing it to accumulate even more mutations.

This is one reason why, in addition to sequencing our normal genomes as part of regular medicine in the near future, our tumors and other sampled tissues will also be sequenced and analyzed to find accidental mutations that may be causing disease. For instance, over 100 kinds of tumors are known to afflict the skin, causing all sorts of lesions, each with a different set of causal mutations.

A recent paper from the drug company Novartis concerns a drug it has developed for p53, and particularly its interaction with another protein, HDM2, which turns it off. This drug interferes physically with the binding of these two proteins, thereby leaving p53 more active, and allowing it to kill its host cell in case it is cancerous, via the suicidal processes of apoptosis. But all this can only work if p53 has not been mutationally deactivated in that cancer.

So the researchers looked for a reliable way to test patients for p53 activity, and came up with the work of this paper, which is a collection of other genes whose activity, when on, says that p53 is active and thus "druggable". The test is not to sample DNA, but RNA from the tissue, asking about the transcription and thus activity of these selected indicator genes. From the p53 gene itself, RNA may be quite abundant, but if it has some tiny mutation that kills the activity of the encoded protein, then it is functionally dead. It is thus more effective to test the activity of genes that are "downstream" from it, in circuit terms, to find out whether p53 is working or not, since one big function of p53, which binds DNA, is to turn other genes on.

The panel consists of 13 genes, developed using cell lines that were carefully selected for their response (or lack of response) to this new drug which inhibits the p53-repressor interaction. These were filtered down from 10,000 or more genes that were tested at the outset, as being the most informative. All of them are targets of p53 transcriptional activation, which makes them obviously downstream in a circuitry sense.

List of genes used in the diagnostic/prognostic test for p53 function, comprising other genes that p53 activates.

None of these genes are dramatically regulated in the drug treatment case, with only about a couple-fold change in RNA levels in most cases. But the technology is now sensitive enough to detect such small changes reliably. About a third of the genes in the panel are directly annotated to function in apoptosis, which, in addition to informing on the status of the cellular p53 protein, also informs on the status of the key pathway by which this drug works- the cell suicide pathway.
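
Mechanically, a panel like this boils down to a read-out of a handful of RNAs, summarized into a score. Here is a toy sketch of such a score, with made-up gene names, expression values, and cutoff, since the actual panel genes and threshold are the paper's own:

```python
import numpy as np

# Toy p53-activity signature score from RNA levels of downstream target genes.
# Gene names, expression values, and the cutoff are placeholders, not the actual 13-gene panel.
panel = ["GENE_A", "GENE_B", "GENE_C", "GENE_D"]

def signature_score(treated, untreated):
    """Mean log2 fold-change of the panel genes after drug treatment."""
    return float(np.mean([np.log2(treated[g] / untreated[g]) for g in panel]))

untreated = {"GENE_A": 100, "GENE_B": 80, "GENE_C": 150, "GENE_D": 60}
treated   = {"GENE_A": 210, "GENE_B": 170, "GENE_C": 260, "GENE_D": 130}   # roughly 2-fold induction

score = signature_score(treated, untreated)
print("signature score:", round(score, 2), "| p53 pathway responsive?", score > 0.5)   # placeholder cutoff
```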

So there it is: a prognostic test that amounts to something like a thermometer for how the patient is doing, but one that reads, in molecular terms, the status of a specific pathway, and then indicates the use of specific molecular counter-therapies.



  • Incidentally, another set of cancer mutations are so gnarly, they kill each other off.
  • Piketty on Merkel: A colossal and cruel mistake is happening. And incidentally, Germany reneged on colossal amounts of debt back in the day, repeatedly.
  • Dreadful US policy in Bosnia, and the distinction between massacre and genocide.
  • Ephemerality in the arts ... what streaming gets us, and then doesn't get us.
  • Finance naturally lends itself to crime, of huge proportions.
  • Toles on the GOP clown circus:

Saturday, July 4, 2015

Crime, Punishment, and Unemployment

The penal and the anti-labor policy attitudes are closely connected. A July 4 meditation.

I follow MMT economics, which has a great deal to say about our attitude towards employment and unemployment. This attitude has hardened dramatically over recent decades, from a supportive responsibility of government in the wake of the Great Depression and the unifying experience of World War II, down to the Romney credo of "I like to fire people", though he might think twice when "corporations are people too, my friend".

An analogous shift has taken place in our attitude towards criminals and crime, from making an effort to punish in proportional ways and rehabilitate, down to the current lock 'em up and let 'em rot attitude, sweeping up whole demographic communities in its wake.

We have migrated from a consciousness of social complexity and the social duties of the collective, towards a starkly individualistic sink-or-swim ethic that makes scapegoats of the losers and victims of various social pathologies, ostracizing them and making them even more miserable. We have gone from a temporary suspension of the class war back into its most fetid trenches, as the rich, running their own political party and Ministry of Truth, emit an endless stream of Newspeak blaming the poor for being poor, and idolizing the rich as hard-working exemplars of the American way, richly deserving of every penny they have and in no way beneficiaries of societal inequalities and happenstance.

Now, promotion of striving and hard work is great. But the extent to which we have lost sight of common responsibilities and the role of luck in everyone's lives, and made unpersons of the unemployed (and the incarcerated) has been astonishing. The political system was barely able to muster the empathy to pass the first stimulus bill, and only after it was larded up with plenty of non-stimulative tax cuts for the rich and generous aid to banks. Now the system is completely inert, dedicated to the proposition that helping the unemployed is the absolute last item on the list, behind fixing our national debt in 2040 and cutting lower middle class supports like Medicare, Social Security, and public education.

It is all so ironic, since, conceptually speaking, the society needs as much labor as possible. There are elderly to take care of, parks to clean, solar panels to install.. the list is endless. Just because the private sector can't manage to employ everyone does not by any means imply that everyone can't be employed usefully or can't be paid decently.

It is doubly ironic since work is by now a very artificial concept. We are far past the state of needing everyone's hands on the plow to grow our food and outfit our caves. The modern economy has endless roles for everyone to make each other's lives better, and one highly significant way to do that is simply to employ those who would like to be employed. Indeed, if we make labor easy to sell, that stabilizes society by reducing the need for crime. Crime would become a luxury instead of a necessity.

But the rich have, through their political party, made abundantly clear where they stand in all this. They would rather keep labor down and ill-paid than build the country's infrastructure. They would rather enforce their ideology of private capitalism over the public good, with a job market exclusively at the beck and call of private employers. They would rather keep inflation low than foster economic growth. They would rather keep and expand their marginal advantages in a declining system than tend to our common institutions and future collective prospects. It is appalling.



Saturday, June 27, 2015

Christians Have All The Morals

Review of "Terrorism and Civilization", by Shadia Drury

Some weeks ago, I discussed, somewhat tongue-in-cheek, how atheists have no morals. This time we turn the other cheek to consider Christian morals in some detail. Shadia Drury published a response to the 9/11 attacks in the form of a frontal assault on religious morality and Christian morality in particular, a book that should have risen to the top of the atheist canon, but didn't, perhaps due to her obscurity, her gender, and the book's high price.

Her case is as relentless as it is remorseful, about the opportunities lost in the West due to the twisted, irrational and inhumane doctrines that originate from the New Testament. At least the Old Testament god was understandable, if a little hot-tempered. The New one is positively terroristic. Nor does Jesus (assuming for the moment that he was a real person and that the scriptures about him are at least partly true) get off in this thorough indictment, for he brings us this new system which features carelessness about this world and its people, thought crime, eschatological selfishness, original sin, and eternal damnation.

Hell is, of course, one focus. Jews didn't have hell, really. That was Jesus's invention, though obviously stolen from the Zoroastrians. His doctrine is that mere belief in him is the main thing.. everything else is secondary, in any practical, moral, and eschatological sense. However, many are called and few are chosen, which is to say that even if you fulfill his need for faith, you could still end up in hell, which is painted in the most fiery colors, as endless and eternal suffering. This is pure psychological terror, obviously, and was stunningly effective against the other religions of the day, which were all more reasonable.

Jesus also introduced thought crime, since belief in him was the main criterion of salvation. He also deemed immoral thoughts as bad as immoral actions. Adultery in one's mind as bad as in the flesh, etc. So where was one to go for relief from such a regime? There is nowhere to hide. Many serious believers have been agonized by this, for example John Bunyan in Pilgrim's Progress, a despairing work full of terrors and sin.

The reason for the horrors of Bunyan's epic travelogue is that humans, in Jesus's system, are fallen and evil. They are sinners who have no right to salvation, who are saved (if at all) by grace granted in return for slavish faith. Drury makes the acute observation that Freud, that ostensible iconoclast and atheist, follows the Christian system in virtually every psychological particular, from the innate sinfulness of man (the id), and the need for terror (repression) to keep the inner beast under control, to the therapeutic salve of confession (the couch, in place of the booth).

Nietzsche too comes in for a drubbing, in an even more profound way. The Christian morality is one of blind obedience, of faith in the unbelievable, and terrorism inner and outer. All for our own good, naturally. If one fails to reframe the basic parameters of this model of humanity, then revolt against this edifice of lies might take an ugly turn, to immorality and perverse delight in reversing every dictum of the reigning moral order, so as to return to one's "true" nature, which though sinful, is at least honest and alive. Thus the bathos about Dionysus and the amoral Übermensch.

This accepts a frame that is far from true, however. Humans are many bad things, but they are not fundamentally bad beings. We are basically good beings with complex and often conflicting needs and ideals. We are pro-social. We love and seek love in return. We are fiercely moral. Drury points out that it is our civilization and our highest ideals that are what is in some ways most dangerous about us- our ability to organize huge groups, spout ideological rhetoric about utopian ideals, not to mention creating modern technology, for warfare and totalitarianism on unimaginable scales. It is the beast within that required nurturing, and our better natures that required the most advanced instruments of repression, when one thinks of the rankest horrors of the twentieth century.

So it is time to relax. The work of civilizing ourselves remains great, and constant cultivation is certainly desirable. But it does not require the terrors of religion, nor theories about how damnable we are. Nor should we leap to the other, Rousseauian end of the spectrum, where everything natural, native, and naive is all that is good. As the religiously and ideologically burned-out countries of Europe are showing, there is peace, prosperity, happiness, and not least, highly sensitive morality, in a post-Christian, moderate, and humane culture.


  • Kansas is just fine ... steal from the poor, give to the rich is a winning, sustainable strategy, especially considering how damnable we are.
  • The Devil and Mr. Scalia. At least one justice is a fossilized relic, and has either lost his marbles, or enjoys trolling interviewers.
  • Over 100 jihadist training camps, thousands of trainees ... this is not a fringe phenomenon.
  • California's sclerotic housing mess. Incumbent owners always win when new housing / zoning is killed. Especially with prop 13.
  • Grexit is coming.
  • Whom we feel for... white police edition.
  • Another 1%. Or is it the same one?
  • Enriching the 1% makes everyone poorer.