Saturday, June 11, 2011

The not-so-great society

Do we even care about unemployment?

Suppose we enter a world of increasing efficiency, where what once required labor is done by robots, or done overseas, out of sight and mind. Anti-trust concerns have continued to wither away, so the US might have only a handful of corporations that bring us all we need: MegaAgCorp, MegaRoboCorp, MegaMediaCorp, MegaCareCorp, and MegaBankCorp. Indeed, efficiencies are so high that each of these corporations has one CEO and just a few programmers tending the machines. Everyone else in the country can do as they please ... they are not really needed. Because of the excess of trained programmers, the CEO hardly has to pay the programmers anything, so he gets all the profits, shared to some degree with the CEO of MegaBankCorp, which is a major shareholder.

This is entirely acceptable and plausible in the capitalist model of laissez-faire, given the technological premises. If they wished to, the CEOs in this model could spend furiously to keep the country's other citizens supplied with the funds to buy food and goods. They might be prodigious philanthropists, supporting tens of millions of people each with handouts, arts, circuses, and make-work. An entire trickle-down economy could be modeled in this way, resembling in some ways the extremely concentrated wealth conditions of imperial Roman antiquity.

On the other hand, the CEOs might pile up their profits as money- or even as gold if they were infected with Austrian economics. The rest of the population could then go to hell, so to speak. Unfortunately that system wouldn't get very far because with no spending, there is no income, whether in the form of gold or other money. This economy, while perhaps a model of Ayn Rand go-Galt-ism, wouldn't even work on its own terms, let alone larger moral terms. The CEOs would quickly cease to get income, along with everyone else.

It is a problem we are increasingly facing as we live in a new economic landscape with new types of shortages and excesses. For the last two centuries, new machines and cheap energy challenged us to find ever more complex uses for labor. Indeed, labor virtually ceased being labor at all, and turned into thinking. Now with the advent of computers, thinking is getting increasingly displaced as well, and we may end up doing little more than entertaining each other.

It would be a fine pass to come to, but only if the essential supports coming from the concentrated, automated parts of the economy are distributed widely. The idea that everyone should do something for others as far as they are able is certainly important and virtuous. But who evaluates who is able, and what is the worth of their work? If all of this is judged by the Mega CEOs who are the vaunted "producers", the culture is impoverished, and if taken to its economic extreme, such policy becomes rapidly fatal to any semblance of an economy or society.

I think the lesson should be obvious. The productive capacity of our hyper-developed economic system is largely the patrimony of past inventors, researchers, innovators, educators, and laborers. (Matrimony, if one wants a more feminist-friendly spin!) I don't even mention its more general dependence on cultural & natural resources. The managers and capitalists of the means of production are important cogs in the machine's current instantiation and productivity, but are also custodians on behalf of a much larger society of stakeholders. They may deserve a larger than average share, but they do not deserve the whole pie, no matter what market forces or cronyism may say to the contrary.

The idea that workers who are no longer needed in some corner of this vast enterprise can be simply "voted off the island" and sent into jobless penury seems callous to say the least. When amplified to the 10 to 20% levels we see today in the under- and un-employment picture, it amounts essentially to society-wide masochism. Not only are individuals and families reduced to destitution, for which food stamps are not a reasonable and dignified answer, but the entire system is, as Keynes pointed out, made poorer by the waste of so much labor.

Solution 1
I think there are four paradigmatic solutions that the leading ideologies put forward for such a condition. In the idealistic Republican Horatio Alger story, the unemployed work their fingers off inventing new products, services, and business models which so melt the hearts of reluctant bankers that new lending happens, new businesses arise, and more spending occurs in the economy generally. This investment both brings forth new money (via lending) and also brings money out of the savings of the rich as investment and consumption, thereby redistributing income downwards and keeping the economic cycle turning.

Solution 2
A more realistic, hard-headed version of the Republican approach would be that the unemployed remain invisible to the larger economy, and good riddance. Perhaps they subsist on alms from private charities, redistributing small amounts of money downwards on a sporadic basis- money that is rapidly re-collected to the higher levels by the usual mechanisms of private enterprise: payday loans, tobacco and alcohol addiction, and other advertised necessities. Perhaps the unemployed start their own gardens, bartering goods with fellow outcasts and starting an underground economy that remains invisible to the top end of town. They may even develop alternative currencies and markets. Back in the erstwhile conventional economy, contraction occurs and labor becomes cheaper, but as long as the remaining money concentrates upward, all is well.

Solution 3
On the Democratic side, there are two basic approaches, both slightly more socially responsible. The classic counter-cyclical balancing approach is to redistribute public money (from taxes or from de novo money creation) on a more systematic basis than alms, paying unemployment insurance, health insurance, income support of other sorts, and tax cuts weighted to the middle and lower classes. These are designed to raise aggregate demand, raising economic activity and employment in the private economy back to self-sustaining levels. A very simple relationship, really, which is proven Keynesianism.

Solution 4
Last is the public works approach: direct employment of the unemployed, in public works the country needs so desperately. Our roads are recognized to be of third world status. Our bridges are falling down. Our energy system is antiquated. Our seniors need aid and assistance. Our broadband is sub-par. There is plenty of work to do, and it doesn't take a rocket scientist (also government supported!) to see that unemployment + work that needs doing = solution. The many public works of the Depression, such as Hoover Dam, are still paying dividends today. In the wake of enormous money contraction / credit destruction in the private banking collapse, we have plenty of scope for the government to create money needed for such programs. It doesn't all have to go to the banks through various rescue packages, pumped up reserves, etc!

The stakes could not be more serious, both for individuals being crushed by the current downturn, and for our general prosperity and well-being. The simple fact is that we are not "broke". We may be intellectually, politically, and compassionately broke, but that is a different issue!


  • Skidelsky thinks about it.
  • Paul Solman thinks about it.
  • Executives rake in billions.
  • Tech and the concentration of useful work.
  • Bill Black agrees that Goldman Sachs was doing god's work.
  • Krugman on debt and interest payments ... not a big deal.
  • Solar capacity is growing and getting cheaper.
  • Planet wrecking heads to new heights.
  • Black hats and white hats in the cyberworld.
  • On corruption in Afghanistan.
  • The State Department bunker in Iraq. What on earth are we thinking?
  • Our new Senate: nothing gets done.
  • And the civil war, still going.
  • Be good to your dog.
  • Bill Mitchell quote of the week: A graph of unemployment duration, which is indefensible in a civilized country, putatively the richest and smartest on Earth.

Saturday, June 4, 2011

Sustainable energy when?

Notice anything weird about the weather? It is high time to reform our energy system.

Climate change is bringing us raging tornadoes, floods, wildfires, droughts, famine, and probably an active hurricane season, not to mention untold harm to the biosphere for millennia to come, especially via permanent extinctions. Putting aside the political and ideological battles, what do we need to address it? We have the technology. What we need is the economic and political will to use it. Truthfully, the only thing we really need is a price on fossil carbon.

Right now, a few cars are being run from electricity, and various carbon-free options exist for generating electricity, including nuclear, hydro, wind, and solar. The elephant in the room is cheap fossil fuels- coal and gas for electricity, and oil for transportation. If we avert our eyes from their various environmental costs, as is the wont of mainstream economics, they are very cheap, and as long as they remain cheap, carbon-free energy will not be economically viable.

They may not be cheap forever- oil is already hitting global peak production and higher prices. But coal and gas seem less supply-limited, with fracking all the rage. Coal is particularly noxious in this regard- incredibly dirty, and evidently endlessly plentiful, in the US, India, and China. Some existing regulations on coal pollution raise the effective price of coal-fired electricity, but not enough to make carbon-free sources economically viable, or as the aim should be, economically superior.

Prices of wind and solar energy have been trending downwards, however, so the state of affairs seems very close to tipping. Unfortunately, good information is very hard to come by, since each source pushes its story with various related costs put in or left out. I attempt to quote final electricity prices from various sources, in rough terms:
Source               ¢ per kWh
coal                 5 to 10
gas                  5 to 20
wind                 5 to 10
nuclear              10 to 15
geothermal           5 to 10
solar plant          12 to 20
residential solar    20 to 30

So we are certainly within striking distance of economic parity for several forms of non-fossil energy production. Adding a carbon tax of $0.10 per kWh, summed over annual electricity production of 3,101 TWh, gives a cost of $310 billion yearly. Is this a lot? Not in a $14 trillion economy, especially when the entire amount stays in the system. It can be used to displace other, less efficient taxes, or pay off the debt, give back credits on income taxes, build parks, employ the unemployed, give more money to bankers, or whatever else we would like to do with it.

Adding in oil consumption with a comparable tax of roughly $1 per gallon, over 7.3 billion barrels consumed per year nets another $300 billion- another significant increment to all those who are concerned about the federal debt!
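Since the revenue claims above are just multiplication, they are easy to check. A quick sketch, using only the round numbers quoted above plus the standard 42 gallons-per-barrel conversion:

```python
# Back-of-the-envelope check of the carbon tax revenue figures above.
# All inputs are the article's own round numbers.

ELECTRICITY_TWH = 3101        # annual US electricity production, TWh
TAX_PER_KWH = 0.10            # proposed carbon tax, $ per kWh

electricity_revenue = ELECTRICITY_TWH * 1e9 * TAX_PER_KWH   # 1 TWh = 1e9 kWh
print(f"electricity tax: ${electricity_revenue / 1e9:.0f} billion")

OIL_BARRELS = 7.3e9           # annual US oil consumption, barrels
GALLONS_PER_BARREL = 42       # standard conversion
TAX_PER_GALLON = 1.00         # roughly comparable tax, $ per gallon

oil_revenue = OIL_BARRELS * GALLONS_PER_BARREL * TAX_PER_GALLON
print(f"oil tax: ${oil_revenue / 1e9:.0f} billion")
```

The electricity figure comes out to $310 billion on the nose, and the oil figure to about $307 billion, which is the "another $300 billion" quoted above.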

The point of all this isn't, of course, to make money for the federal government, but to put a proper price on all the harms flowing from our use of fossil fuels- which extend to foreign policy, our endless support of enemies like Iran, Venezuela, and Saudi Arabia (woops they are a friend ... a friend!), destruction of landscapes through mining, horrors of ash disposal, not to mention the emissions. The new normal should be concern for future generations and the environment, not for the easy guzzling of today.

A couple more issues come to mind- the role of nuclear power, and the intermittency of solar and wind power. Fukushima was an honest-to-goodness disaster, and will incidentally increase Japan's fossil energy consumption for a long time to come. But it was also a very old design. Future nuclear plants will have safer designs, benefitting from experience, including that at Fukushima. And there are also very interesting reprocessing schemes that could eventually make the nuclear fuel cycle far more benign and manageable than it is today. So nuclear shouldn't be counted out. But like fossil fuels, its costs, including enormous design margins, waste costs, and occasional catastrophic (or at least highly dramatic and disruptive, if not terribly lethal) events, need to be factored in.

Solar and wind power are inherently intermittent power sources, so current policy is reluctant to make them more than 10-20% of the mix on any grid. The solution is energy storage, in the form of water reservoirs, flywheels, compressed gas, or other mechanisms. Such mechanisms will become more efficient with a sufficient market, another important goal of carbon taxes. The situation is reminiscent of the key problem with the electric car- its battery. Indeed, these problems may connect through smart electricity grids that use the fleet of connected cars to stabilize and even out loads on the grid itself. The difficulty of storing energy at both small and large scales certainly highlights the amazing convenience of concentrated, reduced, fossil carbon.

Do electric utilities even care about fuel costs? Aren't they regulated monopolies that pass on all their costs to the consumer, whatever they are? Haven't they been given free passes to charge customers for the enormous and unforeseen costs of nuclear energy? Isn't direct regulation via mandates and rules the better path? I can't claim any expertise in this complicated area. California has accomplished a great deal with enlightened regulation of its electricity providers, keeping its electricity consumption far lower than other states. Nevertheless, all stakeholders need stronger incentives towards sustainable energy, from the householder and driver, up to the power generator, whether well-regulated or not. Simplicity alone argues for a blanket fee on fossil carbon that automatically reaches all of its uses.

Should we wait for China to act first, or agree to act in concert? Obviously, this is the most transparent stalling tactic. Peak oil is coming anyhow. The US has contributed the most to global warming to date, and despite falling behind China in the polluting race, has the greatest moral responsibility to act. The best way to pressure other countries to act is to act ourselves, rather than holding everyone hostage in a game of mutually assured environmental destruction.

I know it seems pollyannish to repeat this theme of carbon taxes in the current political environment of politicians racing to the bottom of demagogic "principles" of greed and corporate subservience, but someday, we will get our heads out of the sand and take responsibility for the future.

  • Gregor discusses overall energy usage in the US.
  • The Saudis want the addiction to go on as long as possible.
  • Putative centrist Michael Lind says ... no worries- let's keep smokin' the dope.
  • A commenter on the right says we have plenty of energy, no environmental worries, and "... we are in the midst of a Cold Civil War in which each election cycle offers another critical battle." That, at any rate, is true enough.
  • Economic benefits to California from green energy (pdf).
  • Where have all the fish gone?
  • Are we facing a domestic religious war?
  • What happened to rule of law?
  • There's someone in my head, but it's not me.
  • Working for free... has it come to this? Is labor completely neutered?
  • Krugman- apparently facts have a liberal bias. But facts never stopped anyone...
  • Economics quote of the week, from Paul Krugman via Bill Mitchell:
"So someone needs to say the obvious: inventing reasons not to put the unemployed back to work is neither wise nor responsible. It is, instead, a grotesque abdication of responsibility."
"... the IMF was blind to the developing crises. It even praised nations like Ireland during the run up to the crisis, missing the largest bubble (relative to GDP) of any nation, an epidemic of banking control fraud, and the destruction of any pretense to effective Irish banking regulation."

Phones of doom?

Bonus post on cell phones ... a pet peeve.

I don't own a cell phone. Nevertheless, as a scientist, the discussion of cell phone dangers intrigues me to no end. The topic was brought up breathlessly by some neighbors a few years ago, with anecdotes about a rash of coworkers who had come down with brain tumors. I replied that the physics simply didn't merit any concern at all. Now the WHO has flagged cell phones as "possible" carcinogens, putting them in the same class as virtually every other substance on earth ... it is not a very meaningful designation, really.

The radiation we are talking about here is a thousand-fold less powerful, per photon, than visible light. And while UV light beyond the upper end of the visible range can damage our skin, break chemical bonds, and cause cancer, the much less powerful photons of radio waves can't do anything of the sort. At most, they might induce a little bit of jiggling of our molecules- some extra heat beyond that naturally flowing through our veins. It is the high-energy ionizing radiation that we need to worry about- the kind we get from CAT scans, mammograms, radon, living & flying at high altitudes, and from breathing in the exhaust of coal plants, among many other things.

For me, it comes down to data, and these graphs say it all:

First, the adoption of cell phones.

Second, the incidence of brain tumors (data from Minnesota).

You can see that so far into the cell phone epidemic, there has been no correlated cancer epidemic of the brain (or anything else). One might claim that it could take decades for such cancers to develop. In that case, the anecdotal evidence of cancer clusters is contradictory and worthless. Even if the average latency is long, a serious cancer risk would cause early cases as well, since some part of the population is already older and predisposed to be pushed over the edge by this new carcinogenic insult. Such cases would already be showing up at some detectable rate.

What this is really about is magical thinking, as people wonder at "waves" going through them, feel instinctively violated, and fall prey to archetypal fears. This extends to researchers as well, who routinely, especially in the social and medical sciences, get the results they expect from studies which, when replicated, show lower and lower effect sizes with each replication. Our unconscious exerts strong effects on everything we do, and it is the premier accomplishment of the scientific method, when we want accurate data, to find ways to cordon off reality and the hypothesis at issue from all the other biases we can subtly bring to bear on such a question. Yet this is easier said than done.

So don't worry. And while science keeps on going and may yet find that cell phones pose some measurable risk, the epidemiology already tells us that the risk is certain to be vanishingly small- much less than the chance of dying from driving while using a cell phone.

Saturday, May 28, 2011

Neural waves of brain

The brain's waves drive computation, sort of, in a 5 million core, 9 Hz computer.

Computer manufacturers have worked in recent years to wean us off the speed metric for their chips and systems. No longer do they scream out GHz values, but use chip brands like atom, core duo, and quad core, or just give up altogether and sell on other features. They don't really have much to crow about, since chip speed increases have slowed with the increasing difficulty of cramming more elements and heat into ever smaller areas. The current state of the art is about 3 GHz, (far below predictions from 2001), on four cores in one computer, meaning that computations are spread over four different processors, each of which runs one computation cycle every 0.33 nanoseconds.

The division of CPUs into different cores hasn't been a matter of choice, and it hasn't been well-supported by software, most of which continues to be conceived and written in linear fashion, with the top-level computer system doling out whole programs to the different processors, now that we typically have several things going on at once on our computers. Each program sends its instructions in linear order through one processor/core, in soda-straw fashion. Ever-higher clock speeds, allowing more rapid progress through the straw, still remain critical for getting more work done.

Our brains take a rather different approach to cores, clock speeds, and parallel processing, however. They operate at variable clock speeds between 5 and 500 Hertz. No Giga here, or Mega or even Kilo. Brain waves, whose relationship to computation remains somewhat mysterious, are very slow, ranging from the delta (sleep) waves of 0-4 Hz through theta, alpha, beta, and gamma waves at 30-100+ Hz which are energetically most costly and may correlate with attention / consciousness.

On the other hand, the brain has about 1e15 synapses, making it analogous to five million contemporary 200 million transistor chip "cores". Needless to say, the brain takes a massively parallel approach to computation. Signals run through millions of parallel nerve fibers from, say, the eye, (1.2 million in each optic nerve), through massive brain regions where each signal traverses only perhaps ten to twenty nerves in any serial path, while branching out in millions of directions as the data is sliced, diced, and re-assembled into vision. If you are interested in visual pathways, I would recommend Christof Koch's Quest for Consciousness, whose treatment of visual pathways is better than its treatment of other topics.
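The comparison above is simple ratio arithmetic, sketched here with the same round numbers used in the text (the synapse and transistor counts are order-of-magnitude estimates, not measurements):

```python
# The brain-as-parallel-computer comparison, in round numbers.

SYNAPSES = 1e15                  # rough synapse count in a human brain
TRANSISTORS_PER_CHIP = 200e6     # a contemporary (2011) CPU

chip_equivalents = SYNAPSES / TRANSISTORS_PER_CHIP
print(f"{chip_equivalents:,.0f} chip 'cores'")         # 5,000,000

CHIP_HZ = 3e9                    # ~3 GHz CPU clock
BRAIN_HZ = 10                    # a brisk theta/alpha-range brain rhythm

print(f"clock speed ratio: {CHIP_HZ / BRAIN_HZ:.0e}")  # 3e+08
```

The punchline: the brain trades a clock roughly eight orders of magnitude slower for parallelism roughly six orders of magnitude wider.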

Unlike transistors, neurons are intrinsically rhythmic to various degrees due to their ion channel complements that govern firing and refractory/recovery times. So external "clocking" is not always needed to make them run, though the present articles deal with one such case. Neurons can spontaneously generate synchrony in large numbers due to their intrinsic rhythmicity.

Nor are neurons passive input-output integrators of whatever hits their dendrites, as early theories had them. Instead, they spontaneously generate cycles and noise, which enhances their sensitivity to external signals, and their ability to act collectively. They are also subject to many other influences like hormones and local non-neural glial cells. A great deal of integration happens at the synapse and regional multi-synapse levels, long before the cell body or axon is activated. This is why the synapse count is a better analog to transistor counts on chips than the neuron count. If you are interested in the topics of noise and rhythmicity, I would recommend the outstanding and advanced book by Gyorgy Buzsaki, Rhythms of the Brain. Without buying a book, you can read Buzsaki's take on consciousness.

Two recent articles (Brandon et al., Koenig et al.) provide a small advance in this field of figuring out how brain rhythms connect with computation. Two groups seem to have had the same idea and did very similar experiments to show that a specific type of spatial computation in a brain area called the medial entorhinal cortex (mEC) near the hippocampus depends on theta rhythm clocking from a loosely connected area called the medial septum (MS). (In-depth essay on alcohol, blackouts, memory formation, the medial septum, and hippocampus, with a helpful anatomical drawing).

Damage to the MS (situated just below the corpus callosum that connects the two brain hemispheres) was known to have a variety of effects on functions not located in the MS, but in the hippocampus and mEC, like loss of spatial memory, slowed learning of simple aversive associations, and altered patterns of food and water intake.

The hippocampus and allied areas like the mEC are among the best-investigated areas of the brain, along with the visual system. They mediate most short-term memory, especially spatial memory (i.e. rats running in mazes). The spatial system as understood so far has several types of cells:

Head direction cells, which know which way the head is pointed (some of them fire when the head points at one angle, others fire at other angles).

Grid cells, which are sensitive to an abstract grid in space covering the ambient environment. Some of these cells fire when the rat is on one of the grid boundaries. So we literally have a latitude/longitude-style map in our heads, which may be why map-making comes so naturally to humans.

Border cells, which fire when the rat is close to a wall.

Place cells, which respond to specific locations in the ambient space- not periodically like grid cells, but typically to one place only.

Spatial view cells, which fire when the rat is looking at a particular location, rather than when it is in that location. They also respond, as do the other cells above, when a location is being recalled rather than experienced.

Clearly, once these cells all network together, a rather detailed self-orientation system is possible, based on high-level input from various senses (vestibular, whiskers, vision, touch). The role of rhythm is complicated in this system. For instance, the phase relation of place cell firing versus the underlying theta rhythm, (leading or following it, in a sort of syncopation), indicates closely where the animal is within the place cell's region as movement occurs. Upon entry, firing begins at the peak of the theta wave, but then precesses to the trough of the theta wave as the animal reaches the exit. Combined over many adjacent and overlapping place fields, this could conceptually provide very high precision to the animal's sense of position.
One rat's repeated tracks in a closed maze, mapped versus firing patterns of several of its place cells, each given a different color.
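To make the phase-precession idea concrete, here is a toy sketch (my own illustration, not a model from the papers discussed): spike phase slides from the theta peak at field entry to the trough at field exit, so reading out phase localizes the animal within the field. Real precession is messier and not strictly linear.

```python
import math

def spike_phase(position, field_start, field_end):
    """Toy phase-precession model: map position within a place field
    to theta firing phase. 0 rad = theta peak (field entry),
    pi rad = theta trough (field exit). Linear for illustration only."""
    frac = (position - field_start) / (field_end - field_start)
    frac = min(max(frac, 0.0), 1.0)        # clamp to within the field
    return frac * math.pi

# A rat crossing a hypothetical 40 cm place field:
for x_cm in (0, 10, 20, 30, 40):
    print(f"{x_cm:2d} cm -> phase {spike_phase(x_cm, 0, 40):.2f} rad")
```

Overlapping fields with staggered boundaries would then give the downstream reader several such phase estimates at once, which is how the high positional precision mentioned above could conceptually arise.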

We are eavesdropping here on the unconscious processes of an animal, which it could not itself really articulate even if it wished and had language to do so. The grid and place fields are not conscious at all, but enormously intricate mechanisms that underlie implicit mapping. The animal has a "sense" of its position, (projecting a bit from our own experience), which is critical to many of its further decisions, but the details don't necessarily reach consciousness.

The current papers deal not with place cells, which still fire in a place-specific way without the theta rhythm, but with grid cells, whose "gridness" appears to depend strongly on the theta rhythm. The real-life fields of rat grid cells have a honeycomb-like hexagonal shape with diameters ranging from 40 to 90 cm, ordered in systematic fashion from top to bottom within the mEC anatomy. The theta rhythm frequency they respond to also varies along the same axis, from 10 to 4 Hz. These values stretch and vary with the environment the animal finds itself in.

Field size of grid cells, plotted against anatomical depth in the mEC.

The current papers ask a simple question: do the grid cells of the mEC depend on the theta rhythm supplied from the MS, as has long been suspected from work with mEC lesions, or do they work independently and generate their own rhythm(s)?

This was investigated by the expedient of injecting anaesthetics into the MS to temporarily stop its theta wave generation, and then polling electrodes stuck into the mEC for their grid firing characteristics as the rats were freely moving around. The grid cells still fired, but lost their spatial coherence, firing without regard to where the rat was or was going physically (see bottom trajectory maps). Spatial mapping was lost when the clock-like rhythm was lost.

One experimental sequence. Top is the schematic of what was done. Rate map shows the firing rate of the target grid cells in a sampled 3cm square, with m=mean rate, and p=peak rate. Spatial autocorrelation shows how spatially periodic the rate map data is, and at what interval. Gridness is an abstract metric of how spatially periodic the cells fire. Trajectory shows the rat's physical paths during free behavior, overlaid with the grid cell firing data.

"These data support the hypothesized role of theta rhythm oscillations in the generation of grid cell spatial periodicity or at least a role of MS input. The loss of grid cell spatial periodicity could contribute to the spatial memory impairments caused by lesions or inactivation of the MS."
This is somewhat reminiscent of an artificial computer system, where computation ceases (here it becomes chaotic) when clocking ceases. Brain systems are clearly much more robust, breaking down more gracefully and not being as heavily dependent on clocking of this kind, not to mention being capable of generating most rhythms endogenously. But a similar phenomenon happens more generally, of course, during anesthesia, where the controlled long-range chaos of the gamma oscillation ceases along with attention and consciousness.

It might be worth adding that brain waves have no particular connection with rhythmic sensory inputs like sound waves, some of which come in the same frequency range, at least at the very low end. The transduction of sound through the cochlea into neural impulses encodes them in a much more sophisticated way than simply reproducing their frequency in electrical form, and leads to wonders of computational processing such as perfect pitch, speech interpretation, and echolocation.

Clearly, these are still early days in the effort to know how computation takes place in the brain. There is a highly mysterious bundling of widely varying timing/clocking rhythms with messy anatomy and complex content flowing through. But we also understand a lot- far more with each successive decade of work and with advancing technologies. For a few systems, (vision, position, some forms of emotion), we can track much of the circuitry from sensation to high-level processing, such as the level of face recognition. Consciousness remains unexplained, but scientists are definitely knocking at the door.


"As I’ve often written, we’re in a strange state now where people who actually take textbook economics and simple arithmetic seriously are seen as dangerously radical and irresponsible, while people who believe in invisible bond vigilantes and confidence fairies, who claim to know what the market will want even though there’s no sign of that desire in current asset prices, are viewed as Very Serious."
"... many readers have been writing in asking me about price manipulation in international commodity markets – which is aka how financial markets caused a jump in world starvation and death."

Saturday, May 21, 2011

House of Saud

Review of two books on Saudi Arabia, springboard of Wahhabism, Al Qaeda, and peak oil.

[Note to readers- the blogspot overlord (pbuh) offers several new presentation formats, which you may be interested in, though "sidebar" seems the only remotely appropriate one.]

The Dune trilogy is one of the great science fiction works, with exotic setting, stirring drama, and fascinating ideas. But I hadn't been aware how much, consciously or unconsciously, it drew from reality, in the form of Islamic, and especially Arabian, history. Action centers on a desert planet, which produces a special substance to which the rest of the human-inhabited galaxy is addicted. Planet Arrakis is inhabited by wild tribal nomads, who eventually adopt a savior, and through a religiously zealous jihad (riding on sandworms, indeed) overturn the reigning aristocratic system and become rulers of the known universe, only to (fore)see the jihad run amok once the religious genie is out of the bottle.

With apologies to the Dune franchise ...

Unfortunately, it has been a long time since I read it, so I will beg off making more detailed parallels. But the unifying theme, as usual, is power and its nexus with religion. There is no more socially motivating force than religion. Yet there is also no more idiosyncratic, emotion-laden, and irrational force. What will people die for? Rarely for anything that makes sense, rather typically for an archetypal construct that expresses their deepest feelings and stirs essential meanings, especially if it offers the bonus of eternal life in heaven. Such things as ethnic, religious, and national identity typically fit the bill.

So what could be more appropriate, in this season of royal pageantry, killing of enemy #1, Jasmine revolutions, and our dawning recognition of peak oil, than to delve into the history of the world's last remaining real monarchy & country named after a family, and origin and home of the most uncompromising form of that most volatile religion-  Saudi Arabia?
"I am not Queen Elizabeth!"- King Saud ibn Abdul Aziz al Saud, upon being asked to be a figurehead. He was then deposed by his brothers, in 1964. (Lacey)

My local library stocks two excellent books on Saudi Arabia- The Kingdom, by Robert Lacey (1981), who apparently is royalty-besotted, having just come off a biography of QE2, and The Siege of Mecca, by Yaroslav Trofimov (2007). The latter is particularly good and focuses on the little-remembered yet highly influential takeover of the Kaaba in Mecca by a well-led and well-equipped band of proto-Al Qaeda millenarians in 1979. The former offers a more conventional, sweeping, and mostly sympathetic history of the Sa'ud since the late 1800's. Both tell the essentials of how the Saudi family nurtures and relies on religious fundamentalism for their internal power as well as external influence.

Power in traditional societies tends to be personal rather than institutional. Empires raised by charismatic personalities crumble just as quickly after their deaths. The terrorist landscape of Al Qaeda and the Taliban is an endless scroll of "commanders" with small groups having friendly, but not fully integrated, relations. This is one reason why the death of OBL is more promising than Westerners typically assume.

Arabia is exemplary in this respect, with tribes traditionally competing for power, and men competing for leadership within their families / tribes. Respect for elders is intense in this conservative society, but not to the point of primogeniture. Leaders have to earn their followings. This led, for instance, to substantial difficulties after the prophet Muhammad died, since many tribes that had converted to Islam regarded this as purely personal allegiance to him rather than an irreversible fall down some abstract theological rabbit hole. The ensuing wars led to a major crisis, whose resolution (i.e. reconquering the Arabian peninsula) led the Islamic warriors far afield and towards world domination.

Likewise with the founder of the modern Saudi state, Abdul Aziz ibn Abdul Rahman al Saud. Through a bold stroke of guerrilla warfare, he took over the capital of central Arabia, Riyadh, from the competing tribe of the Rasheeds at the young age of 26 in 1902. His father (Abdul Rahman) was still alive and head of the family, but let Abdul Aziz have all the glory and power he had earned, after which Abdul Aziz systematically turned surrounding tribes with his generosity, personality, and when necessary, force of arms.
"There were two types of desert warfare, Abdul Aziz's grandfather Faisal had told Colonel Pelley in 1865: religious war and political war. Political warfare involved compromise. But 'when the question is one of religion,' the old man had explained, 'we kill everybody.'" - Lacey

His key to gaining the vast area of today's Saudi Arabia was another inheritance from Muhammad, in the form of Islamic fundamentalist warriors. The Sa'ud family had since 1744 been allied with the ultra-conservative Wahhabi movement, whose main aim was to convert Arabs (especially the nomadic bedouin) from various lax semi-Islamic and semi-animistic practices to pure & stringent Islam. The Wahhabis had plundered and massacred as far afield as Karbala, Iraq (in 1802) and Mecca itself. This alliance was revived and extended by Abdul Aziz after learning that a new fever of puritanism had swept some bedouin in the wastes to the north of Riyadh into a pious settled existence, calling themselves the brotherhood (Ikhwan).

The Ikhwan were not good farmers, and however devout, the oasis life wasn't economically successful. Abdul Aziz offered them a sympathetic ear, and pointed them towards Jihad, conveniently directed at various of his enemies in turn (between 1912 and the final battle in 1929, when the Saudis turned and emasculated the Ikhwan). The climax came in 1924 when the Ikhwan captured and massacred the inhabitants of Taif, the gateway to Mecca from the interior. The relatively cosmopolitan coastal residents quickly agreed to accept Abdul Aziz as their new overlord if he would (as he did) spare them a similar fate.

The original Wahhabis had massacred Taif previously in 1802, taken over Mecca, and instituted a Taliban-like rule that was far from forgotten. So the Saudis have been proud, but also quite nervous, stewards of the holy precincts of Mecca and Medina. They were looked down upon by the rest of the Islamic world as country bumpkins and fundies. The Wahhabis had a tendency to kill other Muslims whom they labeled as apostates. Were Shiah going to be welcome in Mecca? The Saudis didn't want to end up like the Taliban in Afghanistan, reviled by most of the Islamic world for their fanatical puritanism. Abdul Aziz kept the Wahhabi clerics, not to mention the Ikhwan, on a short leash.

Meanwhile, in 1913, Abdul Aziz exploited Ottoman weakness and distraction on the eve of World War 1 to take Hofuf, the capital of Eastern Arabia (the Al Hasa). Using the same minimalist guerrilla tactics as in his Riyadh coup, he took the site of Saudi Arabia's current oil riches (and a heavily Shiah-populated area) virtually without a fight.
"It was Westerners who discovered and developed the Kingdom's fabulous treasure chest. Western economic theories and techniques are the basis of the Kingdom's present development plans. Without the ongoing development of the Western economies there would be little market for the commodity on which the Kingdom's good life is based - and almost every detail of that good life depends on imported foreign labour [and technology and goods] for its smooth running: in a Sa'udi hotel the receptionist is Moroccan, the waiters Filipinos, the room attendants Pakistanis, the cleaners Thais, the management Lebanese, European or American- and the Saudi guests feel superior to all of them. Does a duke feel inferior to his tailor because he can not make a pair of trousers? Sa'udis know that God gave them all the wealth and power that they currently enjoy, and they feel neither lucky, nor surprised, nor grateful to anyone except themselves - and God." -Lacey

The Ikhwan went so far as to infiltrate and invade Kuwait in the 1920's. But with the help of the British, Abdul Aziz started boxing them in, and with no more scope for plunder and no farming skills, they became more of a problem than a solution, rebelled, and were put down definitively in 1929.

The outsize personality, wealth, and success of Abdul Aziz bought his family some time after his death in 1953, and his oldest son Saud was installed on the throne. (Of some 43 sons ... and we worry about cloning! How many royal weddings could they stage for world consumption?) But it was only with the accession of Faisal, after extensive family unhappiness and discussion, that the Saudi royal system was more or less institutionalized as a state system.

For instance, Abdul Aziz was what we might call a tea partier. He couldn't conceive of useful government services or a role in general economic development, but just gave money away as it came in, as political patronage and alms. Only under his sons (still in office, in the form of Abdullah) did Saudi Arabia engage in serious public goods development.
"Saudi Arabia has a constitution inspired by God and not drawn up by man ... True socialism is the Arab socialism laid down by the Koran." - Prince Abdullah, in Lacey

As both books describe it, the modern (more or less) Saudi royals are sincerely undemocratic, religious and sympathetic to their Wahhabi ulema (the ruling body of clerics). Yet there is constant tension between these ultra-conservative clerics and the needs of governing a somewhat diverse population (including the downtrodden Shia- see recent protests in Bahrain) along with economic imperatives such as hosting foreigners and introducing such modern contrivances as TV and radio.

The Saudi regime is clearly the most stringently puritan Islamic state on earth (other than the ill-fated Afghan Taliban), not to mention incredibly rich, so the ulema know that they have it relatively good and support the royals without too much grumbling. For example, back in the 50's and 60's, the Saudis welcomed radical Muslim Brotherhood members, including the brother of Sayyid Qutb, after they were suppressed in Egypt by Gamal Nasser, who disparaged Saudi Arabia as a medieval backwater. Muhammad Qutb was even made a professor in Jeddah. But ...



Here we get to the fascinating book about the Siege of Mecca, which took place in 1979. As the royal family was sitting pretty, having used the oil "weapon" to both enhance their prestige across the Islamic world and multiply their income, they were blindsided by, of all things, Islamic fundamentalism. The first expression was the rapid evaporation of the Persian royal family at the hands of the Ayatollah Khomeini. Trofimov portrays the Saudis as horrified by US weakness in the face of this coup and the ensuing hostage crisis. (Lacey sniffs, disparagingly, that the Persian royal family numbered only a few dozen, while the Saudi royal family numbered easily over 4000. Point Saudis!)

Then while the Iran crisis was at a full boil, a band of several hundred Ikhwan-like fundamentalists led by the charismatic Juhayman al Uteybi [name note- "ibn" means "son of..", while "al" is the definite article, here marking the family or tribal name] took over the grand mosque of Mecca and proclaimed the savior of Islam (the Mahdi) to have arrived: Juhayman's friend Abdullah al Qahtani. These revolutionaries came prepared with food, with national guard training, and plenty of weapons & ammo. They set up lethally effective sniper nests in the minarets, and held the entire Saudi army at bay for over a week. They also broadcast their messages through released pilgrims and pamphlets, convincing many outside (through a viral marketing campaign, no less!) that just perhaps, at this new century of the Islamic calendar, (year 1400), the Mahdi had indeed arrived.

Meanwhile, the Saudi government was lying through its teeth- first cutting phone and media lines to prevent any news from getting out, and then, when the US shockingly leaked the story, falsely claiming at three separate stages that the mosque had been reclaimed. Rumors swirled. Iran and the US were both blamed. US embassies all over the Muslim world were attacked (most severely in Pakistan, of course). And back in Riyadh, the royal rulers were haggling with the ulema for a fatwa allowing them to barge into the holy of holies with guns blazing.

And here is the interesting part. The ulema was fundamentally in agreement with Juhayman's radicals. Its head was, in fact, one of Juhayman's professors at the university in Medina. Horrified, yes, that they had desecrated the Kaaba, but sympathetic to the vast majority of Juhayman's manifesto that he had read out to the startled pilgrims on their hajj. The manifesto ranged from the lack of democracy to the lack of jihad by the ruling Saudis, from the pollution of television to the debauchery of Saudi princes, from the worship of mammon to the introduction of women into the workplace ... all the fundamentalist grievances came pouring out.

Horse-trading commenced, and the desperate Saudi princes paid a steep price for theological cooperation in purifying Islam's own shrines: a rollback of social modernization, more Wahhabi influence in the schools, and more money for the ulema to evangelize in and out of the country for its puritanical views ... the very views that had occasioned the crisis in the first place. Incidentally, in a parodic bow to modernity, the Saudi governmental department in charge of religion was at this time called the "Department of Scientific Research and Guidance". So Saudi society sank deeper into the cycle of brainwashing, ignorance, bigotry, jihadism, and extremism, which, through the providence of endless oil money, it keeps exporting assiduously to all corners of the earth. We are now familiar with the double game played by Pakistan versus its neighbors and the West. But that of Saudi Arabia has been more profound, more global, and more damaging.

Trofimov suggests that Al Qaeda was inspired by Juhayman's actions and tracts. Ayman al Zawahiri was certainly a fan. OBL was an impressionable 22 at the time of the takeover, and eventually took up many of Juhayman's issues, especially the presence of infidels in Arabia. Infidels (i.e. the rest of us) were already absolutely barred from Mecca, but the fundamentalists were scandalized by any presence in the country. Especially by a military presence, which was such a sign of Arabian impotence, and which subsequently grew as the Saudis took on a deeper alliance with the US to keep the Persian gulf (and their own necks) free of Iranian and Soviet influence.

Trofimov points out that the fatwa authorizing government military force to clear out the Grand Mosque justified killing Muslims in the mosque (i.e. the rebels) by declaring that, by their actions, the Juhayman group had merited rebranding as infidels. This was quite a theological somersault, since the rebels, whatever else they were and had done wrong, were pious and fundamentalist in the extreme. (We love too much, and all that!) This casual reclassification of Muslim opponents as infidels was to be redeployed by many extremists & terrorists against the very institutions the ulema was protecting- the Saudi royal family and other corrupt or modernizing rulers across the Muslim world.

It brings to mind the mutual excommunication of numerous popes and other theologians in the more dramatic phases of Christian history. This kind of essential dispute can only be resolved in three ways. The community may squelch independent thought at some level and put itself under a unitary and unquestioned authority (the Catholic solution). Or the competing communities, while retaining their individual theologies, adopt a civil, even secular, space for discourse and renounce violence / power as the arbiter of truth (the Protestant solution). Or it is possible that a community reconsiders those questions to which it had given so much thought and energy and realizes that their framework is largely imaginary, turns away from them, and concentrates on those questions that yield, or can in principle yield, to good-faith investigation (the science/atheism solution).

While one wants to pay respect to the high points of the history of Islam and its peoples, they have clearly lost ground (or returned to their martial roots) when it comes to these various mechanisms to "tame" the essential and irreconcilable conflicts of religion. A convenient solution has been to let these differences flower, but deflect their violent energies towards hapless outsiders by the convenient and practical doctrine of jihad. Darwin would have been proud!

So the Saudis sowed the seeds of the whirlwind we are reaping today, allying themselves ever deeper with their Wahhabi clerics and at the same time with those clerics' worst enemies, the US. Luckily, they were at first able to export the combustible mixture to the killing fields of Afghanistan, where a more immediate threat to the umma than TV and women's rights materialized in the form of the Soviet Union. But of course they then also inspired the horrors of the Taliban government, and exported most of the hijackers of 9/11.

The long game for Muslim hearts and minds has come back to focus on the US for the last decade. With the Jasmine revolutions, we may have turned a corner among the relatively cosmopolitan portions of the Muslim world, which reject fundamentalism in favor of liberalism and democracy. Yet the subtext remains power and legitimacy, and religion remains central. As long as the primary allegiance of Muslims is to their totalitarian religion, then legitimacy and power will flow from religion as well, empowering those who claim to speak its most fundamental truths.

From Muhammad himself through to the Wahhabis, the Ikhwan, the Juhayman-ists and Al Qaeda of today, God favors and gives power to those on his side. Success in war directly implies spiritual righteousness. It is hard to overestimate the damage that this instinctive philosophy, expressed most succinctly through the doctrine of jihad, unleashes upon the world. Most religions, including Islam and Christianity, have tried to temper this atavistic instinct with rules of engagement that restrict what brutality one can inflict in its name, but each seeks and adulates power (King of kings, the Family, the Pope, the Crusades, God bless the USA, etc..).

Muslim extremists have labored (with our help) to portray the various US invasions as crusades, which then constitute a direct clash of religions and gods, victory going to the most righteous, and the most righteous justified by their victory, whatever the abhorrent tactics employed. Yet the other side of the coin rarely shines as cogently. Has the last century of weakness and degradation led Muslims to question their religion? Has the strength and victory of Israel reconciled, even converted, its opponents to its theology? Has Islam's inability to grapple with modernity and the consequent new forms of power led to doubt and atheism? For a few, yes. But for most, no. Religious narcissism doesn't let itself become so depressed, since its fundamental purpose is to provide hope and meaning in a confusing world. Self-pitying, other-blaming narratives and conspiracy theories typically fill the gap.

"'It's all part of a great plot, a grand conspiracy,' King Faisal replied with confidence. 'Communism, as I told you, is a Zionist creation designed to fulfil the aims of Zionism. They are only pretending to work against each other.'" - Lacey
"Fahd: 'Our enemy is ... the world Zionism, which is seeking to harm the Saudi Arabian Kingdom and to distort its role in every way possible. ... A media war was in the full sense of the word waged against us ... Psychological rape- this is the right expression.' The Saudi royals  would use precisely the same language to complain about Western reporting on Saudi affairs after September 11, 2001." -Trofimov

After they crushed the Siege of Mecca, the Saudis frantically searched for the body of the purported Mahdi, which had not turned up and was feared to have supernaturally disappeared. Finally, a mutilated half-corpse was identified, allowing the royal family to exult that the whole affair had been fundamentally illegitimate. For as the Mahdi's own mother bluntly said:
"If my son is the Mahdi, he will kill you, if he is not, you will kill him."



What of the future? There are several interacting trends: China is the major rising power that is looking for friends in resource-rich areas of the world, while the US is, in relative terms at least, declining in dominance. Peak oil is here, so all economies will be increasingly constrained by energy scarcity. And jihadism and liberalism will continue to battle for the soul of the Muslim world.

The Saudis, despite their huge reserves of oil, seem to be having difficulty raising production. Their domestic consumption is ever-increasing, making for what looks like a plateau or peak in marketable world oil production.



It goes without saying that they will continue to export oil for decades to come, and will continue to reap the riches of ever-increasing prices.

But what of their political and social system? The gerontocracy of Abdul Aziz's sons is coming to an end, and they will have to transfer power to a new generation of princes. Saudi Arabia was not untouched by the Arab spring: it came instinctively to Mubarak's defense, felt immediately pressed to spread around an extra $36 billion, and sent its army to help its fellow Sunni rulers of Bahrain brutally suppress protests by their Shiah majority.

Over the long term, one can see that Saudi Arabia and China have a convergence of interests. Both run autocratic yet relatively stable regimes that chafe under US domination. China has the money, and Saudi Arabia has the oil. The only missing ingredients are the military power and extensive relationships the US has in the Muslim world. (Though China has cultivated the friendship of Pakistan, which offers little but hatred of India and the US.)

But now, with the Arab spring, we are at a turning point, as the US backs democracy over autocracy (though Obama in his recent speech didn't breathe a word about Saudi Arabia- the royal elephant in the room). The US wants to encourage and be friends with the future democracies of Egypt, Tunisia, Turkey, Libya, Iraq, Afghanistan, and whoever else wants to join the party. Will Pakistan be on that list? Will Syria? Will Iran? Will Saudi Arabia? It is like the dissolution of the Soviet empire, played out in very slow motion, after its quasi-religious ideology, like that of Islamism, expired from direct and painful empirical disproof.

The question is whether and how long autocracies like China, Saudi Arabia, and Cuba can hold out against the Western model of legitimate and liberal bottom-up politics. As long as they can, we will have high drama and ideological tension in world affairs, indeed great danger if China turns jingoistically nationalistic and pursues hegemony over its region and over other resource-rich areas of the world.

The US has played a long and generally consistent game, encouraging country after country from autocracy to democracy, by our example and actions. The list is long, from Japan and Germany to South Korea to the Philippines and Russia. Exceptions are glaring, such as Iran, Chile, and others. Numerous countries hang in the balance. But the liberal democratic model is attractive and durable, indeed essentially irreversible once established. China is truly the outlier, as a uniquely successful authoritarian system- will it undergo a spiritual turn towards Buddhism? Will it turn fascist? Or will its government oh-so gradually transform into a lawful, democratic system, now that the primary task of economic development is well under way and its middle class is growing?

Saudi Arabia has a more conservative culture than China's, even more deferential to authority, tradition, and ideological orthodoxy. I anticipate that their transition to the next generation of princes will go smoothly within the decade. But the tide of liberalizing sentiment and media across the Muslim world is lapping at their door. Even an alliance with China cannot insulate them from their own people's basic desires and the moderation or collapse of their supporting ideology.


Their export of Wahhabism increasingly falls on deaf and resentful ears. The capacity of jihadis to terrorize their enemies and gain ground against the infidel has been nullified in the face of massive and persistent opposition from the West, particularly the US. Their tactics have been repulsive. The Saudis can see the jihadi blowback happening in Pakistan, as carefully tended militants feel worthy of more than just being used in geopolitical games. The export of bigotry and jihad can only go on so long before the market is saturated and foreign attention falls back on its source.

The Saudi government runs extensive theological retraining operations (along with brutal prisons) to mitigate internal dissent/extremism (defined in relative terms!). But at some point, they will surely put two and two together and address the problem at its internal source- the Wahhabi ulema and its ideology. Perhaps the Saudis will invite Richard Dawkins to set them straight on how improbable Allah really is. [That is a joke!]  Extremism won't go away entirely, but as in the days of Abdul Aziz, if it no longer is useful to its sponsors, it tends to wear out its welcome and be defunded and deflated. It is hard to believe that the legitimacy of the Saudi state will not over time become more dependent on the desires of its people than on Wahhabi fundamentalism, with or without the royals at the helm.

... And then a new savior will arise, leading a ragtag but fierce band of bedouin|fremen out of the sandy dunes to cleanse the licentious rot of a modernized and fallen Arabia|Arrakis ...


"This is what Keynes had always claimed: the market system lacked a thermostat and its temperature was likely to oscillate wildly unless controlled by the government."
...
"Rich countries should be making preparations for life beyond capitalism."
"It is clear that the Japanese economy is dual in nature. Their export-oriented manufacturing sector is highly productive because it competes in world markets. Its domestic service sector does not have to 'compete' in this way and can focus on other objectives that are of benefit to the Japanese people.
Like – maintaining high levels of secure employment with concomitant income security.
Like – being nice to each other when transactions are required.
Like – being nice to tourists who bring them income.
The result from the conservative perspective – low productivity and waste."

Saturday, May 14, 2011

Artificial intelligence, the Bayes way

A general review describes progress in modelling general human intelligence.

This will be an unusual review, since I am reviewing a review, which itself is rather "meta" and amorphous. Plus, I am no expert, especially in statistics, so my treatment will be brutally naive and uninformed. The paper is titled "How to Grow a Mind: Statistics, Structure, and Abstraction". Caveats aside (and anchors aweigh!), the issue is deeply interesting for both fundamental and practical reasons: how do we think, and can such thought (such as it is) be replicated by artificial means?

The history of AI is a sorry tale of lofty predictions and low achievement. Practitioners have persistently underestimated the immense complexity of their quarry. The reason is, as usual, deep narcissism and introspective ignorance. We are misled by the magical ease with which we do things that require simple common sense. The same ignorance gave rise to free will, souls, ESP, voices from the gods, and countless other religio-magical notions invoked to account for the wonderful usefulness, convenience and immediacy of what is invisible- our unconscious mental processes.

At least religious thinkers had some respect for the rather awesome phenomenon of the mind. The early AI scientists (and especially the behaviorists) chose instead to ignore it and blithely assume that the computers they happened to have available were capable of matching the handiwork of a billion years of evolution.

This paper describes, in part, a debt of another, theological, kind- to Thomas Bayes, who carried on a double life as a Presbyterian minister in England and as a mathematician member of the Royal Society (those were the days!). Evidently an admirer of Newton, Bayes published only one scientific work in his lifetime: a fuller treatment of Newton's theory of fluxions (the calculus).
"I have long ago thought that the first principles and rules of the method of Fluxions stood in need of more full and distinct explanation and proof, than what they had received either from their first incomparable author, or any of his followers; and therefore was not at all displeased to find the method itself opposed with so much warmth by the ingenious author of the Analyst; ..."

However, after he died, his friend Richard Price found and submitted to the Royal Society the material that today makes Bayes a household name, at least to statisticians, data analysts and modellers the world over. Price wrote:
"In an introduction which he has writ to this Essay, he says, that his design at first in thinking on the subject of it was, to find out a method by which we might judge concerning the probability that an event has to happen, in given circumstances, upon supposition that we know nothing concerning it but that, under the same circumstances, it has happened a certain number of times, and failed a certain other number of times."

And here we come to the connection to artificial intelligence, since our minds, insofar as they are intelligent, can be thought of in the simplest terms as persistent modellers of reality, building up rules of thumb, habits, theories, maps, categorizations, etc. that help us succeed in survival and all the other Darwinian tasks. Bayes's theorem is simple enough:

P(A|B) = P(B|A) · P(A) / P(B)

From the wiki page:
"The key idea is that the probability of an event A given an event B (e.g., the probability that one has breast cancer given that one has tested positive in a mammogram) depends not only on the relationship between events A and B (i.e., the accuracy of mammograms) but also on the marginal probability (or 'simple probability') of occurrence of each event."

So to judge the probability in this case, one uses knowledge of past events, like the probability of breast cancer overall, the probability of positive mammograms overall, and the past conjunction between the two- how often cancer is detected by positive mammograms, to estimate the reverse- whether a positive mammogram indicates cancer.
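This reverse estimation is a three-line calculation. The rates below are invented for illustration (not real clinical figures), but the shape of the arithmetic is exactly Bayes's rule:

```python
# Bayes's rule for the mammogram example; all rates are made-up illustrations.
p_cancer = 0.01             # prior: overall rate of breast cancer
p_pos_given_cancer = 0.90   # test sensitivity: P(positive | cancer)
p_pos_given_healthy = 0.08  # false-positive rate: P(positive | healthy)

# marginal ("simple") probability of a positive mammogram, over both cases
p_pos = (p_pos_given_cancer * p_cancer
         + p_pos_given_healthy * (1 - p_cancer))

# Bayes: P(cancer | positive) = P(positive | cancer) * P(cancer) / P(positive)
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos
print(round(p_cancer_given_pos, 3))  # ≈ 0.102: most positives are false alarms
```

Note how the low prior dominates: even a fairly accurate test yields mostly false alarms when the condition itself is rare.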

The authors provide their own example:
"To illustrate Bayes’s rule in action, suppose we observe John coughing (d), and we consider three hypotheses as explanations: John has h1, a cold; h2, lung disease; or h3, heartburn. Intuitively only h1 seems compelling. Bayes’s rule explains why. The likelihood favors h1 and h2 over h3: only colds and lung disease cause coughing and thus elevate the probability of the data above baseline. The prior, in contrast, favors h1 and h3 over h2: Colds and heartburn are much more common than lung disease. Bayes’s rule weighs hypotheses according to the product of priors and likelihoods and so yields only explanations like h1 that score highly on both terms."
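The authors' example computes directly. The priors and likelihoods below are invented for this sketch (the quoted passage gives no numbers), but they encode its qualitative claims- colds and heartburn are common, lung disease rare, and only the first two cause coughing:

```python
# The coughing example with made-up priors and likelihoods.
priors = {'cold': 0.30, 'lung disease': 0.001, 'heartburn': 0.20}
likelihoods = {'cold': 0.60, 'lung disease': 0.70, 'heartburn': 0.01}  # P(cough | h)

# posterior is proportional to prior * likelihood, then normalized to sum to 1
unnorm = {h: priors[h] * likelihoods[h] for h in priors}
z = sum(unnorm.values())
posterior = {h: p / z for h, p in unnorm.items()}

# only the hypothesis scoring highly on BOTH terms survives
print(max(posterior, key=posterior.get))  # prints "cold"
```

Lung disease has a high likelihood but a tiny prior; heartburn a decent prior but a tiny likelihood; only the cold scores well on both, just as the authors argue.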

So far, it is just common sense, though putting common sense in explicit and mathematical form has important virtues, and indeed is the key problem of AI. The beauty of Bayes's theorem is its flexibility. As new data come in, the constituent probabilities can be adjusted, and the resulting estimates become more accurate. Missing data is typically handled with aplomb, simply allowing wider estimates. Thus Bayes's system is a very natural, flexible system for expressing model probabilities based on messy data.

Language is a classic example, where children rapidly figure out the meanings of words, not from explicit explanations and grammatical diagrams (heaven forbid!), but from very few instances of hearing them used in a clear context. Just think of all those song lyrics that you mistook for years, just because they sounded like the singer wanted ... a bathroom on the right. We work from astonishingly sparse data to conclusions and knowledge that are usually quite good. The scientific method is precisely this (the method of induction), more or less gussied up and conscious, entertaining how various hypotheses might achieve viable probability in light of their relations to known prior probabilities, otherwise known (hopefully) as knowledge.

In their review, the authors add Bayes's method of calculating and updating probabilities to the other important element of intelligence- the database, which they model as a freely ramifying hierarchical tree of knowledge and abstraction. The union of the two themes is something they term hierarchical Bayesian models (HBMs). Trees come naturally to us as frameworks to categorize information, whether it is the species of Linnaeus, a system of stamp collecting, or an organizational chart. We are always grouping things mentally, filing them away in multiple dimensions- as interesting or boring, political, personal, technical, ... the classifications are endless.
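The payoff of adding a higher level can be sketched in a few lines. This toy reduces the hierarchy to its simplest degenerate case- several categories assumed to share one abstract-level rate, with a uniform prior over a grid of candidate values (the structure and numbers are my own illustration, not taken from the paper):

```python
# Toy sketch of hierarchy-aided inference: categories share one abstract rate m.
grid = [i / 100 for i in range(1, 100)]          # candidate values for m
belief = {m: 1 / len(grid) for m in grid}        # uniform prior over the grid

def update(belief, successes, failures):
    # Bayes: P(m | data) is proportional to P(data | m) * P(m)
    post = {m: p * m**successes * (1 - m)**failures
            for m, p in belief.items()}
    z = sum(post.values())
    return {m: p / z for m, p in post.items()}

# sparse observations from two categories, both informing the shared rate
belief = update(belief, successes=8, failures=2)  # category A
belief = update(belief, successes=3, failures=0)  # category B

# a prediction for a brand-new category C "borrows strength" from A and B
pred_c = sum(m * p for m, p in belief.items())
print(round(pred_c, 2))  # ≈ 0.8
```

Sparse data in one branch of the tree sharpens predictions in branches never directly observed- the sense in which abstract knowledge "guides learning and inference from sparse data."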

One instance of this was the ancient memory device of building rooms in one's head, furnishing prodigious recall to trained adepts. For our purposes, the authors concentrate on the property of arbitrary abstraction and hierarchy formation, where such trees can extend from the most abstract distinctions (color/sound, large/small, Protestant/Catholic) to the most granular (8/9, tulip/daffodil), and all can be connected in a flexible tree extending between levels of abstraction.

The authors frame their thoughts, and the field of AI generally, as a quest for three answers:
"1. How does abstract knowledge guide learning and inference from sparse data?
2. What forms does abstract knowledge take, across different domains and tasks?
3. How is abstract knowledge itself acquired?"

We have already seen how the first answer comes about- through iterative updating of probabilistic models following Bayes's theorem. We see a beginning of the second answer in a flexible hierarchical system of categorization that seems to come naturally. The nature and quality of such structures are partly dictated by the wiring established through genetics and development. Facial recognition is an example of an inborn module that classifies with exquisite sensitivity to fine differences. However, the more interesting systems are those that are not inborn / hard-wired, but that allow us to learn through more conscious engagement, as when we learn to classify species, or cars, or sources of alternative energy- whatever interests us at the moment.

Figure from the paper, diagramming hierarchical classification as done by human subjects.

Causality is, naturally, an important form of abstract knowledge, and also takes the form of abstract trees, with time the natural dimension, through which events affect each other in a directed fashion, more or less complex. Probability and induction are concerned with detecting hidden variables and causes within this causal tree, such as forces, physical principles, or deities, that can constitute hypotheses that are then validated probabilistically by evidence in the style of Bayes.
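A toy version of inferring a hidden cause might look like the following: a hidden disease D that causes an observable symptom S. The probabilities are invented for illustration, but the mechanics- enumerating the hidden variable and applying Bayes's rule- are the real thing.

```python
# A toy causal model: a hidden disease D causes an observable symptom S.
# All probabilities here are invented for illustration.
p_disease = 0.01                      # prior: P(D = 1)
p_symptom_given = {1: 0.9, 0: 0.05}   # likelihood: P(S = 1 | D)

def posterior_disease(symptom_observed):
    """P(D = 1 | S), computed by enumerating the hidden variable."""
    def likelihood(d):
        p = p_symptom_given[d]
        return p if symptom_observed else 1 - p
    joint_present = p_disease * likelihood(1)        # P(D=1, S)
    joint_absent = (1 - p_disease) * likelihood(0)   # P(D=0, S)
    return joint_present / (joint_present + joint_absent)

print(posterior_disease(True))   # seeing the symptom raises P(disease)
```

Note that even a reliable symptom leaves the disease improbable when the prior is low- the hidden cause is weighed against everything else that could have produced the same observation.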

A key problem of AI has been a lack of comprehensive databases that provide the putative AI system with the kind of common-sense, all-around knowledge that we have of the world. Such a database allows the proper classification of details using contextual information- that a band means a music group rather than a wedding ring or a criminal conspiracy, for instance. The recent "Watson" game show contestant simulated such knowledge, but actually was just a rapid text mining algorithm, apparently without the kind of organized abstract knowledge that would truly represent intelligence.

The authors characterize human learning as strongly top-down organized, with critical hypothetical abstractions at higher levels coming first, before details can usefully be filled in. They cite Mendeleev's periodic table proposal as an exemplary paradigm hypothesis that then proved itself by "fitting" details at lower levels, thereby raising its own probability as an organizing structure.
"Getting the big picture first- discovering that diseases cause symptoms before pinning down any specific disease-symptom links- and then using that framework to fill in the gaps of specific knowledge is a distinctively human mode of learning. It figures prominently in children's development and scientific progress, but has not previously fit into the landscape of rational or statistical learning models."

Which leads to the last question- how to build up the required highly general database in a way that is continuously alterable, classifies data flexibly in multiple dimensions, and generates hypotheses (including top-level hypotheses and re-framings) in response to missing values and poor probability distributions, as a person would? Here is where the authors wheel in the HBMs and their relatives, the Chinese Restaurant and Indian Buffet processes, all of which are mathematical learning algorithms that allow relevant parameters or organizing principles to develop out of the data, rather than imposing them a priori.
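The Chinese Restaurant process, for the curious, can be sketched in a few lines: each new data point ("customer") joins an existing cluster ("table") with probability proportional to that cluster's size, or founds a new one with probability proportional to a concentration parameter alpha. The number of clusters is thus not imposed a priori- it grows out of the data. (A sketch only; the parameter values are arbitrary.)

```python
import random

def chinese_restaurant(n_customers, alpha, seed=0):
    """Sample a random partition: customer i joins table t with
    probability tables[t] / (i + alpha), or opens a new table with
    probability alpha / (i + alpha)."""
    rng = random.Random(seed)
    tables = []  # occupancy count per table (i.e., cluster sizes)
    for i in range(n_customers):
        weights = tables + [alpha]        # sum(tables) == i, so total = i + alpha
        r = rng.random() * (i + alpha)
        for t, w in enumerate(weights):   # pick a slot in proportion to weight
            r -= w
            if r < 0:
                break
        if t == len(tables):
            tables.append(1)              # new table: a new cluster is born
        else:
            tables[t] += 1                # rich-get-richer: big tables attract
    return tables

print(chinese_restaurant(100, alpha=1.0))
```

The "rich-get-richer" dynamic produces a few large clusters and a long tail of small ones, which is a surprisingly good match for how categories shake out in real data.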
"An automatic Occam's razor embodied in Bayesian inference trades off model complexity and fit to ensure that new structure (in this case a new class of variables) is introduced only when the data truly require it."
...
"Across several case studies of learning abstract knowledge ... it has been found that abstractions in HBMs can be learned remarkably fast from relatively little data compared with what is needed for learning at lower levels. This is because each degree of freedom at a higher level of the HBM influences and pools evidence from many variables at levels below. We call this property of HBMs 'the blessing of abstraction.' It offers a top-down route to the origins of knowledge that contrasts sharply with the two classic approaches: nativism, in which abstract concepts are assumed to be present from birth, and empiricism or associationism, in which abstractions are constructed but only approximately, and slowly in a bottom-up fashion, by layering many experiences on top of each other and filtering their common elements."

Wow- sounds great! Vague as this all admittedly is, (the authors haven't actually accomplished much, only citing some proof-of-principle exercises), it sure seems promising as an improved path towards software that learns in the generalized, unbounded, and high-level way that is needed for true AI. The crucial transition, of course, is when the program starts doing the heavy lifting of learning by asking the questions, rather than having data force-fed into it, as all so-called expert systems and databases have to date.
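The "blessing of abstraction" can be caricatured in a few lines: many sparse low-level observations (here, three draws from each of several bags) pool into a quickly-stable higher-level estimate, which in turn regularizes each sparse low-level one. This is an empirical-Bayes flavor of the hierarchical idea, with invented data and a made-up strength parameter- a sketch, not the authors' actual models.

```python
# Invented data: five bags, three binary draws from each.
bags = [[1, 1, 0], [1, 0, 1], [1, 1, 1], [0, 1, 1], [1, 1, 0]]

# Higher level: the overall tendency, pooled over all 15 observations.
# One parameter, fed by every datum below it- hence learned fast.
pooled = sum(map(sum, bags)) / sum(map(len, bags))   # 11/15

# Lower level: each bag's estimate, with only three observations each,
# regularized by the higher level via pseudo-counts (a learned prior).
strength = 3.0   # hypothetical: how strongly the abstraction constrains each bag
shrunk = [(sum(b) + strength * pooled) / (len(b) + strength) for b in bags]

print(pooled, shrunk)
```

Each bag's estimate is pulled toward the pooled value, so three noisy draws no longer swing it wildly- the abstraction earns its keep by lending evidence downward.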

The next question is whether such systems require emotions. I think they do, if they are to have the motivation to frame questions and solve problems on their own. So deciding how far to take this process is a very tricky problem indeed, though I am hopeful that we can retain control. If I may, indeed, give in all over again to typical AI hubris ... creating true intelligence by such a path, not tethered to biological brains, could lead to a historic inflection point, where practical and philosophical benefits rain down upon us, Google becomes god, and we live happily ever after, plugged into a robot-run world!

An image from the now-defunct magazine, Business 2.0. Credit to Don Dixon, 2006. 


Saturday, May 7, 2011

Free will- solved!

A theory of free will.

One of the perennial chestnuts of philosophy is the problem of free will. Mostly a problem for theists rather than non-theists, it still holds a few mysteries for everyone. Is our natural intuition of freedom true, that we reign as sovereign beings, maybe influenced, but never finally determined, in our choices? Or are those choices entirely determined, as the Marxists, physicists, and Tolstoy tend to think- by history, social conditions, the factors of production, character, genes, etc?

And if free will is an illusion, then what of morals? Can anyone be blamed for their choices? Can moral responsibility and agency exist without free will? This is surely the more interesting question.

The physical basis of life is now well-known, so if one assumes that our minds arise from the activities of our brains and, like everything else, are bound by physical principles, there is no escaping that free will really doesn't exist. Certainly there are theists who still believe in souls, supernaturalism, magical interventions in the evolutionary process, and the like, but without much cause aside from precisely the sort of intuitions that are better examined than taken at face value.

The physical world is causally closed as far as we know, and while that may or may not encompass the origin of itself in the pre-big-bang, it certainly seems to encompass our bodies and brains. So whatever we make of our feelings and agency, there is nothing we do or decide that could, on these physical principles, possibly occur without being the consequence of a train of prior causes & physical events. This means that we don't have what I would call "atomic" free will.

The quantum revolution throws a minor wrench into the situation, because the fundamental uncertainty it finds at small scales means that, however much we know, we can never predict where the full set of physical causes is going to take us. Everything may be caused by prior events, but that doesn't mean everything is determined to a singular fate, as Laplace tried to argue. Some of our prior events are truly random, and thus unknowable in advance. Yet that hardly gives us any more agency- it only leavens the causes that determine our decisions with a bit of comedic randomness.

Daniel Wegner wrote a very nice book about how our minds/brains nevertheless maintain an illusion of free will. For instance, if a person is (falsely) convinced that he did some act, he will typically spin elaborate post-rationalizations to explain its motivation to the interviewer. This is most strikingly true for people with neurological disorders, like split-brain patients where one hand literally does not know what the other is doing. The verbal half of the brain will typically make up stories to rationalize what the disconnected half is doing. One can see similar things going on in the history of religion, where humans compulsively make up stories about literally everything under the sun that is mysterious. Many of these stories have had to be retracted, somewhat painfully at times, or conveniently re-blessed as artistic myths.

More minutely, the work of Libet showed that our actions, and especially our conscious choices about them (like deciding to raise a coffee cup) are always preceded by unconscious trains of neurological activity. The choice is never de novo, but is itself a consequence of prior unconscious activities in the brain, and indeed comes to our consciousness- as a choice- well after it has taken place and set the physical events in motion. So consciousness is not sovereign at a very granular level either, but more of a caboose on the train, learning about things after they happen, more part of a feedback mechanism than of an action mechanism.

So we don't seem to have actual free will. Why do we nevertheless feel that we are in charge when we hoist a glass to drink? At this point we have to ask the Buddhist/Hume question ... what is the self? Isn't it really an unending stream of causes, influences, and effects- our life histories caroming off our genetic and developmental inheritances? The deeper you look, the messier it is, to the point that a discrete "self" is undetectable. And the mess is mostly invisible, since only a tiny part of the mind's contents are conscious, and the far vaster unconscious activity rests on even more inscrutable molecular foundations. We simply don't know what is going on in our own minds, so can hardly be blamed for regarding it as magic, with the convenient (and, as always, narcissistic) assumption that we are master of this house.

Very well- free will is illusory. What consequences does this have for our moral and legal universe? This is where things get more interesting. For theists, aside from the convenience of off-loading the self into a god-like magical soul that, as they postulate, lives forever, the idea of free will also helps account for evil, since with God stipulated as all-good and all-powerful, there has to be someone else to blame: us, our original sin, and our darned free will to screw everything up!

For non-theists, of course, this angle is completely irrelevant. Yet still, the issue of blame reappears in mundane guise. If the self and the choices it makes are not sovereign, but rather inexorably caused by prior events, then how can anyone be blamed for anything? Doesn't morality become an empty joke?

Thankfully, the answer is no- it doesn't. The reason lies in another aspect of our programming, which is that we are not just physically-bounded no-free-will flesh-bots. We are physically-bounded no-free-will flesh-bots that can learn. Learning is the crucial ingredient in a moral universe, rendering us different from inanimate and non-learning beings. Do we blame rocks for falling on our heads? No we don't, unless we regard them as spirit-inhabited. Do we blame rabid dogs for biting us? No- they have lost their reason, and specifically, their ability to be trained (and we would blame their masters, anyhow). Do we blame insane people for murder? No, they get an insanity defense, because, crucially, they either don't know better, or are incapable of doing better. They are locked up securely rather than punished, because punishment wouldn't do any good.

And what good was punishment supposed to do anyhow? Ideally, (and I am not speaking of our current appalling penal system), punishment teaches the criminal a moral attitude, especially empathy, hopefully inducing deep personal change. Additionally, it has the exemplary role of teaching others the fate that immoral action leads to, as judged by the social system they share. It is a training exercise, which is exactly the sort of prior influence that comes back around to (hopefully) affect our future actions which, as we saw above, do not result from free will.

The (stricter) Muslims cut a hand from the thief, which has all these salutary effects. Aside from significantly incapacitating the person from future thievery, it reminds him as few other punishments could of the social rules, and reminds all others who see it as well, influencing their future actions in turn. Unfortunately, its harshness also seriously impairs the society's claim to greater moral ideals and empathy, counteracting its training purpose.

In so many ways- eating, gambling, drugs, advertising- we know very well we don't have free will. There is hardly a richer literature than that of the tragic battle against temptation and fate. And throughout history, (including that of religion), we labor on, seeking social power and influence to defend others from temptation and bend others to our ideas of human betterment, descended as they are from, to paraphrase Keynes, some defunct philosopher. "Free" has nothing to do with it, but learning and mutual social influence certainly do.

We live in a matrix of influences and prior events. We are built as social beings to give and receive these influences, wired for empathy, for conversation, and inspiration. We cultivate each other and ourselves in a constant effort to attain our overall goal, on which our unconscious and conscious minds are in full agreement- increased happiness. The moral landscape is just another word for that mutual cultivation, on which everything depends.


  • A blogging friend writes about punishment and its moral role, coming to a similar point from a very different direction.
  • Predator drone court-martialed for killing civilians.
  • Salman Rushdie chimes in on Pakistan.
  • Slate breaks the existence of SEAL cats.
  • The case for negotiation in Afghanistan, such as it is.
  • A book for those interested in Christian-ized histories of early America.
  • A brief guide to dark matter.
  • Software toys with human perception and locomotion.
  • More in the annals of class war, and then some more.
  • A dark age of economic dementia.
  • Bill Mitchell quote of the week (from Charles Ferguson), on the credibility of economist Martin Feldstein:
"Martin Feldstein, a Harvard professor, a major architect of deregulation in the Reagan administration, president for 30 years of the National Bureau of Economic Research, and for 20 years on the boards of directors of both AIG, which paid him more than $6-million, and AIG Financial Products, whose derivatives deals destroyed the company. Feldstein has written several hundred papers, on many subjects; none of them address the dangers of unregulated financial derivatives or financial-industry compensation."