How far away is the singularity? That is the point when machine intelligence exceeds human intelligence, after which, it is thought, this world will no longer be ours to rule. Nick Bostrom, a philosopher at Oxford, doesn't know when this will be, but he is fearful of its consequences, since, if we get it wrong, humanity's fate may not be a happy one.
The book starts strongly, with some well-argued and well-written chapters about the role of intelligence in humanity's evolution, and about the competitive landscape of technology today that is setting the stage for this momentous transition. But thereafter, the armchair philosopher takes over, with tedious chapters of hairsplitting and speculation about how fast or slow the transition might be, how collaborative it might be among research groups, and especially how we could out-think these creations of ours in advance, to make sure they will be well-disposed to us, aka "the control problem".
Despite the glowing blurbs from Bill Gates and others on the jacket, I think there are fundamental flaws with this whole approach and analysis. One flaw is a failure to distinguish between intelligence and power. Our president is a moron. That should tell us something about this relationship. It is not terribly close: the people generally acknowledged as the smartest in history have rarely been the most powerful.

This reflects a deeper flaw, which is, as usual, a failure to take evolution and human nature seriously. The "singularity" is supposed to furnish something out of science fiction: a general intelligence superior to human intelligence. But Bostrom and others seem to assume that this means a fully formed, human-like agent, and those are two utterly different things. Human intelligence takes many forms, and human nature is composed of many more things than intelligence. Evolution has strained for billions of years to shape our motivations in profitable ways, so that we follow others when necessary, lead them when possible, define our groups in conventional ways that lead to warfare against outsiders, etc. Our motivational and social systems are not the same as our intelligence system, and to think that anyone building an AI with general intelligence will, will want to, or even can simply reproduce the characteristics of human motivation to tack on as its control system is deeply mistaken.
The fact is that we have AI right now that far exceeds human capabilities. Any database is far better at recall than humans are, to the point that our memories are atrophying as we compulsively look up every question we have on Wikipedia or Google. And any computer is far better at calculation, even complex geometric and algebraic calculation, than we are in our heads. That has all been low-hanging fruit, but it suggests that this singularity is likely to be something of a Y2K snoozer. The capabilities of AI will expand and generalize, and will transform our lives, but unless weaponized with explicit malignant intent, AI has no motivation at all, let alone the motivation to put humanity into pods as its energy source, or whatever.
People-pods, from The Matrix.
The real problem, as usual, is us. The problem is the power that accrues to those who control this new technology. Take Mark Zuckerberg, for example. He stands at the head of a multinational megacorporation that has inserted its tentacles into the lives of billions of people, all thanks to modestly intelligent computer systems designed around a certain kind of knowledge of social (and anti-social) motivations, all in the interest of making another dollar. The motivations for all this do not come from the computers. They come from the people involved, and from the social institutions (of capitalism) that they operate in. That is the real operating system that we have yet to master.
- Facebook - the problem is empowering the wrong people, not the wrong machines.
- Barriers to health care.
- What it is like to be a psychopathic moron.
Added note: It is also worth asking what this higher intelligence will be good for. Humanity has plucked a lot of low-hanging fruit from the technological possibilities that reality presents. We cannot discover electricity again, or radioactivity. Will greater intelligence be able to get us out of the gravity well of Earth? Probably not. Conjure time travel? Will greater intelligence provide practical fusion power, or for that matter, fission? That is perhaps the most attractive and conceivable boon we could ask for. But it is, again, unlikely, since we have thought pretty carefully about these problems, and a good deal of their difficulty lies in asking what degree of danger we are willing to put up with: questions of human value, not sheer intelligence. Will weather forecasting improve by a couple of days more? This is a good example of the limits of prediction, which afflict any intelligence, no matter how computationally powerful.