
Saturday, December 21, 2013

A robot conundrum

Can we use them, once they are usable?

The trajectory of future technology is pretty clear. We already talk to computers, and they talk back. Soon they will be driving us around on the open roads. Where will it all end? What I want a robot for is to do my laundry and dust the house. What will it take to get there? Honestly, I think it will take consciousness.

Robots will eventually do all kinds of jobs. But to do general tasks like health care and home maintenance, how smart will they have to be? Very smart. And this leads to a conundrum- if robots are smart enough to fix up around the house, hang the Christmas lights, mow the yard, and do all the things we don't want to do, won't they have the consciousness that gives them rights against being callously exploited to do just those jobs?

A great deal of intelligence would be required to be a general helper robot. We consider chimpanzees highly intelligent, and worthy of extensive protections of a humane character. But they are not nearly intelligent enough to be a handy helper around the house. Quite the opposite, in fact. Of course they were never designed for such a role, and in contrast naturally share much more of our emotional makeup, which leads to our mutual empathy. But still, it is hard to imagine that the requisite intelligence could happen without some modest dose of emotion and an all-around empathy-engendering sense of self, of purpose, of social facility- in short, human-ness.

A robot-heavy future is portrayed in Isaac Asimov's The Naked Sun, whose planet Solaria has a tiny population of humans, served by countless robots, affording all possible luxury. Robots run all factories, including those making more robots, keep house (i.e. mansions), attend all needs, serve as police, and raise children, all under the most tenuous supervision. It is a little far-fetched to imagine such possibilities without also considering that these robots have a consciousness that reaches as far as that of their masters, comprehending the workings of the world they are in, including their own role in it.

Would they have emotions regarding self-worth, self-determination, freedom? One would imagine that this comes with the territory of consciousness. Such beings must look out for their own interests to some degree, to be able to function. They need to experience pain or some analog in order to avoid damage. They need to function socially, interpreting the unavoidably complex and conflicting needs of humans in order to serve them. Jeeves comes to mind. They may not need to be quite as greedy, competitive, and self-centered as humans are, but some of such emotion is required for independent and useful agency.

Also, at a certain high level of functioning, we may not be able to tell very clearly what such beings feel inside, subjectively. Even if we think they are programmed to feel no social pains at being a servile class, or annoyance at being ordered about by, say, an ignorant or even mischievous child, their complexity at that level is likely going to make it impossible to know for sure. Many complicated computer systems are already quite a bit beyond anyone's full understanding, leading to the many software contracting fiascos in the news.

Ironically, Asimov's Solaria is a sort of hell where the humans neurotically isolate themselves from each other and tend to lose their capabilities and interests- a common syndrome of the overly well-off. So in such a future, it's no good, either for the robots or for us.

  • Krugman on the progressive program, rolling back the feudal class war.
  • Guess which debtors get the shaft, and which creditors get first dibs?
  • Fifth graders must not think for themselves!
  • Will CEOs and other corporate officers presiding over crime be prosecuted?
  • Gun nuts deserve intensive Freudian therapy.
  • Religion is not good for politics, nor is politics good for religion.
  • Yes, she really was a welfare queen, and so much more.
  • The US biomedical research establishment is unsustainable and cruel to trainees. STEM shortages are, incidentally, a "myth".
  • Annals of feudalism- temp workers.
  • Economics graph of the week, from Martin Wolf, Financial Times. Where does all that superior US productivity go? Not to ordinary people, let alone to public services.

Saturday, November 2, 2013

A strange loop it is to write about one's I so much

Review of Douglas Hofstadter's "Gödel, Escher, Bach" and "I Am a Strange Loop", thus saving the reader roughly 1000 pages of helpless digression.

Douglas Hofstadter laments in the preface to "I Am a Strange Loop" (ISL, 2007), the sequel to his much more famous "Gödel, Escher, Bach" (GEB, 1979), that for all its fame and prizes, including the Pulitzer Prize, most people he meets didn't get the point of GEB. And no wonder, as those points flit by with great rapidity amidst a welter of puns, word games, abstruse code exercises, maddening repetition, dilatory dialogs, and wayward tangents.

But here they are (apologies for my lack of expertise ... please comment on any inaccuracies):
" The possibility of constructing, in a given system, an undecideable string via Gödel's self-reference method, depends on three basic conditions:  
1. That the system should be rich enough so that all desired statements about numbers, whether true or false, can be expressed in it. ...
2. That all general recursive relations should be represented by formulas in the system. ...
3. That the axioms and typographical patterns defined by its rules be recognizable by some terminating decision procedure. ...
 
Satisfaction of these three conditions guarantees that any consistent system will be incomplete, because Gödel's construction is applicable.
The fascinating thing is that any such system [human thought and language are the obvious references] digs its own hole; the system's own richness brings about its own downfall. … [analogy to critical mass in physics and bomb-making] ... But beyond the critical mass, such a lump will undergo a chain reaction, and blow up. It seems that with formal systems there is an analogous critical point. Below that point, a system is 'harmless' and does not even approach defining arithmetical truth formally; but beyond the critical point, the system suddenly attains the capacity for self-reference, and thereby dooms itself to incompleteness."

All the references to Bach and Escher in GEB are really tangential examples of self-reference. It is Kurt Gödel's work that is the core of the book, as it is of ISL. Gödel made a critique of the Principia Mathematica (PM), by Alfred Whitehead and Bertrand Russell, which attempted to build a tightly closed system of axioms and logic that was both incapable of rendering false statements and comprehensive in its ability to found all relevant aspects of mathematics and logic. But Gödel showed that it was incomplete, which means that it could express statements that can be neither proved nor disproved within the system. It did indeed found all mathematical logic on very uncontroversial axioms, but it developed (painfully) a language for all this that was so rich that it was impossible to keep within the bounds of provable truth alone.

Gödel's paradigmatic statement, constructed out of unspeakably complicated machinations of the PM tools (and which Russell never accepted were proper machinations), essentially created the statement "This statement is unprovable". But Hofstadter admits that this self-referencing paradox is not the only possible type of ambiguous, unresolvable statement Gödel created or suggested- there is an infinite zoo of them.
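To make the construction slightly more concrete (in my compressed notation, not Hofstadter's): if a system S meets the three conditions above, then provability in S can itself be expressed inside S as a formula Prov_S(x) ranging over the Gödel numbers of statements, and the diagonal lemma yields a sentence G that asserts its own unprovability:

\[ S \vdash\ G \ \leftrightarrow\ \neg\, \mathrm{Prov}_S(\ulcorner G \urcorner) \]

If S is consistent, G cannot be proved, since a proof of G would make S prove that no proof of G exists; and under the slightly stronger assumption Gödel used (omega-consistency, later dispensed with by Rosser), G cannot be disproved either. So G is undecidable within S, and yet true when read from outside the system.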

The point is that truly intelligent systems are open. They are not cognitively or computationally bound by their programming to tread around some threshing track time, and time, and time, again. They not only are responsive parts of their environment, but more importantly have the symbolic capacity to represent imagined realities, real realities, (and the self!), in such recursive, endlessly complicated ways that no theorem-bound system of cognitive computation can account for it. In this way we are endless, multilevel, strange, loops.

But there is one aspect of all this that is most odd, which is Hofstadter's focus on the self, which is prominent in both books, and especially, even gratingly, so in the second. Much of his conceptual play concerns self-reference, which is enjoyably loopy. Many philosophers and thinkers generally seem to think self-consciousness the very height of consciousness itself. Perhaps even its definition. As Hofstadter says, "Just as we need our eyes in order to see, we need our 'I's in order to be!". But I don't think that is the case, at least not consciously. Self-consciousness certainly comes along in the package of cognitive complexity, once one is making models of everything conceivable about the world. But to me, the core of consciousness is far more basic- the sense of being, not of self. And the core of the sense of being is made up of the flow of sensations, especially pain and pleasure.

I frequently see squirrels from my window, playing, chasing, eating, hiding, calling, etc. They are especially interested in the bird feeder and have tried no end of stratagems to get into it. They are clearly highly conscious beings, driven by pleasures and pains, just as we are. They wouldn't know what to do with a mirror, but nevertheless we immediately empathize with their desires and fears, clearly communicated and experienced by themselves. Their consciousness is not infinitely expansive by way of symbolic representation, as ours is, but nor is it negligible.

Hofstadter makes a special project of declaring that mosquitoes have zero consciousness, thus sanctioning his frequent bloodthirsty murders, when he is otherwise a principled vegetarian. Why be a vegetarian if you are interested only in symbolically self-referential and unbounded forms of consciousness? Obviously something else is going on, which he jokingly names "hunekers"- small sub-units of consciousness, of which he assigns various amounts to animals and humans of various grades and states.
"Mosquitos, because of the initial impoverishment and the fixed non-extensibility of their symbol systems, are doomed to soullessness (oh, all right- maybe 0.00000001 huneker's worth of consciousness- just a hair above the level of a thermostat)."
But don't mosquitoes experience pain and pleasure? Their behavior clearly shows avoidance of danger, and eager seeking of sustenance and reproduction. We know that the biology of animals with nervous systems (not bacteria) organizes these motivations in an internal experience of pain and pleasure. Would Hofstadter complacently sit down to a session of pulling the legs and wings off of mosquitoes he has caught? I think not, because though we certainly don't know what is going on in those very tiny heads, whatever it is involves an integration of perception, pain, and pleasure that must earn our empathy, and that amounts to a level of consciousness radically beyond that of a thermostat.

Hofstadter adds in the analogy of a human knee reflex, saying that perhaps a mosquito's mind is at that level, which no one would claim is conscious. But the integrative work being done, and the whole point of the integration, is quite different in these cases, making it seem much more likely, to me at least, that the mosquito is working with a very tiny, but intensely felt, bit of consciousness. Indeed one might posit that there need not be any particular relation between the cognitive complexity of an animal's consciousness and the degree of its feelings. We know from human infants that feelings can be monumental, and consciousness of hurt (and pleasure) be extremely acute, with precious little cognition behind them. Do we therefore empathize with them less?

This leads to the more general issue of the relation between consciousness and its physical substrate. Despite the talk of "souls", Hofstadter is a thorough naturalist, steeped in the academic field of artificial intelligence. While he has shown much greater proclivities towards philosophy than programming, the basic stance is unchanged- consciousness is a case of enormously complicated computation with (in the human case) the infinitely rich symbol sets of language and whatever is knocking around internally in our mental apparatus. All of which could conceivably happen on a silicon substrate, or one of orchestral instruments, or other forms, as long as they have the necessary properties of internal communication, logical inference, memory, etc.

For Hofstadter, consciousness is necessarily a high-level phenomenon. It depends on, but is not best characterized by, particular neurons, and certainly is not specifically associated with quantum phenomena, microtubules, or any of the other bizarre theories of mind / soul that various pseudo-theorists have come up with to bridge the so-called mind-body divide. Indeed he spends a great deal of time (in ISL) consoling himself that his wife, who died unexpectedly and young, lives on precisely because some part of her high-level programming continues to function inside Hofstadter, insofar as he learned to see the world through her eyes, and through other remaining reflections of her consciousness. Nothing physical remains, but if the programming still happens, then the consciousness does as well.

I have my doubts about that proposition, again drawing on my preference for characterizing consciousness in terms of experience, feeling, and emotion, not symbology. But if one has been trained by one's wife, say, to thrill to certain types of music one hadn't appreciated before, then one could make the case in those terms as well.

The question is made more interesting in a long section (in ISL) where Hofstadter discusses a thought experiment of Daniel Dennett, as written about at great length by Derek Parfit. Suppose the Star Trek transporter really worked, and could de-materialize a person and send them via a (sparkling) light stream to be re-assembled at another location, say Mars. Suppose next that an improved version of this transporter were later devised that didn't have to destroy the originating person. A copy is faithfully made on Mars, but the Earth copy remains alive. Who would be the "real" person / soul? Imagine further that the transporter could send multiple copies anywhere it chose, perhaps depositing one copy on Mars, and another on the Moon, etc... What & who then?

Parfit reportedly mulls this over for a hundred pages and agonizes that there is no way to decide which is the "real" person. Hofstadter also makes remarkably heavy weather of the question, finally hinting that a reasonable way to regard it may be as a faithful doubling, where none of the clones have any priority, all are equivalent, and each goes on to an independent existence built on the structure and history of the original person. Well of course! In programming, this is called a "fork", where a process is replicated and both copies keep running in perpetuity, doing their own thing. No need to fret over soul-division or irreducible essences- if the physical brains and bodies are faithfully reproduced in all detail, then so are the minds in all respects. Each will carry on the prior consciousness and other internal processes, differing only by the occurrence of, and interaction with, subsequent events.
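The analogy can be made concrete. Here is a minimal sketch in Python, using the POSIX fork() call (the "memories" list is obviously a toy stand-in for a person's full state):

import os

# Before the fork there is one process with one history.
memories = ["childhood", "wedding", "first sight of Mars"]

pid = os.fork()  # duplicate the entire process, state and all

if pid == 0:
    # The child is a complete copy; nothing marks it as less "real".
    memories.append("woke up on Mars")
else:
    # The parent carries the same prior history forward.
    memories.append("stayed on Earth")

# From here on, the two copies diverge only through new events.
print(os.getpid(), memories)

Both copies are fully continuous with the original; asking which one is "really" the original program is simply not a well-formed question.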

And one can extend this to other substrates, supposing that some means has been devised to replicate all the activity of a human brain from neurons into, say, silicon. Just making such an assumption assumes the answer, of course. But the real question is- at what level would the modelling have to be faithful in order to generate a replicated consciousness? Do all the atoms have to be modelled? Clearly not. The model Hofstadter and I share here is that the overall activity of the brain in its electrical activity and structure-to-structure communication constitutes consciousness. So it is the neurons that need to be modelled, and perhaps only roughly, to recreate their communication flow and data storage. Generate enough of those elements that perceive, that condense and flag significant information, that tag everything inside and out with emotional valences, that remember all sorts of languages, and world events, and experiences, at explicit and implicit levels, that coordinate countless senses and processes, and we might just have some thing that has experiences.
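What might modelling the neurons "only roughly" look like? One standard simplification is the leaky integrate-and-fire unit, which keeps the communication (spikes in, spikes out) while discarding the molecular detail. A minimal sketch in Python, with all parameter values invented for illustration:

import random

def lif_step(v, inputs, leak=0.9, threshold=1.0):
    """One time step of a leaky integrate-and-fire neuron.
    v: membrane potential; inputs: summed weighted spikes arriving now.
    Returns (new potential, whether the neuron fired)."""
    v = v * leak + inputs   # integrate the input, leak toward rest
    if v >= threshold:      # fire and reset when the threshold is crossed
        return 0.0, True
    return v, False

# Drive one unit with random input and count its spikes.
v, n_spikes = 0.0, 0
for t in range(50):
    v, fired = lif_step(v, random.uniform(0.0, 0.3))
    n_spikes += fired
print(n_spikes, "spikes in 50 steps")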


  • And dogs- are they conscious?
  • J P Morgan, et al. Their practices were not just "shady", they were criminal fraud; signing false affidavits, scamming loan customers and investors alike, corrupting appraisers, LIBOR fixing, etc.
  • Should atheists take an economic position?
  • And do they have better morals?
  • Another way the health-related free markets don't work. Big data is fundamentally incompatible with broad-based insurance.
  • Two can play that game... "Last month, the U.S. raided an Afghan convoy carrying a Pakistani Taliban militant, Latif Mehsud, who the Afghan government was using to cultivate an alliance with the Pakistani Taliban."
  • The Koch folks- apparently too embarrassed to stand up for their own beliefs.
  • Unwritten institutions are often the most important- the long shadow of slavery & oppression.
  • Cuddly capitalism- yes, it really works.
  • Our infrastructure is unsightly and unsafe as well as decrepit. And underfunded.
  • Bill Mitchell on why full employment shouldn't just happen for the sake of killing people.
  • Quote of the week, from Paul Krugman:
"A society committed to the notion that government is always bad will have bad government. And it doesn’t have to be that way."
"But right now we’re awash in excess savings with nowhere to go, and the marginal social value of a dollar of savings is negative. So real interest rates should be negative too, if they’re supposed to reflect social payoffs."

Saturday, March 2, 2013

What's next for Apple?

Robots.

With Apple being the most valuable company, more or less, one has to wonder how it can top its previous tricks. Bob Cringely asked this question, and got me thinking.

Honestly, it is hard to expect any growth at all. Given its already enormous size, the most likely path goes down, not up towards a future where Apple would become lord and master of the business world.

Nevertheless, what could be next? How can they top their megahits of iPod, iPhone, and iPad? They have made a science of striking a consumer category ripe for computer-i-zation with a fully-formed solution that transcends what others have imagined and melds design, computerization, and cloud services into all-too convenient ecosystems. As an example of bad timing, Apple first offered a tablet computer way back in 1993- the Newton. It was an amazing achievement, but far before its time, and it was killed off after Steve Jobs returned to the company. Only years later did Jobs finally pull the trigger a second time, and the iPad arrived.

What other areas of our lives are ripe for this kind of treatment? Where have nascent computer-consumer interfaces been lurking, ready to inspire an Apple-style invasion and makeover?

I think it is robots. Admittedly, invading this area would be extremely ambitious, involving far more moving parts than Apple is used to dealing with. But the field seems to be in a perfect state. There is an active industrial sector with usable, worked-out technologies. There is a nascent consumer sector, in a few specialized niches- cleaning and telepresence robots (which are even based on the iPad). Autonomous cars seem to be imminent as well.

The potential is clearly very high. Just as desktop computers took over areas of our lives and previously separate jobs that no one imagined (secretaries, music production, Guitar Hero, postal service, phone calling), so robots will doubtless work their way into innumerable areas of our lives, limited only by their intelligence and design.

For running a household, the electrical revolution has left a great deal undone. Compared to having a house full of servants, having a washing machine leaves quite a lot to be desired. So does having a microwave, compared to having a cook. So does having a vacuum cleaner, compared to having a maid. Could robots bridge this gap between dumb machinery and true service?

It is going to be a very long road, but with real uses already here, it is the kind of market that may be ready for exploitation, and for unimaginable growth.

Could robots take over these tasks without also taking over and ruling the world? I think so, given that we design them. But in any case, it is a world we are headed towards, and Apple would be well-positioned to help design it.


  • Oh, robots do interpretive dance, too.
  • Is the thrill gone at Apple?
  • Henry Ford- another visionary / megalomaniac, building the future.
  • Telecommuting rocks.
  • Christian martyrs: a mess of legend.
  • Christian church- right or wrong, the tribe is most important.
  • IBM, Circa 1970's-80's. "Every IBM employee’s ambition is apparently to become a manager, and the company helps them out in this area by making management the company’s single biggest business." Also ... how the world ended up using the "quick and dirty operating system".
  • Why priests? ... indeed.
  • The sequester ... is what it's like being ruled by the stupid party.
  • Solow on debt. And some MMT corrections.
  • Economics graph of the week- sectoral balances in the US, since 1952. Note how the household sector was in a highly unusual deficit from roughly 1998 up till the crisis.


Saturday, May 14, 2011

Artificial intelligence, the Bayes way

A general review describes progress in modelling general human intelligence.

This will be an unusual review, since I am reviewing a review, which itself is rather "meta" and amorphous. Plus, I am no expert, especially in statistics, so my treatment will be brutally naive and uninformed. The paper is titled "How to Grow a Mind: Statistics, Structure, and Abstraction". Caveats aside, (and anchors aweigh!), the issue is deeply interesting for both fundamental and practical reasons: how do we think, and can such thought (such as it is) be replicated by artificial means?

The history of AI is a sorry tale of lofty predictions and low achievement. Practitioners have persistently underestimated the immense complexity of their quarry. The reason is, as usual, deep narcissism and introspective ignorance. We are misled by the magical ease by which we do things that require simple common sense. The same ignorance led to ideas like free will, souls, ESP, voices from the gods, and countless other religio-magical notions to account for the wonderful usefulness, convenience and immediacy of what is invisible- our unconscious mental processes.

At least religious thinkers had some respect for the rather awesome phenomenon of the mind. The early AI scientists (and especially the behaviorists) chose instead to ignore it and blithely assume that the computers they happened to have available were capable of matching the handiwork of a billion years of evolution.

This paper owes, in part, a theological debt of another kind, to Thomas Bayes, who carried on a double life as a Presbyterian minister in England and as a mathematician member of the Royal Society (those were the days!). Evidently an admirer of Newton, Bayes published only one scientific work in his lifetime, a fuller treatment of Newton's theory of calculus.
"I have long ago thought that the first principles and rules of the method of Fluxions stood in need of more full and distinct explanation and proof, than what they had received either from their first incomparable author, or any of his followers; and therefore was not at all displeased to find the method itself opposed with so much warmth by the ingenious author of the Analyst; ..."

However, after he died, his friend Richard Price found and submitted to the Royal Society the material that today makes Bayes a household name, at least to statisticians, data analysts and modellers the world over. Price wrote:
"In an introduction which he has writ to this Essay, he says, that his design at first in thinking on the subject of it was, to find out a method by which we might judge concerning the probability that an event has to happen, in given circumstances, upon supposition that we know nothing concerning it but that, under the same circumstances, it has happened a certain number of times, and failed a certain other number of times."

And here we come to the connection to artificial intelligence, since our minds, insofar as they are intelligent, can be thought of in the simplest terms as persistent modellers of reality, building up rules of thumb, habits, theories, maps, categorizations, etc. that help us succeed in survival and all the other Darwinian tasks. Bayes's theorem is simple enough:
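\[ P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)} \]

That is, the probability of A given B equals the probability of B given A, times the prior probability of A, divided by the overall probability of B.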

From the wiki page:
"The key idea is that the probability of an event A given an event B (e.g., the probability that one has breast cancer given that one has tested positive in a mammogram) depends not only on the relationship between events A and B (i.e., the accuracy of mammograms) but also on the marginal probability (or 'simple probability') of occurrence of each event."

So to judge the probability in this case, one uses knowledge of past events, like the probability of breast cancer overall, the probability of positive mammograms overall, and the past conjunction between the two- how often cancer is detected by positive mammograms, to estimate the reverse- whether a positive mammogram indicates cancer.
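With illustrative numbers (a common textbook set, not from the paper): if 1% of women screened have breast cancer, mammograms detect 80% of cancers, and 9.6% of healthy women nonetheless test positive, then

\[ P(\text{cancer} \mid +) = \frac{0.8 \times 0.01}{0.8 \times 0.01 + 0.096 \times 0.99} \approx 0.078 \]

so a positive mammogram raises the probability of cancer to only about 8%, because the prior probability is so low.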

The authors provide their own example:
"To illustrate Bayes’s rule in action, suppose we observe John coughing (d), and we consider three hypotheses as explanations: John has h1, a cold; h2, lung disease; or h3, heartburn. Intuitively only h1 seems compelling. Bayes’s rule explains why. The likelihood favors h1 and h2 over h3: only colds and lung disease cause coughing and thus elevate the probability of the data above baseline. The prior, in contrast, favors h1 and h3 over h2: Colds and heartburn are much more common than lung disease. Bayes’s rule weighs hypotheses according to the product of priors and likelihoods and so yields only explanations like h1 that score highly on both terms."

So far, it is just common sense, though putting common sense in explicit and mathematical form has important virtues, and indeed is the key problem of AI. The beauty of Bayes's theorem is its flexibility. As new data come in, the constituent probabilities can be adjusted, and the resulting estimates become more accurate. Missing data is typically handled with aplomb, simply allowing wider estimates. Thus Bayes's system is a very natural, flexible system for expressing model probabilities based on messy data.

Language is a classic example, where children rapidly figure out the meanings of words, not from explicit explanations and grammatical diagrams (heaven forbid!), but from very few instances of hearing them used in a clear context. Just think of all those song lyrics that you mistook for years, just because they sounded "like" the singer wanted ... a bathroom on the right. We work from astonishingly sparse data to conclusions and knowledge that are usually quite good. The scientific method is also precisely this (the method of induction), more or less gussied-up and conscious, entertaining how various hypotheses might achieve viable probability in light of their relations to known prior probabilities, otherwise known (hopefully) as knowledge.

In their review, the authors add Bayes's method of calculating and updating probabilities to the other important element of intelligence- the database, which they model as a freely ramifying hierarchical tree of knowledge and abstraction. The union of the two themes is something they term hierarchical Bayesian models (HBMs). Trees come naturally to us as frameworks to categorize information, whether it is the species of Linnaeus, a system of stamp collecting, or an organizational chart. We are always grouping things mentally, filing them away in multiple dimensions- as interesting or boring, political, personal, technical, ... the classifications are endless.

One instance of this was the ancient memory device of building rooms in one's head, furnishing prodigious recall to trained adepts. For our purposes, the authors concentrate on the property of arbitrary abstraction and hierarchy formation, where such trees can extend from the most abstract distinctions (color/sound, large/small, Protestant/Catholic) to the most granular (8/9, tulip/daffodil), and all can be connected in a flexible tree extending between levels of abstraction.

The authors frame their thoughts, and the field of AI generally, as a quest for three answers:
"1. How does abstract knowledge guide learning and inference from sparse data?
2. What forms does abstract knowledge take, across different domains and tasks?
3. How is abstract knowledge itself acquired?"

We have already seen how the first answer comes about- through iterative updating of probabilistic models following Bayes's theorem. We see a beginning of the second answer in a flexible hierarchical system of categorization that seems to come naturally. The nature and quality of such structures are partly dictated by the wiring established through genetics and development. Facial recognition is an example of an inborn module that classifies with exquisite sensitivity to fine differences. However, the more interesting systems are those that are not inborn / hard-wired, but that allow us to learn through more conscious engagement, as when we learn to classify species, or cars, or sources of alternative energy- whatever interests us at the moment.

Figure from the paper, diagramming hierarchical classification as done by human subjects.

Causality is, naturally, an important form of abstract knowledge, and also takes the form of abstract trees, with time the natural dimension, through which events affect each other in a directed fashion, more or less complex. Probability and induction are concerned with detecting hidden variables and causes within this causal tree, such as forces, physical principles, or deities, that can constitute hypotheses that are then validated probabilistically by evidence in the style of Bayes.

A key problem of AI has been a lack of comprehensive databases that provide the putative AI system the kind of common-sense, all-around knowledge that we have of the world. Such a database allows the proper classification of details using contextual information- that a band means a music group rather than a wedding ring or a criminal conspiracy, for instance. The recent "Watson" game show contestant simulated such knowledge, but actually was just a rapid text-mining algorithm, apparently without the kind of organized abstract knowledge that would truly represent intelligence.

The authors characterize human learning as strongly top-down organized, with critical hypothetical abstractions at higher levels coming first, before details can usefully be filled in. They cite Mendeleev's periodic table proposal as a paradigmatic hypothesis that then proved itself by "fitting" details at lower levels, thereby raising its own probability as an organizing structure.
"Getting the big picture first- discovering that diseases cause symptoms before pinning down any specific disease-symptom links- and then using that framework to fill in the gaps of specific knowledge is a distinctively human mode of learning. It figures prominently in children's development and scientific progress, but has not previously fit into the landscape of rational or statistical learning models."

Which leads to the last question- how to build up the required highly general database in a way that is continuously alterable, classifies data flexibly in multiple dimensions, and generates hypotheses (including top-level hypotheses and re-framings) in response to missing values and poor probability distributions, as a person would? Here is where the authors wheel in the HBMs and their relatives, the Chinese Restaurant and Indian Buffet processes, all of which are mathematical learning algorithms that allow relevant parameters or organizing principles to develop out of the data, rather than imposing them a priori.
"An automatic Occam's razor embodied in Bayesian inference trades off model complexity and fit to ensure that new structure (in this case a new class of variables) is introduced only when the data truly require it."
...
"Across several case studies of learning abstract knowledge ... it has been found that abstractions in HBMs can be learned remarkably fast from relatively little data compared with what is needed for learning at lower levels. This is because each degree of freedom at a higher level of the HBM influences and pools evidence from many variables at levels below. We call this property of HBM's 'the blessing of abstraction.' It offers a top-down route to the origins of knowledge that contrasts sharply with the two classic approaches: nativism, in which abstract concepts are assumed to be present from birth, and empiricism or associationism, in which abstractions are constructed but only approximately, and slowly in a bottom-up fashion, by layering many experiences on top of each other and filtering their common elements."

Wow- sounds great! Vague as this all admittedly is (the authors haven't actually accomplished much, only citing some proof-of-principle exercises), it sure seems promising as an improved path towards software that learns in the generalized, unbounded, and high-level way that is needed for true AI. The crucial transition, of course, is when the program starts doing the heavy lifting of learning by asking the questions, rather than having data force-fed into it, as all so-called expert systems and databases have to date.

The next question is whether such systems require emotions. I think they do, if they are to have the motivation to frame questions and solve problems on their own. So deciding how far to take this process is a very tricky problem indeed, though I am hopeful that we can retain control. If I may, indeed, give in all over again to typical AI hubris ... creating true intelligence by such a path, not tethered to biological brains, could lead to a historic inflection point, where practical and philosophical benefits rain down upon us, Google becomes god, and we live happily ever after, plugged into a robot-run world!

An image from the now-defunct magazine, Business 2.0. Credit to Don Dixon, 2006. 

"Greece should definitely leave the Eurozone."

Saturday, October 25, 2008

Boondoggle to Mars

Why sending humans to Mars is a bad idea.

In an ironic continuation of its general war on science, the Bush administration proclaimed the goal of sending humans to Mars. A speech, a press release, and then nothing more was heard, doubtless because no one took the president seriously as a second coming of JFK, and also because the novelty of space adventure has, to some degree, worn off. Nothing more was heard, yet NASA has been squirrelling away at the plan anyhow, chopping budgets for real science missions in order to plan human missions to the Moon and Mars.

As a Star Trek fan and generally pro-science person who still draws inspiration from the Apollo Moon landings, I might be expected to support these goals. But the devil is truly in the details. The fact is that going to Mars turns out to be incredibly dangerous to the point of being essentially impossible, and thus also bound to be astronomically expensive even to attempt. And all this is to accomplish goals that we are already meeting via robotic exploration on Mars.

The dangers of sending humans to Mars are legion. First is the distance. The Moon takes a few days to get to, but Mars takes over eight months. These are months the crew would have to exist in highly cramped conditions, taking care not to damage their tenuous vessel and losing bone and muscle strength all the while. After the same time coming back, the crew would be in pretty rough shape on returning to Earth. The probability of accidents and catastrophe mounts to very high levels. More important is the fuel needed. The truly stupendous Saturn V was needed to loft the Moon craft, and far more will be needed just to get to Mars. Mars has 2.3 times the gravity of the Moon, so far more fuel would be needed both to arrest the descent to, and to lift off from, the Martian surface. And Mars has a negligible atmosphere, with essentially no oxygen, so huge amounts of supplies, including oxygen, would need to be brought along as well.
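The fuel problem compounds exponentially, as the Tsiolkovsky rocket equation makes plain: the required mass ratio grows as the exponential of the total velocity change, so the extra delta-v for landing on and ascending from Mars multiplies, rather than adds to, the mass that must be launched from Earth. (The numbers here are rough illustrations, not mission figures.)

\[ \Delta v = v_e \ln \frac{m_0}{m_1} \quad \Longrightarrow \quad \frac{m_0}{m_1} = e^{\Delta v / v_e} \]

With the best chemical rockets (exhaust velocity around 4.4 km/s), every additional 4.4 km/s of delta-v multiplies the required launch mass by a factor of e, roughly 2.7, and a round trip to the Martian surface demands several such increments.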

All these issues are probably surmountable. What may not be surmountable are the radiation exposure issues. On Earth we live in the cocoon of the planet's magnetosphere and under a heavy blanket of atmosphere, both of which protect us from intense cosmic and solar high-energy radiation. The Apollo astronauts, specially scheduled to avoid solar flares, were exposed in a matter of days to doses from 1X to 10X the yearly allowed exposure for the general public (0.1 rads). Extending that to the year-plus involved in going to and existing on Mars yields exposures that are not only damaging, but possibly fatal (hundreds of rads). And shielding the crew with water, lead, or similar materials is essentially impossible, due to obvious weight constraints.
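The arithmetic is rough but sobering. Taking the upper Apollo rate implied above, about 0.1 rads per day, and a round-trip mission of roughly 600 days:

\[ 0.1\ \text{rads/day} \times 600\ \text{days} \approx 60\ \text{rads} \]

And that is just the steady background, before counting a major solar particle event- the great flare of August 1972, which luckily fell between two Apollo missions, could have delivered hundreds of rads to an unshielded crew outside the magnetosphere in a matter of hours.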

To back up, one might ask: what is the point of this exercise anyway? We are already mounting quite successful missions to Mars, thank you very much, with robots that have spent years trundling around the Martian landscape, sampling materials, snapping photos, and doing spectroscopic analysis. After a few more generations of rovers, it will be hard to imagine anything humans on the spot could do that the robots can't.

Clearly scientific discovery and analysis is not the reason. What is the real reason? It is psychological- the adventure of it, to voyage far and wide, to spread our seed, plant our flag, and embody the archetype of the star-man, heavens-transcending, god-touching hero (e.g. Dave Bowman). The spiritual power of this adventure was greatest the first time at the climax of Apollo 11, and met with diminishing returns since then, petering out even by the end of the Apollo program. Turning back to Mars, a popular science fiction series paints our colonization of Mars in gritty but glowing terms, progressing from Red Mars to Green, and lastly to Blue. Adventure, derring-do, and spirituality were revolutionary reasons to go to the Moon, but less compelling reasons to go to Mars, now that we have seen the beauties and incredible harshness of all the planets, including Mars, up close.

Another reason may be the continuation of NASA's manned flight program. The excitement of space flight has been ebbing for years, coming to a sad end in the virtually pointless International Space Station, which might soon be held hostage by Russia- the only country with the wherewithal to get there. The station has no scientific rationale other than the study of the effects of space on humans- its other rationales of materials science, quirky biology, etc. are clearly not enough to interest any serious companies or academics, and thus the station has always been a solution in search of a question. The NASA manned program appears to spend more time inspiring elementary school students to do science and become astronauts than in actually doing serious or useful science. Recently, it has taken to hosting tourists on the space station. Its missions have become a circus of happy talk and kumbaya in space, at least when shuttles are not blowing up. The manned program is frankly a mess, and its only productive use has been to service the Hubble telescope, which, as another robot in space, has been a scientific as well as cultural gem.

The recent shuttle disasters were both ascribed in large part to a problematic culture at NASA, one which valued cheerleading over science, and public relations over truth. The manned program was always a PR program first and foremost, but has by this point become an inertial bureaucracy in symbiosis with its contractors, one that feeds on (and feeds) the public's faded romance with space dating from the Apollo program, without having a larger purpose, whether scientific, geostrategic, or even aesthetic.

We already go far beyond Earth by way of robots, which need no air, food, or other life support. And robots are critical to our future here on earth as well. Robotics and virtual reality are the technologies of the future, enabling us to learn, work, and play in remote locations, to telecommute, raise productivity, energy efficiency, and living standards. That is where money should be invested, and how we should be voyaging to phenomenally hazardous locations (or even across town). NASA should embrace this future of robotics and human extension with vigor, and not keep flogging past glories of top-gun derring-do.

The fact is that space is never going to be a good place for humans. Our fantasies of living on the moon and other planets, let alone colonizing them, are pure fiction. If we are already having problems living within our means on the Earth which is such a rich source of life-giving energy, food, and air, imagine how difficult it would be to live elsewhere with infinitely thinner means.

The Apollo program taught us one thing in the end, which is that the Earth is unbelievably precious and life-giving. But it will not be for long if we continue to treat it as a way-station to the stars- as a home to be transcended and left behind rather than one to be nurtured and loved forever.