
Saturday, December 2, 2023

Preliminary Pieces of AI

We already live in an AI world, and really, it isn't so bad.

It is odd to hear all the hyperventilating about artificial intelligence of late. One would think it is a new thing, or some science-fiction-y entity. Then there are fears about the singularity and loss of control by humans. Count me a skeptic on all fronts. Man is, and remains, wolf to man. To take one example, we are contemplating the election of perhaps the dumbest person ever to hold the office of president. For the second time. How an intelligence, artificial or otherwise, is supposed to worm its way into power over us is not easy to understand, looking at the nature of humans and of power.

So let's take a step back and figure out what is going on, and where it is likely to take us. AI has become a catch-all for a diversity of computer methods, mostly characterized by being slightly better at doing things we have long wanted computers to do, like interpreting text, speech, and images. But I would offer that it should include much more- all the things we have computers do to manage information. In that sense, we have been living among shards of artificial intelligence for a very long time. We have become utterly dependent on databases, for instance, for our memory functions. Imagine having to chase down a bank balance or a news story, without access to the searchable memories that modern databases provide. They are breathtakingly superior to our own intelligence when it comes to the amount of things they can remember, the accuracy with which they can remember them, and the speed with which they can find them. The same goes for calculations of all sorts, and more recently, complex scientific math like solving atomic structures, creating wonderful CGI graphics, or predicting the weather.

We should view AI as a cabinet filled with many different tools, just as our own bodies and minds are filled with many kinds of intelligence. The integration of our minds into a single consciousness tends to blind us to the diversity of what happens under the hood. While we may want gross measurements like "general intelligence", we also know increasingly that it (whatever "it" is, and whatever it fails to encompass of our many facets and talents) is composed of many functions that several decades of work in AI, computer science, and neuroscience have shown are far more complicated and difficult to replicate than the early AI pioneers imagined, once they got hold of their Turing machine with its infinite potential. 

Originally, we tended to imagine artificial intelligence as a robot- humanoid, slightly odd looking, but just like us in form and abilities. That was a natural consequence of our archetypes and narcissism. But AI is nothing like that, because full-on humanoid consciousness is an impossibly high bar, at least for the foreseeable future, and requires innumerable capabilities and forms of intelligence to be developed first. 

The autonomous car drama is a good example of this. It has taken every ounce of ingenuity and high tech to get to a reasonably passable vehicle, which is able to "understand" key components of the world around it. That a blob in front is a person, instead of a motorcycle, or that a light is a traffic light instead of a reflection of the sun. Just as our brain has a stepwise hierarchy of visual processing, we have recapitulated that evolution here by harnessing cameras in these cars (and lasers, etc.) to not just take in a flat visual scene, which by itself is meaningless, but to parse it into understandable units like ... other cars, crosswalks, buildings, bicyclists, etc. Visual scenes are very rich, and figuring out what is in them is a huge accomplishment.

But is it intelligence? Yes, it certainly is a fragment of intelligence, but it isn't consciousness. Imagine how effortless this process is for us, and how effortful and constricted it is for an autonomous vehicle. We understand everything in a scene within a much wider model of the world, where everything relates to everything else. We evaluate and understand innumerable levels of our environment, from its chemical makeup to its social and political implications. Traffic cones do not freak us out. The bare obstacle course of getting around, such as in a vehicle, is a minor aspect, really, of this consciousness, and of our intelligence. Autonomous cars are barely up to the level of cockroaches, on balance, in overall intelligence.

The AI of text and language handling is similarly primitive. Despite the vast improvements in text translation and interpretation, the underlying models these mechanisms draw on are limited. Translation can be done without understanding text at all, merely by matching patterns from pre-digested pools of pre-translated text, regurgitated as cued by the input text. Siri-like spoken responses, on the other hand, do require some parsing of meaning out of the input, to decide what the topic and the question are. But the scope of these tools tends to be very limited, and the wider the scope they are allowed, the more embarrassing their performance, since they are essentially scraping web sites and text pools for question-response patterns, instead of truly understanding the user's request or any field of knowledge.

Lastly, there are the generative ChatGPT style engines, which also regurgitate text patterns reformatted from public sources in response to topical requests. The ability to re-write a Wikipedia entry through a Shakespeare filter is amazing, but it is really the search / input functions that are most impressive- being able, like the Siri system, to parse through the user's request for all its key points. This betokens some degree of understanding, in the sense that the world of the machine (i.e. its database) is parceled up into topics that can be separately gathered and reshuffled into a response. This requires a pretty broad and structured ontological / classification system, which is one important part of intelligence.

Not only is there a diversity of forms of intelligence to be considered, but there is a vast diversity of expertise and knowledge to be learned. There are millions of jobs and professions, each with their own forms of knowledge. Back in the early days of AI, we thought that expert systems could be instructed by experts, formalizing their expertise. But that turned out to be not just impractical, but impossible, since much of that expertise, formed out of years of study and experience, is implicit and unconscious. That is why apprenticeship among humans is so powerful, offering a combination of learning by watching and learning by doing. Can AI do that? Only if it gains several more kinds of intelligence, including an ability to learn in very un-computer-ish ways.

This analysis has emphasized the diverse nature of intelligences, and the uneven, evolutionary development they have undergone. How close are we to a social intelligence that could understand people's motivations and empathise with them? Not very close at all. How close are we to a scientific intelligence that could find holes in the scholarly literature and manage a research enterprise to fill them? Not very close at all. So it is very early days in terms of anything that could properly be called artificial intelligence, even while bits and pieces have been with us for a long time. We may be in for fifty to a hundred more years of hearing every advance in computer science being billed as artificial intelligence.


Uneven development is going to continue to be the pattern, as we seize upon topics that seem interesting or economically rewarding, and do whatever the current technological frontier allows. Memory and calculation were the first to fall, being easily formalizable. Communication network management is similarly positioned. Game learning was next, followed by the Siri / Watson systems for question answering. Then came a frontal assault on language understanding, using the neural network systems, which discard the former expert system's obsession with grammar and rules, for much simpler statistical learning from large pools of text. This is where we are, far from fully understanding language, but highly capable in restricted areas. And the need for better AI is acute. There are great frontiers to realize in medical diagnosis and in the modeling of biological systems, to only name two fields close at hand that could benefit from a thoroughly systematic and capable artificial intelligence.

The problem is that world modeling, which is what languages implicitly stand for, is very complicated. We do not even know how to do this properly in principle, let alone having the mechanisms and scale to implement it. What we have in terms of expert systems and databases does not have the kind of richness or accessibility needed for a fluid and wide-ranging consciousness. Will neural nets get us there? Or ontological systems / databases? Or some combination? However it is done, full world modeling and the ability to learn continuously into those models are key capabilities needed for significant artificial intelligence.

After world modeling come other forms of intelligence like social / emotional intelligence and agency / management intelligence with motivation. I have no doubt that we will get to full machine consciousness at some point. The mechanisms of biological brains are just not sufficiently mysterious to think that they can not be replicated or improved upon. But we are nowhere near that yet, despite bandying about the word artificial intelligence. When we get there, we will have to pay special attention to the forms of motivation we implant, to mitigate the dangers of making beings who are even more malevolent than those that already exist... us.

Would that constitute some kind of "singularity"? I doubt it. Among humans there are already plenty of smart people, and plenty of diversity, which results in niches where everyone has something useful to do. Technology has been replacing human labor forever, and will continue moving up the chain of capability. And when machines exceed the level of human intelligence, in some general sense, they will get all the difficult jobs. But the job of president? That will still go to a dolt, I have no doubt. Selection for some jobs is by criteria that artificial intelligence, no matter how astute, is not going to fulfill.

Risks? In the current environment, there are plenty of risks, which are typically cases where technology has outrun our will to regulate its social harm. Fake information, thanks to the chatbots and image makers, can now flood the zone. But this is hardly a new phenomenon, and perhaps we need to get back to a position where we do not believe everything we read, in the National Enquirer or on the internet. The quality of our sources may become once again an important consideration, as it always should have been.

Another current risk is chaos from automation. For example, in the financial markets, the new technologies seem to calm the markets most of the time, arbitraging with relentless precision. But when things go out of bounds, flash breakdowns can happen, very destructively. The SEC has sifted through some past events of this kind and set up regulatory guard rails. But they will probably be perpetually behind the curve. Militaries are itching to use robots instead of pilots and soldiers, and to automate killing from afar. But ultimately, control of the military comes down to social power, which comes down to people of not necessarily great intelligence.

The biggest risk from these machines is that of security. If we have our financial markets run by machine, or our medical system run by super-knowledgeable artificial intelligences, or our military by some panopticon neural net, or even just our electrical grid run by super-computers, the problem is not that they will turn against us of their own volition, but that some hacker somewhere will turn them against us. Countless hospitals have already faced ransomware attacks. This is a real problem, growing as machines become more capable and indispensable. If and when we make artificial people, we will need the same kind of surveillance and social control mechanisms over them that we do over everyone else, but with the added option of changing their programming. Again, powerful intelligences made for military purposes to kill our enemies are, by the reflexive property of all technology, prime targets for being turned against us. So just as we have locked up our nuclear weapons and managed to not have them fall into enemy hands (so far), similar safeguards would need to be put on similar powers arising from these newer technologies.

We may have been misled by the many AI and super-beings of science fiction, Nietzsche's Übermensch, and similar archetypes. The point of Nietzsche's construction is moral, not intellectual or physical- a person who has thrown off all the moral boundaries of civilization, especially Christian civilization. But that is a phantasm. The point of most societies is to allow the weak to band together to control the strong and malevolent. A society where the strong band together to enslave the weak... well, that is surely a nightmare, and more unrealistic the more concentrated the power. We must simply hope that, given the ample time we have before truly comprehensive and superior artificially intelligent beings exist, we have exercised sufficient care in their construction, and in the health of our political institutions, to control them as we have many other potentially malevolent agents.


  • AI in chemistry.
  • AI to recognize cells in images.
  • Ayaan Hirsi Ali becomes Christian. "I ultimately found life without any spiritual solace unendurable."
  • The racism runs very deep.
  • An appreciation of Stephen J. Gould.
  • Forced arbitration against customers and employees is OK, but fines against frauds... not so much?
  • Oil production still going up.

Saturday, March 14, 2020

Coronavirus Testing Update

A review of how testing is done, and where we are at.

We in the US are flying blind through the current epidemic, with cases popping up all over, testing done on very few people, and the rest ranging between nervousness and panic. What is the death rate? We still do not know. Did China contain its outbreak by draconian measures, or by wide-spread infection and natural burnout? How about South Korea, or Taiwan? Everyone claims the former, but it is far from certain what actually happened. We need more testing, and particularly scientifically sampled population testing, and post-infection exposure testing. The basics of epidemiology, in other words.

SARS-CoV-2 is the virus, and COVID-19 is the disease. Most people do not seem to have mortality risk from infection, other than the elderly and infirm. In these respects, and in its great infectiousness, this disease resembles influenza. Testing from patient samples is done by RT-PCR, which stands for reverse-transcription polymerase chain reaction. The reverse transcription part employs specialized enzymes to copy the viral genomes, which are RNA, from the patient sample, into DNA, the more stable molecule that can be used in PCR. And PCR is the revolutionary method that won a Nobel prize in 1993, which uses a DNA polymerizing enzyme, and short segments of DNA (primers), to repetitively (and exponentially) replicate a chosen stretch of DNA. In this way, a minuscule amount of a pathogen can be processed to an easily detectable amount of DNA. The FDA mandates using three target regions of the new coronavirus N protein-encoding gene for its tests, but will accept one target, if the test is otherwise properly validated. They point test makers to the NIAID resource that provides positive control material- RNA genomes from SARS-CoV-2.
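To get a feel for why PCR can detect such tiny amounts of starting material, here is a minimal sketch (in Python, with purely illustrative numbers) of the ideal-case arithmetic: each thermal cycle at most doubles the number of copies of the target region.

```python
# Idealized PCR amplification: each cycle at most doubles the target region.
# The starting copy number and per-cycle efficiency below are illustrative only.

starting_copies = 10        # a minuscule amount of reverse-transcribed viral cDNA
efficiency = 0.95           # real reactions fall a bit short of perfect doubling

copies = starting_copies
for cycle in range(1, 41):  # a typical run is on the order of 40 cycles
    copies *= (1 + efficiency)
    if cycle % 10 == 0:
        print(f"after cycle {cycle:2d}: ~{copies:.3g} copies")

# With perfect doubling, 40 cycles would multiply the input by 2**40, roughly
# a trillion-fold -- which is also why trace contamination is such a hazard.
```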

Just the primers, Ma'am. These tubes contained dried DNA- the short primers with specific sequences needed to amplify specific portions of the SARS-CoV-2 viral genome. Using these requires quite a bit of other laboratory equipment and expertise.
Schematic of PCR, the exponential amplification of small amounts of DNA to huge amounts. Primers are in green, nucleotides are light blue, and the target template is dark blue.

So far, so good. But there are a range of test technologies and ways to do this testing, from the bare-bones set of primers, to a roboticized, fully automated system, each appropriate to different institutions and settings. To use the basic primer set, the lab would have to have RNA extraction kits or methods to purify the viral genomes from patient samples, then a reverse transcription kit or method, then a PCR machine and the other materials (nucleotides, high-temperature DNA polymerase, purified water and other proper solution ingredients). The PCR machine is basically a heater that cycles rapidly between the low temperature required for polymerizing and primer annealing, and the higher temperature required to melt all the DNA strands apart so that another round of primer annealing can take place. And all this needs to happen in very clean conditions, since PCR is exceedingly sensitive (of course) to small amounts of contamination. Lastly, the DNA product is typically detected by trace fluorescent markers that light up only double-stranded DNA, and can generally be detected right in the tube, with an advanced PCR machine.
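To make the cycling concrete, here is a hypothetical three-phase run profile of the sort a real-time PCR machine steps through; the exact temperatures, times, and cycle counts vary by kit and enzyme, and the numbers below are only representative.

```python
# A representative (hypothetical) RT-qPCR run profile, expressed as plain data.
# Actual values depend on the kit, enzymes, and instrument being used.

run_profile = {
    "reverse_transcription": {"temp_C": 50, "minutes": 15},  # viral RNA -> cDNA
    "polymerase_activation": {"temp_C": 95, "minutes": 2},
    "cycling": {
        "cycles": 40,
        "steps": [
            {"name": "denature", "temp_C": 95, "seconds": 3},            # melt strands apart
            {"name": "anneal_and_extend", "temp_C": 55, "seconds": 30},  # primers bind, polymerase copies
        ],
    },
    "readout": "fluorescence of double-stranded DNA, read each cycle in the tube",
}

for phase, details in run_profile.items():
    print(phase, "->", details)
```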

Automated sample handling machines are used in clinical labs.

Virtually all of this can be mustered by any competent molecular biology lab. Results would take a few days, due to the work involved in all the setup steps. The PCR itself and analysis of its results would take a few hours. But such labs do not operate at the requisite scale, or for this purpose. That is the province of clinical testing labs, which come in various sizes, from a small hospital in-house operation to a multinational behemoth. The latter run these tests on a vast, mechanized scale. They might manufacture the DNA primers themselves, or buy them in bulk, and have the proper logistical structures to do these tests from scratch in a reproducible way, to a high standard. Providers at these scales need different kinds of materials for their testing. A small provider may need a turn-key solution that comes with pre-packaged cassettes that just need the sample added before plugging into the machine, while a larger provider would save costs by using bulk reagents and massively robotized sample handling and PCR machines.

A one-hour test in a turn-key package. But at relatively high cost.

So who are the players and what is the status? The CDC did not, for some reason, use the WHO test, or tests already developed in China, whose capacity for such manufacturing and testing is prodigious. The CDC at first didn't allow anyone else to run the tests, and when they did, they did not work correctly. It has been a bad scene and much valuable time has been lost- time that resulted in the US losing any chance of containment. Now, the FDA is authorizing others to run these tests, with detailed instructions about sampling, extraction, and machinery to be used, and is slowly granting authorization to selected manufacturers and kit makers for more kinds of tests.

Large suppliers like Roche and ThermoFisher have just been approved to supply clinical labs with testing systems. Most significant is Roche, whose tests are pre-positioned and ready to go already at clinical labs around the country. The biggest clinical lab, ominously named LabCorp, offers a home-made test, but only "several thousand tests per day", which is not yet the capacity needed. So capacity for testing will rise very rapidly, and soon enable the diagnostic and surveillance testing that is so important, and has been missing to date.

  • Notes on previous pandemics.

Post script:
An aspect I forgot to include is how to select the portions of the viral genome sequence to include in testing kits. Different institutions have clearly come up with primers to different genes, few as they are, and regions within those genes. For example, "The primers currently target the N1, N2, and RP genes of the virus, but these are subject to change."; "In particular, the test detects the presence of SARS-CoV-2’s E gene, which codes for the envelope that surrounds the viral shell, and the gene for the enzyme RNA-dependent RNA polymerase." There is a balance between finding regions and primer sites that are unique to the particular virus you are interested in, so cross-reaction to other viruses is 100% eliminated, and the problem of viral drift and mutation. Some regions of viral genomes mutate much more rapidly than others, but these viruses tend to mutate at pretty high rates overall, so keeping a test current from one year to the next can be challenging. That is also what our immune systems have to deal with, as cold and flu viruses change continually to evade our defenses. So the specific DNA primer targets of a test need to be relatively highly conserved, but not too highly conserved, to put it in evolutionary terms, and the regulating agencies have to keep a close eye on this issue as they approve various test versions, to find a proper balance of high specificity and long-term usability.
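The trade-off described above can also be framed computationally: given aligned genomes, one looks for windows that are identical across the strains of the virus being targeted but divergent from its relatives. The sketch below uses invented sequences purely for illustration; real primer design also weighs melting temperature, GC content, secondary structure, and much else.

```python
# Toy sketch of the conservation-vs-specificity trade-off in choosing primer targets.
# All sequences are invented; this is not a primer design tool.

target_strains = [               # aligned fragments from the virus we want to detect
    "ATGACCAAGGTTCCA",
    "ATGACCAAGGTTCCA",
    "ATGACCAAAGTTCCA",
]
related_virus = "ATGTCCAAGCTTACA"   # a relative the test must NOT react with

def conserved_within(seqs, start, length):
    """Fraction of window positions identical across all target strains."""
    window = [s[start:start + length] for s in seqs]
    return sum(len(set(column)) == 1 for column in zip(*window)) / length

def distinct_from(seqs, other, start, length):
    """Fraction of window positions where the target differs from the relative."""
    reference = seqs[0][start:start + length]
    return sum(a != b for a, b in zip(reference, other[start:start + length])) / length

window_len = 8
for start in range(len(target_strains[0]) - window_len + 1):
    score = (conserved_within(target_strains, start, window_len)
             * distinct_from(target_strains, related_virus, start, window_len))
    print(f"window at {start:2d}: combined score = {score:.2f}")
```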

Post-Post script:
Yet more significant testing solutions have emerged by late March, including a rapid (~10 minute) system from Abbott, and rapid antigen testing kits that also render results in the ~10 minute range. This speed is enormously helpful, obviously, from the patient, provider, and health system perspectives. The Abbott system is based on something called isothermal PCR, which gets rid of the temperature cycling described above. It is run at an intermediate temperature (~60 degrees C) where the DNA is somewhat loose, and primers can invade duplex strands, and it also uses a DNA polymerase that can displace duplex DNA as it plows ahead. This plus some other clever tricks allows the DNA amplification process to happen continuously in the reaction tube, going to completion in the rapid time quoted for these tests. These tests also tend to be tough- relatively robust to junk in the samples, and variations in temperature and other conditions.

The antibody (serology) tests that are coming online are particularly significant, since they can be used for wide-spread population surveillance, to figure out what proportion of the population has been exposed, even if no active infection is present. Due to what seems like a complete or virtually complete lack of contact tracing + quarantine, the current pandemic will only stop once most of the population has been exposed, providing herd immunity. Before that point, anytime we give up self-isolation, it will start over again, due to the relatively high rate of low- or asymptomatic cases, and their lengthy course. Health care workers that have been exposed and recovered will have a special role before then by being able to freely staff hospitals that otherwise may be in dire straits.

Saturday, October 5, 2019

High Intelligence is Highly Overrated by Highly Intelligent People

AI, the singularity, and watching way too much science fiction: Review of Superintelligence by Nick Bostrom.

How far away is the singularity? That is the point when machine intelligence exceeds human intelligence, after which it is thought that this world will no longer be ours to rule. Nick Bostrom, a philosopher at Oxford, doesn't know when this will be, but is fearful of its consequences, since, if we get it wrong, humanity's fate may not be a happy one.

The book starts strongly, with some well argued and written chapters about the role of intelligence in humanity's evolution, and the competitive landscape of technology today that is setting the stage for this momentous transition. But thereafter, the armchair philosopher takes over, with tedious chapters of hairsplitting and speculation about how fast or slow the transition might be, how collaborative among research groups, and especially, how we could pre-out-think these creations of ours, to make sure they will be well-disposed to us, aka "the control problem".

Despite the glowing blurbs from Bill Gates and others on the jacket, I think there are fundamental flaws with this whole approach and analysis. One flaw is a failure to distinguish between intelligence and power. Our president is a moron. That should tell us something about this relationship. It is not terribly close- the people generally acknowledged as the smartest in history have rarely been the most powerful. This reflects a deeper flaw, which is, as usual, a failure to take evolution and human nature seriously. The "singularity" is supposed to furnish something out of science fiction- a general intelligence superior to human intelligence. But Bostrom and others seem to think that this means a fully formed human-like agent, and those are two utterly different things. Human intelligence takes many forms, and human nature is composed of many more things than intelligence. Evolution has strained for billions of years to form our motivations in profitable ways, so that we follow others when necessary, lead them when possible, define our groups in conventional ways that lead to warfare against outsiders, etc., etc. Our motivational and social systems are not the same as our intelligence system, and to think that anyone making an AI with general intelligence capabilities will, will want to, or even can, just reproduce the characteristics of human motivation to tack on and serve as its control system, is deeply mistaken.

The fact is that we have AI right now that far exceeds human capabilities. Any database is far better at recall than humans are, to the point that our memories are atrophying as we compulsively look up every question we have on Wikipedia or Google. And any computer is far better at calculations, even complex geometric and algebraic calculations, than we are in our heads. That has all been low-hanging fruit, but it indicates that this singularity is likely to be something of a Y2K snoozer. The capabilities of AI will expand and generalize, and transform our lives, but unless weaponized with explicit malignant intent, it has no motivation at all, let alone the motivation to put humanity into pods for its energy source, or whatever.

People-pods, from the Matrix.

The real problem, as usual, is us. The problem is the power that accrues to those who control this new technology. Take Mark Zuckerberg for example. He stands at the head of a multinational megacorporation that has inserted its tentacles into the lives of billions of people, all thanks to modestly intelligent computer systems designed around a certain kind of knowledge of social (and anti-social) motivations. All in the interests of making another dollar. The motivations for all this do not come from the computers. They come from the people involved, and the social institutions (of capitalism) that they operate in. That is the real operating system that we have yet to master.

  • Facebook - the problem is empowering the wrong people, not the wrong machines.
  • Barriers to health care.
  • What it is like to be a psychopathic moron.

Saturday, July 20, 2019

We'll Keep Earth

The robots can have the rest of the universe.

The Apollo 11 anniversary is upon us, a wonderful achievement and fond memory. But it did not lead to the hopeful new-frontier future that has been peddled by science fiction for decades, for what are now obvious reasons. Saturn V rockets do not grow on trees, nor is space, once one gets there, hospitable to humans. Earth is our home, where we evolved and are destined to stay.

But a few among us have continued taking breathtaking adventures among the planets and toward other stars. They have done pirouettes around the Sun and all the planets, including Pluto. They are our eyes in the heavens- the robots. I have been reading a sober book, Nick Bostrom's Superintelligence, which works through in painstaking, if somewhat surreal, detail what artificial intelligence will become in the not too distant future. Whether there is a "singularity" in a few decades, or farther off, there will surely come a time when we can reproduce human level intelligence (and beyond) in machine form. Already, machines have far surpassed humans in memory capacity, accuracy, and recall speed, in the form of databases that we now rely on to run every bank, government, and app. It seems inescapable that we should save ourselves the clunky absurdity, vast expense, and extreme dangers of human spaceflight and colonization in favor of developing robots with increasing capabilities to do all that for us.

It is our fleet of robots that can easily withstand the radiation, weightlessness, vacuum, boredom, and other rigors of space. As they range farther, their independence increases. On the Moon, at 1.3 light seconds away, we can talk back and forth, and control things in near real time from Earth. The Mars rovers, on the other hand, needed to have some slight intelligence to avoid obstacles and carry out lengthy planned maneuvers, being roughly 15 light-minutes from Earth. Having any direct control over rovers and other probes farther afield is increasingly impossible, with Jupiter 35 minutes away, and Neptune four light hours away. Rovers or drones contemplated for Saturn's interesting moon Titan will be over a light hour away, and will need extensive autonomous intelligence to achieve anything.
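Those one-way delays follow directly from distance divided by the speed of light. As a quick back-of-the-envelope check (using rough average distances in kilometers, which of course swing widely as the planets move):

```python
# Back-of-the-envelope one-way light delays. Distances are rough averages and
# vary a great deal with orbital positions; this is only meant to show the scale.

C_KM_PER_S = 299_792          # speed of light, km/s

distances_km = {
    "Moon": 384_000,
    "Mars": 225_000_000,
    "Jupiter": 780_000_000,
    "Saturn / Titan": 1_430_000_000,
    "Neptune": 4_500_000_000,
}

for body, km in distances_km.items():
    seconds = km / C_KM_PER_S
    if seconds < 120:
        print(f"{body:15s} ~{seconds:.1f} light-seconds one way")
    elif seconds < 2 * 3600:
        print(f"{body:15s} ~{seconds / 60:.0f} light-minutes one way")
    else:
        print(f"{body:15s} ~{seconds / 3600:.1f} light-hours one way")
```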

These considerations strongly suggest that our space program is, or should be, in large part joined with our other artificial intelligence and robotics activities. That is how we are going to be able to achieve great things in space, exploring far and wide to figure out how we came to be, what other worlds are like, and whether life arose on them as well. Robots can make themselves at home in the cosmos in a way that humans never will.

Matt Damon, accidentally marooned on Mars.

Bostrom's book naturally delves into our fate, once we have been comprehensively outclassed by our artificial creations. Will we be wiped out? Uploaded? Kept as pets? Who knows? But a reasonable deal might be that the robots get free rein to colonize the cosmos, spreading as far as their industry and inventiveness can carry them. But we'll keep Earth, a home for a species that is bound to it by evolution, sentiment, and fate, and hopefully one that we can harness some of that intelligence to keep in a livable, even flourishing, condition.


Saturday, June 15, 2019

Can Machines Read Yet?

Sort of, and not very well.

Reading- such a pleasure, but never time enough to read all that one would like, especially in technical fields. Scholars, even scientists, still write out their findings in prose- which is the richest form of communication, but only if someone else has the time and interest to read it. The medical literature, at the flagship NCBI PubMed resource, stands at about 30 million articles in abstract and lightly annotated form. Its partner, PMC, has 5.5 million articles in full text. This represents a vast trove of data which no one can read through, yet which tantalizes with its potential to generate novel insights, connections, and comprehensive and useful models, were we only able to harvest it in some computable form.

That is one of the motivations for natural language processing, or NLP, one of many subfields of artificial intelligence. What we learn with minimal effort as young children, machines have so far been unable to truly master, despite decades of effort and vast computational power. Recent advances in "deep learning" have made great progress in pattern parsing, and learning from large sets of known texts, resulting in the ability to translate one language to another. But does Google Translate understand what it is saying? Not at all. Understanding has taken strides in constricted areas, such as phone menu interactions, and Siri-like services. As long as the structure is simple, and has key words that tip off meaning, machines have started to get the hang of verbal communication.

But dealing with extremely complex texts is another matter entirely. NLP projects directed against the medical literature have been going on for decades, with relatively little to show, since the complexity of the corpus far outstrips the heuristics used to analyze it. These papers are, indeed, often very difficult for humans to read. They are frequently written by non-English speakers, or just bad writers. And the ideas being communicated are also complex, not just the language. The machines need to have a conceptual apparatus ready to accommodate, or better yet, learn within such a space. Recall how perception likewise needs an ever-expanding database / model of reality. Language processing is obviously a subfield of such perception. These issues raise a core question of AI- is general intelligence needed to fully achieve NLP?


I think the answer is yes- the ability to read human text with full understanding assumes a knowledge of human metaphors, general world conditions, and specific facts and relations from all areas of life which amounts to general intelligence. The whole point of NLP, as portrayed above, is not to spew audio books from written texts, (which is already accomplished, in a quite advanced way), but to understand what it is reading fully enough to elaborate conceptual models of the meaning of what those texts are about. And to do so in a way that can be communicated back to us humans in some form, perhaps diagrams, maps, and formulas, if not language.

The intensive study of NLP processing over the PubMed corpus reached a fever pitch in the late 2000s, but has been quiescent for the last few years, generally for this reason. The techniques that were being used- language models, grammar, semantics, stemming, vocabulary databases, etc. had fully exploited the current technology, but still hit a roadblock. Precision could be pushed to ~80% levels for specific tasks, like picking out the interactions of known molecules, or linking diseases with genes mentioned in the texts. But general understanding was and remains well out of reach of these rather mechanical techniques. This is not to suggest any kind of vitalism in cognition, but only that we have another technical plateau to reach, characterized by the unification of learning, rich ontologies (world models), and language processing.

The new neural network methods (TensorFlow, etc.) promise to provide the latter part of the equation, sensitive language parsing. But from what I can see, the kind of model we have of the world, with infinite learnability, depth, spontaneous classification capability, and relatedness, remains foreign to these methods, despite the several decades of work lavished on databases in all their fascinating iterations. That seems to be where more work is needed, to get to machine-based language understanding.


  • What to do about media pollution?
  • Maybe ideas will matter eventually in this campaign.
  • Treason? Yes.
  • Stalinist confessions weren't the only bad ones.
  • Everything over-the-air ... the future of TV.

Saturday, December 15, 2018

Screwy Locomotion: the Spirochete

How do spirochete bacteria move?

Getting around isn't easy. Some of our greatest technological advancements have been in locomotion. Taming, then riding, horses; railroads, automobiles, airplanes. Microorganisms have been around for a long time, and while flying may be easy for them, getting through thicker media is not, nor is steering. The classic form of bacterial motion is with an outboard motor- the flagellum. The prototypical bacterium E. coli has several flagella sprinkled around its surface. Each flagellum is slightly helical, thus forming a languid sort of propeller, which if turned along its helical axis, (at roughly 6,000 rpm), can propel the bacterium through watery media. Turning multiple flagella in this same direction (counter-clockwise) encourages them all to entangle coherently and unite into a bundle. It turns out, however, that bacteria can easily switch their motors to the opposite direction, which causes the flagella to separate, and also to flail about, (since for a left-handed helix, this is the "wrong" direction), sending the cell in random directions.

A typical bacterium with multiple flagella, which will cooperate in forming a bundle when all turned in the same direction, consonant with their helicity (i.e. counter-clockwise).

These are the two steering options for most bacteria- forward or flop about. And this choice is made all the time by typical bacteria, which can sense good things in front (keep swimming forward), or sense bad things in front / good things elsewhere (flail about for a second, before resuming swimming). The flagellar base, where the motor resides, uses both ATP and the proton motive force (i.e. protons that were pumped out by cellular respiration, or the breakdown of food). The protons drive the motor, and ATP drives the construction of the flagellum, which is itself a very complicated dance of self-organization, built on the foundation of an extrusion/injection system also used by pathogenic bacteria to inject things into their targets.

Animated video describing how the flagellum and its base are constructed.

But sometimes a bacterium really needs to get somewhere badly, and is faced with viscous fluids, perhaps inside other organisms, or put out by them to defend themselves. One human defense mechanism is a DNA net thrown out by neutrophils, a type of white blood cell. Spirochetes have come up with an ingenious (by evolution, anyway!) solution- the inboard motor. This is not a motor sticking out of the bottom, but a motor fully enclosed within the cell wall of the bacterium.

Choice of directions (small forward or back arrows) that are dictated by the rotation of the flagella (blue). One set of flagella originates at the rear, and a second set originates at the front. Only if they turn in opposite directions (top two panels) does the spirochete swim coherently, either forward or back.

How can that work? It is an interesting story. Spirochetes, as their name implies, are corkscrews in shape. In mutants lacking flagella, they instead relax to a normal bacterial rod shape. So they have flagella, but these are positioned inside the cell wall, in the periplasmic space. Indeed they form the central axis around which the corkscrew rotates, with one set of (approximately ten) flagella coming from the rear and another set from the front, each ending up around the middle. If each set rotates as hard as it can, they drive their respective ends to counter-rotate, in reaction. If the front motors (of which there are several) turn their flagella counterclockwise, as viewed from the back, they will, in reaction, drive (and bend) the nose into a clockwise orientation. If the back set of motors runs clockwise, driving their flagella counterclockwise (also as seen from the back), then the rear part of the bacterium counter-rotates in clockwise fashion, and the coordinated action drives spiral bending and an overall drilling motion forward.

Video of a non-spirochete bacterium with its flagellum stuck to the slide, causing the tail to wag the dog.

Video of spirochete bacteria in motion.

On the other hand, if the motors on the opposite ends of the bacterium go in the same direction, then the flagella induce opposite, instead of coordinated, counter-rotations, and the bacterium doesn't tumble randomly, as normal bacteria do, but contorts and flexes in the middle, with a similar re-orienting effect. This ability incidentally shows the remarkable toughness of these bacteria, considering the lipid bilayer nature of their key protective membranes. These bacteria can also easily reverse direction, by sending both sets of motors in reverse, operating very much like little drills. How this exquisite coordination works has not yet been worked out, however.

Reconstruction, drawn from electron microscopy, of one end of a spirochete, showing the motor orientations, the sharp hook/base of the flagellum, the membrane and cell wall structure, and one of the signaling proteins (MCP), which transmits  a sensory signal to dictate the direction of motor rotation.

One thing that is known, however, is that spirochete motors are massive- almost twice the size of E. coli motors, with special outside hooks to propagate power through the tight turn inside the periplasmic space. It is interesting that these motors can be scaled up in size, with more subunits, and more proton ports for power, as if they were just getting more cylinders in a (fossil fuel-burning) car engine.

Structure of the Borrelia flagellar motor, showing the stator (blue), which is attached to the membrane and stabilized against rotation; the rotor (yellow spokes and teal C-ring), and the gateway ATPase complex which unfolds and transmits the structural components (proteins) into the central channel from which they build the machine.

All this is in service of getting through messy, gelatinous material. The model for most of this work is the spirochete responsible for Lyme disease. The characteristic red ring seen in that infection is thought to track the progress of the spirochete outward and away from the original tick bite site, in relation to the immune system catching up via inflammation. But such viscous environments are quite common in the organic muck of the biosphere, including biofilms established by other bacteria. So the evolutionary rationale for the superpowers of spirochetes is probably quite ancient.

  • EPI has a comprehensive solution for righting the inequality ship.
  • John Dingell also has a solution.
  • "Entitlements" are OK- on the importance of social insurance. Remember, the military is always insolvent, from a budgetary perspective.
  • Sleazebags to the end.
  • On the types of epilepsy.
  • A persistent cycle of resource extraction, incumbent interests, regressive politics, and non-development. Let's not go there ourselves.
  • A lesson in jazz.

Sunday, June 24, 2018

Now You See it

Rapid-fire visual processing analysis shows how things work in the brain.

We now know what Kant only deduced- that the human mind is full of patterns and templates that shape our perceptions, not to mention our conceptions, unconsciously. There is no blank slate, and questions always come before answers. Yet at the same time, there must be a method to the madness- a way to disentangle reality from our preconceptions of it, so as to distinguish the insane from the merely mad.

The most advanced model we have for perception in the brain is the visual system, which commandeers so much of it, and whose functions reside right next to the consciousness hot-spot. From this system we have learned about the step-wise process of visual scene interpretation, which starts with the simplest orientation, color, movement, and edge finding, and goes up, in ramifying and parallel processes, to identifying faces and cognition. But how and when do these systems work with retrograde patterns from other areas of the brain- focusing attention, or associating stray bits of knowledge with the visual scene?

A recent paper takes a step in this direction by developing a method to stroboscopically time-step through the visual perception process. They find that it involves a process of upwards processing prior to downwards inference / regulation. Present an image for only 17 milliseconds, and the visual system does not have a lot of conflicting information. A single wave of processing makes its way up the chain, though it has quite scant information. This gives an opportunity to look for downward chains of activity, (called "recurrent"), also with reduced interference from simultaneous activities. The subjects were able to categorize the type of scene (barely over random chance) when presented images for this very brief time, surprisingly enough.

The researchers used a combination of magnetoencephalography (MEG), which responds directly to electrical activity in the brain and has very high time resolution, but not so great depth or spatial resolution, with functional magnetic resonance imaging (fMRI), which responds to blood flow changes in the brain, and has complementary characteristics. Their main experiments measured how accurately people categorized pictures, between faces and other objects, as the time of image presentation was reduced, down from half a second to 34, then 17 milliseconds, and as they were presented faster, with fewer breaks in between. The effect this had on task performance was, naturally, that accuracy degraded, and also that what recognition was achieved was slowed down, even while the initial sweep of visual processing came up slightly faster. The theory was that this kind of categorization needs downward information flow from the highest cognitive levels, so this experimental setup might lower the noise around other images and perception and thus provide a way to time the perception process and dissect its parts, whether feed-forward or feed-back.

Total time to categorization of images presented at different speeds. The speediest presentation (17 milliseconds) gives the least accurate discernment (Y-axis), but also the fastest processing, presumably dispensing with at least some of the complexities that come up later in the system with feedbacks, etc., which may play an important role in accuracy.

They did not just use the subject reports, though. They also used the fMRI and MEG signals to categorize what the brains were doing, separately from what they were saying. The point was to see things happening much faster- in real time- rather than to wait for the subjects to process the images, form a categorical decision, generate speech, etc. etc. They fed this MEG data through some sort of classifier, (SVM), getting reliable categorizations of what the subjects would later report- whether they saw a face or another type of object. They also focused on two key areas of the image processing stream- the early visual cortex, and the inferior temporal cortex, which is a much later part of the process, in the ventral stream, which does object classification.
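The time-resolved decoding itself is conceptually simple: at each time point after image onset, train a classifier to predict the reported category from the sensor data and see how well it does. The sketch below is not the authors' pipeline; it uses simulated numbers in place of MEG recordings and scikit-learn's stock SVM, just to show the shape of the analysis.

```python
# Sketch of time-resolved decoding: a linear SVM is trained at each time point to
# predict stimulus category (face vs. other) from simulated "sensor" data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 200, 50, 60            # hypothetical dimensions
labels = rng.integers(0, 2, size=n_trials)            # 0 = other object, 1 = face
data = rng.standard_normal((n_trials, n_sensors, n_times))
data[labels == 1, :5, 30:45] += 0.4                   # weak, late category signal

accuracy = []
for t in range(n_times):
    scores = cross_val_score(SVC(kernel="linear"), data[:, :, t], labels, cv=5)
    accuracy.append(scores.mean())

# Accuracy should sit near chance (0.5) early on and rise in the injected window,
# which is how one reads off *when* category information becomes available.
print([round(a, 2) for a in accuracy[::10]])
```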

The key finding was that, once all the imaging and correlation was done, the early visual cortex recorded a "rebound" bit of electrical activity as a distinct peak from the initial processing activity. The brevity of picture presentation (34 ms and 17 ms) allowed them to see (see figure below) what longer presentation of images hides in a clutter of other activity. The second peak in the early visual cortex corresponds quite tightly to one just slightly prior in the inferior temporal cortex. While this is all very low signal data and the product of a great deal of processing and massaging, it suggests that decisions about what something is not only happen in the inferior temporal cortex, but impinge right back on the earlier parts of the processing stream to change perceptions at an earlier stage. The point of this is not entirely clear, really, but perhaps the most basic aspects of object recognition can be improved and tuned by altering early aspects of scene perception. This is another step in our increasingly networked models of how the brain works- the difference between a reality modeling engine and a video camera.

Echoes of categorization. The red trace is from the early visual processing system, and the green is from the much later inferior temporal cortex (IT), known to be involved in object categorization / identification. The method of very brief image presentation (center) allows an echo signal that is evidently a feedback (recession) from the IT to come back into the early visual cortex- to what purpose is not really known. Note that overall, it takes about 100 milliseconds for forward processing to happen in the visual system, before anything else can be seen.

  • Yes, someone got shafted by the FBI.
  • We are becoming an economy of feudal monoliths.
  • Annals of feudalism, cont. Workers are now serfs.
  • Promoting truth, not lies.
  • At the court: better cowardly and right wing, than bold and right wing.
  • Coal needs to be banned.
  • Trickle down not working... the poor are getting poorer.

Saturday, April 22, 2017

How Speech Gets Computed, Maybe

Speech is just sounds, so why does it not sound like noise?

Ever wonder how we learn a language? How we make the transition from hearing how a totally foreign language sounds- like gibberish- to fluent understanding of the ideas flowing in, without even noticing their sound characteristics, or at best, appreciating them as poetry or song- a curious mixture of meaning and sound-play? Something happens in our brains, but what? A recent paper discusses the computational activities that are going on under the hood.

Speech arrives as a continuous stream of various sounds, which we can split up into discrete units (phonemes), which we then have to reconstruct as meaning-units, and detect their long-range interactions with other units such as the word, sentence, and paragraph. That is, their grammar and higher-level meanings. Obviously, in light of the difficulty of getting machines to do this well, it is a challenging task. And while we have lots of neurons to throw at the problem, each one is very slow- a mere 20 milliseconds at best, compared to well under a nanosecond for current computers, about 100 million times faster. It is not a very promising situation.

An elementary neural network, used by the authors. Higher levels operate more slowly, capturing broader meanings.

The authors focus on how pieces of the problem are separated, recombined, and solved, in the context of a stream of stimuli facing a neural network like the brain. They realize that analogical thinking, where we routinely schematize concepts and remember them in relational ways that link between sub-concepts, super-concepts, similar concepts, coordinated experiences, etc. may form a deep precursor from non-language thought that enabled the rapid evolution of language decoding and perception.

One aspect of their solution is that the information of the incoming stream is used, but not rapidly discarded as one would in some computational approaches. Recognizing a phrase within the stream of speech is a big accomplishment, but the next word may alter its interpretation fundamentally, and require access to its component parts for that reinterpretation. So bits of the meaning hierarchy (words, phrases) are identified as they come in, but must also be kept, piece-wise, in memory for further Bayesian consideration and reconstruction. This is easy enough to say, but how would it be implemented?

From the paper, it is hard to tell, actually. The details are hidden in the program they use and in prior work. They use a neural network of several layers, originally devised to detect relational logic from unstructured inputs. The idea was to use rapid classifications and hierarchies from incoming data (specific propositional statements or facts) to set up analogies, which enable drawing very general relations among concepts/ideas. They argue that this is a good model for how our brains work. The kicker is that the same network is quite effective in understanding speech, relating (binding) nearby (in time) meaning units together even while it also holds them distinct and generates higher-level logic from them. It even shows oscillations that match quite closely those seen in the active auditory cortex, which is known to entrain its oscillations to speech patterns. High activity at the 2 and 4 Hz bands seems to relate to the pace of speech.
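One way to picture the "slower dynamics at higher levels" that the authors describe (and quote below) is a stack of leaky integrators whose time constants grow with each level, so that upper layers change slowly and effectively bind longer stretches of the input. The sketch below is a cartoon of that idea under those assumptions, not the authors' actual network.

```python
# Cartoon of hierarchical temporal binding: each layer is a leaky integrator with a
# longer time constant than the one below, so higher layers accumulate evidence
# over longer stretches of the input and fluctuate more slowly.
import numpy as np

rng = np.random.default_rng(1)
T = 400
input_stream = rng.standard_normal(T)       # stand-in for low-level acoustic features

time_constants = [2.0, 10.0, 50.0]          # phoneme-ish, word-ish, phrase-ish scales
layers = np.zeros((len(time_constants), T))

signal = input_stream
for i, tau in enumerate(time_constants):
    state = 0.0
    for t in range(T):
        state += (signal[t] - state) / tau  # larger tau = longer memory, slower change
        layers[i, t] = state
    signal = layers[i]                      # each layer feeds the one above it

# Higher layers cross their own mean less often, i.e. they oscillate more slowly.
for i, tau in enumerate(time_constants):
    crossings = int(np.sum(np.diff(np.sign(layers[i] - layers[i].mean())) != 0))
    print(f"layer {i} (tau={tau:4.0f}): {crossings} mean-crossings in {T} steps")
```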

"The basic idea is to encode the elements that are bound in lower layers of a hierarchy directly from the sequential input and then use slower dynamics to accumulate evidence for relations at higher levels of the hierarchy. This necessarily entails a memory of the ordinal relationships that, computationally, requires higher-level representations to integrate or bind lower-level representations over time—with more protracted activity. This temporal binding mandates an asynchrony of representation between hierarchical levels of representation in order to maintain distinct, separable representations despite binding."

This result and the surrounding work, cloudy though they are, also forms an evolutionary argument, that speech recognition, being computationally very similar to other forms of analogical / relational / hierarchical thinking, may have arisen rather easily from pre-existing capabilities. Neural networks are all the rage now, with Google among others drawing on them for phenomenal advances in speech and image recognition. So there seems to be a convergence from the technology and research sides to say that this principle of computation, so different from the silicon-based sequential and procedural processing paradigm, holds tremendous promise for understanding our brains as well as exceeding them.


Saturday, June 18, 2016

Perception is Not a One-Way Street

Perceptions happen in the brain as a reality-modeling process that uses input from external senses, but does so gradually in a looping (i.e. Bayesian) refinement process using motor activity to drive attention and sensory perturbation.

The fact that perceptions come via our sensory organs, and stop once those organs are impaired, strongly suggests a simple camera-type model of one-way perceptual flow. Yet, recent research all points in the other direction, that perception is a more active process wherein the sense organs are central, but are also directed by attention and by pre-existing models of the available perceptual field in a top-down way. Thus we end up with a cognitive loop where the mind holds models of reality which are incrementally updated by the senses, but not wiped and replaced as if they were simple video feeds. The model is the perception and is more valuable than the input.

One small example of a top-down element in this cognitive loop is visual attention. Our eyes are little outposts of the brain, and are told where to point. Even if something surprising happens in the visual field, the brain has to do a little processing before shifting attention, and thus the eyeballs, to that event. Our eyes are shifting all the time, being pointed to areas of interest, following movie scenes, words on a page, etc. None of this is directed by the eyes themselves, (including the jittery saccade system), but by higher levels of cognition.

The paper for this week notes ironically that visual perception studies have worked very hard to eliminate eye and other motion from their studies, to provide consistent mapping of what the experimenters present, to where the perceptions show up in visual fields of the brain. Yet motion and directed attention are fundamental to complex sensation.

Other sensory systems vary substantially in their dependence on motion. Hearing is perhaps least dependent, as one can analyze a scene from a stationary position, though movement of either the sound or the subject, in time and space, is extremely helpful to enrich perception. Touch, through our hairs and skin, is intrinsically dependent on movement and action. Taste and smell are also, though in a subtler way. Any monotonic smell will die pretty rapidly, subjectively, as we get used to it. It is the bloom of fresh tastes with each mouthful or new aromas that creates sensation, as implied by the expression "clearing the palate". Aside from the issues of the brain's top-down construction of these perceptions through its choices and modeling, there is also the input of motor components directly, and dynamic time elements, that enrich / enliven perception multi-modally, beyond a simple input stream model.

The many loops from sensory (left) to motor (right) parts of the perceptual network. This figure is focused on whisker perception by mice.

The current paper discusses these issues and makes the point that since our senses have always been embodied and in-motion, they are naturally optimized for dynamic learning. And that the brain circuits mediating between sensation and action are pervasive and very difficult to separate in practice. The authors hypothesize very generally that perception consists of a cognitive quasi-steady state where motor cues are consistent with tactile and other sensory cues (assuming a cognitive model within which this consistence is defined), which is then perturbed by changes in any part of the system, especially sensory organ input, upon which the network seeks a new steady state. They term the core of the network the motor-sensory-motor (MSM) loop, thus emphasizing the motor aspects, and somewhat unfairly de-emphasizing the sensory aspects, which after all are specialized for higher abundance and diversity of data than the motor system. But we can grant that they are an integrated system. They also add that much perception is not conscious, so the fixation of a great deal of research on conscious reports, while understandable, is limiting.

"A crucial aspect of such an attractor is that the dynamics leading to it encompass the entire relevant MSM-loop and thus depend on the function transferring sensor motion into receptors activation; this transfer function describes the perceived object or feature via its physical interactions with sensor motion. Thus, ‘memories’ stored in such perceptual attractors are stored in brain-world interactions, rather than in brain internal representations."

A simple experiment. A camera is set up to watch a video screen, which shows light and dark half-screens that can move side to side. The software creates a sensory-motor loop, panning motors on the camera so that it tracks the visual edge, as shown in E. There is not much learning involved; rather, this is a demonstration of an algorithm's effective integration of motor and sensory elements in pursuit of a simple feature.

Eventually, the researchers present some results, from a mathematical model and a robot that they have constructed. The robot has a camera, motors to move around with, a computer, and the algorithm. The camera sends only change data, as the retina does, not entire visual scenes, and the visual field is extremely simple: a screen with a dark side and a light side, which can move right or left. The motorized camera system, using equations approximating an MSM loop, can relatively easily home in on and track the visual right/left divider, and thus demonstrate dynamic perception driven by both motor and sensory elements. The cognitive model was, naturally, implicit in the computer code that ran the system, which was expecting to track just such a light/dark line. One must say that this was not a particularly difficult or novel task, so the heart of the paper is its introductory material.
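To make the idea concrete, here is a minimal one-dimensional sketch of such a motor-sensory-motor loop (my own toy version, not the authors' code). The function names, the single scalar "edge position", and the panning gain are assumptions for illustration, and the retina-like change-only encoding is simplified away. The point is only that the loop settles into a steady state where sensing and motion agree, and re-converges after each perturbation of the edge.

def sense(edge_position, camera_position):
    """Sensory input: signed offset of the light/dark edge within the camera's view."""
    return edge_position - camera_position

def msm_tracking_loop(edge_positions, gain=0.5):
    """Motor output: pan the camera a fraction of the sensed offset each step."""
    camera = 0.0
    trace = []
    for edge in edge_positions:          # the screen's edge may jump around
        offset = sense(edge, camera)     # where is the edge relative to the camera?
        camera += gain * offset          # pan toward it
        trace.append(round(camera, 2))
    return trace

# The edge sits at +10, then jumps to -5; the loop converges, is perturbed,
# and finds a new steady state where sensing and motion agree again.
print(msm_tracking_loop([10] * 8 + [-5] * 8))

In this framing, "tracking the edge" is not a stored picture of the world but a stable brain-world (or here, code-world) interaction, which is roughly what the quoted passage above means by memories stored in perceptual attractors rather than in internal representations.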


  • The US has long been a wealthy country.
  • If we want to control carbon emissions, we can't wait for carbon supplies to run out.
  • Market failure, marketing, and fraud, umpteenth edition. Trump wasn't the only one getting into the scam-school business.
  • Finance is eating away at your retirement.
  • Why is the House of Representatives in hostile hands?
  • UBI- utopian in a good way, or a bad way?
  • Trump transitions from stupid to deranged.
  • The fight against corruption and crony Keynesianism, in India.
  • Whom do policymakers talk to? Hint- not you.
  • Whom are you talking to at the call center? Someone playing hot potato.
  • Those nutty gun nuts.
  • Is Islam a special case, in how it interacts with political and psychological instability?
  • Graph of the week- a brief history of inequality from 1725.

Saturday, October 10, 2015

For Whom Shall the Robots Work?

When robots do everything, will we have work? Will we have income? A review of Martin Ford's "Rise of the Robots".

No one knows when it is coming, but eventually, machines will do everything we regard as work. Already some very advanced activities like driving cars and translating languages are done by machine, sometimes quite well. Manufacturing is increasingly automated, and computers have been coming for white collar jobs as well. Indeed, by Martin Ford's telling, technological displacement has already had a dramatic effect on the US labor market, slowing job growth, concentrating economic power in a few relatively small high-tech companies, and de-skilling many other jobs. His book is about the future, when artificial intelligence becomes realistic, and machines can do it all.

Leaving aside the question of whether we will be able to control these armies of robots once they reach AI, sentience, the singularity ... whatever one wishes to call it, a more immediate question is how the economic system should be (re-)organized. Up till now, humans have had to earn their bread by the sweat of their brow. No economic system has successfully decoupled effort in work (or predation / warfare / ideological parasitism) from income and sustenance. The communists tried, and what a hell that turned out to be. But Marx may only have been premature, not wrong, in predicting that the capitalist system would eventually lead to both incredible prosperity and incredible concentration of wealth.

Ford offers an interesting aside about capitalism (and he should know, having run a software company): that capitalists hate employees. For all the HR happy talk, and the team this and relationship that, every employee is an enormous cost drain. Salary is just the start: there are benefits, office space, liability for all sorts of personal problems, and the unpredictability of their quitting and taking all their training and knowledge with them. They are hateful, and tolerated only because, and insofar as, they are absolutely necessary.

At any rate, the incentives, whether personal or coolly economic, are clear. And the trends in wealth and income are likewise clear: employment is more precarious and less well paid (at the median), while income and wealth have concentrated strongly upward, for all sorts of reasons, including rightward politics, off-shoring, and the dramatic technological invasion of many forms of work. (Indeed, for the typical low-skill form of work, off-shoring is but a prelude to automation.) There is little doubt that, left to its own devices(!), our capitalist system will become ever more concentrated, with fewer companies running more technology and fewer employees to produce everything we need.

What happens then? The macroeconomic problem is that if everyone is unemployed, no one (except the 0.00001%) will be able to buy anything. While the provision of all our necessities by hopefully docile machines is not a bad thing, indeed the fulfillment of a much-imagined dream of humanity, some technical problems do arise, of which we can already see the glimmerings in our current society. Without the mass production lines and other requirements for armies of labor that democratized the US economy in the mid-20th century, we may be heading in the direction not only of Thomas Piketty's relatively wealth-heavy, unequal society of financial capital, but toward a degree of concentration we have not seen lately outside of the most fabulous African kleptocracies.

What we need is a fundamental rethinking of the connection between income and work, and the role of capital and capitalists. The real wealth of our society is an ever-accumulating inheritance of technology and knowledge that was built in common by many people. Whether some clever entrepreneur can leverage that inheritance into a ripping business model while employing virtually no actual humans should not entirely dictate the distribution of wealth. As Ford puts it:
"Today's computer technology exists in some measure because millions of middle-class taxpayers supported federal funding for basic research in the decades following World War II. We can be reasonably certain that those taxpayers offered their support in the expectation that the fruits of that research would create a more prosperous future for their children and grandchildren. Yet, the trends we looked at in the last chapter suggest we are headed toward a very different outcome. Beyond the basic moral question of whether a tiny elite should be able to, in effect, capture ownership of society's accumulated technological capital, there are also practical issues regarding the overall health of an economy in which income inequality becomes too extreme."

As a solution, Ford suggests the provision of a basic income for all citizens. This would start very minimally, at perhaps $10,000 per year, and perhaps be raised as the policy develops and the available wealth increases. He states that this could be funded relatively easily from existing poverty programs, and would at a stroke eliminate poverty. It is a very libertarian idea, beloved by Milton Friedman (in the form of a negative income tax), and it emphasizes the freedom of all citizens to do as they like with the money. Ford also brings up the quite basic principle that in a new, capital-intensive regime, we should be prepared to tax labor less and tax capital more. But he makes no specific suggestions in this direction ... one gets the impression that it cuts a little too close to home.

This is the part of the book where I part company, for several reasons. Firstly, I think work is a fundamental human good, and even a right. It is important for us to have larger purposes and to organize ourselves corporately (in a broad sense) to carry them out. It is also highly beneficial for people to be paid for the positive contributions they make to society. In contrast, the pathologies surrounding lives spent on unearned public support are well known. Secondly, the decline of employment in the capitalist economy has the perverse effect of weakening the labor market and dis-empowering workers, who must scramble for the fewer remaining jobs, accepting lower pay and worse conditions. Setting up a new system to compete at a frankly miserable sub-poverty level does little to correct this dynamic. Thirdly, I think there is plenty of work to be done, even if robots do everything we dislike doing. The scope for non-profit work, for environmental beautification, for social betterment, elder care, education, and entertainment is simply infinite. The only question is whether we can devise a social mechanism to carry it out.

This leads to the concept of a job guarantee. The government would provide paying work to anyone willing to participate in socially beneficial tasks. These would be real, well-paying jobs, geared to pay less than private market rates (or whatever we deem appropriate as the floor for a middle-class existence), but quite a bit more than basic support levels for those who do not work at all. Under this mechanism, a basic income is not wasted on those who have better-paying jobs, nor are the truly needy left to fend for themselves on sub-poverty incomes with no other programs of practical support. While the private market pays for any kind of profitable work, no matter how socially negative, guaranteed jobs would be collectively devised to have a positive social impact- that would be their only rationale. And most importantly, they would harness and cultivate the human work ethic toward social goals, which I think is a more socially and spiritually sustainable economic system in a post-industrial, even post-work, age.

The problem with a job guarantee, obviously, is the opportunity for corruption and mismanagement, when market discipline is lifted in favor of state-based decision making. Communist states have not provided the best track record of public-interest employment, though there have been bright spots. In the US, a vast sub-economy of nonprofit enterprises and volunteer organizations provides one basis for hope, and perhaps a mechanism of expansion. The government could take a set of new taxes on wealth, high incomes, fossil fuels, and financial transactions, and distribute the funds to public-interest non-profit organizations, including ones that operate internationally. It is a sector of our economy that merits growth, and it has the organizational capacity to absorb both money and workers that are otherwise slack.

Additionally, of course, many wholly public projects deserve resources, from infrastructure to health care to combatting climate change. We have enormous public-good needs that are going unaddressed. The governmental sector has shown good management in many instances, such as in the research sector, Medicare, and Social Security. Competitive grant systems are a model of efficient resource allocation, and could put some of the resources of a full job guarantee to good use, perhaps using layperson as well as expert review panels. Improving public management is itself a field for development, as part of the expanded public sector that a job guarantee would create.

In an interesting digression, Ford remarks on the curious case of Japan. Japan has undergone a demographic shift to an aged society. It had been predicted that the number of jobs in health care and elder care would boom, and that the society would have to import workers to do all the work. But that hasn't happened. In fact, Japan has been in a decades-long deflationary recession featuring, if anything, under-employment as the rule, exemplified by "freeters" who stay at home with their parents far into adulthood. What happened? It is another Keynesian parable: for all the needs one could imagine, few of them materialize, because the elderly are poor consumers- their income and their propensity to spend are both low.

The labor participation rate in the US.

Our deflationary decade, with declining labor participation rates, indicates that we are heading in the same direction. We need to find a way both to mitigate the phenomenal concentration of wealth that is destroying our political system and economy, and to create a mechanism that puts money into the hands of a sustainable middle class- one that does not rely on the diminishing capitalist notions of employment or continue down the road of techno-feudalism, but which enhances our lives individually and collectively.