Showing posts with label computer science. Show all posts

Sunday, June 24, 2018

Now You See It

Rapid-fire visual processing analysis shows how things work in the brain.

We now know what Kant only deduced- that the human mind is full of patterns and templates that shape our perceptions, not to mention our conceptions, unconsciously. There is no blank slate, and questions always come before answers. Yet at the same time, there must be a method to the madness- a way to disentangle reality from our preconceptions of it, so as to distinguish the insane from the merely mad.

The most advanced model we have for perception in the brain is the visual system, which commandeers so much of it, and whose functions reside right next to the consciousness hot-spot. From this system we have learned about the step-wise process of visual scene interpretation, which starts with the simplest orientation, color, movement, and edge finding, and goes up, in ramifying and parallel processes, to identifying faces and cognition. But how and when do these systems work with retrograde patterns from other areas of the brain- focusing attention, or associating stray bits of knowledge with the visual scene?

A recent paper takes a step in this direction by developing a method to stroboscopically time-step through the visual perception process. They find that it involves a process of upwards processing prior to downwards inference / regulation. Present an image for only 17 milliseconds, and the visual system does not have a lot of conflicting information. A single wave of processing makes its way up the chain, though it carries quite scant information. This gives an opportunity to look for downward chains of activity (called "recurrent"), also with reduced interference from simultaneous activities. Surprisingly enough, the subjects were still able to categorize the type of scene (barely above random chance) when presented images for this very brief time.

The researchers used a combination of magnetoencephalography (MEG), which responds directly to electrical activity in the brain and has very high time resolution, but not so great depth or spatial resolution, with functional magnetic resonance imaging (fMRI), which responds to blood flow changes in the brain, and has complementary characteristics. Their main experiments measured how accurately people categorized pictures, between faces and other objects, as the time of image presentation was reduced, down from half a second to 34, then 17 milliseconds, and as they were presented faster, with fewer breaks in between. The effect this had on task performance was, naturally, that accuracy degraded, and also that what recognition was achieved was slowed down, even while the initial sweep of visual processing came up slightly faster. The theory was that this kind of categorization needs downward information flow from the highest cognitive levels, so this experimental setup might lower the noise from other images and ongoing perception, and thus provide a way to time the perception process and dissect its parts, whether feed-forward or feed-back.

Total time to categorization of images presented at different speeds. The speediest presentation (17 milliseconds) gives the least accurate discernment (Y-axis), but also the fastest processing, presumably dispensing with at least some of the complexities that come up later in the system with feedbacks, etc., which may play an important role in accuracy.

They did not just use the subject reports, though. They also used the fMRI and MEG signals to categorize what the brains were doing, separately from what they were saying. The point was to see things happening much faster- in real time- rather than to wait for the subjects to process the images, form a categorical decision, generate speech, etc. etc. They fed this MEG data through a classifier (a support vector machine, or SVM), getting reliable categorizations of what the subjects would later report- whether they saw a face or another type of object. They also focused on two key areas of the image processing stream- the early visual cortex, and the inferior temporal cortex, which is a much later part of the process, in the ventral stream, which does object classification.
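To give a flavor of what that decoding step involves, here is a minimal sketch in Python with scikit-learn- not the paper's actual pipeline- assuming the MEG recordings have already been chopped into labelled trials of shape (trials x sensors x timepoints), with all names and sizes invented for illustration.

# Minimal sketch of time-resolved MEG decoding (illustrative only, not the
# paper's pipeline). Assumes `meg` holds epoched trials of shape
# (n_trials, n_sensors, n_timepoints) and `labels` marks face (1) vs. object (0).
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_timepoints = 200, 306, 120    # made-up sizes
meg = rng.standard_normal((n_trials, n_sensors, n_timepoints))
labels = rng.integers(0, 2, n_trials)

# Train and test a classifier separately at each timepoint: can the sensor
# pattern at time t predict what the subject will later report seeing?
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
accuracy = np.array([cross_val_score(clf, meg[:, :, t], labels, cv=5).mean()
                     for t in range(n_timepoints)])

# With real data, accuracy climbs above chance (0.5) roughly 100 ms after image
# onset, tracing out the timing of the feed-forward sweep and any later echoes.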

The key finding was that, once all the imaging and correlation was done, the early visual cortex recorded a "rebound" bit of electrical activity as a distinct peak from the initial processing activity. The brevity of picture presentation (34 ms and 17 ms) allowed them to see (see figure below) what longer presentation of images hides in a clutter of other activity. The second peak in the early visual cortex corresponds quite tightly to one just slightly prior in the inferior temporal cortex. While this is all very low signal data and the product of a great deal of processing and massaging, it suggests that decisions about what something is not only happen in the inferior temporal cortex, but impinge right back on the earlier parts of the processing stream to change perceptions at an earlier stage. The point of this is not entirely clear, really, but perhaps the most basic aspects of object recognition can be improved and tuned by altering early aspects of scene perception. This is another step in our increasingly networked models of how the brain works- the difference between a reality modeling engine and a video camera.

Echoes of categorization. The red trace is from the early visual processing system, and the green is from the much later inferior temporal cortex (IT), known to be involved in object categorization / identification. The method of very brief image presentation (center) allows an echo signal that is evidently a feedback (recurrence) from the IT to come back into the early visual cortex- to what purpose is not really known. Note that overall, it takes about 100 milliseconds for forward processing to happen in the visual system, before anything else can be seen.

  • Yes, someone got shafted by the FBI.
  • We are becoming an economy of feudal monoliths.
  • Annals of feudalism, cont. Workers are now serfs.
  • Promoting truth, not lies.
  • At the court: better cowardly and right wing, than bold and right wing.
  • Coal needs to be banned.
  • Trickle down not working... the poor are getting poorer.

Saturday, June 24, 2017

Worried About Truth? Try Programming

Programming as a continual lesson in reality and humility.

The reality principle has been taking a beating recently, with an aged child in the White House throwing tantrums and drama in all directions. Truth itself is under direct assault, as lies big and small emerge shamelessly from the highest levels of our institutions and media. What to do? Reality is still out there, and will surely have its revenge, though that may well drop in another time and place, missing the perpetrators of these outrages while ensnaring the rest of us in its consequences.

For now, you may need a psychological and spiritual cleanse, and what better way to redouble one's engagement with reality than to drop into a totally artificial world- that of programming? Well, there are many ways, surely. But nothing teaches discipline in service of the reality principle quite like dealing with a perfectly, relentlessly logical device. Truth is not an aspiration in this world, it is a bread and butter reality, established routinely in a few lines of code. In larger projects, it is a remorseless taskmaster, failing on any misplaced character or weakly developed logic. You get out precisely what you put in, whether that was well thought through or not.

No, it doesn't usually look like this.

One lesson is that every bug has a cause. I may not want to hear about it, but if I want that code to work, I don't have any choice but to address it. I may not be able to find the cause easily, but it is in there, somewhere. Even if the bug is due to some deeper bug, perhaps in the programming language itself or the operating system, and is hard to find and impossible to fix, it is in there, somewhere. Coding is in this way one of many paths to maturity- to dealing honestly with the world. While the profession may have an image of child-men uneasy with social reality, it has its own extreme discipline in the service of realities both formal, in the internal structures they are grappling with, and social, in the needs the code ultimately addresses, or fails to address.

Science is of course another way of dealing with reality in a rigorous way. But, compared to programming, it exists at a significantly larger remove from its subject. It can take years to do an experiment. There may be numerous conscious and unconscious ways to influence results. The superstructures of theory, training, and pre-supposition required to obtain even the smallest step into the unknown are enormous, and create great risks of chasing down fruitless, if fashionable, avenues, such as, say, string theory, or multiverses (not to mention ESP!). The conventional literatures, especially in drug studies and social science, are notoriously full of false and misleading results. Nor is much of this as accessible to the layperson as programming is, which makes engagement with code an accessible as well as effective tonic to our current national vertigo.


  • What happened in 2016? Mainly, lots of lying.
  • Trump is hardly alone in not caring about the public good.
  • What kind of a democracy is this?

Saturday, April 22, 2017

How Speech Gets Computed, Maybe

Speech is just sounds, so why does it not sound like noise?

Ever wonder how we learn a language? How we make the transition from hearing how a totally foreign language sounds- like gibberish- to fluent understanding of the ideas flowing in, without even noticing their sound characteristics, or at best, appreciating them as poetry or song- a curious mixture of meaning and sound-play? Something happens in our brains, but what? A recent paper discusses the computational activities that are going on under the hood.

Speech arrives as a continuous stream of various sounds, which we can split up into discrete units (phonemes), which we then have to reconstruct as meaning-units, and detect their long-range interactions with other units such as the word, sentence, and paragraph. That is, their grammar and higher-level meanings. Obviously, in light of the difficulty of getting machines to do this well, it is a challenging task. And while we have lots of neurons to throw at the problem, each one is very slow- a mere 20 milliseconds at best, compared to well under a nanosecond for current computers, which are thus about 100 million times faster. It is not a very promising situation.

An elementary neural network, used by the authors. Higher levels operate more slowly, capturing broader meanings.

The authors focus on how pieces of the problem are separated, recombined, and solved, in the context of a stream of stimuli facing a neural network like the brain. They realize that analogical thinking, where we routinely schematize concepts and remember them in relational ways that link between sub-concepts, super-concepts, similar concepts, coordinated experiences, etc., may form a deep precursor, from non-language thought, that enabled the rapid evolution of language decoding and perception.

One aspect of their solution is that the information of the incoming stream is used, but not rapidly discarded as one would in some computational approaches. Recognizing a phrase within the stream of speech is a big accomplishment, but the next word may alter its interpretation fundamentally, and require access to its component parts for that reinterpretation. So bits of the meaning hierarchy (words, phrases) are identified as they come in, but must also be kept, piece-wise, in memory for further Bayesian consideration and reconstruction. This is easy enough to say, but how would it be implemented?

From the paper, it is hard to tell, actually. The details are hidden in the program they use and in prior work. They use a neural network of several layers, originally devised to detect relational logic from unstructured inputs. The idea was to use rapid classifications and hierarchies from incoming data (specific propositional statements or facts) to set up analogies, which enable drawing very general relations among concepts/ideas. They argue that this is a good model for how our brains work. The kicker is that the same network is quite effective in understanding speech, relating (binding) nearby (in time) meaning units together even while it also holds them distinct and generates higher-level logic from them. It even shows oscillations that match quite closely those seen in the active auditory cortex, which is known to entrain its oscillations to speech patterns. High activity in the 2 and 4 Hz bands seems to relate to the pace of speech.

"The basic idea is to encode the elements that are bound in lower layers of a hierarchy directly from the sequential input and then use slower dynamics to accumulate evidence for relations at higher levels of the hierarchy. This necessarily entails a memory of the ordinal relationships that, computationally, requires higher-level representations to integrate or bind lower-level representations over time—with more protracted activity. This temporal binding mandates an asynchrony of representation between hierarchical levels of representation in order to maintain distinct, separable representations despite binding."

This result and the surrounding work, cloudy though they are, also form an evolutionary argument: that speech recognition, being computationally very similar to other forms of analogical / relational / hierarchical thinking, may have arisen rather easily from pre-existing capabilities. Neural networks are all the rage now, with Google among others drawing on them for phenomenal advances in speech and image recognition. So there seems to be a convergence from the technology and research sides to say that this principle of computation, so different from the silicon-based sequential and procedural processing paradigm, holds tremendous promise for understanding our brains as well as exceeding them.


Saturday, June 18, 2016

Perception is Not a One-Way Street

Perceptions happen in the brain as a reality-modeling process that uses input from external senses, but does so gradually in a looping (i.e. Bayesian) refinement process using motor activity to drive attention and sensory perturbation.

The fact that perceptions come via our sensory organs, and stop once those organs are impaired, strongly suggests a simple camera-type model of one-way perceptual flow. Yet, recent research all points in the other direction, that perception is a more active process wherein the sense organs are central, but are also directed by attention and by pre-existing models of the available perceptual field in a top-down way. Thus we end up with a cognitive loop where the mind holds models of reality which are incrementally updated by the senses, but not wiped and replaced as if they were simple video feeds. The model is the perception and is more valuable than the input.
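In its barest form, such a loop is just recursive Bayesian updating: the current model serves as the prior, each noisy sensory sample nudges it a little, and no single glance ever replaces the model wholesale. A minimal sketch, with made-up numbers, to make the "model plus increments" idea concrete:

# Minimal sketch of a perceptual model updated incrementally by noisy senses,
# here boiled down to estimating one quantity (say, an object's position).
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
true_position = 2.0        # where the object actually is
sensor_noise_var = 1.0     # how noisy each sensory sample is

mean, var = 0.0, 4.0       # the prior model: a rough guess plus uncertainty

for step in range(10):
    sample = true_position + rng.normal(0.0, np.sqrt(sensor_noise_var))
    # Gaussian Bayesian update: blend model and sample, weighted by precision.
    gain = var / (var + sensor_noise_var)
    mean += gain * (sample - mean)
    var *= (1 - gain)
    print(f"step {step}: belief = {mean:.2f} +/- {np.sqrt(var):.2f}")

# The belief homes in on reality gradually; the model persists, the senses
# merely correct it.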

One small example of a top-down element in this cognitive loop is visual attention. Our eyes are little outposts of the brain, and are told where to point. Even if something surprising happens in the visual field, the brain has to do a little processing before shifting attention, and thus the eyeballs, to that event. Our eyes are shifting all the time, being pointed to areas of interest, following movie scenes, words on a page, etc. None of this is directed by the eyes themselves, (including the jittery saccade system), but by higher levels of cognition.

The paper for this week notes ironically that visual perception studies have worked very hard to eliminate eye and other motion from their studies, to provide consistent mapping of what the experimenters present, to where the perceptions show up in visual fields of the brain. Yet motion and directed attention are fundamental to complex sensation.

Other sensory systems vary substantially in their dependence on motion. Hearing is perhaps least dependent, as one can analyze a scene from a stationary position, though movement of either the sound or the subject, in time and space, is extremely helpful to enrich perception. Touch, through our hairs and skin, is intrinsically dependent on movement and action. Taste and smell are also, though in a subtler way. Any unvarying smell will fade pretty rapidly, subjectively, as we get used to it. It is the bloom of fresh tastes with each mouthful or new aromas that creates sensation, as implied by the expression "clearing the palate". Aside from the issues of the brain's top-down construction of these perceptions through its choices and modeling, there is also the input of motor components directly, and dynamic time elements, that enrich / enliven perception multi-modally, beyond a simple input stream model.

The many loops from sensory (left) to motor (right) parts of the perceptual network. This figure is focused on whisker perception by mice.

The current paper discusses these issues and makes the point that since our senses have always been embodied and in-motion, they are naturally optimized for dynamic learning. And that the brain circuits mediating between sensation and action are pervasive and very difficult to separate in practice. The authors hypothesize very generally that perception consists of a cognitive quasi-steady state where motor cues are consistent with tactile and other sensory cues (assuming a cognitive model within which this consistency is defined), which is then perturbed by changes in any part of the system, especially sensory organ input, upon which the network seeks a new steady state. They term the core of the network the motor-sensory-motor (MSM) loop, thus emphasizing the motor aspects, and somewhat unfairly de-emphasizing the sensory aspects, which after all are specialized for higher abundance and diversity of data than the motor system. But we can grant that they are an integrated system. They also add that much perception is not conscious, so the fixation of a great deal of research on conscious reports, while understandable, is limiting.

"A crucial aspect of such an attractor is that the dynamics leading to it encompass the entire relevant MSM-loop and thus depend on the function transferring sensor motion into receptors activation; this transfer function describes the perceived object or feature via its physical interactions with sensor motion. Thus, ‘memories’ stored in such perceptual attractors are stored in brain-world interactions, rather than in brain internal representations."

A simple experiment. A camera is set up to watch a video screen, which shows light and dark half-screens which can move side-to-side. The software creates a sensory-motor loop to pan motors on the camera to enable it to track the visual edge, as shown in E. It is evident that there is not much learning involved, but simply a demonstration of an algorithm's effective integration of motor and sensory elements for pursuit of a simple feature.

Eventually, the researchers present some results, from a mathematical model and robot that they have constructed. The robot has a camera and motors to move around with, plus computer and algorithm. The camera only sends change data, as does the retina, not entire visual scenes, and the visual field is extremely simple- a screen with a dark and light side, which can move right or left. The motorized camera system, using equations approximating a MSM loop, can relatively easily home in on and track the visual right/left divider, and thus demonstrate dynamic perception driven by both motor and sensory elements. The cognitive model was naturally implicit in the computer code that ran the system, which was expecting to track just such a light/dark line. One must say that this was not a particularly difficult or novel task, so the heart of the paper is its introductory material.
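Stripped of the hardware, the loop such a system runs can be caricatured in a few lines: sense the offset of the edge from the center of view, issue a motor command that reduces it, repeat. The gain, noise, and geometry below are my inventions, not the authors' code, but they show how motor and sensory elements settle into the steady "attractor" state described above.

# Cartoon of a motor-sensory-motor (MSM) loop: a camera that reports only where
# the light/dark edge falls relative to its current pan angle, and a motor
# command that turns the camera to re-center the edge. Invented details;
# not the authors' code.
import numpy as np

rng = np.random.default_rng(0)
edge_position = 30.0    # degrees: where the light/dark boundary actually is
pan_angle = 0.0         # degrees: where the camera currently points
gain = 0.3              # how strongly the motor responds to the sensed offset

for step in range(25):
    edge_position += rng.normal(0.0, 0.5)         # the boundary drifts a bit
    sensed_offset = edge_position - pan_angle     # retina-like change signal
    pan_angle += gain * sensed_offset             # motor command closes the loop
    print(f"step {step:2d}: edge {edge_position:6.2f}, camera {pan_angle:6.2f}")

# The loop converges to a state where motor position and sensory input agree,
# and re-converges whenever the boundary moves- perception as pursuit.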


  • The US has long been a wealthy country.
  • If we want to control carbon emissions, we can't wait for carbon supplies to run out.
  • Market failure, marketing, and fraud, umpteenth edition. Trump wasn't the only one getting into the scam-school business.
  • Finance is eating away at your retirement.
  • Why is the House of Representatives in hostile hands?
  • UBI- utopian in a good way, or a bad way?
  • Trump transitions from stupid to deranged.
  • The fight against corruption and crony Keynesianism, in India.
  • Whom do policymakers talk to? Hint- not you.
  • Whom are you talking to at the call center? Someone playing hot potato.
  • Those nutty gun nuts.
  • Is Islam a special case, in how it interacts with political and psychological instability?
  • Graph of the week- a brief history of inequality from 1725.

Saturday, October 10, 2015

For Whom Shall the Robots Work?

When robots do everything, will we have work? Will we have income? A review of Martin Ford's "Rise of the Robots".

No one knows when it is coming, but eventually, machines will do everything we regard as work. Already some very advanced activities like driving cars and translating languages are done by machine, sometimes quite well. Manufacturing is increasingly automated, and computers have been coming for white collar jobs as well. Indeed, by Martin Ford's telling, technological displacement has already had a dramatic effect on the US labor market, slowing job growth, concentrating economic power in a few relatively small high-tech companies, and de-skilling many other jobs. His book is about the future, when artificial intelligence becomes realistic, and machines can do it all.

Leaving aside the question of whether we will be able to control these armies of robots once they reach AI, sentience, the singularity ... whatever one wishes to call it, a more immediate question is how the economic system should be (re-)organized. Up till now, humans have had to earn their bread by the sweat of their brow. No economic system has successfully decoupled effort in work (or predation / warfare / ideological parasitism) from income and sustenance. The communists tried, and what a hell that turned out to be. But Marx may only have been premature, not wrong, in predicting that the capitalist system would eventually lead to both incredible prosperity and incredible concentration of wealth.

Ford offers an interesting aside about capitalism, (and he should know, having run a software company), that capitalists hate employees. For all the HR happy talk and team this and relationship that, every employee is an enormous cost drain. Salary is just the start- there are benefits, office space, liability for all sorts of personal problems, and the unpredictability of their quitting and taking all their training and knowledge with them. They are hateful, and tolerated only because, and insofar as, they are absolutely necessary.

At any rate, the incentives, whether personal or coolly economic, are clear. And the trends in wealth and income are likewise clear, that employment is more precarious and less well paid (at the median), while income and wealth have concentrated strongly upward, due to all sorts of reasons, including rightward politics, off-shoring, and dramatic technological invasion of many forms of work. (Indeed, off-shoring is but a prelude to automation, for the typical low-skill form of work.) There is little doubt that, left to its own devices(!), our capitalist system will become ever more concentrated, with fewer companies running more technology and fewer employees to produce everything we need.

What happens then? The macroeconomic problem is that if everyone is unemployed, no one (except the 0.00001%) will be able to buy anything. While the provision of all our necessities by way of hopefully docile machines is not a bad thing, indeed the fulfillment of a much-imagined dream of humanity, some technical problems do arise; of which we can already see the glimmerings in our current society. Without the mass production lines and other requirements for armies of labor that democratized the US economy in the mid-20th century, we may be heading in the direction, not only of Thomas Piketty's relatively wealth-heavy unequal society of financial capital, but towards a degree of concentration we have not seen lately outside of the most fabulous African kleptocracies.

What we need is a fundamental rethinking of the connection between income and work, and the role of capital and capitalists. The real wealth of our society is an ever-accumulating inheritance of technology and knowledge that was built in common by many people. Whether some clever entrepreneur can leverage that inheritance into a ripping business model while employing virtually no actual humans should not entirely dictate the distribution of wealth. As Ford puts it:
"Today's computer technology exists in some measure because millions of middle-class taxpayers supported federal funding for basic research in the decades following World War II. We can be reasonably certain that those taxpayers offered their support in the expectation that the fruits of that research would create a more prosperous future for their children and grandchildren. Yet, the trends we looked at in the last chapter suggest we are headed toward a very different outcome. Beyond the basic moral question of whether a tiny elite should be able to, in effect, capture ownership of society's accumulated technological capital, there are also practical issues regarding the overall health of an economy in which income inequality becomes too extreme."

As a solution, Ford suggests the provision of a basic income for all citizens. This would start very minimally, at perhaps $10,000 per year, and perhaps be raised as the policy develops and the available wealth increases. He states that this could be funded relatively easily from existing poverty programs, and would at a stroke eliminate poverty. It is a very libertarian idea, beloved by Milton Friedman (in the form of a negative income tax), and also emphasizes freedom for all citizens to do as they like with this money. Ford also brings up the quite basic principle that in a new, capital-intensive regime, we should be prepared to tax labor less and tax capital more. But he makes no specific suggestions in this direction ... one gets the impression that it cuts a little too close to home.

This is the part of the book where I part company, for several reasons. Firstly, I think work is a fundamental human good and even right. It is important for us to have larger purposes and organize ourselves corporately (in a broad sense) to carry them out. It is also highly beneficial for people to be paid for positive contributions they make to society. In contrast, the pathology surrounding lives spent on unearned public support is well-known. Secondly, the decline of employment in the capitalist economy has the perverse effect of weakening the labor market and dis-empowering workers, who must scramble for the fewer remaining jobs, accepting lower pay and worse conditions. Setting up a new system to compete at a frankly miserable sub-poverty level does little to correct this dynamic. Thirdly, I think there is plenty of work to be done, even if robots do everything we dislike doing. The scope for non-profit work, for environmental beautification, for social betterment, elder care, education, and entertainment is simply infinite. The only question is whether we can devise a social mechanism to carry it out.

This leads to the concept of a job guarantee. The government would provide paying work to anyone willing to participate in socially beneficial tasks. These would be real, well-paying jobs, geared to pay less than private market rates, (or whatever we deem appropriate as the floor for a middle class existence), but quite a bit more than basic support levels for those who do not work at all. Under this mechanism, basic income is not wasted on those who have better paying jobs. Nor are the truly needy left to fend for themselves on sub-poverty incomes and no other programs of practical support. While the private market pays for any kind of profitable work no matter how socially negative, guaranteed jobs would be collectively devised to have a positive social impact- that would be their only rationale. And most importantly, they would harness and cultivate the human work ethic towards social goals, which I think is a more socially and spiritually sustainable economic system, in a post industrial, even post-work, age.

The problem with a job guarantee, obviously, is the opportunity for corruption and mismanagement, when the market discipline is lifted in favor of state-based decision making. Communist states have not provided the best track record of public interest employment, though there have been bright spots. In the US, a vast sub-economy of nonprofit enterprises and volunteering organizations provides one basis of hope and perhaps a mechanism of expansion. The government could take a set of new taxes on wealth, high incomes, fossil fuels, and financial transactions, and distribute such funds to public interest non-profit organizations, including ones that operate internationally. It is a sector of our economy that merits growth and has the organizational capacity to absorb both money and workers that are otherwise slack.

Additionally, of course, many wholly public projects deserve resources, from infrastructure to health care to combatting climate change. We have enormous public good needs that are going unaddressed. The governmental sector has shown good management in many instances, such as in the research sector, medicare, and social security. Competitive grant systems are a model of efficient resource allocation, and could put some of the resources of a full job guarantee to good use, perhaps using layperson as well as expert review panels. Improving public management is itself a field for development as part of the expanded public sector that a job guarantee would create.

In an interesting digression, Ford remarks on the curious case of Japan. Japan has undergone a demographic shift to an aged society. It has been predicted that the number of jobs in health care and elder care would boom, and that the society would have to import workers to do all the work. But that hasn't happened. In fact, Japan has been in a decades-long deflationary recession featuring, if anything, under-employment as the rule, exemplified by "freeters" who stay at home with their parents far into adulthood. What happened? It is another Keynesian parable, since the elderly are poor consumers. For all the needs that one could imagine, few of them materialize because the income of the elderly, and their propensity to spend, are both low.

The labor participation rate in the US.

Our deflationary decade, with declining labor participation rates, indicates that we are heading in the same direction. We need to find a way to both mitigate the phenomenal concentration of wealth that is destroying our political system and economy, and to create a mechanism that puts money into the hands of a sustainable middle class- one that does not rely on the diminishing capitalist notions of employment and continue down the road of techno-feudalism, but which enhances our lives individually and collectively.


Saturday, January 10, 2015

Do Fruit Flies Dream of Piña Coladas?

The olfactory learning circuitry of the fly brain.

Our brains didn't come from nowhere, but rather out of hundreds of millions of years of evolutionary work that developed mechanisms for dealing rapidly and intelligently with the environment. Primitive organisms are fascinating to study as waystations along that long road, and their simplicity can clarify what in humans may still be beyond our intellectual or technical reach to study.

Fruit flies have only ~100,000-200,000 cells in their brains, compared with the maybe 100 billion in humans. Moreover, most of these neurons are laid down in totally genetically determined fashion, easing study, but also reducing the ability to learn. But one area of the fly brain appears to break that mold- the mushroom body, which is intermediate in the path from olfactory reception to behavior. It is sort of the fly's glimmer of higher intelligence. Indeed, flies can be trained in the traditional Pavlovian manner to associate otherwise meaningless or pleasant odors with torture like electric shocks, and thus avoid them. Researchers have spent decades teaching them other tricks like avoiding certain odors, heading towards sounds, lights, manipulating social status and fighting behavior, etc. Along the way they have given some of their mutants names like dunce and rutabaga.

Location of the mushroom body in the fly head/brain. Signals arrive through the antennal lobes, from the various hairs and other nerves on the mouth and antenna.

Fluorescence image of two labels, one on an antennal lobe (AL) neuron (red), and another on a mushroom body (MB) neuron (green, left side), which labels more cells of the same type on the right side. The lateral horn (LH) is one destination for signals as they start going out again to muscles and behavior downstream.

A recent paper conducted a tour de force of anatomy, tracing every single neuron going to and from the mushroom body. The technique they used to do this is interesting in itself, called an "enhancer trap". Fly researchers have been generating a vast number of "lines", or inbred fly mutants, by inserting two bits of DNA from yeast cells. The first is the gene encoding a transcription activator, GAL4. This is induced to jump randomly in the fly genome, in the hope that it lands downstream of the regulatory region of an endogenous gene, i.e. its enhancer or promoter. The second bit is a binding site for this GAL4 protein, linked to a gene that expresses some useful marker, typically a fluorescent protein like GFP. Since the yeast GAL4 protein works just fine to activate RNA transcription and gene expression in flies, the end result is that GFP gets expressed in response to a single enhancer somewhere else in the genome. Indeed researchers try to "saturate the genome", generating a huge number of lines with such mutations, hoping for mutants at every single enhancer in the fly genome, and even enslave their undergraduate students to that end.
Enhancer trap schematic. An introduced regulatory gene, GAL4, is hopped within the fly genome to random locations, some of which are downstream of enhancers (E1). The signal is received by another introduced gene, which expresses a marker (green fluorescent protein X in this case) in response to the production of GAL4. UAS = upstream activating sequence.
Screening with a fluorescence microscope, one will see the cells where GFP is expressed, and thus where the particular enhancer of that line is active. This might be anywhere and any time, in the egg or in the adult, in the brain or in the leg, or everywhere and all the time, or nowhere. This stage of the process tends to be very tedious, as is all the fly breeding leading up to it. The current researchers used 7000 such lines that had been built previously to screen for those showing GFP expression in neurons in or connected to the mushroom body.

Anatomical details of neurons leading through, and from the mushroom body, drawn from studies of many flies with many individually labelled neurons. Video here. Note the wide distribution of MBON (mushroom body output) neuron connections, and also the density of DANs (dopaminergic neurons), which feed back to the MBONs from sensations elsewhere in the body.

The mushroom body is made up of ~ 2,000 of what are called Kenyon cells. Inputs come from the antennal lobe, where olfactory receptors from the fly's "nose" and face link to ~200 projecting neurons (diagrammed below). The Kenyon cells synapse to what are called the mushroom body output neurons (MBONs), and these same output axons get inputs both from other MBONs (making some recurrent loops) and from other neurons called DANs, which seem to be crucial for the feedback that leads to learning.

Wiring diagram of the olfactory learning system. Projection neurons (left) come from the nose and face, to the Kenyon cells of the mushroom body (KC). Then MBONs of various types (using different neurotransmitters in their synapses) come out (gray lines) to innervate the lateral horn and other downstream neurons. DAN neurons from other sensations are labelled for their valence. The colored boxes correspond to various anatomical bodies or sub-areas.

The key part of the organization is that while the layout and connections of the olfactory neurons and projection neurons are genetically determined, those of the Kenyon cells are not. They are a generic type of cell whose sporadic connections on both ends are made on the fly (sorry!) during development, and are not the same from one fly to the next, unlike all the other cells. These connections are also plastic during adulthood, and the volume of the mushroom body on a gross level expands in response to usage in honey bees. The mushroom body is not essential to actions of the fly, really- the built-in programs for sensations and stereotypical behaviors lie elsewhere. The mushroom body system only biases those behaviors based on learned feedback, which arrives via the DAN neurons from positive and negative shocks / experiences.

This arrangement is pretty much what has long been called a "neural network", which is a computer science tool developed over several decades by analogy to how people thought neural systems work. They offer the unique capacity to learn from sample data, and to solve problems that are not well-specified or are complex. These feature a hidden array (or black box) of many "neurons", connecting inputs with outputs. The box is trained (iteratively) by providing it with (known) sample input and output data. Feedback of error values, from comparing the network's outputs against the known sample outputs, is sent back through the network and adjusts the weights of those connections, which then evolve through time, in their connections and connection strength, leading the network to arrive (slowly) at an approximation of the behavior desired, emitting the correct output for given inputs.

For instance, an image shape recognition program might be made as a neural network, with many examples fed in, and the outputs judged and signals sent back into the network to either reinforce connections that improve the match over previous trials, or weaken connections that reduce it. If a sufficiently broad range of input images is used, then the network stands a good chance of identifying those same shapes in images it has never "seen" before in training.
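For anyone who has not seen one up close, here is about the smallest possible neural network, in plain numpy: a hidden layer of "neurons" between input and output, with the output error sent backwards to adjust the connection weights. It is a generic textbook toy (learning XOR), not the mushroom body model itself.

# Smallest-possible neural network: one hidden layer, trained by propagating
# the output error backwards and nudging the weights responsible for it.
# A generic textbook toy (learning XOR), not a model of the mushroom body.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # sample inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # known correct outputs

W1 = rng.normal(0, 1, (2, 4))    # input -> hidden connection weights
W2 = rng.normal(0, 1, (4, 1))    # hidden -> output connection weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(20000):
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)
    error = y - output                                    # compare to known answers
    d_output = error * output * (1 - output)              # error signal at the output
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)  # ... passed backwards
    W2 += 0.5 * hidden.T @ d_output                       # strengthen/weaken weights
    W1 += 0.5 * X.T @ d_hidden

print(output.round(2))   # converges toward [[0], [1], [1], [0]]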
"... MBs are a paradigmatic case of reused neural networks in action."
"This model was built in a connectionist manner, obeying, although in a scaled version, the neurobiological topology. The model was initially built for showing basic learning and conditioning capabilities; subsequently it was found able to show other interesting behaviors, like attention, expectation, sequence learning, consolidation during sleep and delayed-matching-to sample tasks."

Likewise, in the mushroom body system, the DANs provide the essential feedback in real-life situations as the fly buzzes and walks about, smelling the wonderful world. If they can dampen neural connections among the Kenyon cells and their downstream targets that lead to painful results, and juice up those that lead to pleasure, we have a smarter, and more successful fly.
"The identification of the full complement of 21 MBON types highlights the extensive convergence of 2000 KCs onto just 34 MBONs, a number even smaller than the number of glomeruli in the AL. Thus, the high-dimensional KC representation of odor identity is transformed into a low-dimensional MB output. This suggests that the MBONs do not represent odor identity but instead provide a representation that may bias behavioral responses."

So, what is it like to be a fly? Are flies conscious? They are clearly responsive to their environment, and have what we would call "experiences", such as hunger, searches for food, mating, etc., and one would assume these experiences can be very intense. So I think it would be hard to count them entirely out of the consciousness department. But it would have to be an extremely small consciousness, with little association to past, let alone future, events, or to metaphorical or conceptual abstractions. But feeling- that is likely to be there in some form.



  • One problem with the rich ... they are well-hidden, even when they exercise their "free" speech.
  • Martin Wolf on the injustice of housing shortages and zoning control by incumbents. It is the price of overpopulation, really.
  • Everyone's a critic!
  • Atheism- a little more dangerous than you might think.
  • The economics of knowledge & organization.
  • Exploitation and income.
  • Want to know about a government boondoggle?
  • California raises the bar on environmental responsibility.
  • On the tension between science and humanity in economics.
  • The sanctimonious disaster that is Jeffrey Sachs.
  • On the power of satire & truth: "... the Chinese government decided to ban puns."
  • In France, there is a culture war. It was not about "freedom", but about cultural dominance.
  • Dirty jobs- better than clean jobs?

Saturday, November 2, 2013

A strange loop it is to write about one's I so much

Review of Douglas Hofstadter's "Gödel Escher Bach" and "I Am a Strange Loop", thus saving the reader roughly 1000 pages of helpless digression.

Douglas Hofstadter laments in his preface to his sequel ("I am a strange loop"; ISL, 2007) to his much more famous "Gödel Escher Bach" (GEB, 1979) that for all its fame and prizes, including the Pulitzer prize, most people he meets didn't get the point of GEB. And no wonder, as those points flit by with great rapidity amidst a welter of puns, word games, abstruse code exercises, maddening repetition, dilatory dialogs, and wayward tangents.

But here they are (apologies for my lack of expertise ... please comment on any inaccuracies):
" The possibility of constructing, in a given system, an undecideable string via Gödel's self-reference method, depends on three basic conditions:  
1. That the system should be rich enough so that all desired statements about numbers, whether true or false, can be expressed in it. ...
2. That all general recursive relations should be represented by formulas in the system. ...
3. That the axioms and typographical patterns defined by its rules be recognizable by some terminating decision procedure. ...
 
Satisfaction of these three conditions guarantees that any consistent system will be incomplete, because Gödel's construction is applicable.
The fascinating thing is that any such system [human thought and language are the obvious references] digs its own hole; the system's own richness brings about its own downfall. … [analogy to critical mass in physics and bomb-making] ... But beyond the critical mass, such a lump will undergo a chain reaction, and blow up. It seems that with formal systems there is an analogous critical point. Below that point, a system is 'harmless' and does not even approach defining arithmetical truth formally; but beyond the critical point, the system suddenly attains the capacity for self-reference, and thereby dooms itself to incompleteness."

All the references to Bach and Escher in GEB are really tangential examples of self-reference. It is Kurt Gödel's work that is the core of the book, as it is of ISL. Gödel made a critique of the Principia Mathematica (PM), by Alfred Whitehead and Bertrand Russell, which attempted to build a tightly closed system of axioms and logic that was both incapable of rendering false statements, and also comprehensive in its ability to found all relevant aspects of mathematics and logic. But Gödel showed that it was incomplete, meaning that it could express statements that could be neither proved nor disproved within the system. It did indeed found all mathematical logic on very uncontroversial axioms, but it developed (painfully) a language for all this that was so rich that it was impossible to keep within the bounds of truth alone.

Gödel's paradigmatic statement, constructed out of unspeakably complicated machinations of the PM tools (and which Russell never accepted were proper machinations) essentially created the statement "This statement is not provable". But Hofstadter admits that self-referencing paradox is not the only possible type of ambiguous, unresolvable statement Gödel created or suggested- there is an infinite zoo of them.

The point is that truly intelligent systems are open. They are not cognitively or computationally bound by their programming to tread around some threshing track time, and time, and time, again. They not only are responsive parts of their environment, but more importantly have the symbolic capacity to represent imagined realities, real realities, (and the self!), in such recursive, endlessly complicated ways that no theorem-bound system of cognitive computation can account for it. In this way we are endless, multilevel, strange, loops.

But there is one aspect of all this that is most odd, which is Hofstadter's focus on the self, which is prominent in both books, and especially, even gratingly, so, in the second. Much of his conceptual play concerns self-reference, which is enjoyably loopy. Many philosophers and thinkers generally seem to think self-consciousness the very height of consciousness itself. Perhaps even its definition. As Hofstadter says, "Just as we need our eyes in order to see, we need our 'I's in order to be!". But I don't think that is the case, at least not consciously. Self consciousness certainly comes along in the package of cognitive complexity, once one is making models of everything conceivable about the world. But to me, the core of consciousness is far more basic- the sense of being, not of self. And the core of the sense of being is made up of the flow of sensations, especially pain and pleasure.

I frequently see squirrels from my window, playing, chasing, eating, hiding, calling, etc. They are especially interested in the bird feeder and have tried no end of stratagems to get into it. They are clearly highly conscious beings, driven by pleasures and pains, just as we are. They wouldn't know what to do with a mirror, but nevertheless we immediately empathize with their desires and fears, clearly communicated and experienced by themselves. Their consciousness is not infinitely expansive by way of symbolic representation, as ours is, but nor is it negligible.

Hofstadter makes a special project of declaring that mosquitos have zero consciousness, thus sanctioning his frequent bloodthirsty murders, when he is otherwise a principled vegetarian. Why be a vegetarian if you are interested only in symbolically self-referential and unbounded forms of consciousness? Obviously something else is going on, which he jokingly names "hunekers"- small sub-units of consciousness, of which he assigns various amounts to animals and humans of various grades and states.
"Mosquitos, because of the initial impoverishment and the fixed non-extensibility of their symbol systems, are doomed to soullessness (oh, all right- maybe 0.00000001 huneker's worth of consciousness- just a hair above the level of a thermostat)."
But don't mosquitos experience pain and pleasure? Their behavior clearly shows avoidance of danger, and eager seeking of sustenance and reproduction. We know that the biology of animals with nervous systems (not bacteria) organizes these motivations in an internal experience of pain and pleasure. Would Hofstadter complacently sit down to a session of pulling the legs and wings off of mosquitoes he has caught? I think not, because though we certainly don't know what is going on in those very tiny heads, if anything it is the integration of perception, pain, and pleasure in ways that must earn our empathy, and which amount to a level of consciousness radically beyond that of a thermostat.

Hofstadter adds in the analogy of a human knee reflex, saying that perhaps a mosquito's mind is at that level, which no one would claim is conscious. But the integrative work being done, and the whole point of the integration, is quite different in these cases, making it seem much more likely, to me at least, that the mosquito is working with a very tiny, but intensely felt, bit of consciousness. Indeed one might posit that there need not be any particular relation between the cognitive complexity of an animal's consciousness and the degree of its feelings. We know from human infants that feelings can be monumental, and consciousness of hurt (and pleasure) be extremely acute, with precious little cognition behind them. Do we therefore empathize with them less?

This leads to the more general issue of the relation between consciousness and its physical substrate. Despite the talk of "souls", Hofstadter is a thorough naturalist, steeped in the academic field of artificial intelligence. While he has shown much greater proclivities towards philosophy than programming, the basic stance is unchanged- consciousness is a case of enormously complicated computation with (in the human case) the infinitely rich symbol sets of language and whatever is knocking around internally in our mental apparatus. All of which could conceivably happen on a silicon substrate, or one of orchestral instruments, or other forms, as long as they have the necessary properties of internal communication, logical inference, memory, etc.

For Hofstadter, consciousness is necessarily a high-level phenomenon. It depends on, but is not best characterized by, particular neurons, and certainly is not specifically associated with quantum phenomena, microtubules, or any of the other bizarre theories of mind / soul that various pseudo-theorists have come up with to bridge the so-called mind-body divide. Indeed he spends a great deal of time (in ISL) on consoling himself that his wife, who died unexpectedly and young, lives on precisely because some part of her high-level programming continues to function inside Hofstadter, insofar as he learned to see the world through her eyes, and through other remaining reflections of her consciousness. Nothing physical remains, but if the programming still happens, then the consciousness does as well.

I have my doubts about that proposition, again drawing on my preference for characterizing consciousness in terms of experience, feeling, and emotion, not symbology. But if one has been trained by one's wife, say, to thrill to certain types of music one hadn't appreciated before, then one could make the case in those terms as well.

The question is made more interesting in a long section (in ISL) where Hofstadter discusses a thought experiment of Daniel Dennett, as written about at great length by Derek Parfit. Suppose the Star Trek transporter really worked, and could de-materialize a person and send them via a (sparkling) light stream to be re-assembled on another location, say Mars. Suppose next that an improved version of this transporter were later devised that didn't have to destroy the originating person. A copy is faithfully made on Mars, but the Earth copy remains alive. Who would be the "real" person / soul? Imagine further that the transporter could send multiple copies anywhere it chose, perhaps depositing one copy on Mars, and another on the Moon, etc... What & who then?

Parfit reportedly mulls this over for a hundred pages and agonizes that there is no way to decide which is the "real" person. Hofstadter also makes remarkably heavy weather of the question, finally hinting that a reasonable way to regard it may be as a faithful doubling, where none of the clones have any priority, all are equivalent, and each goes on to an independent existence built on the structure and history of the original person. Well of course! In programming, this is called a "fork" where a program is replicated and both copies keep running in perpetuity, doing their own thing. No need to fret over soul-division or irreducible essences- if the physical brains and bodies are faithfully reproduced in all detail, then so are the minds in all respects. Each will carry on the prior consciousness and other internal processes, differing only by the occurrence of, and interaction with, subsequent events.
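The programming analogy is quite literal: on any Unix system a running process can fork, after which two copies carry the same state forward with no privileged "original". A minimal illustration (Unix-only, and nothing more than an analogy):

# A process forks into two copies that share their entire past and diverge
# only through what happens next. Unix-only; an analogy, nothing more.
import os

memory = ["everything learned before the split"]

pid = os.fork()
if pid == 0:
    memory.append("an experience only this copy has")
    print("copy on 'Mars':", memory)
else:
    memory.append("a different experience only this copy has")
    print("copy on 'Earth':", memory)
# Neither copy is more real; each simply carries on from the shared history.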

And one can extend this to other substrates, supposing that some means has been devised to replicate all the activity of a human brain from neurons into, say, silicon. Just making such an assumption assumes the answer, of course. But the real question is- at what level would the modelling have to be faithful in order to generate a replicated consciousness? Do all the atoms have to be modelled? Clearly not. The model Hofstadter and I share here is that the overall activity of the brain in its electrical activity and structure-to-structure communication constitutes consciousness. So it is the neurons that need to be modelled, and that perhaps only roughly, to recreate their communication flow and data storage. Generate enough of those elements that perceive, that condense and flag significant information, that tag everything inside and out with emotional valences, that remember all sorts of languages, and world events, and experiences, at explicit and implicit levels, that coordinate countless senses and processes, and we might just have some thing that has experiences.


  • And dogs- are they conscious?
  • J P Morgan, et al. Their practices were not just "shady", they were criminal fraud; signing false affidavits, scamming loan customers and investors alike, corrupting appraisers, LIBOR fixing, etc.
  • Should atheists take an economic position?
  • And do they have better morals?
  • Another way the health-related free markets don't work. Big data is fundamentally incompatible with broad-based insurance.
  • Two can play that game... "Last month, the U.S. raided an Afghan convoy carrying a Pakistani Taliban militant, Latif Mehusd, who the Afghan government was using to cultivate an alliance with the Pakistani Taliban."
  • The Koch folks- apparently too embarrassed to stand up for their own beliefs.
  • Unwritten institutions are often the most important- the long shadow of slavery & oppression.
  • Cuddly capitalism- yes, it really works.
  • Our infrastructure is unsightly and unsafe as well as decrepit. And underfunded.
  • Bill Mitchell on why full employment shouldn't just happen for the sake of killing people.
  • Quote of the week, from Paul Krugman:
"A society committed to the notion that government is always bad will have bad government. And it doesn’t have to be that way."
"But right now we’re awash in excess savings with nowhere to go, and the marginal social value of a dollar of savings is negative. So real interest rates should be negative too, if they’re supposed to reflect social payoffs."

Saturday, May 28, 2011

Neural waves of brain

The brain's waves drive computation, sort of, in a 5 million core, 9 Hz computer.

Computer manufacturers have worked in recent years to wean us off the speed metric for their chips and systems. No longer do they scream out GHz values, but use chip brands like atom, core duo, and quad core, or just give up altogether and sell on other features. They don't really have much to crow about, since chip speed increases have slowed with the increasing difficulty of cramming more elements and heat into ever smaller areas. The current state of the art is about 3 GHz, (far below predictions from 2001), on four cores in one computer, meaning that computations are spread over four different processors, which each run at 0.3 nanosecond per computation cycle.

The division of CPUs into different cores hasn't been a matter of choice, and it hasn't been well-supported by software, most of which continues to be conceived and written in linear fashion, with the top-level computer system doling out whole programs to the different processors, now that we typically have several things going on at once on our computers. Each program sends its instructions in linear order through one processor/core, in soda-straw fashion. Ever-higher clock speeds, allowing more rapid progress through the straw, still remain critical for getting more work done.

Our brains take a rather different approach to cores, clock speeds, and parallel processing, however. They operate at variable clock speeds between 5 and 500 Hertz. No Giga here, or Mega or even Kilo. Brain waves, whose relationship to computation remains somewhat mysterious, are very slow, ranging from the delta (sleep) waves of 0-4 Hz through theta, alpha, beta, and gamma waves at 30-100+ Hz which are energetically most costly and may correlate with attention / consciousness.

On the other hand, the brain has about 1e15 synapses, making it analogous to five million contemporary 200-million-transistor chip "cores". Needless to say, the brain takes a massively parallel approach to computation. Signals run through millions of parallel nerve fibers from, say, the eye (1.2 million fibers in each optic nerve), through massive brain regions where each signal traverses perhaps only ten to twenty neurons in any serial path, while branching out in millions of directions as the data is sliced, diced, and re-assembled into vision. If you are interested in visual pathways, I would recommend Christof Koch's The Quest for Consciousness, whose treatment of visual pathways is better than its treatment of other topics.
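The five-million figure is nothing deeper than a ratio of two rough counts; spelled out with the same order-of-magnitude numbers used above:

```python
# Order-of-magnitude comparison: synapses vs. transistors per chip.
synapses = 1e15                 # rough human synapse count
transistors_per_chip = 2e8      # a ~200-million-transistor CPU of the era
print(f"{synapses / transistors_per_chip:.0e} chip-equivalents")   # ~5e+06
```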

Unlike transistors, neurons are intrinsically rhythmic to various degrees, thanks to the complements of ion channels that govern their firing and refractory/recovery times. So external "clocking" is not always needed to make them run, though the articles discussed here deal with one case where it is. Large numbers of neurons can fall into spontaneous synchrony precisely because of this intrinsic rhythmicity.

Nor are neurons passive input-output integrators of whatever hits their dendrites, as early theories had them. Instead, they spontaneously generate cycles and noise, which enhances their sensitivity to external signals, and their ability to act collectively. They are also subject to many other influences like hormones and local non-neural glial cells. A great deal of integration happens at the synapse and regional multi-synapse levels, long before the cell body or axon is activated. This is why the synapse count is a better analog to transistor counts on chips than the neuron count. If you are interested in the topics of noise and rhythmicity, I would recommend the outstanding and advanced book by Gyorgy Buzsaki, Rhythms of the Brain. Without buying a book, you can read Buzsaki's take on consciousness.

Two recent articles (Brandon et al., Koenig et al.) provide a small advance in this field of figuring out how brain rhythms connect with computation. Two groups seem to have had the same idea and did very similar experiments to show that a specific type of spatial computation in a brain area called the medial entorhinal cortex (mEC) near the hippocampus depends on theta rhythm clocking from a loosely connected area called the medial septum (MS). (In-depth essay on alcohol, blackouts, memory formation, the medial septum, and hippocampus, with a helpful anatomical drawing).

Damage to the MS (situated just below the corpus callosum that connects the two brain hemispheres) was known to have a variety of effects on functions located not in the MS itself but in the hippocampus and mEC, like loss of spatial memory, slowed learning of simple aversive associations, and altered patterns of food and water intake.

The hippocampus and allied areas like the mEC are among the best-investigated parts of the brain, along with the visual system. They mediate most short-term memory, especially spatial memory (e.g., rats running in mazes). The spatial system as understood so far has several types of cells:

Head direction cells, which know which way the head is pointed (some of them fire when the head points at one angle, others fire at other angles).

Grid cells, which are sensitive to an abstract grid in space covering the ambient environment. Each of these cells fires when the rat is at one of the vertices of its particular grid. So we literally have a latitude/longitude-style map in our heads, which may be why map-making comes so naturally to humans.

Border cells, which fire when the rat is close to a wall.

Place cells, which respond to specific locations in the ambient space- not periodically like grid cells, but typically to one place only.

Spatial view cells, which fire when the rat is looking at a particular location, rather than when it is in that location. They also respond, as do the other cells above, when a location is being recalled rather than experienced.

Clearly, once these cells all network together, a rather detailed self-orientation system is possible, based on high-level input from various senses (vestibular, whiskers, vision, touch). The role of rhythm is complicated in this system. For instance, the phase relation of place cell firing to the underlying theta rhythm (leading or following it, in a sort of syncopation) tracks rather precisely where the animal is within that place cell's field as it moves. Upon entry, firing begins at the peak of the theta wave, then precesses toward the trough by the time the animal reaches the exit. Combined over many adjacent and overlapping place fields, this could conceptually provide very high precision to the animal's sense of position.
One rat's repeated tracks in a closed maze, mapped versus firing patterns of several of its place cells, each given a different color.
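As a toy illustration of that phase precession (my own sketch, not a model from these papers): treat progress through the place field as a fraction from 0 at entry to 1 at exit, and let the preferred firing phase slide linearly from the theta peak to the trough.

```python
import numpy as np

def precession_phase(frac_through_field):
    """Toy phase-precession rule: preferred firing phase slides linearly
    from the theta peak (0 rad) at field entry to the trough (pi rad) at exit."""
    return np.pi * np.clip(frac_through_field, 0.0, 1.0)

for frac in (0.0, 0.5, 1.0):
    print(f"{frac:.0%} through field -> phase {precession_phase(frac):.2f} rad")
```

Reading the phase of each spike against the theta cycle thus adds a within-field position signal on top of the bare fact that the cell is firing at all.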

We are eavesdropping here on the unconscious processes of an animal, which it could not itself really articulate even if it wished and had language to do so. The grid and place fields are not conscious at all, but enormously intricate mechanisms that underlie implicit mapping. The animal has a "sense" of its position, (projecting a bit from our own experience), which is critical to many of its further decisions, but the details don't necessarily reach consciousness.

The current papers deal not with place cells, which still fire in a place-specific way without the theta rhythm, but with grid cells, whose "gridness" appears to depend strongly on it. The real-life fields of rat grid cells have a honeycomb-like hexagonal shape, with diameters ranging from 40 to 90 cm, ordered in systematic fashion from top to bottom within the mEC anatomy. The theta rhythm frequency they respond to also varies along the same axis, from 10 to 4 Hz. These values stretch and vary with the environment the animal finds itself in.

Field size of grid cells, plotted against anatomical depth in the mEC.
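A common idealization of such a grid field in the modeling literature (not taken from these papers) is a sum of three cosine gratings oriented 60 degrees apart, which yields exactly this hexagonal, honeycomb-like firing pattern. A minimal sketch, with the spacing parameter standing in for the 40-90 cm range:

```python
import numpy as np

def grid_rate(x_cm, y_cm, spacing_cm=60.0, phase=(0.0, 0.0)):
    """Idealized grid-cell firing rate at position (x, y) in cm: the sum of
    three cosine gratings 60 degrees apart, rescaled to the range 0..1."""
    k = 4 * np.pi / (np.sqrt(3) * spacing_cm)       # wavevector magnitude for the chosen spacing
    total = 0.0
    for theta in (0.0, np.pi / 3, 2 * np.pi / 3):   # three orientations, 60 degrees apart
        kx, ky = k * np.cos(theta), k * np.sin(theta)
        total += np.cos(kx * (x_cm - phase[0]) + ky * (y_cm - phase[1]))
    return (total + 1.5) / 4.5                      # raw sum ranges from -1.5 to 3

print(grid_rate(0, 0))             # 1.0: the rat sits on a grid vertex
print(round(grid_rate(30, 0), 2))  # much lower rate away from the vertices
```

Changing spacing_cm stretches the whole lattice, which is roughly what happens along the dorsoventral axis of the mEC shown above.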

The current papers ask a simple question: do the grid cells of the mEC depend on the theta rhythm supplied from the MS, as has long been suspected from work with mEC lesions, or do they work independently and generate their own rhythm(s)?

This was investigated by the expedient of injecting anaesthetics into the MS to temporarily stop its theta wave generation, and then polling electrodes stuck into the mEC for their grid-cell firing characteristics as the rats moved around freely. The grid cells still fired, but lost their spatial coherence, firing without regard to where the rat was or where it was going (see the bottom trajectory maps in the figure below). Spatial mapping was lost when the clock-like rhythm was lost.

One experimental sequence. Top is the schematic of what was done. Rate map shows the firing rate of the target grid cells in a sampled 3cm square, with m=mean rate, and p=peak rate. Spatial autocorrelation shows how spatially periodic the rate map data is, and at what interval. Gridness is an abstract metric of how spatially periodic the cells fire. Trajectory shows the rat's physical paths during free behavior, overlaid with the grid cell firing data.
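The "gridness" score mentioned in the caption is, in the versions commonly used in this literature, computed from the spatial autocorrelogram: correlate the map with rotated copies of itself, and a hexagonal pattern correlates well at 60 and 120 degrees but poorly at 30, 90, and 150. A rough sketch of that idea (the papers' exact procedure, such as the annulus masking usually applied, may differ):

```python
import numpy as np
from scipy.ndimage import rotate

def gridness(autocorr):
    """Rotational gridness of a 2-D spatial autocorrelogram (numpy array):
    minimum correlation at 60/120 degrees minus maximum at 30/90/150 degrees.
    Hexagonally periodic firing scores high; disrupted firing scores near zero."""
    def corr_at(angle_deg):
        rotated = rotate(autocorr, angle_deg, reshape=False)
        return np.corrcoef(autocorr.ravel(), rotated.ravel())[0, 1]
    return min(corr_at(a) for a in (60, 120)) - max(corr_at(a) for a in (30, 90, 150))
```

Applied to rate maps taken before and after MS inactivation, a collapse of this score toward zero is roughly what "loss of gridness" means in quantitative terms.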

"These data support the hypothesized role of theta rhythm oscillations in the generation of grid cell spatial periodicity or at least a role of MS input. The loss of grid cell spatial periodicity could contribute to the spatial memory impairments caused by lesions or inactivation of the MS."
This is somewhat reminiscent of an artificial computer system, where computation ceases when clocking ceases- though here it does not cease so much as become chaotic. Brain systems are clearly much more robust, breaking down more gracefully and not being as heavily dependent on clocking of this kind, not to mention being capable of generating most rhythms endogenously. But a similar phenomenon happens more generally, of course, during anesthesia, where the controlled long-range chaos of the gamma oscillation ceases along with attention and consciousness.

It might be worth adding that brain waves have no particular connection with rhythmic sensory inputs like sound waves, some of which come in the same frequency range, at least at the very low end. The transduction of sound through the cochlea into neural impulses encodes them in a much more sophisticated way than simply reproducing their frequency in electrical form, and leads to wonders of computational processing such as perfect pitch, speech interpretation, and echolocation.

Clearly, these are still early days in the effort to know how computation takes place in the brain. There is a highly mysterious bundling of widely varying timing/clocking rhythms with messy anatomy and complex content flowing through. But we also understand a lot- far more with each successive decade of work and with advancing technologies. For a few systems, (vision, position, some forms of emotion), we can track much of the circuitry from sensation to high-level processing, such as the level of face recognition. Consciousness remains unexplained, but scientists are definitely knocking at the door.


"As I’ve often written, we’re in a strange state now where people who actually take textbook economics and simple arithmetic seriously are seen as dangerously radical and irresponsible, while people who believe in invisible bond vigilantes and confidence fairies, who claim to know what the market will want even though there’s no sign of that desire in current asset prices, are viewed as Very Serious."
"... many readers have been writing in asking me about price manipulation in international commodity markets – which is aka how financial markets caused a jump in world starvation and death."