Are flies conscious? Are neurons conscious? How many neurons make up a thought?
Are animals conscious? Descartes thought not, consigning them to mere mechanism. But he was obviously wrong: the evidence of feeling and consciousness is all around us, in the animals we see frantically trying to get into our bird feeders, chasing each other around for play and sex, and raising their young with exquisite care. If they are mere mechanism, then so are we. Now we consign only plants and invisible microbes to the zone of no-consciousness, though we could be mistaken there as well. Consciousness seems to be defined by some responsive mental and emotional engagement with the world, built around modeling of how things are and how we wish them to be, networked in a way that puts the emotions and the models in close relation with each other. Bacteria have desires of a sort, but they lack the apparatus for feeling those desires and planning around them that would suggest they are conscious.
Flies, however, are a different story. They hunt for food, victims, and mates, avoid pain, escape danger, and in most cases have visual, tactile, and olfactory worlds of substantial complexity. Flies have become leading model systems for neurobiology, even up to studies of consciousness. A recent paper asked whether one of the leading theories of consciousness, integrated information theory, could be applied to flies. This theory posits that consciousness is not a single thing or location of mental processing, but the network flexibly binding together many mental modes, such as feelings, sensations, memory, planning, and analysis, to form that shifting, yet durable, sense of a self at the center of our being. Specifically, it posits that reductions of consciousness, such as those induced by anaesthesia or sleep, can be analytically, even quantitatively, characterized by reductions in the extent of integrated network activity, commensurate with the reduction in subjective consciousness. This idea has begun to inform the quite practical problem of evaluating levels of anaesthesia in humans, as well as rare neurological conditions such as "locked-in" states, using scanning technologies and network analyses of remaining brain function.
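Integrated information theory comes with a formal measure, Φ, of how much information a whole network carries over and above its parts taken separately. Computing it exactly is combinatorially brutal, but the basic flavor can be shown with a toy sketch. Below is a minimal illustration in Python, assuming nothing from the actual paper: two noisy units are coupled across a partition, and a simple mutual-information estimate across that partition shrinks when the coupling is weakened, a crude stand-in for what anaesthesia does to network integration.

```python
import numpy as np

def simulate(coupling, n_steps=20000, noise=1.0, seed=0):
    # Two noisy units: B is driven by A's previous state with the given
    # coupling strength -- a stand-in for integration across a partition.
    rng = np.random.default_rng(seed)
    a = rng.normal(0.0, noise, n_steps)
    b = np.zeros(n_steps)
    for t in range(1, n_steps):
        b[t] = coupling * a[t - 1] + rng.normal(0.0, noise)
    return a, b

def mutual_information(x, y, bins=16):
    # Crude histogram estimate of mutual information, in bits.
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# "Awake" = strong coupling across the partition; "anaesthetized" = weak coupling.
for label, c in [("awake", 0.9), ("anaesthetized", 0.1)]:
    a, b = simulate(c)
    print(f"{label}: MI across the partition = {mutual_information(a[:-1], b[1:]):.3f} bits")
```

The real analyses operate on multi-electrode recordings with far more careful estimators, but the logic is the same: less coupling across the partitions of a network, less integrated information.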
The default hypothesis, of low complexity and no consciousness for a fly brain, would postulate a fully one-directional processing system, going from visual inputs to action outputs, with little networking or complexity in between. The rapid, and often stereotypical, reflexes of flies might support this kind of view. But we now know, after decades of trying to make video cameras smarter, that this is no way to build a visual system, let alone a generally intelligent brain. These researchers, naturally, find something quite different: complex feedback and integrated information systems, detectable with electrodes mercilessly plugged into their subjects' tiny brains.
"In stark contrast to a view which assumes feedforward architecture for insect brains, especially fly visual systems, we found rich information structures, which cannot arise from purely feedforward systems, occurred across the fly brain. Further, these information structures collapsed uniformly across the brain during anesthesia. Our results speak to the potential utility of the novel concept of an “informational structure” as a measure for level of consciousness."
Figure: Collapse of larger information structures on anaesthesia, in flies.
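The "cannot arise from purely feedforward systems" point deserves a moment. In a strictly feedforward circuit, signals only ever flow downstream, so any measure of directed dependence in the reverse direction should sit at chance; add a feedback connection and coupling shows up in both directions. The sketch below illustrates that signature with a lagged-correlation test on two simulated units. It is a deliberately crude stand-in for the information structures the authors actually compute, and the simulation, coupling strengths, and test are illustrative assumptions rather than anything taken from the paper.

```python
import numpy as np

def simulate(feedback, n_steps=20000, noise=1.0, seed=1):
    # Unit A always drives unit B; B drives A only when feedback=True.
    rng = np.random.default_rng(seed)
    a = np.zeros(n_steps)
    b = np.zeros(n_steps)
    for t in range(1, n_steps):
        a[t] = (0.8 * b[t - 1] if feedback else 0.0) + rng.normal(0.0, noise)
        b[t] = 0.8 * a[t - 1] + rng.normal(0.0, noise)
    return a, b

def lagged_corr(x, y):
    # Correlation between x's past and y's next step: a crude directed-coupling test.
    return float(np.corrcoef(x[:-1], y[1:])[0, 1])

for label, fb in [("feedforward only", False), ("with feedback", True)]:
    a, b = simulate(fb)
    print(f"{label}: A->B {lagged_corr(a, b):+.2f}, B->A {lagged_corr(b, a):+.2f}")
```

In the feedforward-only run, only the A-to-B direction shows coupling; with the feedback connection, both directions do, which is the kind of bidirectional structure the fly recordings revealed and which collapsed under anaesthesia.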
Flies have about 100,000 neurons, a far cry from the roughly 100 billion we have. But as recent work has claimed, it only takes about 14 neurons to constitute a distinct thought or response, so flies have plenty of brains for thought, and quite possibly consciousness. The current workers do not, in the end, pronounce on the capability of flies to have consciousness, or to have it in a way that resembles ours. But in addition to finding clear markers of integrated information networks that decline under anaesthesia, they cite past literature showing that flies share molecular, cellular, and structural themes with mammalian brains, and show attention, memory, feature integration, and long-term planning. So I think it is fair to assume that they are a good model for some modest level of consciousness, and perhaps we should regard them as more than nuisances.