
Sunday, July 7, 2024

Living on the Edge of Chaos

 Consciousness as a physical process, driven by thalamic-cortical communication.

Who is conscious? This is not only a medical and practical question, but a deep philosophical and scientific one. Over the last century, we have become increasingly comfortable assigning consciousness to other animals, in ever-wider circles of understanding and empathy. The complex lives of chimpanzees, as brought into our consciousness by Jane Goodall, are one example. Birds are increasingly appreciated for their intelligence and strategic maneuvering. Insects are another frontier, as we come to appreciate the complex communication and navigation strategies that honeybees, for example, employ. Where does it end? Are bacteria conscious?

I would define consciousness as responsiveness crosschecked with a rapidly accessible model of the world. Responsiveness alone, such as in a thermostat, is not consciousness. But a Nest thermostat that checks the weather, knows its occupants' habits and schedules, and tracks prices for gas and electricity ... that might be a little conscious(!) But back to the chain of being- are jellyfish conscious? They are quite responsive, and have a few thousand networked neurons that might well be computing the expected conditions outside, so I would count them as borderline conscious. That is generally where I would put the dividing line: plants, bacteria, and sponges as not conscious; organisms with brains, even quite decentralized ones like octopi and snails, as conscious; and jellyfish as slightly conscious. Consciousness is an infinitely graded condition, which in us reaches great heights of richness, but presumably starts at a very simple level.

These classifications imply that consciousness is a property of very primitive brains, and thus, in our brains, is likely to be driven by very primitive levels of their anatomy. And that brings us to the two articles for this week, about current theories and work centered on the thalamus as a driver of human consciousness. One paper relates detailed experimental and modeled tests of information transfer that characterize a causal back-and-forth between the thalamus and the cortex, in which there is a frequency division between thalamic 10-20 Hz oscillations, whose information is then re-encoded and reflected in cortical oscillations of much higher frequency, at about 50-150 Hz. It also continues a long-running theme in the field, characterizing the edge-of-chaos nature of electrical activity in these thalamus-cortex communications as being just the kind of signaling suited to consciousness, and tracks the variation of chaoticity during anesthesia, waking, psychedelic drug use, and seizure. A background paper provides a general review of this field, showing that the thalamus seems to be a central orchestrator of both the activation and maintenance of consciousness, as well as its contents and form.

The thalamus is at the very center of the brain, and, as is typical for primitive parts of the brain, it packs a lot of molecular diversity, cell types, and anatomy into a very small region. More recently evolved areas of the brain tend to be more anatomically and molecularly uniform, while supporting more complexity at the computational level. The thalamus has about thirty "nuclei", or anatomical areas that have distinct patterns of connections and cell types. It is known to relay sensory signals to the cortex, and to be central to sleep control and alertness. It sits right over the brain stem, and has radiating connections out to, and back from, the cerebral cortex, suggestive of a hub-like role.

The thalamus is networked with recurrent connections all over the cortex.


The first paper claims, firstly, that electrical oscillations in the thalamus and the cortex are interestingly related. Mice, rats, and humans were all used as subjects and gave consistent results over the testing, supporting the idea that, at the very least, we think alike, even if what we think about may differ. What is encoded in the awake brain at 1-13 Hz in the thalamus appears in correlated form (that is, in a transformed way) at 50-100+ Hz in the cortex. They study the statistics of recordings from both areas to claim that there is directional information flow, marked not just by the anatomical connections, but by active, harmonic entrainment and recoding. But this relationship fails to occur in unconscious states, even though the thalamus at these times (in sleep, or anesthesia) is helping to drive slow wave sleep patterns directly in the cortex. This supports the idea that there is a summary function going on, where richer information processed in the cortex is reduced in dimension into the vaguer hunches and impressions that make up our conscious experience. Even when our feelings and impressions are very vivid, they probably do not do justice to the vast processing power operating in the cortex, which is mostly unconscious.

Administering psychedelic drugs to their experimental animals caused greater information transfer backwards from the cortex to the thalamus, suggesting that the animals' conscious experience was being flooded. They also observe that these loops from the thalamus to cortex and back have an edge-of-chaos form. They are complex, ever-shifting, and information-rich. Chaos is an interesting concept in information science, quite distinct from noise. Chaos is deterministic, in that the same starting conditions should always produce the same results. But chaos is non-linear, in that small changes to initial conditions can generate large swings in the output. Limited chaos is characteristic of living systems, which have feedback controls to limit the range of activity, but also have high sensitivity to small inputs, new information, etc., and thus are highly responsive. Noise, in contrast, consists of random changes to a signal that are not reproducible and are not part of a control mechanism.
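
The distinction between chaos and noise is easy to see in a few lines of code. Here is a minimal sketch (my own toy illustration, not anything from the paper) using the logistic map, a textbook chaotic system: the rule is perfectly deterministic and reproducible, yet two all-but-identical starting points diverge completely.

```python
# Deterministic chaos in miniature: the logistic map in its chaotic regime.

def logistic(x, r=3.9):
    """One step of the logistic map; r = 3.9 lies in the chaotic regime."""
    return r * x * (1 - x)

x_a = 0.400000000   # two starting points differing by one part in ~10^9
x_b = 0.400000001
for step in range(1, 61):
    x_a, x_b = logistic(x_a), logistic(x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: |x_a - x_b| = {abs(x_a - x_b):.9f}")

# Every run prints exactly the same numbers (deterministic, unlike noise),
# yet the trajectories separate to order 1 within a few dozen steps:
# sensitive dependence on initial conditions.
```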

Unfortunately, I don't think the figures from this paper support their claims very well, or at least not clearly, so I won't show them. It is exploratory work, on the whole. At any rate, they are working from, and contributing to, a by now quite well-supported paradigm that puts the thalamus at the center of conscious experience. For example, direct electrical stimulation of the center of the thalamus can bring animals immediately up from unconsciousness induced by anesthesia. Conversely, stimulation at the same place with a different electrical frequency (10 Hz rather than 50 Hz) causes immediate freezing and vacancy of expression in the animal, suggesting interference with consciousness. Secondly, the thalamus is known to be the place that gates what sensory data enters consciousness, based on a long history of attention, binocular rivalry, and blindsight experiments.

A schematic of how stimulation of the thalamus (in middle) interacts with the overlying cortex. CL stands for the central lateral nucleus of the thalamus, where these stimulation experiments were targeted. The Greek letters alpha and gamma stand for different frequency bands of neural oscillation.

So, from both anatomical and functional perspectives, the thalamus appears to sit at the center of conscious experience. This is a field that is not going to be revolutionized by a lightning insight or a new equation. It is looking very much like a matter of normal science slowly accumulating ever-more refined observations, simulations, better technologies, and theories that gain, piece by piece, on this most curious of mysteries.


Saturday, December 2, 2023

Preliminary Pieces of AI

We already live in an AI world, and really, it isn't so bad.

It is odd to hear all the hyperventilating about artificial intelligence of late. One would think it is a new thing, or some science-fiction-y entity. Then there are fears about the singularity and loss of control by humans. Count me a skeptic on all fronts. Man is, and remains, wolf to man. To take one example, we are contemplating the election of perhaps the dumbest person ever to hold the office of president. For the second time. How an intelligence, artificial or otherwise, is supposed to worm its way into power over us is not easy to understand, looking at the nature of humans and of power.

So let's take a step back and figure out what is going on, and where it is likely to take us. AI has become a catch-all for a diversity of computer methods, mostly characterized by being slightly better at doing things we have long wanted computers to do, like interpreting text, speech, and images. But I would offer that it should include much more- all the things we have computers do to manage information. In that sense, we have been living among shards of artificial intelligence for a very long time. We have become utterly dependent on databases, for instance, for our memory functions. Imagine having to chase down a bank balance or a news story without access to the searchable memories that modern databases provide. They are breathtakingly superior to our own intelligence when it comes to the amount of things they can remember, the accuracy with which they can remember them, and the speed with which they can find them. The same goes for calculations of all sorts, and more recently, complex scientific math like solving atomic structures, creating wonderful CGI graphics, or predicting the weather.

We should view AI as a cabinet filled with many different tools, just as our own bodies and minds are filled with many kinds of intelligence. The integration of our minds into a single consciousness tends to blind us to the diversity of what happens under the hood. We may want gross measurements like "general intelligence", but we increasingly know that it (whatever "it" is, and whatever it fails to encompass of our many facets and talents) is composed of many functions- functions that several decades of work in AI, computer science, and neuroscience have shown to be far more complicated and difficult to replicate than the early AI pioneers imagined, once they got hold of their Turing machine with its infinite potential.

Originally, we tended to imagine artificial intelligence as a robot- humanoid, slightly odd looking, but just like us in form and abilities. That was a natural consequence of our archetypes and narcissism. But AI is nothing like that, because full-on humanoid consciousness is an impossibly high bar, at least for the foreseeable future, and requires innumerable capabilities and forms of intelligence to be developed first. 

The autonomous car drama is a good example of this. It has taken every ounce of ingenuity and high tech to get to a reasonably passable vehicle, one able to "understand" key components of the world around it: that a blob in front is a person rather than a motorcycle, or that a light is a traffic light rather than a reflection of the sun. Just as our brain has a stepwise hierarchy of visual processing, we have recapitulated that evolution here by harnessing cameras in these cars (and lasers, etc.) not just to take in a flat visual scene, which by itself is meaningless, but to parse it into understandable units like ... other cars, crosswalks, buildings, bicyclists, etc. Visual scenes are very rich, and figuring out what is in them is a huge accomplishment.

But is it intelligence? Yes, it certainly is a fragment of intelligence, but it isn't consciousness. Imagine how effortless this process is for us, and how effortful and constricted it is for an autonomous vehicle. We understand everything in a scene within a much wider model of the world, where everything relates to everything else. We evaluate and understand innumerable levels of our environment, from its chemical makeup to its social and political implications. Traffic cones do not freak us out. The bare obstacle course of getting around, such as in a vehicle, is a minor aspect, really, of this consciousness, and of our intelligence. Autonomous cars are barely up to the level of cockroaches, on balance, in overall intelligence.

The AI of text and language handling is similarly primitive. Despite the vast improvements in text translation and interpretation, the underlying models these mechanisms draw on are limited. Translation can be done without understanding text at all, merely by matching patterns from pre-digested pools of pre-translated text, regurgitated as cued by the input text. Siri-like spoken responses, on the other hand, do require some parsing of meaning out of the input, to decide what the topic and the question are. But the scope of these tools tends to be very limited, and the wider the scope they are allowed, the more embarrassing their performance, since they are essentially scraping web sites and text pools for question-response patterns, instead of truly understanding the user's request or any field of knowledge.

Lastly, there are the generative ChatGPT style engines, which also regurgitate text patterns reformatted from public sources in response to topical requests. The ability to re-write a Wikipedia entry through a Shakespeare filter is amazing, but it is really the search / input functions that are most impressive- being able, like the Siri system, to parse through the user's request for all its key points. This betokens some degree of understanding, in the sense that the world of the machine (i.e. its database) is parceled up into topics that can be separately gathered and reshuffled into a response. This requires a pretty broad and structured ontological / classification system, which is one important part of intelligence.

Not only is there a diversity of forms of intelligence to be considered, but there is a vast diversity of expertise and knowledge to be learned. There are millions of jobs and professions, each with their own forms of knowledge. Back in the early days of AI, we thought that expert systems could be instructed by experts, formalizing their expertise. But that turned out to be not just impractical, but impossible, since much of that expertise, formed out of years of study and experience, is implicit and unconscious. That is why apprenticeship among humans is so powerful, offering a combination of learning by watching and learning by doing. Can AI do that? Only if it gains several more kinds of intelligence, including an ability to learn in very un-computer-ish ways.

This analysis has emphasized the diverse nature of intelligences, and the uneven, evolutionary development they have undergone. How close are we to a social intelligence that could understand people's motivations and empathise with them? Not very close at all. How close are we to a scientific intelligence that could find holes in the scholarly literature and manage a research enterprise to fill them? Not very close at all. So it is very early days in terms of anything that could properly be called artificial intelligence, even while bits and pieces have been with us for a long time. We may be in for fifty to a hundred more years of hearing every advance in computer science being billed as artificial intelligence.


Uneven development is going to continue to be the pattern, as we seize upon topics that seem interesting or economically rewarding, and do whatever the current technological frontier allows. Memory and calculation were the first to fall, being easily formalizable. Communication network management is similarly positioned. Game learning was next, followed by the Siri / Watson systems for question answering. Then came a frontal assault on language understanding, using the neural network systems, which discard the former expert system's obsession with grammar and rules, for much simpler statistical learning from large pools of text. This is where we are, far from fully understanding language, but highly capable in restricted areas. And the need for better AI is acute. There are great frontiers to realize in medical diagnosis and in the modeling of biological systems, to only name two fields close at hand that could benefit from a thoroughly systematic and capable artificial intelligence.

The problem is that world modeling, which is what languages implicitly stand for, is very complicated. We do not even know how to do this properly in principle, let alone having the mechanisms and scale to implement it. What we have in terms of expert systems and databases does not have the kind of richness or accessibility needed for a fluid and wide-ranging consciousness. Will neural nets get us there? Or ontological systems / databases? Or some combination? However it is done, full world modeling, along with the ability to learn continuously into those models, is among the key capabilities needed for significant artificial intelligence.

After world modeling come other forms of intelligence like social / emotional intelligence and agency / management intelligence with motivation. I have no doubt that we will get to full machine consciousness at some point. The mechanisms of biological brains are just not sufficiently mysterious to think that they can not be replicated or improved upon. But we are nowhere near that yet, despite bandying about the word artificial intelligence. When we get there, we will have to pay special attention to the forms of motivation we implant, to mitigate the dangers of making beings who are even more malevolent than those that already exist... us.

Would that constitute some kind of "singularity"? I doubt it. Among humans there are already plenty of smart people and plenty of diversity, which result in niches where everyone has something useful to do. Technology has been replacing human labor forever, and will continue moving up the chain of capability. And when machines exceed the level of human intelligence, in some general sense, they will get all the difficult jobs. But the job of president? That will still go to a dolt, I have no doubt. Selection for some jobs is by criteria that artificial intelligence, no matter how astute, is not going to fulfill.

Risks? In the current environment, there are plenty of risks, which are typically cases where technology has outrun our will to regulate its social harm. Fake information, thanks to the chatbots and image makers, can now flood the zone. But this is hardly a new phenomenon, and perhaps we need to get back to a position where we do not believe everything we read, in the National Enquirer or on the internet. The quality of our sources may become once again an important consideration, as it always should have been.

Another current risk is chaos from automation. For example, in the financial markets, the new technologies seem to calm the markets most of the time, arbitraging with relentless precision. But when things go out of bounds, flash breakdowns can happen, very destructively. The SEC has sifted through some past events of this kind and set up regulatory guard rails. But it will probably be perpetually behind the curve. Militaries are itching to use robots instead of pilots and soldiers, and to automate killing from afar. But ultimately, control of the military comes down to social power, which comes down to people of not necessarily great intelligence.

The biggest risk from these machines is that of security. If we have our financial markets run by machine, or our medical system run by super-knowledgeable artificial intelligences, or our military by some panopticon neural net, or even just our electrical grid run by super-computers, the problem is not that they will turn against us of their own volition, but that some hacker somewhere will turn them against us. Countless hospitals have already faced ransomware attacks. This is a real problem, growing as machines become more capable and indispensable. If and when we make artificial people, we will need the same kind of surveillance and social control mechanisms over them that we do over everyone else, but with the added option of changing their programming. Again, powerful intelligences made for military purposes to kill our enemies are, by the reflexive property of all technology, prime targets for being turned against us. So just as we have locked up our nuclear weapons and managed to not have them fall into enemy hands (so far), similar safeguards would need to be put on similar powers arising from these newer technologies.

We may have been misled by the many AI and super-beings of science fiction, Nietzsche's Übermensch, and similar archetypes. The point of Nietzsche's construction is moral, not intellectual or physical- a person who has thrown off all the moral boundaries of civilization, especially Christian civilization. But that is a phantasm. The point of most societies is to allow the weak to band together to control the strong and malevolent. A society where the strong band together to enslave the weak ... well, that is surely a nightmare, and more unrealistic the more concentrated the power. We must simply hope that, given the ample time we have before truly comprehensive and superior artificial intelligent beings exist, we will have exercised sufficient care in their construction, and in the health of our political institutions, to control them as we have many other potentially malevolent agents.


  • AI in chemistry.
  • AI to recognize cells in images.
  • Ayaan Hirsi Ali becomes Christian. "I ultimately found life without any spiritual solace unendurable."
  • The racism runs very deep.
  • An appreciation of Stephen J. Gould.
  • Forced arbitration against customers and employees is OK, but fines against frauds... not so much?
  • Oil production still going up.

Saturday, April 8, 2023

Molecules That See

Being trans is OK: retinal and the first event of vision.

Our vision is incredible. If I were not looking right now and experiencing it myself, it would be unbelievable that a biological system made up of motley molecules could accomplish the speed, acuity, and color that our visual system provides. It was certainly a sticking point for creationists, who found (and perhaps still find) it incredible that nature alone can explain it, not to mention its genesis out of the mists of evolutionary time. But science has been plugging away, filling in the details of the pathway, which so far appear to arise by natural means. Where consciousness fits in has yet to be figured out, but everything else is increasingly well accounted for.

It all starts in the eye, which has a curiously backward sheet of tissue at the back- the retina. Its nerves and blood vessels are on the surface, and after light gets through those, it hits the photoreceptor cells at the rear. These photoreceptor cells come in two types, rods (non-color sensitive) and cones (sensitive to either red, green, or blue). The photoreceptor cells have a highly polarized and complicated structure, where photosensitive pigments are bottom-most in a dense stack of membranes. Above these is a segment where the mitochondria reside, which provide power, as vision needs a lot of energy. Above these is the nucleus of the cell (the brains of the operation) and top-most is the synaptic output to the rest of the nervous system- to those nerves that network on the outside of the retina. 

A single photoreceptor cell, with the outer segment at the very back of the retina, and other elements in front.

Facing the photoreceptor membranes at the bottom of the retina is the retinal pigment epithelium, which is black with melanin. This is finally where light stops, and it also has very important functions in supporting the photoreceptor cells: buffering their ionic, metabolic, and immune environment, and phagocytosing and digesting photoreceptor membranes as they get photo-oxidized, damaged, and sloughed off. Finally, inside the photoreceptor cells are the pigment membranes, which harbor the photo-sensitive protein rhodopsin, which in turn hosts the sensing pigment, retinal. Retinal is a vitamin A-derived long-chain molecule that is bound inside rhodopsin or within other opsins, which confer slightly shifted color sensitivities.

These opsins transform the tickle that retinal receives from a photon into a conformational change that they, as GPCRs (G-protein coupled receptors), transmit to their G-protein, called transducin. For each photon coming in, about 50 transducin molecules are activated. Each activated transducin alpha subunit then induces its target, cGMP phosphodiesterase, to consume about 1000 cGMP molecules. The local drop in cGMP concentration closes the cGMP-gated cation channels in the photoreceptor cell membrane, which starts the electrical impulse that travels out to the synapse and nervous system. This amplification series, together with the high density of retinal/opsin molecules packed into the photoreceptor membranes, provides the exquisite sensitivity that allows single photons to be detected by the system.
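
Taking the round numbers above at face value, the first two stages alone multiply out to a gain of roughly fifty thousand; a back-of-the-envelope check:

```python
# Rough arithmetic on the amplification cascade quoted above (illustrative only).
transducins_per_photon = 50     # activated transducin alpha subunits per photon
cgmp_per_transducin = 1000      # cGMP consumed per activated phosphodiesterase

gain = transducins_per_photon * cgmp_per_transducin
print(f"~{gain:,} cGMP molecules consumed per absorbed photon")  # ~50,000
```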

Retinal, used in all photoreceptor cell types. Light causes the cis-form to kick over to the trans form, which is more stable.

The central position of retinal has long been understood, as has the key transition that a photon induces, from cis-retinal to all-trans retinal. Cis-retinal has a kink in the middle, where its double bond in the center of the fatty chain forms a "C" instead of a "W", swinging around the 3-carbon end of the chain. All-trans retinal is a sort of default state, while the cis-structure is the "cocked" state- stable but susceptible to triggering by light. Interestingly, retinal can not be reset to the cis-state while still in the opsin protein. It has to be extracted and sent off to a series of at least three different enzymes to be re-cocked. It is alarming, really, to consider the complexity of all this.

A recent paper (review) provided the first look at what actually happens to retinal at the moment of activation. This is, understandably, a very fast process, and femtosecond x-ray analysis needed to be brought in to look at it. Not only that, but as described above, once retinal flips from the dark to the light-activated state, it never reverses by itself. So every molecule or crystal used in the analysis can only be used once- no second looks are possible. The authors used a spray-crystallography system where protein crystals suspended in liquid were shot into a super-fine and fast X-ray beam, just after passing by an optical laser that activated the retinal. Computers are now helpful enough that the diffractions from these passing crystals, thrown off in all directions, can be usefully collected. In the past, crystals were painstakingly positioned on goniometers at the center of large detectors, and other issues predominated, such as how to keep such crystals cold for chemical stability. The question here was what happens in the femto- and pico-seconds after optical light absorption by retinal, ensconced in its (temporary) rhodopsin protein home.

Soon after activation, at one picosecond, retinal has squirmed around, altering many contacts with its protein. The cis (dark) conformation is shown in red, while the just-activated (trans) form is in yellow. The PSB site on the far end of the fatty chain (right) is secured against the rhodopsin host, as is the retinal ring (left side), leaving the middle of the molecule to convey most of the shape change, a bit like a bicycle pedal.

And what happens? As expected, the retinal molecule twists from cis to trans, causing the protein contacts to shift. The retinal shift happens by 200 femtoseconds, and the knock-on effects through the protein are finished by 100 picoseconds. It all makes a nanosecond seem impossibly long! As imaged above, the shape shift of retinal changes a series of contacts it has with the rhodopsin protein, inducing it to change shape as well. The two ends of the retinal molecule seem to be relatively tacked down, leaving the middle, where the shape change happens, to do most of the work. 

"One picosecond after light activation, rhodopsin has reached the red-shifted Batho-Rh intermediate. Already by this early stage of activation, the twisted retinal is freed from many of its interactions with the binding pocket while structural perturbations radiate away as a transient anisotropic breathing motion that is almost entirely decayed by 100 ps. Other subtle and transient structural rearrangements within the protein arise in important regions for GPCR activation and bear similarities to those observed by TR-SFX during photoactivation of seven-TM helix retinal-binding proteins from bacteria and archaea."

All this speed is naturally lost in the later phases, which take many milliseconds to send signals to the brain, discern movement and shape, identify objects in the scene, and do all the other processing needed before consciousness can make any sense of it. But it is nice to know how elegant and uniform the opening scene in this drama is.


  • Down with lead.
  • Medicare advantage, cont.
  • Ukraine, cont.
  • What the heck is going on in Wisconsin?
  • Graph of the week- world power needs from solar, modeled to 2050. We are only scratching the surface so far.



Saturday, April 1, 2023

Consciousness and the Secret Life of Plants

Could plants be conscious? What are the limits of consciousness and pain? 

Scientific American recently reviewed a book titled "Planta Sapiens". The title gives it all away, and the review was quite positive, with statements like: 

"Our senses can not grasp the rich communicative world of plants. We therefore lack language to describe the 'intelligence' of a root tip in conversation with the microbial life of the soil or the 'cognition' that emerges when chemical whispers ripple through a lacework of leaf cells."

This is provocative indeed! What if plants really do have a secret life and suffer pain with our every bite and swing of the scythe? What of our vaunted morals and ethics then?

I am afraid that I take a skeptical view of this kind of thing, so let's go through some of the aspects of consciousness, and ask how widespread it really is. One traditional view, from ur-scientific types like Descartes, is that only humans have consciousness, and all other creatures have at best a mechanism, unfeeling and mechanical, that may look like consciousness, but isn't. This view, continued in a sense by B. F. Skinner in the 20th century, is a statement from ignorance: we can not fully communicate with animals, so we can not really participate in what looks like their consciousness, so let's just ignore it. This position has the added dividend of supporting our unethical treatment of animals, which was an enormous convenience, and remains the core position of capitalism generally, regarding farm animals (though its view of humans is hardly more generous).

Well, this view is totally untenable, given our experience of animals, our ability to communicate with them to various degrees, and our observations of them dreaming, not to mention the evolutionary standpoint: our consciousness did not arise from nothing, after all. So I think we can agree that mammals can all be included in the community of conscious fellow-beings on the planet. It is clear that the range of conscious pre-occupations can vary tremendously, but whenever we have looked at the workings of memory, attention, vision, and other components assumed to be part of or contributors to conscious awareness, they all exist in mammals, at least.

But what about other creatures, like insects, jellyfish, or bacteria? Here we will need a deeper look at the principles in play. As far as we understand it, consciousness is an activity that binds various senses and models of the world into an experience. It should be distinguished from responsiveness to stimuli. A thermostat is responsive. A bacterium is responsive. That does not constitute consciousness. Bacteria are highly responsive to chemical gradients in their environment, to food sources, to the pheromones of fellow bacteria. They appear to have some amount of sensibility and will. But we can not say that they have experience in the sense of a conscious experience, even if they integrate a lot of stimuli into a holistic and sensitive approach to their environment.


The same is true of our own cells, naturally. They also are highly responsive on an individual basis, working hard to figure out what the bloodstream is bringing them in terms of food, immune signals, pathogens, etc. Could each of our cells be conscious? I would doubt it, because their responsiveness is mechanistic, rather than arising from a model of their world that is independent as well as integrated. Similarly, if we are under anesthesia and a surgeon cuts off a leg, is that leg conscious? It has countless nerve cells, and sensory apparatus, but it does not represent anything about its world. It rather is built to send all these signals to a modeling system elsewhere, i.e. our brain, which is where consciousness happens, and where (conscious) pain happens as well.

So I think the bottom line is that consciousness is rather widely shared as a property of brains, thus of organisms with brains, which were devised over evolutionary time to provide the kind of integrated experience that a neural net can not supply. Jellyfish, for instance, have neural nets that feel pain, respond to food and mates, and swim exquisitely. They are highly responsive, but, I would argue, not conscious. On the other hand, insects have brains and would count as conscious, even though their level of consciousness might be very primitive. Honey bees map out their world, navigate about, select the delicacies they want from plants, and go home to a highly organized hive. They also remember experiences and learn from them.

This all makes it highly unlikely that consciousness is present in quantum phenomena, in rocks, in bacteria, or in plants. They just do not have the machinery it takes to feel something as an integrated and meaningful experience. Where exactly the line is between highly responsive and conscious is probably not sharply defined. There are brains that are exceedingly small, and neural nets that are very rich. But it is also clear that it doesn't take consciousness to experience pain or try to avoid it (which plants, bacteria, and jellyfish all do). Where is the limit of ethical care, if our criterion shifts from consciousness to pain? Wasn't our amputated leg in pain after the operation above, and didn't we callously ignore its feelings?

I would suggest that the limit remains that of consciousness, not that of responsiveness to pain. Pain is not problematic because of a reflex reaction. The doctor can tap our knee as often as he wants, perhaps causing pain to our tendon, but not to our consciousness. Pain is problematic because of suffering, which is a conscious construct built around memory, expectations, and models of how things "should" be. While one can easily see that a plant might have certain positive (light, air, water) and negative (herbivores, fungi) stimuli that shape its intrinsic responses to the environment, these are all reflexive, not reflective, and so do not appear (to an admittedly biased observer) to constitute suffering that rises to ethical consideration.

Saturday, March 18, 2023

The Eye is the Window to the Brain

Alpha oscillations of the brain prefigure the saccades by which our eyes move as we read.

Reading is a complicated activity. We scan symbols on a page, focusing on some in turn while looking ahead for the next one. Data goes to the brain not in full images, but in the complex coding of differences from moment to moment. Simultaneously, various levels of processing in the brain decode the dark and light spots, the letter forms, the word chunks, the phrases, and on up to the ideas being conveyed.

While our brain is not rigidly clocked like a computer, (where each step of computation happens in sync with the master clock), it does have dynamic oscillations at several different frequencies and ranging over variable regions and coalitions of neurons that organize its processing. And the eye is really a part of that same central nervous system- an outpost that conveys so much sensitive information, both in and out.

We take in visual scenes by jerks, or saccades, using our peripheral vision to orient generally and detect noteworthy portions, then bringing our high-acuity fovea to focus on them. The eye moves about four times per second, an interval that is used to process the current scene and to plan where to shift next. Alpha oscillations (about 10 per second) in the brain, which are inhibitory, are known to (anti-) correlate with motor control of the saccade period. The processing of the visual sensory system resets its oscillations with each shift in scene, so it is keyed to saccades in a receiving sense. Since vision only happens in the rest/focal periods between saccades, it is helpful, conceptually, to coordinate the two processes so that the visual processing system is maximally receptive (via its oscillatory phase) at the same time that the eye comes to rest after a saccade and sends it a new scene. Conversely, the visual sensory system would presumably tell the motor system when it was done processing the last unit, to gate a shift to the next scene.

A recent paper extended this work to ask how brain oscillations relate to the specific visual task of reading, including texts that are more or less difficult to comprehend. They used the non-invasive method of magnetoencephalography to visualize electrical activity within the brains of people reading. The durations of saccades were very uniform (and short), while the times spent paused on each focal point (word) varied slightly with how difficult the word was to parse. It is worth noting that no evidence supports the lexical processing of words out of peripheral vision- this only happens from foveal/focused images.

Subjects spent more time focused on rare/difficult words than on easy words, during a free reading exercise (C). On the other hand, the duration of saccades to such words was unchanged (D).

In the authors' main finding, alpha oscillations were phase-locked to the shifts from word to word, as the reader paused to view each one. These oscillations tracked the pausing more closely when shifting towards more difficult words, rather than to simple words. And these peaks of phase locking happened anatomically in Brodmann area 7, which is a motor area that mediates between the visual system and motor control of the eye. Presumably this results from communication from the visual processing area to the visual motor area, just next door. They also found that the phase locking was strongest for the start of saccades, not their end, when the scene comes back into focus. This may simply be a timing issue: since there are lags at all points in the visual processing system, and since the saccade duration is relatively fixed, this interval may be appropriate to keep the motor and sensory areas in effective synchronization.
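
For readers curious what "phase locking" means operationally: the standard approach is to extract the instantaneous alpha-band phase at each saccade onset and measure how tightly those phases cluster, e.g. as a phase-locking value. Below is a minimal sketch with synthetic data (a generic reconstruction of the method, not the authors' actual pipeline; the onset timing and noise level are made up for illustration).

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(0)
fs = 1000                                    # sampling rate, Hz
t = np.arange(0, 60, 1 / fs)                 # 60 s of synthetic "MEG" signal
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Band-pass to the alpha band (8-12 Hz) and extract instantaneous phase.
b, a = butter(4, [8, 12], btype="bandpass", fs=fs)
alpha_phase = np.angle(hilbert(filtfilt(b, a, signal)))

# Hypothetical saccade onsets every 0.3 s; in this toy example that is an
# integer number of alpha cycles, so onsets land at a consistent phase.
onsets = (np.arange(1.0, 59.0, 0.3) * fs).astype(int)

# Phase-locking value: length of the mean unit vector of onset phases.
plv = np.abs(np.mean(np.exp(1j * alpha_phase[onsets])))
print(f"PLV = {plv:.3f}")   # near 1 = tight locking; near 0 = none
```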

Alpha oscillation locks to some degree with initiation of saccades, and does so more strongly when heading to difficult words, rather than to easy words. Figure B shows the difference in alpha power between the easy and difficult word target. How can this be? 

So while higher frequency (gamma) oscillations participate in sensory processing of vision, this lower alpha frequency is dominant in the area that controls eye movement, in keeping with muscle control mechanisms more generally. But it does raise the question of why they found a signal (phase locking for the initiation of a saccade) for the difficulty of the upcoming word, before it was actually lexically processed. The peripheral visual system is evidently making some rough guess, perhaps by size or some other property, of the difficulty of words, prior to fully decoding them, and it will be interesting to learn where this analysis is done.


  • New uses for AI in medicare advantage.
  • Some problems with environmental review.
  • No-compete "agreements" are no such thing, and worthless anyhow.
  • We wanna be free.

Saturday, February 18, 2023

Everything is Alive, but the Gods are all Dead

Barbara Ehrenreich's memoir and theological ruminations in "Living with a Wild God".

It turns out that everyone is a seeker. Somewhere there must be something or someone to tell us the meaning of life- something we don't have to manufacture with our own hands, but rather can go into a store and buy. Atheists are just as much seekers as anyone else, only they never find anything worth buying. The late writer Barbara Ehrenreich was such an atheist, as well as a remarkable writer and intellectual who wrote a memoir of her formation. Unusually and fruitfully, it focuses on those intense early and teen years when we are reaching out with both hands to seize the world- a world that is maddeningly just beyond our grasp, full of secrets and codes it takes a lifetime and more to understand. Religion is the ultimate hidden secret, the greatest mystery which has been solved in countless ways, each of them conflicting and confounding.

Ehrenreich's tale is more memoir than theology, taking us on a tour through a dysfunctional childhood with alcoholic parents and tough love. A story of growth, striking out into the world, and sad coming-to-terms with the parents who each die tragically. But it also turns on a pattern of mystical experiences that she keeps having, throughout her adult life, which she ultimately diagnoses as dissociative states where she zones out and has a sort of psychedelic communion with the world.

"Something peeled off the visible world, taking with it all meaning, inference, association, labels, and words. I was looking at a tree, and if anyone had asked, that's what I would have said I was doing, but the word "tree" was gone, along with all the notions of tree-ness that had accumulated in the last dozen years or so since I had acquired language. Was it a place that was suddenly revealed to me? Or was it a substance- the indivisible, elemental material out of which the entire known and agreed-upon world arises as a fantastic elaboration? I don't know, because this substance, this residue, was stolidly, imperturbably mute. The interesting thing, some might say alarming, was that when you take away all the human attributions- the words, the names of species, the wisps of remembered tree-related poetry, the fables of photosynthesis and capillary action- that when you take all this this away, there is still something left."

This is not very hard to understand as a neurological phenomenon of some kind of transient disconnection of just the kind of brain areas she mentions- those that do all the labeling, name-calling, and boxing-in. In schizophrenia, it runs to the pathological, but in Ehrenreich's case, she does not regard it as pathological at all, as it is always quite brief. But obviously, the emotional impact and weirdness of the experience- that is something else altogether, and something that humans have been inducing with drugs, and puzzling over, forever. 


As a memoir, the book is very engaging. As a theological quest, however, it doesn't work as well, because the mystical experience is, as noted above, resolutely meaningless. It neither compels Ehrenreich to take up Christianity, as after a Pauline conversion, nor any other faith or belief system. It offers a peek behind the curtain, but, stripped of meaning as this view is, Ehrenreich is perhaps too skeptical or bereft of imagination to give it another, whether of her own or one available from the conventional array of sects and religions. So while the experiences are doubtless mystical, one can not call them religious, let alone god-given, because Ehrenreich hasn't interpreted them that way. This hearkens back to the writings of William James, who declined to assign general significance to mystical experiences, while freely admitting their momentous and convincing nature to those who experienced them.

Only in one brief section (which had clearly been originally destined for an entirely different book) does she offer a more interesting and insightful analysis. There, Ehrenreich notes that the history of religion can be understood as a progressive bloodbath of deicide. At first, everything is alive and sacred, to an animist mind. Every leaf and grain of sand holds wonders. Every stream and cloud is divine. This is probably our natural state, which a great deal of culture has been required to stamp out of us. Next is a hunting kind of religion, where deities are concentrated in the economic objects (and social patterns) of the tribe- the prey animals, the great plants that are eaten, and perhaps the more striking natural phenomena and powerful beasts. But by the time of paganism, the pantheon is cut down still more and tamed into a domestic household, with its soap-opera dramas and an increasingly tight focus on the major gods- the head of the family, as it were. 

Monotheism comes next, doing away with all the dedicated gods of the ocean, of medicine, of amor and war, etc., cutting the cast down to one. One, which is inflated to absurd proportions with all-goodness, all-power, all-knowledge, etc. A final and terrifying authoritarianism, probably patterned on the primitive royal state. This is the phase when the natural world is left in the lurch, as an undeified and unprotected zone where human economic greed can run rampant, safe in the belief that the one god is focused entirely on man's doings, whether for good or for ill, not on that of any other creature or feature of the natural world. A phase when even animals, who are so patently conscious, can, through the narcissism of primitive science and egoistic religion, be deemed mere mechanisms without feeling. This process doesn't even touch on the intercultural deicide committed by colonialism and conquest.

This in turn invites the last deicide- that by rational people who toss aside this now-cartoonish super-god, and return to a simpler reverence for the world as we naturally respond to it, without carting in a lot of social power-and-drama baggage. It is the cultural phase we are in right now, but the transition is painfully slow, uneven, and drawn-out. For Ehrenreich, there are plenty of signs- in the non-linear chemical phenomena of her undergraduate research, in the liveliness of quantum physics even into the non-empty vacuum, in the animals who populate our world and are perhaps the alien consciousnesses that we should be seeking in place of the hunt through outer space, and in our natural delight in, and dreams about, nature at large. So she ends the book as atheist as ever, but hinting that perhaps the liveliness of the universe around us holds some message that we are not the only thinking and sentient beings.

"Ah, you say, this is all in your mind. And you are right to be skeptical; I expect no less. It is in my mind, which I have acknowledged from the beginning is a less than perfect instrument. but this is what appears to be the purpose of my mind, and no doubt yours as well, its designed function beyond all the mundane calculations: to condense all the chaos and mystery of the world into a palpable Other or Others, not necessarily because we love it, and certainly not out of any intention to "worship" it. But because ultimately we may have no choice in the matter. I have the impression, growing out of the experiences chronicled here, that it may be seeking us out." 

Thus the book ends, and I find it a rather poor ending. It feels ripped from an X-Files episode, highly suggestive and playing into all the Deepak and similar mystical tropes of cosmic consciousness. That is, if this passage really means much at all. Anyhow, the rest of the trip is well worth it, and it is appropriate to return to the issue of the mystical experience, which is here handled with such judicious care and restraint. Where imagination could have run rampant, the coolly scientific view (Ehrenreich had a doctorate in biology) is that the experiences she had, while fascinating and possibly book-proposal-worthy, did not force a religious interpretation. This is radically unlike the treatment of such matters in countless other hands, needless to say. Perhaps our normal consciousness should not be automatically valued less than more rare and esoteric states, just because it is common, or because it is even-tempered.


  • God would like us to use "they".
  • If you are interested in early Christianity, Gnosticism is a good place to start.
  • Green is still an uphill battle.

Saturday, December 24, 2022

Brain Waves: Gaining Coherence

Current thinking about communication in the brain: the Communication Through Coherence framework.

Eyes are windows to the soul. They are visible outposts of the brain that convey outwards what we are thinking, as they gather in the riches of our visible surroundings. One of their less appreciated characteristics is that they flit from place to place as we observe a scene, never resting in one place. These jumps are called saccades, and they represent an involuntary redirection of attention all over a visual scene that we are studying, in order to gather high resolution impressions from places of interest. Saccades happen at a variety of intervals, centered around 0.1 second. And just as the raster scanning of a TV or monitor can tell us something about how it or its signal works, the eye saccade is thought, by the theory presented below, to reflect a theta rhythm in the brain that is responsible for resetting attention- here, in the visual system.

That theory is Communication Through Coherence (CTC), which appears to be the dominant theory of how neural oscillations (aka brain waves) function. (This post is part of what seems like a yearly series of updates on the progress in neuroscience in deciphering what brain waves do, and how the brain works generally.) This paper appeared in 2014, but it expressed ideas that were floating around for a long time, and has since been taken up by numerous other groups that provide empirical and modeling support. A recent paper (titled "Phase-locking patterns underlying effective communication in exact firing rate models of neural networks") offers full-throated support from a computer modeling perspective, for instance. But I would like to go back and explore the details of the theory itself.

The communication part of the theory is how thoughts get communicated within the brain. Communication and processing are simultaneous in the brain, since it is physically arranged to connect processing chains (such as visual processing) together as cells that communicate consecutively, for example creating increasingly abstract representations during sensory processing. While the anatomy of the brain is pretty well set in a static way, it is the dynamic communication among cells and regions of the brain that generates our unconscious and conscious mental lives. Not all parts can be talking at the same time- that would be chaos. So there must be some way to control mental activity to manageable levels of communication. That is where coherence comes in. The theory (and a great deal of observation) posits that gamma waves in the brain, which run from about 30 Hz all the way up to 200 Hz, link together neurons and larger assemblages / regions into transient co-firing coalitions that send thoughts from one place to another, precisely and rapidly, insulated from the noise of other inputs. This is best studied in the visual system, which has a reasonably well-understood and regimented processing hierarchy that progresses from V1 through V4 levels of increasing visual field size and abstraction, and out to cortical areas of cognition.

The basis of brain waves is that neural firing is rapid, and is followed by a refractory period, lasting a few milliseconds, during which the neuron is resistant to another input. Then it can fire again, and will do so if there are enough inputs to its dendrites. There are also inhibitory cells all over the neural system, damping down activity so that the system is tuned not to run to epileptic extremes of universal activation. So if one set of cells entrains the next set of cells in a rhythmic firing pattern, those cells tend to stay entrained for a while, and then get reset by way of slower oscillations, such as the theta rhythm, which runs at about 4-8 Hz. Those entrained cells are, during their refractory periods, also resistant to inputs that are not synchronized, essentially blocking out noise. In this way trains of signals can selectively travel up from lower processing levels to higher ones, over large distances and over multiple cell connections in the brain.
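
A toy model makes this gating idea concrete. The sketch below (my own caricature, not code from the CTC literature) rhythmically inhibits a receiving group at a gamma frequency and feeds it two identical spike trains, one arriving in the excitable window and one at peak inhibition; only the aligned train gets through.

```python
import numpy as np

dt = 1.0                              # time step, ms
time = np.arange(0.0, 500.0, dt)      # half a second

gamma_hz = 40.0                       # receiver entrained at a gamma rhythm
# Rhythmic inhibition of the receiving group: 1 = fully suppressed,
# 0 = maximally excitable window (once per gamma cycle).
inhibition = 0.5 * (1 + np.cos(2 * np.pi * gamma_hz * time / 1000))

def spike_train(freq_hz, phase_ms):
    """A spike every 1000/freq_hz ms, offset by phase_ms."""
    spikes = np.zeros_like(time)
    spikes[(time - phase_ms) % (1000 / freq_hz) < dt] = 1.0
    return spikes

aligned = spike_train(40.0, phase_ms=12.5)    # lands in the excitable window
antiphase = spike_train(40.0, phase_ms=0.0)   # lands at peak inhibition

def transmitted(spikes):
    """Total effective drive: each spike scaled by receiver excitability."""
    return np.sum(spikes * (1 - inhibition))

print(f"aligned input drive:    {transmitted(aligned):.1f}")    # ~20, passes through
print(f"anti-phase input drive: {transmitted(antiphase):.1f}")  # ~0, gated out
```

Both trains carry identical spike rates; only their phase relative to the receiver's rhythm determines whether they are transmitted, which is the core of the coherence idea.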

An interesting part of the theory is that frequency is very important. There is a big difference between slower and faster entraining gamma rhythms. Ones that run slower than the going rate do not get traction and die out, while those that run faster hit the optimal post-refractory excitable state of the receiving cells, and tend to gain traction in entraining them downstream. This sets up a hierarchy where increasing salience, whether established through intrinsic inputs, or through top-down attention, can be encoded in higher, stronger gamma frequencies, winning this race to entrain downstream cells. This explains to some degree why EEG patterns of the brain are so busy and chaotic at the gamma wave level. There are always competing processes going on, with coalitions forming and reforming in various frequencies of this wave, chasing their tails as they compete for salience.

There are often bidirectional processes in the brain, where downstream units talk back to upstream ones. While these were originally imagined to be bidirectionally entrained in the same gamma rhythm, the CTC theory now recognizes that the distance / lag in signaling would make this impossible, and separates them as distinct streams, observing that the cellular targets of backwards streams are typically not identical to those generating the forward streams. So a one-cycle offset, with a few intermediate cells, would account for this type of interaction, still in gamma rhythm.

Lastly, attention remains an important focus of this theory, so to speak. How are inputs chosen, if not by their intrinsic salience, such as flashes in a visual scene? How does a top-down, intentional search of a visual scene, or a desire to remember an event, work? CTC posits that two other wave patterns are operative. First is the theta rhythm of about 4-8 Hz, which is slow enough to encompass many gamma cycles and offer a reset to the system, overpowering other waves with its inhibitory phase. The idea is that salience needs to be re-established freshly each theta cycle (such as in eye saccades), with maybe a dozen gamma cycles within each theta that can grow and entrain necessary higher level processing. Note how this agrees with our internal sense of thoughts flowing and flitting about, with our attention rapidly darting from one thing to the next.

"The experimental evidence presented and the considerations discussed so far suggest that top-down attentional influences are mediated by beta-band synchronization, that the selective communication of the attended stimulus is implemented by gamma-band synchronization, and that gamma is rhythmically reset by a 4 Hz theta rhythm."

Attention itself, as a large-scale backward flowing process, is hypothesized to operate in the alpha/beta bands of oscillations, about 8-30 Hz. It reaches backward over connections distinct from the forward ones (indeed, through distinct anatomical layers of the cortex), into lower areas of processing, such as locations in the visual scene, or colors sought after, or a position on a page of text. This slower rhythm could entrain selected lower level regions, setting some to have in-phase and stronger gamma rhythms versus other areas not activated in this way. Why the theta and the alpha/beta rhythms have dramatically different properties is not dwelt on by this paper. One can speculate that each can entrain other areas of the brain, but the theta rhythm is long and strong enough to squelch ongoing gamma rhythms and start many off at the same time in a new competitive race, while the alpha/beta rhythms are brief enough, and perhaps weak enough and focused enough, to start off new gamma rhythms in selected regions that quickly form winning coalitions heading upstream.

Experiments on the nature of attention. The stimulus shown to a subject (probably a monkey) is in A. In E, the monkey was trained to attend to the same spots as in A, even though both were visible. V1 refers to the lowest level of the visual processing area of the brain, which shows activity when stimulated (B, F) whether or not attention is paid to the stimulus. On the other hand, V4 is a much higher level in the visual processing system, subject to control by attention. There, (C, G), the gamma rhythm shows clearly that only one stimulus is being fielded.

The paper discussing this hypothesis cites a great deal of supporting empirical work, and much more has accumulated in the ensuing eight years. While plenty of loose ends remain and we cannot yet visualize this mechanism in real time (though faster MRI is on the horizon), this seems to be the leading hypothesis that both explains the significance and prevalence of neural oscillations and goes some distance toward explaining mental processing in general, including abstraction, binding, and attention. Progress has come not through great theoretical leaps by any one person or institution, but rather through the slow accumulation of research that is extremely difficult to do, but of such great interest that there are people dedicated enough to do it (with or without the willing cooperation of countless poor animals) and agencies willing to fund it.


  • Local media is a different world now.
  • Florida may not be a viable place to live.
  • Google is god.

Saturday, August 13, 2022

Titrations of Consciousness

In genetics, we have mutation. In biochemistry, we have titration. In neuroscience, we have brain damage.

My thesis advisor had a favorite saying: "When in doubt, titrate!" That is to say, if you think you have your hands on a key biochemical component, its amount should clearly influence the reaction you are looking at. Adding more might make its role clearer, or bring out other dynamics, or, at the very least, titration might allow you to not waste it by using just the right amount.

Neuroscience has reached that stage in studies of consciousness. While philosophers wring their hands about the "hardness" of the problem, scientists are realizing that it can be broken down like any other, and studied through its various broken states and disorders, and in its myriad levels / types as induced by drugs, damage, and by evolution in other organisms. A decade ago, a paper showed that the thalamus, a region of the brain right on top of the brain stem and the conduit of much of its traffic with the cortex, has a graded (i.e. titratable) relationship between severity of damage and severity of effects on consciousness. This led to an influential hypothesis- the mesocircuit hypothesis- which portrays wide-ranging circuitry from the thalamus that activates cortical regions and is somewhat inhibited in return by circuits coming back.


Degree of damage to a central part of the brain, the thalamus, correlates closely with degree of consciousness disability.

A classification of consciousness / cognition / communication deficits, ranging from coma to the normal state. LIS = locked-in state, MCS = minimally conscious state, VS = vegetative state (now unresponsive wakefulness syndrome, which may be persistent (PVS)).

The anatomy is pretty clear, and subsequent work has focused on the dynamics, which naturally are the heart of consciousness. A recent paper, while rather turgid, supports the mesocircuit hypothesis by analyzing the activation dynamics of damaged brains (vegetative state, now called unresponsive wakefulness syndrome (UWS)), and less severe minimally conscious states (MCS). They did unbiased mathematical processing to find the relevant networks and reverberating communication modes. For example, in healthy brains there are several networks of activity that hum along while at rest, such as the default mode network, and visual network. These networks are then replaced or supplemented by other dynamics when activity takes place, like viewing something, doing something, etc. The researchers measured the metastability or "jumpiness" of these networks, and their frequencies (eigenmodes).
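
For readers wondering what "metastability" means operationally, a common definition in this literature is the standard deviation over time of the Kuramoto order parameter- how much global synchrony waxes and wanes. Here is a toy sketch with synthetic phases standing in for the paper's fMRI-derived signals (a real analysis would extract phases via a Hilbert transform); the network size, coupling values, and noise level are arbitrary illustrative choices of mine:

```python
import numpy as np

# Metastability = std. dev. over time of the Kuramoto order parameter
# R(t) = |mean(exp(i*theta))|, i.e. how much global synchrony fluctuates.

rng = np.random.default_rng(0)
N, steps, dt = 20, 2000, 0.01      # regions, time steps, step size (s)

def metastability(K, noise=2.0):
    omega = rng.normal(2 * np.pi, 0.5, N)        # intrinsic frequencies (rad/s)
    theta = rng.uniform(0, 2 * np.pi, N)         # initial phases
    R = np.empty(steps)
    for i in range(steps):
        z = np.mean(np.exp(1j * theta))          # complex order parameter
        R[i] = np.abs(z)
        # Each region is pulled toward the mean phase, plus phase noise.
        theta += dt * (omega + K * np.sin(np.angle(z) - theta))
        theta += np.sqrt(dt) * noise * rng.normal(size=N)
    return R.std()

for K in (1.0, 5.0, 50.0):
    print(f"coupling {K:5.1f}: metastability (std of R) = {metastability(K):.3f}")
```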

Naturally, there is a clear relationship between dynamics and consciousness. The worse off the patient, the less variable the dynamics, and the fewer distinct frequencies are observed. But the data is hardly crystal clear, so it got published in a minor journal. It is clear that these researchers have some more hunting to do to find better correlates of consciousness. This might come from finer anatomical localization (hard to do with fMRI), or perhaps from more appropriate math that isolates the truly salient aspects of the phenomenon. In studies of other phenomena such as vision, memory, and place-sensing, the analysis of correlates between measurable brain activity and the subjective or mental aspects of that activity has become immensely more sophisticated and sensitive over time, and one can assume that will be true in this field as well.

Severity of injury correlates with metastability (i.e., jumpiness) of central brain networks, and with consciousness. (HC = healthy control)


  • Senator Grassley can't even remember his own votes anymore.
  • How are the Balkans doing?
  • What's in the latest Covid strain?
  • Economic graph of the week. Real income has not really budged in a very long time.

Sunday, January 16, 2022

Choices, Choices

Hippocampal maps happen in many modes and dimensions. How do they relate to conscious navigation?

How do our brains work? A question that was once mystical is now tantalizingly concrete. Neurobiology is, thanks to the sacrifices of countless rats, mice, and undergraduate research subjects, slowly bringing to light mechanisms by which thoughts flit about the brain. The parallel processing of vision, progressively through the layers of the visual cortex, was one milestone. Another has been work in the hippocampus, which is essential for memory formation as well as mapping and navigation. Several kinds of cells have been found there (or in associated brain areas) which fire when the animal is in a certain place, or crosses a subjective navigational grid boundary, or points its head in a certain direction. 

A recent paper reviewed new findings about how such navigation signals are bound together and interact with the prefrontal cortex during decision making. One finding is that locations are encoded in a peculiar way, within the brain wave known as the theta oscillation. These waves run at about 4 to 12 cycles per second, and as an animal moves or thinks, place cells corresponding to locations behind it fire at the trough of the cycle, while those for locations progressively closer, and then in front of the animal, fire correspondingly higher on the wave. So the conscious path that the animal is contemplating is replayed on a sort of continuous loop, in highly time-compressed fashion. And this happens not only while the animal is on the path, but at other times as well, if it is dreaming about its day, or resting and thinking about its future options.

"For hippocampal place cells to code for both past and future trajectories while the animal navigates through an environment, the hippocampus needs to integrate multiple sensory inputs and self-generated cues by the animal’s movement for both retrospective and prospective coding."


These researchers describe a new piece of the story: alternate theta cycles can encode different paths. That is, as the wave repeats, the first cycle may encode one future path out of a T-maze (call it A), the next may encode the other path (B), and then the pattern repeats- A, B, A, B. It is evident that the animal is trying to decide what to do, and its hippocampus (with associated regions) is helpfully providing mappings of the options. Not only that, but the connecting brain areas heading toward the prefrontal cortex (the nucleus reuniens, entorhinal cortex, and parahippocampal gyrus) separate these path representations into different cell streams (still on the theta oscillation), and progressively filter one out. Ultimately, the prefrontal cortex represents only one path ... the one that the rat actually chooses to go down. The regions are connected in both directions, so there is clearly top-down as well as bottom-up processing going on. The conclusion is that, in general, the hippocampus and allied areas provide relatively unbiased mapping services, while the cortex does the decision making about where to go.

    "This alternation between left and right begins as early as 25 cm prior to the choice point and will continue until the animal makes its turn"


A rat considers its options. Theta waves are portrayed, as they appear in different anatomical locations in the brain. Hippocampal place cells, on the bottom right, give a mapping of the relevant path repeatedly encoded across single theta wave cycles. One path is encoded in one cycle, the other in the next. Further anatomical locations (heading left) separate the maps into different channels / cells, from which the prefrontal cortex finally selects only the one it intends to actually use.
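
The cycle-by-cycle alternation can be sketched just as simply; the path labels and maze segments below are purely illustrative, not taken from the paper:

```python
# Successive theta cycles carry maps of the two candidate arms of a T-maze
# (A, B, A, B, ...), giving downstream areas both options to weigh before
# the prefrontal cortex settles on one.

paths = {"A": ["stem", "junction", "left arm", "left reward"],
         "B": ["stem", "junction", "right arm", "right reward"]}

n_cycles = 6
for cycle in range(n_cycles):
    option = "A" if cycle % 2 == 0 else "B"
    sweep = " -> ".join(paths[option])
    print(f"theta cycle {cycle}: encodes path {option}: {sweep}")
```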

The hippocampus is not just for visual navigation, however. It is now known to map many other senses in spatial terms, such as sounds and smells. It also maps the flow of time in cognitive space, such as in memories, quite apart from spatial mapping. It seems to be a general facility for creating cognitive maps of the world, given whatever the animal has experienced and is interested in, at any scale and in many modalities. The theta wave embedding gives a structure that is highly compressed and repeated, so that it is available to higher processing levels for review, re-enactment, dreaming, and modification for future planning.

Thus using the trusty maze test on rats and mice, neuroscientists are slowly, and very painfully, getting to the point of deciphering how certain kinds of thoughts happen in the brain- where they are assembled, how their components combine, and how they relate to behavior. How they divide between conscious and unconscious processes naturally awaits more insight into what this dividing line really consists of.


  • Biochar!
  • More about the coup attempt.
  • Yes, there was electoral fraud.
  • The more you know about fossil fuels, the worse it gets.
  • Graph of the week. Our local sanitation district finds over a thousand omicron genomes per milliliter of intake, which seems astonishing.




Saturday, June 26, 2021

Tuning Into the Brain

Oscillations as a mechanism of binding, routing, and selection of thoughts.

The brain is plastic, always wiring and rewiring its neurons, even generating new neurons in some areas. But this plasticity does not come near the speed and scale needed to manage brain function at the speed of thought. At that perspective and scale, our brains are static lumps, which somehow manage to dynamically assemble thoughts, visions, impressions, actions, and so much more as we go about our frenetic lives. It has gradually become clear that neural oscillations, or electrical brain waves, which are such a prominent feature of active brains, serve centrally to facilitate this dynamic organization, joining up some areas via synchrony for focused communication, while leaving others in the dark- unattended and waiting for their turn in the spotlight. 

The physical anatomy of the brain is obviously critical to organizing all the possible patterns of neural activity, from the most unconscious primary areas of processing and instinct to whatever it is that constitutes consciousness. But that anatomy is relatively static, so a problem is- how do some brain areas take precedence for some thoughts and activities, while others take over during other times? We know that, very roughly, the whole brain tends to be active most of the time, with only slight upticks during intensive processing, as seen in the induced blood flow detected by methods such as fMRI. How are areas needed for some particular thought brought into a dynamic coalition, which is soon after dissolved with the next thought? And how do such coalitions contribute to information flow, and ultimately, thought?

Oscillations seem to provide much of this selectivity, and the last decade of research has brought an increasingly detailed appreciation of their role in particular areas, and of their broad applicability to dynamic selection and binding issues generally. A recent paper describes a bunch of simulations and related analytical work on one aspect of this issue- how oscillations work at a distance, when there is an inevitable lag in communication between two areas.

Originally, theories of neural oscillation just assumed synchrony and did not bother too much with spatial or time-delay separations. Synchrony clearly offers the opportunity of activating one area of the brain based on the activity of a separate driving area. For instance, primary visual areas might synchronize rhythmically with downstream areas and thus drive their processing of a sequence of signals, generating higher-level activations in turn that ultimately constitute visual consciousness. Adding in spatial considerations increases complexity, since various areas of the brain exist at widely different separations, potentially making a jumble of the original idea. But on the other hand, feedback is a common, even universal, phenomenon in the brain, and requires some amount of delay to make any sense. Feedback needs to be (and anatomically must be) offset in time to avoid instant shut-down or freezing.

Perhaps one aspect of anatomical development is to tune the brain so that certain coalitions can form with useful sequential delays, while others can not, setting in anatomical concrete a certain time-delay characteristic for each anatomically connected group. Indeed, it is known that myelination- the process of white matter development during childhood and early adulthood- speeds up axonal conduction, thus greatly altering the delay characteristics of the brain. Keeping these delays tuned to produce usable coalitions for thought could be a significant hurdle as this development proceeds, and could explain some of the deep alterations of cognition that accompany it. The opportunity to assemble more wide-ranging coalitions of entrained neurons is obviously beneficial to complex thought, but just how flexible are such relations? Could the speeding up of one possible coalition destroy a range of others?

The current paper simply makes the case that delays are perfectly conducive to oscillatory entrainment, and also that regions with higher frequencies tend to more effectively drive downstream areas with slightly lower intrinsic frequencies, though other relationships can also exist. Both phenomena contribute to asymmetric information flow from one area to the next, given oscillatory entrainment. The computer simulations the authors set up were relatively simple: populations of a hundred neurons, some inhibitory and most excitatory, all behaving as closely as possible to natural neurons, modestly inter-connected, with some connections to a second, similar population, and each given certain stable or oscillatory inputs. Each population showed a natural oscillation, given the normal behavior of neurons (with inhibitory feedback) and the near-white-noise baseline input injected into each population. On top of that, the authors injected oscillatory inputs as needed, at particular frequencies and phases, to drive each population's oscillations for the experiment.


The authors manipulated the phase delay between the two populations (small delta), and also the frequency mismatch between them (called de-tuning, big delta). This led to a graph like the one shown above, where each parameter has its own axis, revealing regimes (red) of high entrainment (B) and information transfer (C). The degree of entrainment is apparent in the graphs in D, taken from the respective black points in B, with the driver population in red and the receiver population in blue, as diagrammed in A. In this case, practically all the points are in red zones, and thus show substantial entrainment.
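
As a much-simplified stand-in for this setup (two phase variables replace the populations of spiking neurons; the coupling, noise, and delay values are my own arbitrary choices, not the paper's parameters), one can reproduce the qualitative picture: phase locking survives modest de-tuning and a conduction delay, and collapses beyond the locking range.

```python
import numpy as np

# A "driver" phase oscillator entrains a noisy "receiver" through a
# conduction delay; we scan the de-tuning and report the phase-locking
# value (PLV), a standard measure of entrainment.

rng = np.random.default_rng(1)
dt, T = 1e-4, 3.0
n = int(T / dt)
f_drv = 40.0                        # driver gamma frequency (Hz)
K, noise = 40.0, 1.0                # coupling (1/s), phase noise (rad/sqrt(s))

def plv(detune_hz, delay_ms=5.0):
    """Phase-locking value between driver and delay-coupled receiver."""
    lag = int(delay_ms * 1e-3 / dt)
    th_d = 2 * np.pi * f_drv * dt * np.arange(n)    # driver phase over time
    th_r = np.zeros(n)
    w_r = 2 * np.pi * (f_drv + detune_hz)           # receiver's own frequency
    for i in range(1, n):
        # The receiver feels the driver's phase as it was `lag` steps ago.
        drive = np.sin(th_d[max(i - lag, 0)] - th_r[i - 1])
        th_r[i] = (th_r[i - 1] + dt * (w_r + K * drive)
                   + np.sqrt(dt) * noise * rng.normal())
    half = n // 2                                    # discard the transient
    return np.abs(np.mean(np.exp(1j * (th_d[half:] - th_r[half:]))))

for d in (-10, -3, 0, 3, 10):
    print(f"de-tuning {d:+3d} Hz, 5 ms delay: PLV = {plv(d):.2f}")
```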

While this simulation method appears quite powerful, the paper was not well-written enough, and the experiments not clear enough, to make larger points out of this work. It is one small piece of a larger movement to pin down the capabilities and exact nature and role of neural oscillations in the brain- a role that has been tantalizing for a century, and is at last starting to be appreciated as central to cognition.

Based on the following articles: