The fact that perceptions come via our sensory organs, and stop once those organs are impaired, strongly suggests a simple camera-type model of one-way perceptual flow. Yet recent research points in the other direction: perception is an active process in which the sense organs are central, but are also directed, top-down, by attention and by pre-existing models of the available perceptual field. We end up with a cognitive loop in which the mind holds models of reality that are incrementally updated by the senses, not wiped and replaced as if they were simple video feeds. The model is the perception, and is more valuable than the input.
One small example of a top-down element in this cognitive loop is visual attention. Our eyes are little outposts of the brain, and are told where to point. Even if something surprising happens in the visual field, the brain has to do a little processing before shifting attention, and thus the eyeballs, to that event. Our eyes shift all the time, pointed to areas of interest, following movie scenes, words on a page, and so on. None of this is directed by the eyes themselves (including the jittery saccade system), but by higher levels of cognition.
The paper for this week notes, ironically, that visual perception studies have worked very hard to eliminate eye and other motion, in order to provide a consistent mapping from what the experimenters present to where the perceptions show up in the visual fields of the brain. Yet motion and directed attention are fundamental to complex sensation.
Other sensory systems vary substantially in their dependence on motion. Hearing is perhaps the least dependent, since one can analyze a scene from a stationary position, though movement of either the sound or the subject, in time and space, is extremely helpful in enriching perception. Touch, through our hairs and skin, is intrinsically dependent on movement and action. Taste and smell are as well, though in a subtler way. Any unchanging smell fades rapidly, subjectively, as we get used to it. It is the bloom of fresh tastes with each mouthful, or of new aromas, that creates sensation, as implied by the expression "clearing the palate". Aside from the brain's top-down construction of these perceptions through its choices and modeling, there is also the direct input of motor components, and dynamic time elements, that enrich and enliven perception multi-modally, beyond a simple input-stream model.
[Figure: the many loops from sensory (left) to motor (right) parts of the perceptual network. This figure is focused on whisker perception by mice.]
The current paper discusses these issues and makes the point that since our senses have always been embodied and in motion, they are naturally optimized for dynamic learning, and that the brain circuits mediating between sensation and action are pervasive and very difficult to separate in practice. The authors hypothesize, very generally, that perception consists of a cognitive quasi-steady state in which motor cues are consistent with tactile and other sensory cues (assuming a cognitive model within which this consistency is defined). This state is then perturbed by changes in any part of the system, especially sensory organ input, upon which the network seeks a new steady state. They term the core of the network the motor-sensory-motor (MSM) loop, thus emphasizing the motor aspects and somewhat unfairly de-emphasizing the sensory aspects, which after all are specialized for a higher abundance and diversity of data than the motor system. But we can grant that they form an integrated system. They also add that much perception is not conscious, so the fixation of a great deal of research on conscious reports, while understandable, is limiting.
"A crucial aspect of such an attractor is that the dynamics leading to it encompass the entire relevant MSM-loop and thus depend on the function transferring sensor motion into receptors activation; this transfer function describes the perceived object or feature via its physical interactions with sensor motion. Thus, ‘memories’ stored in such perceptual attractors are stored in brain-world interactions, rather than in brain internal representations."
Eventually, the researchers present some results from a mathematical model and a robot they have constructed. The robot has a camera and motors to move around with, plus a computer and algorithm. The camera sends only change data, as the retina does, not entire visual scenes, and the visual field is extremely simple: a screen with a dark side and a light side, whose divider can move right or left. The motorized camera system, using equations approximating an MSM loop, can relatively easily home in on and track the light/dark divider, and thus demonstrate dynamic perception driven by both motor and sensory elements. The cognitive model was naturally implicit in the computer code that ran the system, which was expecting to track just such a light/dark line. One must say that this was not a particularly difficult or novel task, so the heart of the paper is its introductory material.
- The US has long been a wealthy country.
- If we want to control carbon emissions, we can't wait for carbon supplies to run out.
- Market failure, marketing, and fraud, umpteenth edition. Trump wasn't the only one getting into the scam-school business.
- Finance is eating away at your retirement.
- Why is the House of Representatives in hostile hands?
- UBI- utopian in a good way, or a bad way?
- Trump transitions from stupid to deranged.
- The fight against corruption and crony Keynesianism, in India.
- Whom do policymakers talk to? Hint- not you.
- Whom are you talking to at the call center? Someone playing hot potato.
- Those nutty gun nuts.
- Is Islam a special case, in how it interacts with political and psychological instability?
- Graph of the week- a brief history of inequality from 1725.