Of all the scientific problems, consciousness is among the most interesting. It is the study of ourselves, of our most intimate nature, whose very proximity makes investigation problematic, not to say perilous. Its intrinsic structure misdirects us from its physical basis, and it has thus been the subject of millennia of fruitless, indeed unsound and narcissistic, speculation. Not that there is any significant evidence that goes against the basic reductionist model that it all happens in our brains, but how it happens ... that remains largely unknown, due to the rather bizarre technology involved, which is miniaturized, rapidly dynamic, very wet, and case-hardened, in addition to being ethically protected from willful damage.
A recent paper took another step towards teasing out the dynamic differences between conscious and unconscious processing using EEG and MEG, that is, electrical and magnetic recordings from the scalps of human subjects as they view visual stimuli. Spots were presented so briefly that they registered at the very limits of visual perception. The subjects were then asked both whether they consciously saw the spot and, in either case, where they thought it was.
An interesting wrinkle in this work is that even when the subjects didn't consciously see the spot, their guesses were much more accurate than chance. This is due to the well-known phenomenon of blindsight, by which we still process visual input to very high levels of analysis whether or not it enters consciousness. So the researchers ended up with three classes of brain state from their subjects: consciously perceived images, unconsciously perceived and correctly located images, and unconsciously perceived but incorrectly located images. The last set is assumed to represent the most basic level of unconscious perception, interrupted before the higher levels of visual analysis, or before transmission between the visual system and the conscious guessing system.
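To make that three-way sorting of trials concrete, here is a minimal sketch, assuming two hypothetical per-trial records: whether the subject reported seeing the spot, and whether the location guess was correct. The names and the random data are illustrative, not from the paper.

```python
# A minimal sketch (not the authors' code) of sorting trials into the
# three classes described above, from two hypothetical boolean records.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 400
reported_seen = rng.random(n_trials) < 0.4   # hypothetical subjective reports
guess_correct = rng.random(n_trials) < 0.6   # hypothetical guess accuracy

seen_correct = reported_seen & guess_correct          # conscious, correctly located
unseen_correct = ~reported_seen & guess_correct       # blindsight-like trials
unseen_incorrect = ~reported_seen & ~guess_correct    # most basic unconscious case

print(f"seen-correct: {seen_correct.sum()}, "
      f"unseen-correct: {unseen_correct.sum()}, "
      f"unseen-incorrect: {unseen_incorrect.sum()}")
```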
A technical advantage of this method is that in virtually all respects, the conscious, correctly guessed case is the same as the unconscious, correctly guessed case. The visual system found the object, and the guessing system was told where it was. Only the consciousness system is left in the dark, as it were, which these researchers try to exploit to find its electrical traces.
They spend the paper trying to differentiate these brain states using their EEG and MEG recordings, and find that they can detect slight differences. Indeed, they try to predict the visual spot's location purely from the brain signals, independent of the subject's report, and find that they can do a pretty good job, as shown below. This was done with machine learning methods, where the various trials, with known outcomes, are fed into a computer program that picks out whatever salient parts of the signals are informative. The learned pattern can then be deployed on unknown data, yielding novel predictions, as they present.
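For a flavor of what such decoding looks like in practice, here is a rough sketch in the spirit of their analysis, using scikit-learn rather than the authors' actual pipeline; the data are synthetic stand-ins for sensor amplitudes at one time slice.

```python
# A hedged sketch of time-resolved decoding: X holds sensor amplitudes
# (trials x channels) at one time slice, y the known stimulus locations.
# Everything here is synthetic toy data, not the study's recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_trials, n_channels = 400, 64
y = rng.integers(0, 8, n_trials)                 # 8 possible spot locations
X = rng.standard_normal((n_trials, n_channels))
X[np.arange(n_trials), y] += 1.0                 # inject a weak location-dependent signal

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)        # scored only on held-out trials
print(f"held-out decoding accuracy: {scores.mean():.2f} (chance = 0.125)")
```

The cross-validation step is what licenses the "novel predictions" claim: the program is scored only on trials it never saw during training.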
The researchers also show that even when the percept was unconscious, and even when wrongly reported by the subject, the brain data pick up the correct visual location quite a bit of the time, and that location remains encoded out to quite late times, to about 0.6 seconds. This indicates that just because a percept is unconscious doesn't mean it is much briefer or more transient than a conscious one.
At this point they deepened their decoding of the brain signals, looking at the differences between the conscious + correct guess versus the unconscious + correct guess trials. What they found supported the consensus interpretation that consciousness is an added event on top of all the unconscious processing, which goes forth pretty much the same way in each case.
"These analyses indicate that seen–correct trials contained the same decodable stimulus information as unseen–correct trials, plus additional information unique to conscious trials."
Great, so where are these differentiating signals coming from? When they tried to localize the decoded signals, they found that they could roughly place them in the superior frontal and parietal lobes. But this was a pretty weak effect. The consciousness-related part of the signal was difficult to localize, and probably arises in widespread fashion.
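One crude way to ask the "where" question from the decoder itself (my own illustration, not the paper's localization method) is to inspect the fitted classifier's weight magnitudes per channel, reusing `clf`, `X`, and `y` from the sketch above. Raw weights are only a rough proxy for where the informative signal lives, which is part of why such localization stays weak.

```python
# Fit on all toy trials, then rank channels by average weight magnitude.
# This is a loose illustration; it is not a substitute for real source
# localization, and weight maps can mislead about anatomical origin.
clf.fit(X, y)
weights = np.abs(clf.named_steps["logisticregression"].coef_).mean(axis=0)
top_channels = np.argsort(weights)[::-1][:5]
print("most informative channels (toy data):", top_channels)
```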
Finally, the researchers tried another analysis on their data, asking whether the classifiers that were so successful above in modelling the subjects' perception could be time-shifted. The graphs above track the gradual development, peaking, and decay of the visual perception, all in under one second. Could the classifier that works at 300 milliseconds also work at 500 milliseconds? If so, that would imply that the pattern of activity in the brain stays similar, rather than moving dynamically from one processing center and type to another; a sketch of this logic follows below.
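Here is a hedged sketch of that temporal-generalization idea, again on synthetic data: train a classifier on the sensor pattern at one time slice and score it at every other slice. A stable, reverberating pattern fills in the off-diagonal of the resulting matrix; a chain of distinct processing stages leaves only a narrow band around the diagonal. (Real analyses often use mne.decoding.GeneralizingEstimator for exactly this.)

```python
# Temporal generalization on toy data: gen[t_train, t_test] holds the
# accuracy of a classifier trained at one time slice and tested at another.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n_trials, n_channels, n_times = 200, 32, 20
y = rng.integers(0, 2, n_trials)                      # two toy stimulus classes
X = rng.standard_normal((n_trials, n_channels, n_times))
X[y == 1, 0, :] += 0.8                                # a sustained class-dependent signal

train = rng.random(n_trials) < 0.5                    # simple train/test split
gen = np.zeros((n_times, n_times))
for t_train in range(n_times):
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    clf.fit(X[train, :, t_train], y[train])
    for t_test in range(n_times):
        gen[t_train, t_test] = clf.score(X[~train, :, t_test], y[~train])
# With this sustained toy signal, gen is high everywhere; a succession of
# distinct stages would instead confine high scores near the diagonal.
```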
They found that, roughly speaking, their data supported a dramatic distinction between the two tested cases of consciously seen versus unconsciously seen and correctly located. Up to about 300 milliseconds, both were the same, characterized by a succession of visual processing steps in which the classifier for each time slice was useless for other time slices. Thereafter, for the next 500 milliseconds, the time range over which their models generalized went up dramatically, but significantly less so for the conscious case than the unconscious case. This suggested to them the model shown next, where consciousness involves a series of further processing steps that are not just a long-term reverberation of the perception set up at 400 milliseconds (right side), but additional processes that look different in the EEG / MEG signals and require different classifiers to be picked up in the data analysis (yellow, black, and blue curves on the left). These are not as tight as the earlier lock-step series of gestalts, but are detectably different.
"The high level of decoding observed on seen trials, over a long time window of ∼270–800 ms, combined with a low level of off-diagonal temporal generalization beyond a fixed horizon of ∼160 ms, implies that a conscious location is successively encoded by a series of about four distinct stages, each terminating relatively sharply after a processing duration of ∼160 ms. Conversely, the lower decoding level and longer endurance observed for unconsciously perceived stimuli indicate that they do not enter into the same processing chain."
One has to conclude that these are mere hints of what is going on, though more accessible hints, via EEG, than those gained so painfully by fMRI and more invasive techniques. What these inferred extra processing steps are, where they take place, and how they relate to each other and to our subjective mind states, all remain shrouded in mystery, but more importantly, remain for future investigation.
- Who should pay for recessions? The FIRE sector, obviously.
- Pinkovskiy on Piketty. Another person seemingly fooled by the highly unusual nature of the 20th century, and the presumption of high growth as "normal".
- Jeb! says workers just need to be screwed harder.
- Has ISIS done what the US could not: bring the Taliban begging?
- GOP gearing up for WW3.
- Lynyrd Skynyrd and all that ... the psychology of defeat and honor.
- One fiscal union works, the other one does not.
- A little nostalgia for early days of molecular biology.
- Bill Mitchell: South Korea has its economic head screwed on right.