Sunday, June 24, 2018

Now You See It

Rapid-fire visual processing analysis shows how things work in the brain.

We now know what Kant only deduced- that the human mind is full of patterns and templates that shape our perceptions, not to mention our conceptions, unconsciously. There is no blank slate, and questions always come before answers. Yet at the same time, there must be a method to the madness- a way to disentangle reality from our preconceptions of it, so as to distinguish the insane from the merely mad.

The most advanced model we have for perception in the brain is the visual system, which commandeers so much of it, and whose functions reside right next to the consciousness hot-spot. From this system we have learned about the step-wise process of visual scene interpretation, which starts with the simplest orientation, color, movement, and edge finding, and goes up, in ramifying and parallel processes, to identifying faces and other objects, and on to cognition. But how and when do these systems work with retrograde patterns from other areas of the brain- focusing attention, or associating stray bits of knowledge with the visual scene?

A recent paper takes a step in this direction by developing a method to stroboscopically time-step through the visual perception process. They find that it involves a process of upward processing prior to downward inference / regulation. Present an image for only 17 milliseconds, and the visual system does not have a lot of conflicting information. A single wave of processing makes its way up the chain, though it carries quite scant information. This gives an opportunity to look for downward chains of activity (called "recurrent" activity), also with reduced interference from simultaneous activities. Surprisingly enough, the subjects were able to categorize the type of scene (barely over random chance) when presented images for this very brief time.
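
Incidentally, the oddly specific durations are presumably just whole refresh cycles of a standard 60 Hz display- a back-of-the-envelope sketch (the 60 Hz rate is my assumption for illustration, not something stated in the paper's summary above):

    # Presentation durations as whole frames of a display.
    # The 60 Hz refresh rate is an assumption for illustration only.
    REFRESH_HZ = 60
    frame_ms = 1000.0 / REFRESH_HZ  # one frame is about 16.7 ms

    for n_frames, label in [(1, "the '17 ms' condition"),
                            (2, "the '34 ms' condition, rounding up"),
                            (30, "the half-second baseline")]:
        print(f"{n_frames:2d} frame(s) = {n_frames * frame_ms:5.1f} ms  ({label})")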

The researchers used a combination of magnetoencephalography (MEG), which responds directly to electrical activity in the brain and has very high time resolution, but not so great depth or spatial resolution, with functional magnetic resonance imaging (fMRI), which responds to blood flow changes in the brain, and has complementary characteristics. Their main experiments measured how accurately people categorized pictures, between faces and other objects, as the time of image presentation was reduced, down from half a second to 34, then 17 milliseconds, and as they were presented faster, with fewer breaks in between. The effect this had on task performance was, naturally, that accuracy degraded, and also that what recognition was achieved was slowed down, even while the initial sweep of visual processing came up slightly faster. The theory was that this kind of categorization needs downward information flow from the highest cognitive levels, so this experimental setup might lower the noise from other images and ongoing perception, and thus provide a way to time the perception process and dissect its parts, whether feed-forward or feed-back.

Total time to categorization of images presented at different speeds. The speediest presentation (17 milliseconds) gives the least accurate discernment (Y-axis), but also the fastest processing, presumably dispensing with at least some of the complexities that come up later in the system with feedbacks, etc., which may play an important role in accuracy.
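
Behaviorally, the measure behind this figure is just per-condition categorization accuracy. A minimal sketch of that bookkeeping (the trial records below are invented placeholders, not the study's data):

    # Per-condition categorization accuracy.
    # The trial records are invented placeholders, purely to show the tally.
    from collections import defaultdict

    # (presentation duration in ms, reported category, true category)
    trials = [
        (500, "face", "face"), (500, "object", "object"),
        (34,  "face", "face"), (34,  "face", "object"),
        (17,  "object", "face"), (17,  "face", "face"),
    ]

    correct, total = defaultdict(int), defaultdict(int)
    for duration, reported, truth in trials:
        total[duration] += 1
        correct[duration] += (reported == truth)

    for duration in sorted(total, reverse=True):
        print(f"{duration:3d} ms: accuracy = {correct[duration] / total[duration]:.2f}")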

They did not just use the subjects' reports, though. They also used the fMRI and MEG signals to categorize what the brains were doing, separately from what the subjects were saying. The point was to see things happening much faster- in real time- rather than to wait for the subjects to process the images, form a categorical decision, generate speech, etc. They fed this MEG data through a classifier (a support vector machine, or SVM), getting reliable categorizations of what the subjects would later report- whether they saw a face or another type of object. They also focused on two key areas of the image processing stream- the early visual cortex, and the inferior temporal cortex, a much later part of the process, in the ventral stream, which does object classification.
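
Boiled down, that decoding step is a classifier trained at each time point of the MEG recording to tell face trials from object trials. A minimal sketch with scikit-learn (the array shapes and random stand-in data are my assumptions; the paper's actual pipeline is more involved):

    # Time-resolved MEG decoding: train an SVM at each time point and see when
    # face vs. object trials become separable. Shapes and the random stand-in
    # data are assumptions for illustration, not the paper's code.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    n_trials, n_sensors, n_times = 200, 306, 120  # 306 is a typical MEG sensor count
    rng = np.random.default_rng(0)
    X = rng.standard_normal((n_trials, n_sensors, n_times))  # stand-in for MEG epochs
    y = rng.integers(0, 2, n_trials)                          # 0 = object, 1 = face

    accuracy = np.zeros(n_times)
    for t in range(n_times):
        clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
        # 5-fold cross-validated accuracy using all sensors at this one time point
        accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()

    # On real data, accuracy climbs above chance roughly 100 ms after stimulus onset
    # (the feed-forward sweep); a later, second rise is the candidate recurrent signal.
    print("peak decoding accuracy:", accuracy.max(), "at time index", int(accuracy.argmax()))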

The key finding was that, once all the imaging and correlation was done, the early visual cortex showed a "rebound" of electrical activity as a second peak, distinct from the initial processing activity. The brevity of picture presentation (34 ms and 17 ms) allowed them to see (see figure below) what longer presentation of images hides in a clutter of other activity. The second peak in the early visual cortex corresponds quite tightly to one just slightly prior in the inferior temporal cortex. While this is all very low-signal data and the product of a great deal of processing and massaging, it suggests that decisions about what something is not only happen in the inferior temporal cortex, but impinge right back on the earlier parts of the processing stream to change perceptions at an earlier stage. The point of this is not entirely clear, really, but perhaps the most basic aspects of object recognition can be improved and tuned by altering early aspects of scene perception. This is another step in our increasingly networked models of how the brain works- the difference between a reality modeling engine and a video camera.

Echoes of categorization. The red trace is from the early visual processing system, and the green is from the much later inferior temporal cortex (IT), known to be involved in object categorization / identification. The method of very brief image presentation (center) allows an echo signal that is evidently a feedback (recurrence) from the IT to come back into the early visual cortex- to what purpose is not really known. Note that overall, it takes about 100 milliseconds for forward processing to happen in the visual system, before anything else can be seen.
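
The "echo" claim itself boils down to peak timing: an early peak in early visual cortex, a later peak in IT, and a second early-visual peak trailing just behind the IT one. A toy sketch of that peak-latency comparison (the curves are synthetic Gaussians, not real data):

    # Toy peak-latency comparison between two source time courses- the logic behind
    # reading the second early-visual peak as feedback from IT. Synthetic data only.
    import numpy as np
    from scipy.signal import find_peaks

    t = np.arange(0, 300)  # time in ms after stimulus onset

    def bump(center, width=15, height=1.0):
        return height * np.exp(-0.5 * ((t - center) / width) ** 2)

    evc = bump(100) + bump(160, height=0.5)  # early visual cortex: forward sweep + echo
    it = bump(145)                           # inferior temporal cortex: categorization peak

    evc_peaks, _ = find_peaks(evc, height=0.2)
    it_peaks, _ = find_peaks(it, height=0.2)

    print("EVC peaks (ms):", t[evc_peaks])  # roughly 100 and 160
    print("IT peak   (ms):", t[it_peaks])   # roughly 145
    # The second EVC peak arriving just after the IT peak is what gets interpreted
    # as activity flowing back down from IT into the early visual cortex.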

  • Yes, someone got shafted by the FBI.
  • We are becoming an economy of feudal monoliths.
  • Annals of feudalism, cont. Workers are now serfs.
  • Promoting truth, not lies.
  • At the court: better cowardly and right wing, than bold and right wing.
  • Coal needs to be banned.
  • Trickle down not working... the poor are getting poorer.