
Reply to Stanislas Dehaene

November 22, 2009


This is a response to Stanislas Dehaene’s “Signatures of Consciousness.”

This was a very interesting talk, and Dehaene and colleagues are doing fascinating neuroscience. But as is often the case with neuroscientists engaged in fascinating research, Dehaene seems impatient with related problems in philosophy. Finding such problems boring is not the same as solving them, however. Dehaene may have added a few bars to the tune, but he is still whistling past the graveyard on the deeper problem of consciousness.

To truly explain the emergence of consciousness, we need to be able to say why a group of neurons firing in a certain pattern will be conscious, while neurons firing in another way will not. Why would the difference between consciousness and unconsciousness be a matter of “long distance synchrony between many regions” of the brain? And why should the time-course of such integration be 300 milliseconds? If these facts don’t strike you as intrinsically mysterious, consider: What if distributed geological processes gave rise to consciousness over a time-course of 300 years? Let’s just say, for the sake of argument, that they do. We can see at a glance that the reason why this should be so remains perfectly inscrutable. In fact, it would be nothing short of a miracle if long-distance synchrony among distributed processes in the earth sufficed to make the planet conscious.

The purely contingent connection between physical and phenomenal facts doesn’t magically become explanatory once you swap tectonic plates for neural assemblies. The connection between unconscious physical events and there being “something that it is like” to be the totality of those events seems likely to always appear brute — and, therefore, mysterious. This has been characterized as an “explanatory gap” and as the “hard problem of consciousness,” and it is surely both. Dehaene et al. seem to think that these conceptual problems will evaporate once the correlations between first-person report and third-person neurophysiology grow sufficiently tight. I do not agree.

There are a few other issues that come to mind when considering this (again, genuinely interesting) work. Daniel Kahneman raises one, and I’d like to echo it here: People are (or at least seem to be) continually conscious (of something) even in the absence of novel events. Therefore, the P300 (or any other marker of perceptual novelty) would seem to be the wrong yardstick by which to measure the steady state of consciousness. Even if consciousness is more punctate than we realize — being built upon discrete waves of novelty-detection — the P300 still seems too coarse a marker for it. While the rise time to consciousness of my next percept may be 300 milliseconds, there must be dozens (or hundreds) of such waves crashing on the shores of my brain in every 300-millisecond window. So what is Dehaene actually measuring when he detects a lone P300?

Finally, another concern arises out of research on the split brain: A small number of human beings have had some or all of the white-matter tracts connecting the right and left hemispheres of their brains surgically severed. (This is generally undertaken as a treatment for severe epilepsy, though other surgeries occasionally require that some of these fibers also be cut.) In the typical case, where linguistic ability is almost entirely sequestered in the left hemisphere, the result is a right hemisphere that is mute and agraphic but (almost surely) independently conscious. Given that the search for consciousness in the brain is utterly reliant upon our being able to correlate behavioral reports with neural events, split-brain research suggests that the presence or absence of consciousness in any inarticulate region of the brain will always remain an open question.

In fact, given the immensity of simultaneous neural activity taking place on either side of the corpus callosum, it seems likely that even a normal brain will be functionally split to one or another degree. It seems unlikely that two hundred million nerve fibers could fully integrate the activity of the ten billion neurons in each cerebral hemisphere, each of which makes hundreds or thousands (sometimes tens of thousands) of connections to its neighbors. This would suggest that islands of dissociated consciousness might exist in normal brains as well.
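To put rough numbers on this asymmetry, treating the figures above as order-of-magnitude estimates rather than precise counts, and taking a thousand connections per neuron as a round figure, the arithmetic is simple:

\[
\frac{\text{callosal fibers}}{\text{neurons per hemisphere}} \approx \frac{2 \times 10^{8}}{10^{10}} = 0.02 \quad \text{(roughly one fiber per fifty neurons)}
\]

\[
\frac{\text{callosal fibers}}{\text{intrahemispheric connections}} \approx \frac{2 \times 10^{8}}{10^{10} \times 10^{3}} = 2 \times 10^{-5} \quad \text{(roughly one fiber per fifty thousand connections)}
\]

On these assumed figures, the corpus callosum offers on the order of one fiber for every fifty neurons, and one for every fifty thousand intrahemispheric connections; that disparity is what the argument turns on.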

The problem this poses for the study of consciousness may be insurmountable. If I were to interrogate my brain with the help of a colleague — one who was willing to expose my cortex and begin probing with a microelectrode — neither of us would know what to make of a region that, when probed, failed to influence the content of “my” consciousness. The split-brain phenomenon suggests that all I would be in a privileged position to decide is whether or not I (as perhaps only one among many loci of consciousness to be found in my brain) feel anything when my friend applies the current. Feeling nothing, I would not know whether the neurons in question constitute a locus of consciousness in their own right — for the simple reason that I may be just like a split-brain patient given to wonder, with his articulate left hemisphere, whether or not his right hemisphere is conscious. It surely is, and yet no amount of experimental probing on “his” part will drive the relevant experiential facts into view. As long as we must correlate changes in the brain — or any other physical system — with subjective report, those corners of the universe that are functionally mute may yet be conscious, and our attempt to specify the necessary and sufficient conditions for consciousness will fail to take them into account.

Of course, such obstacles to explaining the emergence of consciousness do not suggest that Dehaene’s labors are in vain. Indeed, his work might fulfill what he describes as its central goal — allowing us to determine, to a moral certainty, whether or not a person who appears to be in a coma or vegetative state is aware of changes in his environment. It would be one thing to develop a theory of consciousness that can specify the necessary and sufficient conditions for it — in computers, starfish, orangutans, etc. It is arguably more important, and easier, to determine whether the victim of a stroke can hear his wife’s voice. Needless to say, the prospect of perfecting such clinical applications is genuinely thrilling. But it will not silence the philosophers.
