Oct. 10, 2014
Of the three most fundamental scientific questions about the human condition, two have been answered.
First, what is our relationship to the rest of the universe? Copernicus answered that one. We’re not at the center. We’re a speck in a large place.
Second, what is our relationship to the diversity of life? Darwin answered that one. Biologically speaking, we’re not a special act of creation. We’re a twig on the tree of evolution.
Third, what is the relationship between our minds and the physical world? Here, we don’t have a settled answer. We know something about the body and brain, but what about the subjective life inside? Consider that a computer, if hooked up to a camera, can process information about the wavelength of light and determine that grass is green. But we humans also experience the greenness. We have an awareness of information we process. What is this mysterious aspect of ourselves?
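To make the contrast concrete, consider a toy sketch (the wavelength bands are rough textbook approximations, and the function is purely illustrative). A program like this processes information about wavelength and arrives at "green," yet nothing in it corresponds to an experience of greenness:

```python
# A hypothetical, illustrative mapping from measured wavelength to a
# coarse color label. The thresholds are approximate textbook values.

def color_label(wavelength_nm: float) -> str:
    """Map a wavelength in nanometers to a coarse color name."""
    if 380 <= wavelength_nm < 450:
        return "violet"
    if 450 <= wavelength_nm < 495:
        return "blue"
    if 495 <= wavelength_nm < 570:
        return "green"
    if 570 <= wavelength_nm < 590:
        return "yellow"
    if 590 <= wavelength_nm < 620:
        return "orange"
    if 620 <= wavelength_nm <= 750:
        return "red"
    return "outside the visible spectrum"

print(color_label(530))  # grass reflects strongly near 530 nm -> "green"
```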
Many theories have been proposed, but none has passed scientific muster. I believe a major change in our perspective on consciousness may be necessary, a shift from a credulous and egocentric viewpoint to a skeptical and slightly disconcerting one: namely, that we don’t actually have inner feelings in the way most of us think we do.
Imagine a group of scholars in the early 17th century, debating the process that purifies white light and rids it of all colors. They’ll never arrive at a scientific answer. Why? Because despite appearances, white is not pure. It’s a mixture of colors of the visible spectrum, as Newton later discovered. The scholars are working with a faulty assumption that comes courtesy of the brain’s visual system. The scientific truth about white (i.e., that it is not pure) differs from how the brain reconstructs it.
The brain builds models (or complex bundles of information) about items in the world, and those models are often not accurate. From that realization, a new perspective on consciousness has emerged in the work of philosophers like Patricia S. Churchland and Daniel C. Dennett. Here’s my way of putting it:
How does the brain go beyond processing information to become subjectively aware of information? The answer is: It doesn’t. The brain has arrived at a conclusion that is not correct. When we introspect and seem to find that ghostly thing — awareness, consciousness, the way green looks or pain feels — our cognitive machinery is accessing internal models and those models are providing information that is wrong. The machinery is computing an elaborate story about a magical-seeming property. And there is no way for the brain to determine through introspection that the story is wrong, because introspection always accesses the same incorrect information.
You might object that this is a paradox. If awareness is an erroneous impression, isn’t it still an impression? And isn’t an impression a form of awareness?
But the argument here is that there is no subjective impression; there is only information in a data-processing device. When we look at a red apple, the brain computes information about color. It also computes information about the self and about a (physically incoherent) property of subjective experience. The brain’s cognitive machinery accesses that interlinked information and derives several conclusions: There is a self, a me; there is a red thing nearby; there is such a thing as subjective experience; and I have an experience of that red thing. Cognition is captive to those internal models. Such a brain would inescapably conclude it has subjective experience.
I concede that this approach is counterintuitive. One reason is that it seems to leave a gap in the logic: Why would the brain waste energy computing information about subjective awareness and attributing that property to itself, if the brain doesn’t in fact have this property?
This is where my own work comes in. In my lab at Princeton, my colleagues and I have been developing the “attention schema” theory of consciousness, which may explain why that computation is useful and would evolve in any complex brain. Here’s the gist of it:
Take again the case of color and wavelength. Wavelength is a real, physical phenomenon; color is the brain’s approximate, slightly incorrect model of it. In the attention schema theory, attention is the physical phenomenon and awareness is the brain’s approximate, slightly incorrect model of it. In neuroscience, attention is a process of enhancing some signals at the expense of others. It’s a way of focusing resources. Attention: a real, mechanistic phenomenon that can be programmed into a computer chip. Awareness: a cartoonish reconstruction of attention that is as physically inaccurate as the brain’s internal model of color.
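The mechanistic half of that contrast is easy to demonstrate. Here is a toy sketch (illustrative only; it is not the attention schema model, and the numbers are arbitrary) of attention as competition among signals, in which inputs matching the current focus are enhanced and the rest are suppressed:

```python
import numpy as np

# A minimal, hypothetical sketch of attention as selective signal
# enhancement: competing signals are reweighted so that some are
# boosted at the expense of others.

def attend(signals: np.ndarray, focus: np.ndarray, sharpness: float = 5.0) -> np.ndarray:
    """Boost signals that match the current focus; suppress the rest."""
    relevance = signals @ focus            # how well each signal matches the focus
    weights = np.exp(sharpness * relevance)
    weights /= weights.sum()               # competitive normalization across signals
    return weights[:, None] * signals      # enhanced winners, suppressed losers

signals = np.random.randn(4, 8)   # four competing inputs, eight features each
focus = np.random.randn(8)        # the current focus of processing
print(attend(signals, focus))
```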
In this theory, awareness is not an illusion. It’s a caricature. Something — attention — really does exist, and awareness is a distorted accounting of it.
One reason that the brain needs an approximate model of attention is that to be able to control something efficiently, a system needs at least a rough model of the thing to be controlled. Another reason is that to predict the behavior of other creatures, the brain needs to model their brain states, including their attention. This theory pulls together evidence from social neuroscience, attention research, control theory and elsewhere.
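The first reason can be made concrete with a toy example (the dynamics and gains below are invented for illustration). A controller whose internal model of a process is only roughly right can still steer that process close to a target, which is all an approximate model of attention needs to accomplish:

```python
# A minimal sketch of the control-theory point (all numbers hypothetical):
# a controller can steer a process toward a target using only a rough,
# slightly wrong internal model of that process.

def true_process(state: float, command: float) -> float:
    """The real dynamics of the thing being controlled."""
    return 0.80 * state + command

def control_step(state: float, target: float) -> float:
    """Pick a command using the controller's crude model: next ~= 0.75*state + command."""
    return target - 0.75 * state

state, target = 0.0, 10.0
for _ in range(30):
    state = true_process(state, control_step(state, target))

print(round(state, 2))  # ~10.53: near the target despite the model's inaccuracy
```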
Almost all other theories of consciousness are rooted in our intuitions about awareness. Like the intuition that white light is pure, our intuitions about awareness come from information computed deep in the brain. But the brain computes models that are caricatures of real things. And as with color, so with consciousness: It’s best to be skeptical of intuition.