What is a physicist doing weighing in on the mysteries of the mind? Tim Dean went to find out.
…David Chalmers, Director of the Centre for Consciousness at the Australian National University in Canberra, puts it a different way. He draws a distinction between what he calls the “easy problem” of consciousness, which is explaining how electrical impulses racing through a network of neurons can produce behaviour, and the “hard problem”, which is explaining how on Earth that network can ever produce something like the redness of red. Chalmers imagines a being that does all the information processing we do, and that can make the kinds of decisions we make, but has no conscious experience, no “qualia”. If such a being is at all possible, it suggests that a complete theory of the mind needs to talk about more than just information processing and brains. It needs to talk about conscious experience too.
I ask Kaku about the conspicuous absence of consciousness in his theory and he hastens to dismiss the problem, borrowing another analogy from science. “It used to be that the question of ‘what is life?’ dominated and paralysed biology for decades. Now the question is irrelevant. We now know there are gradations – we have different kinds of viruses, different forms of life. So biologists no longer ask the question ‘what is life?’, because it turned out to be many layers of a continuum.
“It’s the same thing about ‘what is redness?’, ‘what is a sunset?’, ‘what is a sensation of ecstasy and thrill?’ or ‘what are qualia?’. Today that absorbs a lot of philosophers’ attention, but I think that just like ‘what is life?’, that will disappear.”
His counterpoint to Chalmers’ thought experiment of a thinking being without qualia is a thought experiment of his own. One day, he muses, “we will have a robot that understands red in ways a hundred times richer than any human. We will have a robot that can tell us the electromagnetic spectrum of red, that can give you all the sensations of red for different kinds of animals, a richness of red far beyond any human’s. And then the robot will ask, ‘do humans understand red?’, and it will say, ‘obviously not’.”…