July 11, 2008

Can an Insect, a Robot, or God Be Aware?

Can a lobster ever truly have any emotions? What about a beetle? Or a sophisticated computer? The only way to resolve these questions conclusively would be to engage in serious scientific inquiry—but even before studying the scientific literature, many people have pretty clear intuitions about what the answers are going to be. A person might just look at a computer and feel certain that it couldn’t possibly be feeling pleasure, pain or anything at all. That’s why we don’t mind throwing a broken computer in the trash. Likewise, most people don’t worry too much about a lobster feeling angst about its impending doom when they put one into a pot of boiling water. In the jargon of philosophy, these intuitions we have about whether a creature or thing is capable of feelings or subjective experiences—such as the experience of seeing red or tasting a peach—are called “intuitions about phenomenal consciousness.”

The study of consciousness has long played a crucial role in the discipline of philosophy, where facts about such intuitions form the basis for some complex and influential philosophical arguments. But, traditionally, the study of these intuitions has employed a somewhat peculiar method. Philosophers did not actually go ask people what intuitions they had. Instead, each philosopher would simply think the matter over for him- or herself and then write something like: “In a case such as this, it would surely be intuitive to say…”

The new field of experimental philosophy introduces a novel twist on this traditional approach. Experimental philosophers continue the search to understand people’s ordinary intuitions, but they do so using the methods of contemporary cognitive science—experimental studies, statistical analyses, cognitive models, and so forth. Just in the past year or so, a number of researchers have been applying this new approach to the study of intuitions about consciousness. By studying how people think about three very different kinds of entities—a corporation, a robot and God—we can better understand how people think about the mind.

The Mental Bottom Line on Corporations
In one recent study, experimental philosophers Jesse Prinz of the University of North Carolina at Chapel Hill and I looked at intuitions about the application of psychological concepts to organizations composed of whole groups of people. To take one example, consider Microsoft Corporation. One might say that Microsoft “intends to adopt a new sales strategy” or that it “believes Google is one of its main competitors.” In sentences such as these, people seem to be taking certain psychological concepts and applying them to a whole corporation.

But which psychological concepts are people willing to use in this way? The study revealed an interesting asymmetry. Subjects were happy to apply concepts that did not attribute any feeling or experience. For example, they indicated that it would be acceptable to use sentences such as:
• Acme Corporation believes that its profit margin will soon increase.
• Acme Corporation intends to release a new product this January.
• Acme Corporation wants to change its corporate image.
But they balked at all of the sentences that attributed feelings or subjective experiences to corporations:
• Acme Corporation is now experiencing great joy.
• Acme Corporation is getting depressed.
• Acme Corporation is experiencing a sudden urge to pursue Internet advertising.
These results seem to indicate that people are willing to apply some psychological concepts to corporations but that they are not willing to suppose that corporations might be capable of phenomenal consciousness.
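
To give a concrete sense of the statistical side of such a study, here is a minimal sketch in Python. Everything in it is a hypothetical stand-in: the ratings are invented placeholders, and the 1-to-7 naturalness scale is an assumption for illustration, not the actual materials or data from the Knobe and Prinz study. The sketch simply shows how one might test whether the two kinds of sentences receive reliably different acceptability ratings.

    # Hypothetical sketch: do sentences attributing phenomenal states to a
    # corporation get lower acceptability ratings than sentences attributing
    # non-phenomenal states? All numbers below are invented placeholders.
    from scipy import stats

    # Ratings on an assumed 1-7 scale (1 = very unnatural, 7 = very natural),
    # from two hypothetical groups of subjects, one per sentence type.
    non_phenomenal = [6, 7, 5, 6, 7, 6, 5, 7]  # e.g., "Acme believes..."
    phenomenal = [2, 1, 3, 2, 2, 1, 3, 2]      # e.g., "Acme feels joy..."

    # An independent-samples t-test asks whether the two mean ratings differ
    # by more than chance variation would predict.
    result = stats.ttest_ind(non_phenomenal, phenomenal)
    print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")

A significant difference in the predicted direction would be the quantitative signature of the asymmetry described above.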

Bots and Bodies
Perhaps the issue here is that people attribute phenomenal consciousness only to creatures that have the right sort of bodies. To test this hypothesis, we can look at other kinds of entities that might have mental states but whose bodies look nothing like human bodies.

One promising approach is to look at people’s intuitions about the mental states of robots. Robots are physically very different from human beings, but we can easily imagine a robot that acts very much like a human being. Experimental studies could then determine what sorts of mental states people were willing to attribute to a robot under these conditions. This approach was taken up in experimental work by Justin Sytsma, a graduate student, and experimental philosopher Edouard Machery, both at the University of Pittsburgh, and in work by Larry (Bryce) Huebner, a graduate student at UNC-Chapel Hill. All of these experiments arrived at the same basic answer.

In one of Huebner’s studies, for example, subjects were told about a robot that acted exactly like a human being and were asked what mental states that robot might be capable of having. Strikingly, the study revealed exactly the same asymmetry we saw above in the case of corporations. Subjects were willing to say:
• It believes that triangles have three sides.
But they were not willing to say:
• It feels happy when it gets what it wants.

Here again, we see a willingness to ascribe certain kinds of mental states, but not to ascribe states that require phenomenal consciousness. Interestingly enough, this tendency does not seem to be due entirely to the fact that a CPU, rather than an ordinary human brain, controls the robot. Even when the experiments controlled for whether the creature had a CPU or a brain, subjects were more likely to ascribe phenomenal consciousness when the creature had a body that made it look like a human being.

God in the Machine
What if something has no body at all? How does that change our sense of what conscious experience might be possible for it? Here we can turn to the ultimate disembodied entity: God. A recent study by Harvard University psychologists Heather Gray, Kurt Gray and Daniel Wegner looked at people’s intuitions about which kinds of mental states God could have. By now, you have probably guessed the result. People were content to say that God could have psychological properties such as:
• Thought
• Memory
• Planning
But they did not think God could have states that involved feelings or experiences, such as:
• Pleasure
• Pain
• Fear

In subsequent work, the researchers directly compared attributions of mental states to God with attributions of mental states to Google Corporation. These two entities—different though they are in so many respects—elicited exactly the same pattern of responses.

Looking at the results from these various studies, it is hard to avoid the sense that a single unified theory should be able to explain the whole pattern of people’s intuitions. Such a theory would describe the underlying cognitive processes that lead people to think that certain entities are capable of a wide range of psychological states but are not capable of truly feeling or experiencing anything. Unfortunately, no such theory has been proposed thus far. Further theoretical work here is badly needed.

[Via Mind Matters from Scientific American, edited by Jonah Lehrer, the science writer behind the blog The Frontal Cortex and the book Proust Was a Neuroscientist.]
