A Functionalist View of Consciousness in Bicentennial Man

The 1999 film Bicentennial Man, adapted from Asimov’s story of the same name, stars Robin Williams as an increasingly lifelike robot named Andrew. One fundamental question is: at any stage, does Andrew possess “consciousness”? Setting aside the belief that humans, above all other things, have been supernaturally “chosen” or imbued with consciousness, a naturalistic account of qualitative experience permits Andrew to possess it. If evolution, or some other long series of physical processes, could produce consciousness[1], then there is little reason to believe that a very advanced robotic realization of the same property is not logically and physically possible. “Neural chauvinism” is called chauvinism precisely because there is little argument in its favor. Functionalism cannot correctly answer the question “does Andrew feel or not?” because, within its framework, the question is nonsense.

Functionalism defines a mind as something with input, processing, and output,[2] but it makes no assertions about the specific physical constitution of any of those three categories. Qualitative experience is one, but not the only, means by which a mind can recognize and sort inputs.[3] It has its own set of possible physical manifestations (e.g., the evolutionary, biological conditions that led to human consciousness). Mental states are defined by their functional properties, which in turn are defined by causal roles of different orders. The highest-order roles are those that immediately precede output, but (in minds worth investigating) extremely complex causal networks precede them. The multiple realizability thesis applies to all causal orders, including qualitative experience. This accounts for phenomena such as inverted spectra and is sufficiently abstract to accommodate any conceivable material structure. Just as different materials can realize the same structure of qualitative experience, different structures of qualitative experience can, by implication, realize the same mental states.
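
To make the multiple realizability point concrete, here is a minimal, purely illustrative sketch (my own invention, not drawn from the film or the cited texts): two “realizations” with entirely different internal workings that nonetheless occupy the same causal role, producing identical outputs for identical inputs. The function names and thresholds are hypothetical stand-ins.

```python
# Illustrative sketch of multiple realizability: two different "physical"
# realizations that play the same causal role (same input -> output mapping).

def biological_pain_response(stimulus_intensity: float) -> str:
    """Realization 1: a rule-based ('neural') implementation."""
    if stimulus_intensity > 0.7:
        return "withdraw"
    elif stimulus_intensity > 0.3:
        return "flinch"
    return "ignore"

def positronic_pain_response(stimulus_intensity: float) -> str:
    """Realization 2: a lookup-table ('positronic') implementation."""
    table = [(0.7, "withdraw"), (0.3, "flinch"), (0.0, "ignore")]
    for threshold, behavior in table:
        if stimulus_intensity > threshold or threshold == 0.0:
            return behavior

# Functionally, the two realizations are indistinguishable:
for s in [0.1, 0.5, 0.9]:
    assert biological_pain_response(s) == positronic_pain_response(s)
```

On a functionalist reading, nothing about either implementation privileges it as the “real” bearer of the mental state; what matters is the causal role both occupy.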

An important point here is that there is no absolute binary property separating “feelings” from “no feelings.” Consciousness is not a singularly defined characteristic; that is, there is no one sole consciousness that a mind either possesses or does not. Qualitative experience merges continuously into the conceptual space of mental causal functions; it is one of infinitely many possible causal structures that can inhabit a mind. Logically, there is nothing about it that is essential or special to the definition of a mind (though it is extremely practical). In other words, the notion of qualitative experience is only a label we impose on causal properties. Only as a matter of convenience should we ever set qualia apart from other aspects of the mind’s processing functions. To ask “does Andrew feel or not?” is thus a philosophically loaded question: it either presupposes that “feel” is binary and an exclusive descriptor of qualitative experience, or it points to a specific definition of “feel” somewhere in the aforementioned continuum (though this is much less likely).

Suppose one asked, “is that chair red or not?” Posed in the same sense in which we are viewing the question about Andrew, it would commit the same fallacy. Because the color spectrum is continuous, there is no absolute line dividing the colors that are red from the next class of color in the spectrum. In this case, the inquirer is appealing to some commonly accepted definition of red, to avoid having to ask “Is the chair reflecting light in frequency range X-Y?” However, this social norm has no absolute factual justification; it rests on pragmatic, shorthand linguistic expressions of color. The same applies to “feel”: it is folk psychology and a mere conversational convenience. Of course, “feel,” “qualia,” “qualitative experience,” and “consciousness” are still useful terms. After all, our subjective idea of qualitative experience is as different from involuntary functions of the mind (like heart-rate control) as “reds” are from “blues,” despite the fact that there is no absolute dividing line between them. The distinction just needs to be made that these terms are contingent on whatever we decide are their most pragmatic uses.

Could Andrew possibly be a zombie? The first issue to handle is the notion of philosophical zombies altogether. Heil claims, “zombies satisfy all the functionalist criteria for possessing a full complement of states of mind.”[4] Some philosophers use this as an objection to functionalism, but the functionalist should be able to contend that a mind can exist without qualia. Given the functionalist definition of a mind, together with the possibility of different material states realizing a single causal state, there is no reason why this cannot be the case. As such, there is nothing philosophically hazardous in permitting such a broad definition of “a mind.” This allows for the possibility of behavioral zombies, whose outputs are functionally isomorphic to those of creatures blessed (or cursed) with qualia.

In what seems like a conflicting view, Paul and Patricia Churchland argue that functionalism requires intrinsic properties to play causal roles, explaining that “our sensations are anyway token-identical with the physical states that realize them.”[5] This permits a 60-hertz frequency spike, for example, to suffice as those properties. However, whether absent qualia are a problem for functionalism is ultimately decided by the specific definition of “qualia.” The relevant implication of both positions is that robot minds are not logically problematic, no matter how qualia are treated.[6]

Approximating its original intent, we can revise the question into something other than nonsense: “does Andrew possess the essential qualities commonly associated with human feelings?” Realistically, this is an empirical question; he may very well be the logically possible behavioral zombie. First, a precise definition of the nature and breadth of the relevant “feelings” must be settled on. Then, the components of the mind in question must be examined to determine whether those conditions are met.

The short answer is that it could go either way. At each stage of his life, Andrew could be tested for those characteristics. Was his positronic brain sufficient to give him human-like qualitative experience? Did the incorporation of a central nervous system guarantee it? In any case, there is no shortcut to a definite, final conclusion without scientific inquiry.


[1] I switch among “consciousness,” “qualitative experience,” and “qualia” where it seems appropriate, but statements about them generally refer to the same logical argument.

[2] https://en.wikipedia.org/wiki/Functionalism_(philosophy_of_mind)

[3] I am not sure what the official constraints of functionalism are on what a mind can be, but this formulation permits something as simple as a circuit switch box to count as a (very rudimentary) mind. By this standard, Andrew also has a mind.

[4] Heil, John. Philosophy of Mind: A Contemporary Introduction, p. 124.

[5] Churchland, pp. 353-354.

[6] In other words, for any fixed definition of qualia, my position (appropriately adapted to the new definition) is most likely not at odds with the Churchlands’; we are only attacking the issue in different ways. For convenience, the rest of the paper should be understood in terms of my framework.
