Yet Another Refutation of the Chinese Room Argument

[Edited November 9, 2015]

I recently listened to a discussion of AI between Massimo Pigliucci and Dan Kaufman, in which they both endorsed the Chinese Room Argument, if I remember correctly. (See here.) This led me to take another look at the Chinese Room Argument, and to write a detailed refutation of it. There have of course been a huge number of responses in the past, but many of them have not been to my satisfaction, and I think I can bring a couple of new insights to the subject. To summarise, I say that the argument has no genuine content, but just creates the illusion of an argument by means of a fallacy of equivocation. It conflates two different systems, and then jumps without argument from the fact that one doesn’t understand Chinese to the conclusion that the other doesn’t understand Chinese.

I’ll start by clarifying the argument, then show that it appears to be trying to get something for nothing, and then discuss the fallacy which creates the illusion of something when there’s nothing.

Here’s the most recent version of the argument that I can find:

Strong AI is answered by a simple thought experiment. If computation were sufficient for cognition, then any agent lacking a cognitive capacity could acquire that capacity simply by implementing the appropriate computer program for manifesting that capacity. Imagine a native speaker of English, me for example, who understands no Chinese. Imagine that I am locked in a room with boxes of Chinese symbols (the database) together with a book of instructions in English for manipulating the symbols (the program). Imagine that people outside the room send in small batches of Chinese symbols (questions) and these form the input. All I know is that I am receiving sets of symbols which to me are meaningless. Imagine that I follow the program which instructs me how to manipulate the symbols. Imagine that the programmers who design the program are so good at writing the program, and I get so good at manipulating the Chinese symbols, that I am able to give correct answers to the questions (the output). The program makes it possible for me, in the room, to pass the Turing Test for understanding Chinese, but all the same I do not understand a single word of Chinese. The point of the argument is that if I do not understand Chinese on the basis of implementing the appropriate program for understanding Chinese, then neither does any other digital computer solely on that basis because the computer, qua computer, has nothing that I do not have. [“Chinese Room Argument”, Scholarpedia, 2009]

This argument is the one that is widely known by the term “Chinese Room Argument” (CRA). In the Scholarpedia article it’s followed by a second argument, based on syntax and semantics, which I’ll call the “Syntax-and-Semantics Argument” (SSA). In this post I will address only the CRA, leaving the SSA for a later post, where I will also discuss the relationship between the two arguments.

Here’s the earliest published version of the argument, omitting the description of the CR scenario, which was rather long:

As regards the first claim, it seems to me quite obvious in the example that I do not understand a word of the Chinese stories. I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing. For the same reasons, Schank’s computer understands nothing of any stories, whether in Chinese, English, or whatever, since in the Chinese case the computer is me, and in cases where the computer is not me, the computer has nothing more than I have in the case where I understand nothing. [“Minds, Brains and Programs”, 1980]

The CRA depends on the unstated assumption that, if Searle doesn’t understand any Chinese, then nothing in the CR understands any Chinese. This may seem so obvious as not to need stating, but I will argue that this move depends on an equivocation that renders it fallacious. So the crucial and fallacious move in the argument has gone unstated! For the time being, you don’t need to accept that the distinction I’m making is a significant one. But I do need to rewrite the argument in a way that makes the move explicit, so that I can address the point in due course. I will also break the argument into two parts, and clarify it in a couple of other ways. So here’s my version of the CRA; the crucial move is the step in part #1 from Searle’s not understanding any Chinese to nothing in the CR understanding any Chinese:

1. According to Strong AI, there exist (at least in principle) certain programs, such that the execution of any such program would be sufficient to produce an understanding of Chinese. Let P be any such program. Let Searle take the role of a computer and execute P. Obviously, despite executing P, Searle doesn’t understand any Chinese. Since Searle doesn’t understand any Chinese, nothing in the CR understands any Chinese. Therefore, contrary to Strong AI, the execution of P by Searle was not sufficient to produce any understanding of Chinese.
2. Now consider any other computer executing P. That computer has nothing relevant that Searle does not have. So, since Searle’s execution of P was not sufficient to produce any understanding of Chinese, neither will be this computer’s execution of P. Since this argument works for any program P and any computer executing that program, it follows that no computer can produce an understanding of Chinese solely by virtue of executing an appropriate program.

Note that the crucial move has been made by the end of part #1, and the conclusion of that part already contradicts Strong AI. So I’ll treat part #1 as the argument to be addressed, and argue that it’s fallacious. I’ll ignore part #2, which depends on part #1.

First note how trivial the argument is. Apart from setting up the scenario, it just relies on appealing to the apparently obvious fact that Searle doesn’t understand any Chinese. If we accept the crucial move without question (or without noticing it), it seems to follow immediately that this is a counter-example to Strong AI. Appealing to an obvious premise, from which the conclusion immediately follows, is not much of an argument. Is it really possible to establish a controversial philosophical point with such a trivial argument?

Perhaps all the real work has been done in the construction of a clever scenario, and there’s something about the CR scenario which enables us to see a significant fact that was previously hidden. So let’s look at the CR scenario. How was it constructed? All Searle did was take the familiar scenario of an electronic computer executing a program, and replace the electronic computer with a human computer, called Searle. Since the human computer is doing just the same thing as the electronic computer, this substitution seems irrelevant. (Searle himself seemed to treat the difference as irrelevant when he generalised from the human computer to other computers in part #2.) The argument remains trivial.

It would be instructive to see what Searle’s argument would have looked like if he hadn’t made the switch to a human computer. In other words, let’s try substituting an electronic computer for Searle (the human computer) in the argument. We then get the following assertion: Obviously, despite executing P, the electronic computer doesn’t understand any Chinese. This would have been a clear case of question-begging, since it asserts precisely what Searle needs to establish, and he would have offered no argument in support of it. No doubt Searle himself appreciates that, or he wouldn’t have switched to a human computer; he evidently sees the switch as an important move.

So Searle has taken a vacuous question-begging argument and tried to turn it into a cogent one by replacing an electronic computer with a human computer. If the substitution makes no relevant difference, as I claim, then it can’t produce a cogent argument from a vacuous one. If you think the substitution does make a relevant difference, the onus is on you to make sure you understand just how it does so. Remember that the electronic and human computers are functionally equivalent. The only difference is in their internal operation. But why should that internal difference have any bearing on the argument? (Like Searle, I’m ignoring differences of speed, memory size and reliability, since these make no difference to the in-principle argument.)

If the substitution makes no relevant difference, how does it produce an argument that has enough appearance of validity to convince many readers? The answer is that it introduces a spurious complication that serves to distract readers’ attention from the question-begging, and creates an ambiguity around which a fallacy of equivocation can be constructed. In the electronic computer scenario, there is only one language-using system present, namely the Chinese-speaking system that arises from the execution of the AI program, P. In switching to a human computer, Searle has added a second language-using system. He’s given the computer its own English-speaking system. After the substitution we need to be careful to keep track of which system we’re thinking about, but Searle makes us attend to the wrong system.

Before proceeding, it might be useful to say something more about the Chinese-speaking AI program, P. Since Searle claims that his argument works for any P, I can stipulate any P that I like, and I’ll choose one that makes the issues clearer. Let P be a full-brain simulation of an actual Chinese person (call him Lee) down to whatever level of detail is needed to make sure that the behaviour of the program is near-enough equivalent to Lee’s behaviour. Since this is a thought experiment, we can even simulate every single atom in Lee’s brain. Searle accepts that an AI could be behaviourally equivalent to a real person, so he has no reason to deny that the simulation can do everything Lee can do, at least with regard to a text-based dialogue, as in the CR. The Lee-simulation (“Lee-sim”) will give responses that reflect all of the real Lee’s knowledge, abilities, memories, personality, etc. (I deliberately say “reflect”. I’m not begging the question by pre-supposing that Lee-sim will have any mental states of its own.) I invite you first of all to imagine Lee-sim running on an electronic computer. A Chinese interlocutor can submit Chinese questions to Lee-sim and get Chinese answers. (Assume the computer has a scanner for inputting questions in Chinese characters, and a printer for printing answers.) If the interlocutor asks, “What country do you live in?”, Lee-sim might answer in Chinese, “China”. If the interlocutor asks, “Do you understand Chinese?”, Lee-sim might answer, “Of course I understand Chinese. How else could I be answering your questions?”. It seems reasonable–and consistent with our usual way of speaking about computer systems–to talk about Lee-sim this way, treating Lee-sim as a system that we can refer to by a noun.
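
For readers who find it easier to see such talk spelled out in code, here is a minimal sketch of the electronic-computer scenario. It is entirely my own illustration, not anything from Searle’s texts: the class name, the answer method and the canned Chinese replies are invented stand-ins, since the real program P would compute its answers rather than look them up. The only point is that Lee-sim presents a simple text-dialogue interface, which is why it’s natural to refer to it by a noun.

```python
# A purely illustrative sketch (my own invention) of the electronic-computer
# scenario. "LeeSim" stands in for the program P described above; a real
# full-brain simulation obviously can't be written out here, so canned
# replies stand in for whatever P would actually compute.

class LeeSim:
    """The Chinese-speaking system that arises from the execution of P."""

    def answer(self, chinese_question: str) -> str:
        # Stand-in for the simulation's computation of a reply.
        canned = {
            "你住在哪个国家？": "中国。",  # "What country do you live in?" -> "China."
            "你懂中文吗？": "我当然懂中文，不然我怎么回答你的问题？",  # "Of course I understand Chinese..."
        }
        return canned.get(chinese_question, "……")


# Only one language-using system is present: Lee-sim, running on the computer.
lee_sim = LeeSim()
print(lee_sim.answer("你懂中文吗？"))
```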

Now let’s return to the CR scenario, with its human computer, called Searle. Searle is executing Lee-sim, and since Searle is functionally equivalent to the electronic computer, there’s no reason we shouldn’t continue to refer to Lee-sim as a system, and talk about it just the way we’ve been doing so far. The Chinese interlocutor can submit the same questions to Lee-sim as before, and get similar answers. But now we can also submit English questions, and get answers from the English-speaking system that has all Searle’s “native” knowledge, abilities, memories, personality, etc. By “native” I mean to exclude any abilities or other traits that may arise from the execution of Lee-sim. In some ways those are Searle’s abilities, and in some ways they aren’t. In one sense Searle can speak Chinese, because he can execute Lee-sim and produce Chinese output, but in another sense it’s not really Searle who’s speaking Chinese, it’s Lee-sim. We can avoid such linguistic ambiguities by using the term “native-Searle” to refer to the system that includes only Searle’s native traits. Native-Searle can’t speak Chinese. We can direct questions to native-Searle (instead of Lee-sim) by asking them in English. If we ask in English “What country do you live in?”, native-Searle might answer, “the USA”. If we ask in English, “Do you understand Chinese?”, native-Searle will answer, “No”.
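
To make the two-systems vocabulary concrete before going on, here is a second sketch of my own (again an invented illustration, not Searle’s): the whole CR is one system hosting two distinct collections of traits, native-Searle and Lee-sim, and the language a question is asked in determines which of the two ends up answering it. The routing-by-character-set trick and the canned replies are simplifying assumptions made purely for the sketch.

```python
# My own illustration of the human-computer scenario: one body, two distinct
# collections of traits. All names and canned replies are stand-ins invented
# for the purposes of the sketch.

class LeeSim:
    """The Chinese-speaking system produced by executing P (as in the previous sketch)."""

    def answer(self, chinese_question: str) -> str:
        return "我当然懂中文，不然我怎么回答你的问题？"  # stand-in for P's computed reply


class NativeSearle:
    """Only Searle's native traits: English-speaking, with no Chinese at all."""

    def answer(self, english_question: str) -> str:
        if english_question == "Do you understand Chinese?":
            return "No."
        if english_question == "What country do you live in?":
            return "The USA."
        return "I'd have to think about that."  # stand-in reply


class ChineseRoom:
    """The whole system: native-Searle executing Lee-sim by hand."""

    def __init__(self) -> None:
        self.native_searle = NativeSearle()
        self.lee_sim = LeeSim()

    def submit(self, question: str) -> str:
        # Crude routing, good enough for the sketch: English (ASCII) questions
        # are answered by native-Searle; Chinese questions are worked through,
        # symbol by symbol, via Lee-sim.
        if question.isascii():
            return self.native_searle.answer(question)
        return self.lee_sim.answer(question)


room = ChineseRoom()
print(room.submit("Do you understand Chinese?"))  # "No."  (native-Searle)
print(room.submit("你懂中文吗？"))                  # Lee-sim's Chinese reply
# "I do not understand a single word of Chinese" is true of room.native_searle,
# but the relevant question is about room.lee_sim (or the whole ChineseRoom),
# and nothing in that premise settles it.
```

The sketch takes no stand on whether either sub-system understands anything; it is only meant to show why it is natural, and not question-begging, to describe the scenario in terms of two systems.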

A major purpose of the last two paragraphs has been to justify talking about the scenario in terms of two systems. If you’ve heard the “Systems Reply” to the CRA before, you will have heard people talking of two systems, or two sub-systems of the whole system. Such talk may have seemed peculiar or even unacceptable to you. I feel it’s sometimes introduced without sufficient explanation. My talk of two sub-systems is not begging any questions about whether each sub-system has its own mind. I haven’t mentioned minds. Nor is it my goal to persuade you that there are two minds. I’m only refuting Searle’s argument, not arguing for a contrary position. In order to reveal Searle’s equivocation as clearly as possible, I need the vocabulary to refer to two different collections of traits that are present within the combined system, and the best way to do that is to talk in terms of two systems. Such a vocabulary is made useful and appropriate by the neat separation of the two sets of traits.

Now that I can talk in terms of two systems, I can say that of course native-Searle doesn’t understand Chinese: native-Searle can’t even speak Chinese, let alone understand it. If anything can understand Chinese, it’s Lee-sim. Again, it’s not my goal to persuade you that Lee-sim can understand Chinese. I just want to show you that Searle has failed to address the question of whether Lee-sim can understand Chinese. He’s only addressed the irrelevant question of whether native-Searle can understand Chinese, and used that to distract you from attending to the relevant question. If he’d stuck to an electronic computer scenario, it would have been obvious that he needed to address the question of whether Lee-sim can understand Chinese, because Lee-sim would have been the only system present. Introducing a second system (native-Searle) only served to distract your attention from that question.

More specifically, the distraction did its work by means of a fallacy of equivocation. Searle invited you to accept, without argument, the premise “I do not understand a single word of Chinese”. By creating a weird scenario involving two separate language-using systems in one body, he has made the word “I” ambiguous. It could refer just to native-Searle. Or it could refer to the whole system, which incorporates both native-Searle and Lee-sim. On the first reading, the premise is trivially true, but irrelevant. (Yes, native-Searle doesn’t understand any Chinese, but that’s not the relevant question.) On the second reading, the premise is question-begging. Searle is just asserting that the whole system doesn’t understand any Chinese, i.e. nothing understands any Chinese. But that’s what he needed to show, not just assert. (This is equivalent to the question-begging premise that the electronic computer doesn’t understand any Chinese, which he would have been using if he hadn’t switched to a human computer.) An unwary reader accepts the premise on the reading that makes it trivially true but irrelevant (native-Searle doesn’t understand any Chinese), and then follows along when Searle makes a non sequitur jump to the question-begging reading (the whole system doesn’t understand any Chinese). Searle has made no argument from the first of these propositions to the second. He has made no argument in support of the conclusion that nothing in the CR understands any Chinese.

In the Scholarpedia article, Searle responds to the Systems Reply as follows:

The Systems Reply can be answered as follows. Suppose one asks, Why is it that the man does not understand, even though he is running the program that Strong AI grants is sufficient for understanding Chinese? The answer is that the man has no way to get from the syntax to the semantics. But in exactly the same way, the whole system, the whole room in which the man is located, has no way to pass from the syntax of the implemented program to the actual semantics (or intentional content or meaning) of the Chinese symbols. The man has no way to understand the meanings of the Chinese symbols from the operations of the system, but neither does the whole system. In the original presentation of the Chinese Room Argument, I illustrated this by imagining that I get rid of the room and work outdoors by memorizing the database, the program, etc., and doing all the computations in my head. The principle that the syntax is not sufficient for the semantics applies both to the man and to the whole system.

The Systems Reply objects that the CRA conflates a sub-system with the whole system, and then illegitimately jumps from the fact that the sub-system doesn’t understand Chinese to the conclusion that the whole system doesn’t understand Chinese. Instead of addressing that objection, Searle has now appealed to a different argument (based on his syntax/semantics principle) in support of the claim that the whole system doesn’t understand Chinese. This isn’t defending the CRA; it’s invoking a different argument for the same conclusion.

Could we charitably assume that Searle has always expected us to take his syntax/semantics principle (or some similar general principle) as our basis for accepting that the whole system doesn’t understand Chinese, and that he therefore never committed the fallacy of equivocation that Systems Repliers have attributed to him? No. Not only is this interpretation inconsistent with the wording of the CRA texts, but it would make the Chinese Room scenario entirely redundant. If the argument was based on such a general principle, then that principle could just as well have been applied directly to electronic computers. The switch to a human computer (and back in part #2) would have been pointless.

There’s another major problem with Searle’s response to the Systems Reply. He’s appealing to his syntax/semantics principle to support his claim that there’s no understanding of Chinese in the CR. But, as we’ll see in my post on the SSA, he also appeals to that claim to support his syntax/semantics principle. So he’s arguing in a circle! I suggest that the reason he finds himself resorting to such desperate measures is that each of his arguments is vacuous.

Before I finish, I’ll briefly address a few points that I’ve omitted above, but which are often raised in connection with the CRA.

1. Appeal to intuition. The CRA has often been interpreted as just an appeal to intuition. If you don’t read the argument as committing a fallacy of equivocation, then it seems that Searle is just appealing to the intuition that the whole system doesn’t understand Chinese. Perhaps the best-known response to the CRA is by Dennett and Hofstadter (“The Mind’s I”, 1981). Though they defend the Systems Reply briefly, the bulk of their response addresses the CRA as an “intuition pump”. Even if the CRA isn’t strictly an appeal to intuition, there are clearly intuitions at work which are addressed well by Dennett and Hofstadter, and I recommend their response.

2. Location of the program. In his 1980 paper, Searle interpreted a crude version of the Systems Reply as a concern over whether his argument had taken into account all the physical stuff in the room, in particular the pieces of paper on which the program and working data were stored, and even “the room” itself. In response he modified his scenario, having himself memorise the program and data, and work outdoors. He alludes to that move in the more recent response that I’ve quoted above. The move is irrelevant to more careful versions of the Systems Reply, including my own. The fallacy I’ve described is the same regardless of the location or materials in which Lee-sim is implemented. However, locating all the materials inside Searle’s head makes his equivocation more effective, since it’s easier to read the word “I” as referring to the whole system when both sub-systems are entirely implemented inside his head.

3. Types of understanding. Searle and his critics have often differed over the meaning of the word “understanding”, which has led to some talking at cross-purposes. Searle uses “understanding” as a proxy for mind in his argument, and seems to think that the word must be limited to systems with minds, or perhaps to conscious systems. He says that the “understanding” we attribute to other systems is only “metaphorical” and not real. Many of his critics think that this distinction is misguided. While I agree with the critics, the point is irrelevant to my response. The fallacy I’ve described is the same whichever way we take the word “understanding”.

4. Other arguments. In addition to the CRA and the SSA, Searle makes a number of other arguments, including a use of the statement that “simulation is not duplication”. I will address these other arguments in a third post.

