More Responses to Searle on Strong AI

In two earlier posts I addressed Searle’s two best-known arguments against Strong AI, the Chinese Room Argument and the Syntax-and-Semantics Argument. Here I’d like to address some additional arguments and assertions that he’s made.

1. Appeals to intuition about stuff

According to Strong AI, any computer executing the right sort of program would have a mind, regardless of the physical materials (or “substrate”) that the computer is made from. In principle this could include a non-electronic computer. Searle attempts to ridicule Strong AI by asking readers to imagine non-electronic computers constructed from unlikely materials, and then appealing to their intuition that a system made from those materials couldn’t have a mind.

First, the distinction between program and realization has the consequence that the same program could have all sorts of crazy realizations that had no form of intentionality. Weizenbaum (1976, Ch.2), for example, shows in detail how to construct a computer using a roll of toilet paper and a pile of small stones. Similarly, the Chinese story understanding program can be programmed into a sequence of water pipes, a set of wind machines, or a monolingual English speaker, none of which thereby acquires an understanding of Chinese. Stones, toilet paper, wind, and water pipes are the wrong kind of stuff to have intentionality in the first place–only something that has the same causal powers as brains can have intentionality–and though the English speaker has the right kind of stuff for intentionality you can easily see that he doesn’t get any extra intentionality by memorizing the program, since memorizing it won’t teach him Chinese. [“Minds, Brains and Programs”, 1980]

(Here Searle talks about “intentionality”, but I’ll stick to the more familiar terms “mind” and “consciousness”, which he uses elsewhere. The “Chinese story understanding program” he refers to is a contemporary AI program written by Roger Schank, but I doubt anyone would claim that that program was conscious, and we should think instead about a hypothetical human-level AI program.)

Searle’s argument about the English speaker is a rehash of the CRA, which I’ve addressed in a previous post, so I’ll ignore that example. Apart from that, all he has is an appeal to the intuition that the other systems are not made of the right kind of stuff for mind. Not only is this just an appeal to intuition, but Searle is biasing readers’ intuitions by presenting his examples in a misleading way. First, he fails to explain even minimally how a computer could be constructed from such materials; second, he gives no idea of the issues of scale that are involved.

It’s very hard to imagine how a computer could realistically be constructed from a roll of toilet paper and stones. In lieu of an explanation, Searle refers the reader to Weizenbaum. But, on reading Weizenbaum for myself, I find that no such computer is described. Searle has misinterpreted the text. What Weizenbaum presents is a pretty standard explanation of a Turing machine, of the sort that has a paper tape on which binary symbols are marked and erased. All Weizenbaum has done is replace the paper tape with toilet paper, and the symbols with black and white pebbles. This does not describe an entire computer, because a mechanism is still needed to move the pebbles around in accordance with the transformational rules. In Weizenbaum’s account, this role is played by a human being. In effect, Searle has left out the computer’s processor and mentioned only its memory. Without a processor nothing will happen. (I haven’t seen Searle repeat this example in subsequent writing, so perhaps he discovered his error shortly after the publication of this paper.)
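
To make the memory/processor distinction concrete, here is a minimal Turing-machine sketch in Python (the rule table and tape contents are my own invented illustration, not anything from Weizenbaum). The dictionary standing in for the tape corresponds to the toilet paper and pebbles; the loop that applies the rules corresponds to the mechanism that Searle leaves out. Delete the loop and the “computer” is just a strip of inert memory.

```python
# A minimal Turing machine, for illustration only (the rule table and tape are
# invented; Weizenbaum's exposition is more careful). The tape is pure passive
# memory -- the toilet paper and pebbles. The while-loop is the "processor"
# that reads, writes and moves according to the rules; in Weizenbaum's telling,
# a human being plays this role.

from collections import defaultdict

# Transition rules: (state, symbol read) -> (new state, symbol to write, head move)
# This toy table just flips 0s and 1s until it reaches a blank cell, then halts.
RULES = {
    ("scan", "0"): ("scan", "1", +1),
    ("scan", "1"): ("scan", "0", +1),
    ("scan", "_"): ("halt", "_", 0),
}

def run(initial_tape: str) -> str:
    tape = defaultdict(lambda: "_", enumerate(initial_tape))  # the memory
    state, head = "scan", 0                                   # the processor's state
    while state != "halt":
        state, tape[head], move = RULES[(state, tape[head])]
        head += move
    return "".join(tape[i] for i in sorted(tape))

print(run("0110"))  # -> "1001_"
```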

Searle’s “sequence of water pipes” is more plausible, though he’s failed to mention anything about the arrangement of the pipes, or the valves that would be required at the junctions. I don’t know much about electronics, but I guess a junction of pipes could be fitted with a valve of some sort and made to control flows of water in some way that is analogous to the operation of a transistor. A computer could then be built, in principle, with the same logical design as a digital computer, replacing electric circuits with water pipes, and transistors with valve-equipped junctions. Of course, this would be quite impractical on any significant scale, and a computer capable of executing a human-level AI would have to be vast in size, perhaps larger than a cubic mile. It would also be many times slower than a human brain.
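
For what it’s worth, the substrate-independence of that logical design is easy to see at the level of individual gates. The sketch below is my own idealization, not a piece of hydraulic engineering: a valve-equipped junction is modelled as a switch that passes flow only when its control input is open, which is roughly the role a transistor plays, and the usual Boolean gates are composed from it. Nothing in the logic cares whether the “flow” is electrons or water.

```python
# Idealized sketch: logic gates built from a generic "valve" switch.
# The valve passes flow only when its control input is open; whether the flow
# is current through a transistor or water through a pipe is irrelevant to the
# logic composed on top of it. (Real plumbing details are waved away.)

def valve(supply: bool, control: bool) -> bool:
    """Flow passes through the junction only when the control input is open."""
    return supply and control

def nand(a: bool, b: bool) -> bool:
    # Two valves in series from a constant supply, with the result inverted.
    return not valve(valve(True, a), b)

# Every Boolean function can be composed from NAND alone, so in principle a
# whole computer follows -- however impractically large and slow.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b): return and_(or_(a, b), nand(a, b))

def half_adder(a: bool, b: bool):
    return xor_(a, b), and_(a, b)   # (sum bit, carry bit)

print(half_adder(True, True))  # -> (False, True), i.e. 1 + 1 = binary 10
```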

What does Searle gain by appealing to our intuitions about flows of water through pipes and valves, instead of to our intuitions about flows of electricity through metal circuits and transistors? I suspect the main reason for his switch from electricity to water is that we’re familiar with electronic computers, and impressed by their abilities. We haven’t seen equally impressive water-based computers, and Searle describes them in simplistic terms that make the idea seem very unimpressive indeed. Searle is appealing to our intuitions based on our ordinary experience of water pipes, which do nothing much more than carry water.

Whatever system we look at, if we zoom in on the very small details, we don’t see anything that looks like it could be conscious. That doesn’t just apply to electronic and water-based computers. It applies to brains too. Individual neurons don’t look conscious, and if we looked at even lower levels of abstraction, e.g. at biomolecules, those would look even less conscious. In that case too, we might wonder, “Where’s the consciousness?” If we follow such intuitions consistently, it seems that no mere “matter in motion” could be sufficient for consciousness. Searle is turning a blind eye to such problems in the case of brains, because he knows that he himself is conscious. Since he’s already convinced that what makes a system conscious is being made of the right kind of stuff, it follows that brain stuff is one of the right kinds. So he’s appealing to two intuitions, not just one: (1) it’s the kind of stuff that matters; (2) the stuff of water-based computers is not the right kind. And since intuition (1) directly contradicts Strong AI, this amounts to little more than saying, “My intuition tells me you’re wrong.”

2. Simulation is not duplication

Supporters of Strong AI say that a sufficiently detailed simulation of a human brain would have a mind, in the same full sense that the original brain has a mind. Searle has responded to this claim by asserting that “simulation is not duplication”, but it’s unclear just why he thinks this truism is relevant. He has also made such remarks as the following:

No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched. Why on earth would anyone suppose that a computer simulation of understanding actually understood anything? [“Minds, Brains and Programs”, 1980]

There seems to be an implied argument here, but it’s not quite clear what it is. I can think of two possible interpretations.

(A) It can be interpreted as a refutation of an argument that is implicitly being attributed to Strong AI supporters: simulation is duplication; therefore a simulation duplicates all the properties of the simulated system, including mental properties. On this interpretation, I say that Searle is refuting a straw man. Strong AI supporters are not making any such argument. No one thinks that all the properties of a simulated system are present in the simulation. That would indeed be just as absurd as Searle suggests.

(B) It can be interpreted as an argument against Strong AI: based on some examples of properties that are not duplicated in simulations (burning and wetness), we are apparently to infer by induction that no other properties are duplicated in simulations. But, significantly, all the properties Searle mentions are physical properties. He fails to mention properties that might be called functional, computational or informational, which are duplicated by an appropriate simulation. Consider a calculator (electronic or mechanical) and an appropriate computer simulation of that calculator. Let’s say, for simplicity, that the calculator and the simulation are running in parallel, and taking the same inputs. When the calculator has a certain number stored in its memory, so does the simulation. If the calculator is adding, so is the simulation. And so on. Strong AI says that consciousness is broadly this sort of property, and not a physical property. For Searle to assume the contrary is to beg the question.
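
To make the point concrete, here is a toy illustration (the “calculator” design is invented for the example, and both halves are of course just Python): a trivial adding machine and a state-by-state simulation of it are fed the same inputs, and the functional properties are present in both, even though the physical properties of the two devices could differ completely.

```python
# Illustrative sketch of the functional-property point. A trivial adding
# calculator and a step-by-step simulation of it are run in parallel on the
# same inputs; the functional properties -- what number is stored, whether it
# is adding -- are shared, while the physical properties are not.

class Calculator:
    """Stand-in for the real device (electronic or mechanical)."""
    def __init__(self):
        self.accumulator = 0
    def press_plus(self, n):
        self.accumulator += n

class CalculatorSimulation:
    """Stand-in for a program simulating that device state by state."""
    def __init__(self):
        self.state = {"accumulator": 0}
    def press_plus(self, n):
        self.state["accumulator"] += n

device, sim = Calculator(), CalculatorSimulation()
for key in [3, 5, 11]:                      # the same inputs go to both
    device.press_plus(key)
    sim.press_plus(key)
    # The functional property of having 3, then 8, then 19 stored is
    # duplicated; the physical properties (voltages, cogwheel positions,
    # or whatever the simulating machine is made of) are not.
    assert device.accumulator == sim.state["accumulator"]

print(device.accumulator, sim.state["accumulator"])  # -> 19 19
```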

So it seems that Searle is either refuting a straw man or begging the question. But, without a clear statement of his argument, it’s unclear which response to make. I think Searle’s lack of clarity contributes to his own confusion.

3. Syntax is not intrinsic to physics

Worse yet, syntax is not intrinsic to physics. The ascription of syntactical properties is always relative to an agent or observer who treats certain physical phenomena as syntactical. [“Is the Brain a Digital Computer?”, 1990]

I’ve already addressed this issue in the appendices of my post on the Syntax-and-Semantics Argument. I explained there why the ways that we can sensibly interpret the states of a computer are constrained by the causal nature of the system. They cannot sensibly be interpreted however an observer wishes. In summary:

Searle is looking at the memory states in isolation, instead of taking them in the context of the processor [and program]. It’s that context that gives the states their meaning, and makes them symbols, not just physical states.
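
Here is a small illustration of that point (mine, not Searle’s): one and the same memory state counts as quite different symbols depending on what the surrounding code, and ultimately the processor, is wired to do with it. Taken in isolation, the bytes settle nothing.

```python
# One physical memory state, three different "symbols", depending on the
# context of the code that consumes it (illustrative; not Searle's example).

import struct

raw = bytes([0x42, 0x48, 0x65, 0x79])   # one and the same stored bit pattern

as_int   = struct.unpack("<I", raw)[0]  # read by code expecting a 32-bit integer
as_float = struct.unpack("<f", raw)[0]  # read by code expecting a 32-bit float
as_text  = raw.decode("ascii")          # read by code expecting ASCII characters

print(as_int, as_float, as_text)        # three very different readings of the same bytes
```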

In the same paper, Searle writes:

Cognitivists cheerfully concede that the higher levels of computation, e.g. “multiply 6 times 8” are observer relative; there is nothing really there that corresponds directly to multiplication; it is all in the eye of the homunculus/beholder. But they want to stop this concession at the lower levels. [“Is the Brain a Digital Computer?”, 1990]

I doubt that Strong AI supporters generally make such a concession. I certainly don’t.

At times in the paper it’s unclear (as in #2 above) whether Searle is making an argument against the Strong AI position or responding to an argument for Strong AI which he supposes its supporters to be making. When he seems to be doing the latter, the implied arguments seem to be simplistic straw men. Given that he believes he can refute Strong AI by means of trivial yet decisive arguments (like the CRA and SSA), it’s perhaps not surprising that he attributes such trivial arguments to supporters of Strong AI too. Of course, it’s quite possible that some supporters of Strong AI actually have made such simplistic arguments, but I haven’t seen them from leading supporters, like Daniel Dennett. For the most part I think these simplistic arguments are the products of Searle’s misunderstandings of Strong AI.

4. The Turing test is behaviouristic

The Turing test is typical of the tradition in being unashamedly behavioristic and operationalistic [“Minds, Brains and Programs”, 1980]

We don’t need to adopt any sort of behaviourism in taking the Turing test as a test for mind. We only need to reject epiphenomenalism. In the philosophy of mind, epiphenomenalism is the view that the presence of mind has no effect on physical events. Searle seems to hold the closely related view that the presence of mind has no effect on external behaviour, since he thinks that a computer AI could have just the same behaviour as a brain, despite having no mind. If he were right, it would naturally follow that tests of behaviour cannot provide evidence of the presence of mind. If, on the other hand, the presence of mind does affect behaviour, then behaviour can provide evidence of the presence of mind, at least in principle.

The question of epiphenomenalism is a long-standing one in philosophy of mind, and I don’t propose to address it here. I just wanted to make the point that Searle is mistaking a rejection of epiphenomenalism for an acceptance of some sort of behaviourism. Needless to say, Searle and Strong AI supporters have very different views of the nature of mind (or consciousness). Unfortunately Searle has some unhelpful ways of characterising these differences.

5. Strong AI is dualistic

…this residual operationalism is joined to a residual form of dualism; indeed strong AI only makes sense given the dualistic assumption that, where the mind is concerned, the brain doesn’t matter. [“Minds, Brains and Programs”, 1980]

Strong AI is not dualistic at all. Searle appears either to have misunderstood the view, or to have confused himself by casting it in ambiguous language. Carefully put, what Strong AI says is this: there exists a set of possible programs such that any system that instantiates one of these programs has a mind (or more specifically, a mind with certain mental states). This is no more dualistic than saying: there exists a set of possible programs such that any system that instantiates one of these programs is a word-processor (or more specifically, is running Microsoft Word).

Because of the way that we talk about hardware and software, and because a computer’s software can be so easily changed, it may be tempting to see software as non-physical. But two otherwise identical computers with different programs in RAM are in different physical states, and it’s those different physical states that cause the computers to behave differently. In that sense, instantiated software is physical. So, when we say that what matters is the program, we are not denying that physical, causal states matter. (Remember that the program must be considered in the context of a particular processor. A different processor might execute the program differently.)

6. Strong AI pays no attention to causal powers

On the Strong AI view, it’s precisely the causal powers of a particular computer, running particular software, that would make it the right sort of system for having a mind. But those causal powers are not restricted to any particular types of materials, except insofar as the materials must be suitable for implementing a computational system with the right sort of algorithm.

3 thoughts on “More Responses to Searle on Strong AI”

  1. I think you’re right on most of this.

    However I think the practicalities of building a computer from unconventional materials are beside the point. I think the right thing to do is just bite the bullet and say that, yes, if we could build a computer from toilet paper and pebbles, with the right algorithm it would be conscious. Searle might find that laughable but again he’s just falling back on his intuitions as he always does.

    I agree with everything you said on simulation/duplication, but I might be inclined to argue that in a sense, burning and wetness are indeed duplicated by simulation. From a viewpoint within the simulation, a simulated fire does indeed burn. From that same viewpoint, it is a physical fire and it is hot.

    So to argue that it doesn’t is to beg the question that viewpoints within the simulation are not valid.

    I don’t think that Strong AI entails rejecting epiphenomenalism. I think epiphenomenalism is more or less correct, although this is easy to misunderstand. The fact that I have conscious experience is not needed to explain my behaviour, because my behaviour can be accounted for by looking at the actions of neurons and so on. My qualia and so on are causally inert in the sense that they are not needed to explain anything I do.

    But on the other hand (and this is what I think you’re getting at) I rather suspect it is impossible to have a system that behaves like me that does not also experience qualia and so on, because I think my experience is just what it is like to be such a system.

    This is also why I think Strong AI *is* dualistic, and that the software/hardware divide is also dualistic. But dualism doesn’t have to mean Cartesian dualism. There has been an understanding of the difference between form and substance for thousands of years and this is a kind of dualism.

    > Because of the way that we talk about hardware and software, and because a computer’s software can be so easily changed, it may be tempting to see software as non-physical.

    And I think this is right. Because what makes Microsoft Word what it is is not some set of physical properties but an abstract algorithm which can be realised on any of a more or less limitless set of possible physical implementations. The software or program itself is not a physical object. What is physical is the process whereby a particular instance of it runs on a particular machine.

  2. Hi DM,

    > However I think the practicalities of building a computer from unconventional materials are beside the point. I think the right thing to do is just bite the bullet and say that, yes, if we could build a computer from toilet paper and pebbles, with the right algorithm it would be conscious.

    To be clear, I wasn’t denying that. My point was that if people are going to judge by intuition, they should at least have in mind a realistic picture of the sort of computer they’re making a judgement about.
