We Are the Room
I've just had occasion to read John Searle's famous "Chinese Room" paper all the way through for the second time. Although I've read the paper once before and discussed the ideas presented in it countless times, this is the first time I've ever read it carefully. (For any readers who have never heard of the Chinese Room thought experiment, the Wikipedia page is here. The rest of this post assumes familiarity.)
I found it a very satisfying read. It's not often that I read things I disagree with but nevertheless find compelling. Recreating Searle's arguments hardly does them justice. The power of the paper lies in the clever and careful way he words things. But that is also its weakness: once the actual words are no longer in front of you, the argument loses its force.
Searle enumerates several "common objections" to his thought experiment:
- The Systems Reply - it is the entire room that is intelligent, not just the man "processing" the Chinese
- The Robot Reply - if we put the room in a robot it would indeed think like we do
- The Brain Simulator Reply - a computer with brain architecture could think
- The Combination Reply - some combination of the above would do the trick
I more or less agree with Searle's answer to the Robot Reply and the Brain Simulator Reply. Each of them, in its way, gives away the farm by conceding that intelligence requires a special kind of hardware - and the whole point of strong AI is that hardware doesn't matter. Intelligence is a program - a series of functions. It can run on all manner of hardware. As a proponent of the Systems Reply, that is indeed what I believe.
That said, I think one can take the analogy too far. Not just any hardware will do for a computer, after all. There are certain characteristics we have to insist on. For example, the components do have to be, in some sense, uniform. We couldn't make a computer half out of silicon diodes and half out of wood beads because of integration problems. It's not well understood where the boundaries are here, but I think it's safe to say there are some, and that Searle could be made to concede the point. He could probably also be made to agree (though more reluctantly, I suspect) that there is a complexity threshold. Searle is able to exploit all kinds of ambiguities in the common usage of the word "think" to convince us that applying it to AI is merely stretching an analogy - in the same way that talking about a thermostat (his example) that "thinks" is an analogy. I do not believe, however, he would have as much luck if we replaced many of the instances of the word "thought" with "intelligence" in his paper. While it is possible (but only just) to stretch the word "think" to apply to a thermostat (and so convince your reader that his belief that AI might "think" is based on a mundane, irrelevant sense of the term), no one will accept that a thermostat can be intelligent. And it is actually intelligence that we are after in AI. There is a complexity threshold. Newell and Simon are very clear about this. Intelligence is a physical symbol system of sufficient size - and, one presumes, of sufficient generality.
Searle's argument, it seems to me, rests on two points. The first is not actually an intellectual point but a very clever associative exploitation. This has to do with the rainstorm example. We do not consider a simulated rainstorm in a computer to be the real thing, so why should we consider simulated thinking to be the real thing? Even on a second go-through, that catches me. And yet, it's a silly example. We consider a simulated rainstorm a mere simulation because in some sense rainstorms have to be made out of rain - just as sugar has to be made out of sugar, and so on. The building material is part of the definition. There are, however, other concepts in which material plays no role. We can make chairs out of plastic, wood, metal, porcelain, dirt, popsicle sticks, beanbags, whatever. All of them are chairs, and there is no sense in which any one of them is less of a chair for not being made of the traditional stained wood. So the only question the example really raises is whether intelligence is the kind of thing that has to be made out of neurons. Searle's example, in other words, though superficially convincing as rhetoric, only brings us back to the starting point.
The other point is that it seems unlikely that computers will ever have experiences of qualia. That is, the internal symbols being processed in their circuits have no substantive connection to the world - they are all defined merely in terms of how they act on each other. Well, that's true enough, I suppose. But it's not terribly clear how that's any different from what goes on in a "real" brain. A real brain also deals with representations of the world. It is not as though, when we eat a cookie, our experience of the sugar in it is direct. Sugar directly affects the interface, yes - but the experience of sweetness is (part of an) internal representation of the cookie. There is nothing about "sweetness" that is inherent in the cookie (though the sugar is physically there). We can assume that Searle's room also has some representation of sweetness - if only as a basis for communication about it (Chinese presumably has a word for it, after all). This representation affects the other symbols in the program in such a way as to produce precisely the same output the sensation of tasting something sweet would in a human brain. There may be some meaningful sense in which the internal experience is not the same: computational sweetness functions in the same way as real sweetness, but it doesn't feel the same.
Well, fine. Maybe it doesn't. But if it is functionally the same, I'm not sure what basis we have for drawing a distinction. My only real way of knowing that other people experience sweetness, after all, is that they either tell me they do or behave in ways that lead me to think they do. What would it mean to discover a neurological disorder in humans that made their internal experience of sweetness different and yet left their behavior untouched? I would go so far as to say that it is not possible to discover such a disorder, at least not with any certainty. Discovering it would mean finding some measurably different arrangement in the brain. But whatever "different" configuration we found would either be so similar as to be indistinguishable (individual humans presumably don't represent sweetness to themselves in precisely the same way, after all - brains change and adapt), or else so different as to cause observable behavioral differences. In either case, we see that the implementation is the definition. Either a brain has some (distributed?) representation of "sweetness" or it doesn't. And we can only tell that it has this representation by noticing that it behaves (or, rather, signals the body in general and the articulators in particular) as though it did. If in some future, more advanced world we knew precisely what the neuronal configuration was, we would expect it to be subtly different from person to person, given that we expect the actual configurations of brains in general to be subtly different from person to person. So there is already some important sense in which we accept that "sweetness" is of variable implementation.
Now, let's grant for a second that a computer's "internal experience" of sweetness isn't going to be exactly the same as a human's. It would still seem out of line to say that the representation "isn't even remotely the same" as a human's - because it shares the important property with human "sweetness" of having the same cognitive effects on the computer that neuronally represented sweetness has on a human. Though the internal experience is different, the cognitive experience is the same. Everything related to thinking about sweetness is (by definition, really) the same because any processing that results from the experience effects the same kinds of behavior that neuronal sweetness would have in a human.
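To put the multiple-realization point in concrete terms, here is a minimal sketch in Python. Everything in it (the agent classes, the internal token, the canned responses) is invented for illustration - it is not anything from Searle or the strong-AI literature - but it shows two "minds" realizing sweetness in entirely different internal formats while no behavioral test distinguishes them.

```python
# A minimal sketch, assuming we model a "mind" as nothing more than a
# stimulus-to-behavior mapping. All names here are hypothetical.

class NeuronalAgent:
    """Realizes 'sweetness' one way: as a named internal state."""
    def react_to(self, stimulus):
        internal = "sweet" if stimulus == "sugar" else "neutral"
        return "smiles, asks for another cookie" if internal == "sweet" else "shrugs"

class SymbolicAgent:
    """Realizes 'sweetness' a different way: as an arbitrary numeric token."""
    CODES = {"sugar": 0x5EE7}  # the internal representation is deliberately alien
    def react_to(self, stimulus):
        return ("smiles, asks for another cookie"
                if self.CODES.get(stimulus) == 0x5EE7 else "shrugs")

# From the outside - the only vantage point we ever have on other minds -
# the two realizations are indistinguishable.
for stimulus in ("sugar", "cardboard"):
    assert NeuronalAgent().react_to(stimulus) == SymbolicAgent().react_to(stimulus)
```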
Anyone who has accepted this (as Searle plainly has - otherwise his thought experiment, in which the room passes the Turing Test - he's very clear about that - would be quite different) accepts that "sweetness" has multiple realizations. Searle would presumably reply that a different realization of sweetness isn't really sweetness, all the same. It is the functional equivalent of sweetness, but sweetness depends on a particular instantiation. Of course, this starts to seem silly. Either a thing is testably different or it isn't - and the only test Searle is offering here is definitional. It's circular to say "it's only sweetness if it happens in neurons." His point is trivially proven by definition. But let's play along. It's only sweetness if it happens in neurons, but we grant that there could be a functional equivalent of sweetness in a program. Even such a person as Searle presumably thinks that thinking about sweetness is something that happens over and above the experience. The "thinking about" part comes, in some sense, from how the experience of sweetness interacts with the rest of the system. In which case, it seems very clear indeed that computers CAN "think about" sweetness, or "think based on" sweetness, or "think as a result of" sweetness, or whatever else. That is to say, either the different representation of sweetness in a computer program makes a functional difference to the system, or else it is the same as far as any effect on thinking is concerned.
Where Searle goes wrong, in other words, is (a) in assuming that proponents of strong AI believe that clever enough computer programs would be exactly the same as humans in every respect (they wouldn't - starting with the fact that they wouldn't be made out of meat) and (b) in mistaking internal sensations of things for thinking. Thinking is a process over and above sensation. Since the thought experiment itself stipulates that the different representations of a sensation in the two systems produce the same behavior, thinking is unaffected by the difference in representation. Though the sensations are indeed "different" (perhaps - I should stress again that there is really no way to know), the process that results from them is not.
In addition to mistaking sensation for thinking, Searle is also failing to appreciate what "semantics" really is, I think. The main point we are meant to take from the article seems to be that syntax is not semantics. Fine, granted. But we would then need to ask what it is about "semantics" that makes it something that can only happen in meat brains. It seems to me that semantics is every bit as much a formal system as syntax. Concepts are formed from the world in a mechanical way - and their formal, semantic definition has a lot to do with how they combine. Take Chomsky's famous example of a sentence that is syntactically correct but semantically faulty:
Colorless green ideas sleep furiously.
What syntax tells us is that colorless and green are adjectives, that ideas is a noun, and that adjectives can combine with nouns (and with other adjectives). The merge operation that combines green with ideas is syntactically legal, as is the merge operation that combines colorless with green ideas. Syntax is a formal system of symbol combination (and transformation) based on grammatical category. Well, what is semantics but a formal system of symbol combination (and transformation?) based on meaning? The sentence is semantically faulty because colorless doesn't combine with green, because green doesn't combine with ideas, and, indeed, because colorless doesn't combine with ideas. The formal rule that tells us so acts on the stored concepts. Semantics as a formal system doesn't actually care what's "in the world," in the same way that a calculator doesn't actually care whether you're calculating sums of tax dollars or sales of cartons of eggs. It doesn't need to know what thing corresponds to the symbol - only what rules of combination are relevant. It "trusts," just as the calculator "trusts," that the representations it is given are valid. (It might be more accurate to say that the user of the calculator trusts that the internal representations are accurate representations of the relevant aspects of the world.) As long as the symbol table is created with reference to (relevant things in) the real world (and I think it's fair to say that Searle's example assumes that it is), then there is no real sense in which semantics is different from syntax in the way that Searle seems to think it is. Semantics is also a representation-independent formal system - in other words, the kind of thing that a computer can easily implement.
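To make the "formal system" claim a little more tangible, here is a toy sketch. The lexicon, its feature names, and the licensing rule are all invented for this example (real semantic theories are vastly richer), but the mechanics - rule-table lookups over stored symbols, with no reference to the world - are the point.

```python
# A toy rendering of "semantics as a formal system of symbol combination."
# The lexicon and its features are hypothetical, invented for this sketch.

LEXICON = {
    "colorless": {"cat": "Adj", "requires": {"colored"}},
    "green":     {"cat": "Adj", "requires": {"physical"}},
    "ideas":     {"cat": "N",   "features": {"abstract"}},
    "sleep":     {"cat": "V",   "requires": {"animate"}},
}

def combines(modifier, head):
    """A combination is licensed only if the head's stored features supply
    everything the modifier demands - a lookup, not a fact about the world."""
    return LEXICON[modifier]["requires"] <= LEXICON[head]["features"]

print(combines("green", "ideas"))      # False: ideas are not marked 'physical'
print(combines("colorless", "ideas"))  # False: ideas are not marked 'colored'
# Syntax licenses Adj + N either way; it is this semantic rule table, acting
# on stored concepts rather than on rainstorms or cookies, that rejects the sentence.
```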
[Addenda]
I've just come back from discussing this in class, and there are a few things I'd like to add.
First, some interesting comments have made me think that I am, in some sense, approaching this the wrong way. On reading this originally, I took it to be a very finely crafted thought experiment in the sense that it does a good job of pumping our intuitions. If the purpose of a thought experiment is to reveal to us our assumptions (and allow us to question them), then this one does a great job. Searle is very effective in demonstrating that more or less everyone shares the presupposition that thinking can only go on in (meat) brains. My approach was always to take that assumption and try to challenge it. But another way to look at it is to say that this is being too easy on Searle. This argument goes that Searle does indeed do a good job demonstrating that the naive response (to reject the idea that "thinking" could go on in a machine) is based on a prejudice - that prejudice being that there is something about the (chemical) substance of animal neurons that uniquely allows thinking. But it is not clear why this should be so. Searle's problem, on this view, is that having revealed this assumption, he continues to hold it without questioning it. Seen from this perspective, the burden of proof would seem to be on Searle - and not the proponent of strong AI - to demonstrate that there is something intrinsic to the chemical makeup of animal neurons that enables "thinking."
This clicked for me with the discussion of a counter-thought experiment - one in which Searle is asked to imagine that his neurons are slowly replaced, one at a time, with functionally equivalent electronic devices. Searle's response is apparently that he would sense his consciousness slowly ebbing away as the process reached a critical threshold. But once the question is put to Searle rather than to the reader, I feel I deserve more of an answer! We get the feeling that he must have some basis for believing that the slow substitution of electronic neural devices for real neurons would have this effect, and yet it's hard to imagine what such a reason would be.
Second, someone brought up Doug Hofstadter's response, which is to pick on Searle's counter to the "Systems Reply" involving the idea of the human operator "internalizing" the table of Chinese instructions. Hofstadter wants to say that this would in effect amount to "knowing" Chinese. In some sense he's right, but I believe Hofstadter is falling for a trap here. The trap is that this isn't actually as far as our intuitions will take us. We can indeed, I believe, imagine someone internalizing all the relevant syntactic and semantic rules of Chinese (including the semantic properties of items with respect to other items - see the Chomsky example above) without having to connect any of this to objects in the real world. And having imagined such a thing, there is, in fact, no reason to believe that internalizing all of these rules would enable someone to then connect the internal Chinese symbols to corresponding (grounded) English symbols and so deduce their meanings. In other words, it's "plausible" to me that someone could internalize Chinese without actually being able to speak it in the real world. The interface, in this case, would be something other than sensory input (more precisely, it would be sensory input translated into start symbols on a lookup table, and therefore not the kind of familiar input that we as people are used to).
The point here - the one Hofstadter is, I believe, missing - is that despite what Searle seems to think, there is no reason to suppose that such a system is not thinking. Quite the contrary: it is clear to me (ironically, thanks to Searle's own example!) that it definitely is thinking. It has, in fact, become precisely the "thinking" component abstracted away from the particular interface we're used to. There is possibly still some sense in which its "feelings" would be different, but this is not relevant to thinking or intelligence. All that Searle has done for me here is show me the obvious - that an organism like a human is an integrated thing, consisting of some parts "thinking," some parts "sensation," some parts mere machinery. The answer to the question of whether the human who has internalized all the Chinese rules is "thinking" is thus a resounding "yes, and possibly in a purer form than he is used to." The "room," in this sense, is very much "platform independent," since it is able to go from being a room with paper instructions to being hosted entirely in a human mind with no loss of function.
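As a crude illustration of that platform independence, consider the sketch below. The one-entry rulebook and the helper names are placeholders of my own, not Searle's setup, but the same rule-following procedure behaves identically whether its instruction table has been "internalized" (held in memory) or left "on paper" (a file on disk).

```python
# A toy version of the "room": pure symbol lookup, hosted on two different substrates.
import json, os, tempfile

RULES = {"你好": "你好，很高兴认识你"}  # placeholder stimulus -> response table

def room(lookup, utterance):
    """The 'room' is just this procedure: match the incoming symbols, emit the outgoing ones."""
    return lookup(utterance)

# Host 1: the rulebook has been "internalized" (held in memory).
in_memory = RULES.get

# Host 2: the rulebook sits on "paper" (here, a file on disk).
path = os.path.join(tempfile.mkdtemp(), "rulebook.json")
with open(path, "w", encoding="utf-8") as f:
    json.dump(RULES, f)

def on_paper(utterance):
    with open(path, encoding="utf-8") as f:
        return json.load(f).get(utterance)

# Same procedure, no loss of function across hosts.
assert room(in_memory, "你好") == room(on_paper, "你好")
```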
There are interesting questions as to what such a human would be like, of course - what it would be to have two different minds functioning with different interfaces in a single brain. It's a bit more cumbersome, probably, than having a partitioned hard drive with Windoze and Linux (it would maybe be like having Windoze and Linux sharing the same namespace - no partition involved. Yikes!). But these questions are just fun...beside the point. The point is that while there are reasons to believe, as Searle suggests and Hofstadter rejects, that there would be some kind of qualitative difference between the internal experiences the person had of Chinese and of English (wherein he understands English in the traditional way and Chinese in some more abstract way), there is no reason to believe that these internal differences pose any problem for concluding that the human is indeed thinking in Chinese.
There's still an interesting question of what role images play, of course. The "abstracted" Chinese in the human brain would have somehow managed to turn all images associated with concepts into lists of response rules (rules which detail how they interact with other concepts, which are themselves also represented as lists of interaction rules). So it's important to keep in mind that though no images are involved, the interactions between concepts are theoretically the same as they would be were the associated images present. Seen in this way, it seems to me that images are primarily a data compression device - they allow storage of long tables of rules in a compact space.
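Here is one (entirely speculative) way to render that compression idea in code: a concept stored as an exhaustive table of answers versus a compact "image" from which the same answers are derived on demand. The questions and features are made up for the sketch.

```python
# Two ways to store the concept "apple": an explicit rule table, and a compact
# "image" from which the same interaction rules are derived when needed.
# All questions and features here are hypothetical.

QUESTIONS = ["is_red", "is_round", "is_edible", "is_heavy", "rolls", "fits_in_hand"]

# Explicit rule list: one stored answer per question (grows with the question set).
apple_as_rules = {"is_red": True, "is_round": True, "is_edible": True,
                  "is_heavy": False, "rolls": True, "fits_in_hand": True}

# "Image": a small fixed description from which the answers are computed.
apple_as_image = {"color": "red", "shape": "sphere", "diameter_cm": 8, "kind": "fruit"}

def derive(image, question):
    """Derive an interaction rule from the compact representation on demand."""
    return {
        "is_red":       image["color"] == "red",
        "is_round":     image["shape"] == "sphere",
        "is_edible":    image["kind"] == "fruit",
        "is_heavy":     image["diameter_cm"] > 30,
        "rolls":        image["shape"] == "sphere",
        "fits_in_hand": image["diameter_cm"] < 12,
    }[question]

# Same interaction behavior, but the image stays small as the list of questions grows.
assert all(apple_as_rules[q] == derive(apple_as_image, q) for q in QUESTIONS)
```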