[Wittrs] Re: What the Man in the Room Knows (and when does he know it?)

  • From: Gordon Swobe <gts_2000@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Tue, 23 Mar 2010 08:51:08 -0700 (PDT)

> Where in anything Searle writes do you see him saying the guy is the system.

Read the paper, Stuart! I gave you the link.

Better yet, I'll post the relevant section. Hold on while I cut and paste it.
Okay, thanks for waiting.

1. The Systems Reply (Berkeley). "While it is true that the individual person 
who is locked in the room does not understand the story, the fact is that he is 
merely part of a whole system, and the system does understand the story. The 
person has a large ledger in front of him in which are written the rules, he 
has a lot of scratch paper and pencils for doing calculations, he has ‘data 
banks’ of sets of Chinese symbols. Now, understanding is not being ascribed to 
the mere individual; rather it is being ascribed to this whole system of which 
he is a part."

My response to the systems theory is quite simple: Let the individual 
internalize all of these elements of the system. He memorizes the rules in the 
ledger and the data banks of Chinese symbols, and he does all the calculations 
in his head. The individual then incorporates the entire system. There isn’t 
anything at all to the system that he does not encompass. We can even get rid 
of the room and suppose he works outdoors. All the same, he understands nothing 
of the Chinese, and a fortiori neither does the system, because there isn’t 
anything in the system that isn’t in him. If he doesn’t understand, then there 
is no way the system could understand because the system is just a part of him.

Actually I feel somewhat embarrassed to give even this answer to the systems 
theory because the theory seems to me so implausible to start with. The idea is 
that while a person doesn’t understand Chinese, somehow the conjunction of that 
person and bits of paper might understand Chinese. It is not easy for me to 
imagine how someone who was not in the grip of an ideology would find the idea 
at all plausible. Still, I think many people who are committed to the ideology 
of strong AI will in the end be inclined to say something very much like this; 
so let us pursue it a bit further. According to one version of this view, while 
the man in the internalized systems example doesn’t understand Chinese in the 
sense that a native Chinese speaker does (because, for example, he doesn’t know 
that the story refers to restaurants and hamburgers, etc.), still "the man as a 
formal symbol manipulation system" really does understand Chinese. The 
subsystem of the man that is the formal symbol manipulation system for Chinese 
should not be confused with the subsystem for English.

So there are really two subsystems in the man; one understands English, the 
other Chinese, and "it’s just that the two systems have little to do with each 
other." But, I want to reply, not only do they have little to do with each 
other, they are not even remotely alike. The subsystem that understands English 
(assuming we allow ourselves to talk in this jargon of "subsystems" for a 
moment) knows that the stories are about restaurants and eating hamburgers, he 
knows that he is being asked questions about restaurants and that he is 
answering questions as best he can by making various inferences from the 
content of the story, and so on. But the Chinese system knows none of this. 
Whereas the English subsystem knows that "hamburgers" refers to hamburgers, the 
Chinese subsystem knows only that "squiggle squiggle" is followed by "squoggle 
squoggle." All he knows is that various formal symbols are being introduced at 
one end and manipulated according to rules written in English, and other 
symbols are going out at the other end. The 
whole point of the original example was to argue that such symbol manipulation 
by itself couldn’t be sufficient for understanding Chinese in any literal sense 
because the man could write "squoggle squoggle" after "squiggle squiggle" 
without understanding anything in Chinese. And it doesn't meet that argument to 
postulate subsystems within the man, because the subsystems are no better off 
than the man was in the first place; they still don't have anything even 
remotely like what the English-speaking man (or subsystem) has. Indeed, in the 
case as described, the Chinese subsystem is simply a part of the English 
subsystem, a part that engages in meaningless symbol manipulation according to 
rules in English.

Let us ask ourselves what is supposed to motivate the systems reply in the 
first place; that is, what independent grounds are there supposed to be for 
saying that the agent must have a subsystem within him that literally 
understands stories in Chinese? As far as I can tell the only grounds are that 
in the example I have the same input and output as native Chinese speakers and 
a program that goes from one to the other. But the whole point of the examples 
has been to try to show that that couldn't be sufficient for understanding, in 
the sense in which I understand stories in English, because a person, and hence 
the set of systems that go to make up a person, could have the right 
combination of input, output, and program and still not understand anything in 
the relevant literal sense in which I understand English. The only motivation 
for saying there must be a subsystem in me that understands Chinese is that I 
have a program and I can pass the Turing test; I can fool native Chinese 
speakers. But precisely one of the points at issue 
is the adequacy of the Turing test. The example shows that there could be two 
"systems," both of which pass the Turing test, but only one of which 
understands; and it is no argument against this point to say that since they 
both pass the Turing test they must both understand, since this claim fails to 
meet the argument that the system in me that understands English has a great 
deal more than the system that merely processes Chinese. In short, the systems 
reply simply begs the question by insisting without argument that the system 
must understand Chinese.

Furthermore, the systems reply would appear to lead to consequences that are 
independently absurd. If we are to conclude that there must be cognition in me 
on the grounds that I have a certain sort of input and output and a program in 
between, then it looks like all sorts of noncognitive subsystems are going to 
turn out to be cognitive. For example, there is a level of description at which 
my stomach does information processing, and it instantiates any number of 
computer programs, but I take it we do not want to say that it has any 
understanding (cf. Pylyshyn 1980). But if we accept the systems reply, then it 
is hard to see how we avoid saying that stomach, heart, liver, and so on are 
all understanding subsystems, since there is no principled way to distinguish 
the motivation for saying the Chinese subsystem understands from saying that 
the stomach understands. It is, by the way, not an answer to this point to say 
that the Chinese system has information as input and output and the stomach 
has food and food products as input and 
output, since from the point of view of the agent, from my point of view, there 
is no information in either the food or the Chinese—the Chinese is just so many 
meaningless squiggles. The information in the Chinese case is solely in the 
eyes of the programmers and the interpreters, and there is nothing to prevent 
them from treating the input and output of my digestive organs as information 
if they so desire.

This last point bears on some independent problems in strong AI, and it is 
worth digressing for a moment to explain it. If strong AI is to be a branch of 
psychology, then it must be able to distinguish those systems that are 
genuinely mental from those that are not. It must be able to distinguish the 
principles on which the mind works from those on which nonmental systems work; 
otherwise it will offer us no explanations of what is specifically mental about 
the mental. And the mental-nonmental distinction cannot be just in the eye of 
the beholder but it must be intrinsic to the systems; otherwise it would be up 
to any beholder to treat people as nonmental and, for example, hurricanes as 
mental if he likes. But quite often in the AI literature the distinction is 
blurred in ways that would in the long run prove disastrous to the claim that 
AI is a cognitive inquiry. McCarthy, for example, writes: "Machines as simple 
as thermostats can be said to have beliefs, and having beliefs seems to be a 
characteristic of most machines 
capable of problem solving performance" (McCarthy 1979). Anyone who thinks 
strong AI has a chance as a theory of the mind ought to ponder the implications 
of that remark. We are asked to accept it as a discovery of strong AI that the 
hunk of metal on the wall that we use to regulate the temperature has beliefs 
in exactly the same sense that we, our spouses, and our children have beliefs, 
and furthermore that "most" of the other machines in the room—telephone, tape 
recorder, adding machine, electric light switch—also have beliefs in this 
literal sense. It is not the aim of this article to argue against McCarthy's 
point, so I will simply assert the following without argument. The study of the 
mind starts with such facts as that humans have beliefs, while thermostats, 
telephones, and adding machines don't. If you get a theory that denies this 
point you have produced a counterexample to the theory and the theory is 
false. One gets the impression 
that people in AI who write this sort of thing think they can get away with it 
because they don't really take it seriously, and they don't think anyone else 
will either. I propose, for a moment at least, to take it seriously. Think hard 
for one minute about what would be necessary to establish that that hunk of 
metal on the wall over there had real beliefs, beliefs with direction of fit, 
propositional content, and conditions of satisfaction; beliefs that had the 
possibility of being strong beliefs or weak beliefs; nervous, anxious, or 
secure beliefs; dogmatic, rational, or superstitious beliefs; blind faiths or 
hesitant cogitations; any kind of beliefs. The thermostat is not a candidate. 
Neither is stomach, liver, adding machine, or telephone. However, since we are 
taking the idea seriously, notice that its truth would be fatal to strong AI's 
claim to be a science of the mind. For now the mind is everywhere. What we 
wanted to know is what distinguishes 
the mind from thermostats and livers. And if McCarthy were right, strong AI 
wouldn't have a hope of telling us that. 
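For what it's worth, the purely formal manipulation Searle describes is nothing more than rule-book lookup: shapes in, shapes out, with no meaning attached anywhere in the program. A minimal sketch (the symbols and rule table here are invented for illustration, not taken from the paper):

```python
# A hypothetical sketch of Searle's "squiggle squoggle" rule-following.
# The rule book pairs uninterpreted input shapes with output shapes;
# nothing in the program attaches meaning to any symbol it handles.

RULE_BOOK = {
    "squiggle squiggle": "squoggle squoggle",  # "squiggle squiggle" is followed by "squoggle squoggle"
}

def manipulate(symbols: str) -> str:
    """Return whatever output shapes the rule book dictates; empty if no rule applies."""
    return RULE_BOOK.get(symbols, "")

print(manipulate("squiggle squiggle"))  # prints "squoggle squoggle"
```

The point of the sketch is just that the program's input-output behavior can be correct by the lights of a native speaker while the system consults only the shapes of the symbols, which is exactly the gap the Chinese Room trades on.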

