[Wittrs] Re: What the Man in the Room Knows (and when does he know it?)

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Tue, 23 Mar 2010 17:36:10 -0000

Thanks for posting the part you think relevant. Makes things much easier 
(though, of course, it isn't new -- I have read it many times before)!

--- In Wittrs@xxxxxxxxxxxxxxx, Gordon Swobe <wittrsamr@...> wrote:
>
> > Where in anything Searle writes do you see him saying the guy is the system.
>
> Read the paper, Stuart! I gave you the link.
>

> Better yet, I'll post the relevant section. Hold on while I cut and paste it. 
> Okay thanks for waiting.
>

>
> 1. The Systems Reply (Berkeley). "While it is true that the individual person 
> who is locked in the room does not understand the story, the fact is that he 
> is merely part of a whole system, and the system does understand the story. 
> The person has a large ledger in front of him in which are written the rules, 
> he has a lot of scratch paper and pencils for doing calculations, he has 
> "data banks" of sets of Chinese symbols. Now, understanding is not being 
> ascribed to the mere individual; rather it is being ascribed to this whole 
> system of which he is a part."
>

> My response to the systems theory is quite simple: Let the individual 
> internalize all of these elements of the system. He memorizes the rules in 
> the ledger and the data banks of Chinese symbols, and he does all the 
> calculations in his head. The individual then incorporates the entire system. 
> There isn't anything at all to the system that he does not encompass. We 
> can even get rid of the room and suppose he works outdoors. All the same, he 
> understands nothing of the Chinese, and a fortiori neither does the system, 
> because there isn't anything in the system that isn't in him. If he 
> doesn't understand, then there is no way the system could understand 
> because the system is just a part of him.
>


The answer to this, Gordon, is fairly simple (and I have said it before): the 
system in question doesn't understand because it isn't specced to understand; 
it's specced to do a rote operation.

There are two issues here -

1) The phenomenon of understanding can be explained as a system property 
(rather than a process property); and

2) The specific system implemented in the CR is inadequate to achieve 
understanding because it only performs a relatively limited function.

Once these two things are separated, you can readily see why Searle's response 
to the System Reply misses the boat. Unfortunately, many versions of the System 
Reply are inadequately explicated, focusing on #1 while glossing over #2. But 
Dennett actually covers the second item as well in the critique transcribed 
from his book Consciousness Explained.


> Actually I feel somewhat embarrassed to give even this answer to the systems 
> theory because the theory seems to me so implausible to start with. The idea 
> is that while a person doesn't understand Chinese, somehow the conjunction 
> of that person and bits of paper might understand Chinese. It is not easy for 
> me to imagine how someone who was not in the grip of an ideology would find 
> the idea at all plausible. Still, I think many people who are committed to 
> the ideology of strong AI will in the end be inclined to say something very 
> much like this; so let us pursue it a bit further. According to one version 
> of this view, while the man in the internalized systems example doesn't 
> understand Chinese in the sense that a native Chinese speaker does (because, 
> for example, he doesn't know that the story refers to restaurants and 
> hamburgers, etc.), still "the man as a formal symbol manipulation system" 
> really does understand Chinese. The subsystem of the man that is the formal 
> symbol manipulation system for Chinese should not be confused with the 
> subsystem for English.
>

[Sean's rule of 25 lines may kick in here so I will arbitrarily break up 
Searle's comments below but without impacting their meaning. -- SWM]

> So there are really two subsystems in the man; one understands English, the 
> other Chinese, and "it's just that the two systems have little to do with 
> each other." But, I want to reply, not only do they have little to do with 
> each other, they are not even remotely alike. The subsystem that understands 
> English (assuming we allow ourselves to talk in this jargon of "subsystems" 
> for a moment) knows that the stories are about restaurants and eating 
> hamburgers, he knows that he is being asked questions about restaurants and 
> that he is answering questions as best he can by making various inferences 
> from the content of the story, and so on.

> But the Chinese system knows none of this. Whereas the English subsystem knows 
> that "hamburgers" refers to hamburgers, the Chinese subsystem knows only that 
> "squiggle squiggle" is followed by "squoggle squoggle." All he knows is that 
> various formal symbols are being introduced at one end and manipulated 
> according to rules written in English, and other symbols are going out at the 
> other end.

> The whole point of the original example was to argue that such symbol 
> manipulation by itself couldn't be sufficient for understanding Chinese in 
> any literal sense because the man could write "squoggle squoggle" after 
> "squiggle squiggle" without understanding anything in Chinese. And it doesn't 
> meet that argument to postulate subsystems within the man, because the 
> subsystems are no better off than the man was in the first place; they still 
> don't have anything even remotely like what the English-speaking man (or 
> subsystem) has. Indeed, in the case as described, the Chinese subsystem is 
> simply a part of the English subsystem, a part that engages in meaningless 
> symbol manipulation according to rules in English.
>

Note his point about the limitations of the Chinese sub-system! He says "the 
Chinese subsystem knows only that 'squiggle squiggle' is followed by 'squoggle 
squoggle'."

Of course, that system doesn't even "know" that, if "know" means what it means 
when we use the term for humans and other animals which have some capacity for 
understanding and recognition. But let's grant that the usage doesn't already 
give the store away! THE POINT STILL REMAINS THAT THE CHINESE SUB-SYSTEM 
DOESN'T UNDERSTAND BECAUSE IT LACKS THE CONNECTIONS THAT, HE TELLS US, THE MAN 
(OR ANY HUMAN CHINESE SPEAKER) HAS. It doesn't connect symbols to mental images 
and ideas which have further connections to a whole host of others in a layered 
network of representations of the world.

This goes right back to the second point above: that the system under 
consideration lacks understanding because it's only specced to operate in a 
more limited way, a rote way! On the other hand, suppose it were feasible to 
spec a fully comprehending system within an existing comprehending system. In 
that case the system WOULD understand Chinese, whether or not the man 
implementing it recognized that.

Of course this gets pretty ridiculous in terms of anything but a thought 
experiment, but it is at least in principle possible to imagine a machine with 
multiple minds specced into it, all operating distinctly. Indeed, humans 
sometimes have multiple personalities with different ideas, different beliefs, 
different competencies and different histories, so it's at least possible to 
imagine more than one fully specced mind system on a single platform.


> Let us ask ourselves what is supposed to motivate the systems reply in the 
> first place; that is, what independent grounds are there supposed to be for 
> saying that the agent must have a subsystem within him that literally 
> understands stories in Chinese? As far as I can tell the only grounds are 
> that in the example I have the same input and output as native Chinese 
> speakers and a program that goes from one to the other. But the whole point 
> of the examples has been to try to show that that couldn't be sufficient for 
> understanding, in the sense in which I understand stories in English, because 
> a person, and hence the set of systems that go to make up a person, could 
> have the right combination of input, output, and program and still not 
> understand anything in the relevant literal sense in which I
> understand English.


That's right, but only because understanding is a more complex affair than 
what the CR is specced to do. Understanding is NOT merely receiving inputs and 
matching them to outputs.
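
Just to make that concrete, here is a toy sketch of my own in Python (my 
choice of illustration, nothing in Searle's paper or in the replies; the 
"rulebook" entries are invented stand-ins for whatever the ledger contains). 
This is the whole of what a purely rote, input-to-output system amounts to:

    # A made-up rulebook pairing incoming symbol strings with outgoing ones.
    # The entries are placeholders, not real Chinese.
    RULEBOOK = {
        "squiggle squiggle": "squoggle squoggle",
        "squoggle squiggle": "squiggle squiggle",
    }

    def chinese_room(symbols):
        # Return whatever the rulebook pairs with the input; nothing more.
        return RULEBOOK.get(symbols, "")

    print(chinese_room("squiggle squiggle"))   # prints: squoggle squoggle

Nothing in it connects a symbol to restaurants or hamburgers or anything else 
in the world; the mapping just is the system, which is exactly why it isn't 
specced to understand.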


> The only motivation for saying there must be a subsystem in me that 
> understands Chinese is that I have a program and I can pass the Turing test;


>  I can fool native Chinese speakers. But precisely one of the points at issue 
> is the adequacy of the Turing test. The example shows that there could be two 
> "systems," both of which pass the Turing test, but only one of which 
> understands; and it is no argument against this point to say that since they 
> both pass the Turing test they must both understand, since this claim fails 
> to meet the argument that the system in me that understands English has a 
> great deal more than the system that merely processes Chinese. In short, the 
> systems reply simply begs the question by insisting without argument that the 
> system must understand Chinese.
>

Yes, of course he's right that the Turing Test is inadequate, but that isn't 
the issue. The issue is whether consciousness is replicable by computational 
processes running on computers. If you spec a system whose processes do all the 
things brains do (receiving, sorting, arranging, breaking down, combining, 
relating, storing and retrieving inputs in order to build up layered pictures 
of the world, including both exterior and interior events, and establishing 
some of these as a reviewing self that uses other elements in the mix), then 
you could, at least in theory, achieve what the CR cannot achieve, i.e., 
understanding.
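
By way of contrast with the lookup sketch above, and again purely as a toy 
sketch of my own (not a claim about how brains or any actual AI program work; 
the names "take_in" and "connections" are invented for the example), here is 
the minimal shape of a system that does more than match inputs to outputs. It 
files each input into a growing, layered web of connections that later 
processing can draw on:

    from collections import defaultdict

    class LayeredModel:
        def __init__(self):
            self.inputs = []                      # everything received, stored
            self.connections = defaultdict(set)   # each item linked to related items

        def take_in(self, item, related_to=()):
            # Receive an input, store it, and relate it to what is already held.
            self.inputs.append(item)
            for other in related_to:
                self.connections[item].add(other)
                self.connections[other].add(item)

        def web_around(self, item):
            # Retrieve the connections built up around an item.
            return self.connections[item]

    model = LayeredModel()
    model.take_in("hamburger", related_to=("restaurant", "eating"))
    print(model.web_around("hamburger"))   # e.g. {'restaurant', 'eating'}

Whether piling up such connections would ever amount to understanding is of 
course the very point in dispute; the sketch only marks the minimal difference 
between matching inputs to outputs and building a layered picture of the world.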

Most examples of the System Reply I have seen gloss over this fact and focus on 
the system in principle. But what is left unsaid is that it must be the right 
kind of system. Dennett deserves credit for not making this mistake in 
Consciousness Explained.


> Furthermore, the systems reply would appear to lead to consequences that are 
> independently absurd. If we are to conclude that there must be cognition in 
> me on the grounds that I have a certain sort of input and output and a 
> program in between, then it looks like all sorts of noncognitive subsystems 
> are going to turn out to be cognitive. For example, there is a level of 
> description at which my stomach does information processing, and it 
> instantiates any number of computer programs, but I take it we do not want to 
> say that it has any
> understanding (cf. Pylyshyn 1980).


Of course not, but then a stomach system isn't specced to understand.



> But if we accept the systems reply, then it is hard to see how we avoid 
> saying that stomach, heart, liver, and so on are all understanding 
> subsystems, since there is no principled way to distinguish the motivation for 
> saying the Chinese subsystem
> understands from saying that the stomach understands.


This just strikes me as an absurd riff on his reply since he is mixing up 
different kinds of systems here. A system does not understand by dint of being 
a system. It understands by dint of being a system that does the things 
required for understanding. Stomach systems don't do that!



> It is, by the way, not an answer to this point to say that the Chinese system 
> has information
>  as input and output and the stomach has food and food products as input and 
> output, since from the point of view of the agent, from my point of view, 
> there is no information in either the food or the Chinese -- the Chinese is 
> just so many meaningless squiggles. The information in the Chinese case is 
> solely in the eyes of the programmers and the interpreters, and there is 
> nothing to prevent them from treating the input and output of my digestive 
> organs as information if they so desire.
>

But stomachs don't understand; they digest! Why would we expect a stomach to 
understand any more than a kidney or a CR if they all lack the functionalities 
associated with understanding?


> This last point bears on some independent problems in strong AI, and it is 
> worth digressing for a moment to explain it. If strong AI is to be a branch 
> of psychology, then it must be able to distinguish those systems that are 
> genuinely mental from those that are not. It must be able to distinguish the 
> principles on which the mind works
> from those on which nonmental systems work;


So must critics of "strong AI", but Searle has just given evidence that he 
doesn't make that distinction when he confuses different kinds of systems.


> otherwise it will offer us no explanations of what is specifically mental 
> about the mental. And the mental-nonmental distinction cannot be just in the 
> eye of the beholder but it must be intrinsic to the systems; otherwise it 
> would be up to any beholder to treat people as nonmental and, for example, 
> hurricanes as mental if he likes. But quite often in the AI literature the 
> distinction is blurred in ways that would in the long run prove disastrous to 
> the claim that AI is a cognitive inquiry. McCarthy, for example, writes. 
> "Machines as simple as thermostats can be said to have
>  beliefs, and having beliefs seems to be a characteristic of most
> machines capable of problem solving performance" (McCarthy 1979).


Yes, there is such "blurring", but that's because, on the approach Searle is 
criticizing, mind is seen to be not any single thing but a range of features 
that occur in degrees along a continuum. Thus it is conceivable to speak of 
thermostats having beliefs, though such beliefs cannot be like ours. They may 
instead be just a more primitive version of what it means for a human to have 
a belief, since the view that consciousness is a system property is grounded 
in the idea that complexity matters.
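
And if one wanted to press that continuum picture, the thermostat end of it is 
nearly nothing at all. A sketch of my own (not McCarthy's, and not something I 
would call a belief in any interesting sense):

    class Thermostat:
        def __init__(self, setpoint):
            self.setpoint = setpoint

        def too_cold(self, reading):
            # The entire "belief state": one comparison tied to one action.
            return reading < self.setpoint

    print(Thermostat(20).too_cold(18))   # True, so the furnace switches on

One bit tied to one action, versus the layered, cross-connected states 
sketched earlier; that difference in degree is what the continuum view trades 
on.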


> Anyone who thinks strong AI has a chance as a theory of the mind ought to 
> ponder the implications of that remark. We are asked to accept it as a 
> discovery of strong AI that the hunk of metal on the wall that we use to 
> regulate the temperature has beliefs in exactly
> the same sense that we, our spouses, and our children have beliefs,


I can't speak for what McCarthy meant, but I can certainly see a way of 
interpreting the remark Searle quotes that doesn't imply that at all. Given 
Searle's own errors in understanding others' positions, however, I am loath to 
accept his characterization of McCarthy's statement above.


> and furthermore that "most" of the other machines in the room -- telephone, 
> tape recorder, adding machine, electric light switch -- also have beliefs in 
> this literal sense. It is not the aim of this article to argue against 
> McCarthy's point, so I will simply assert the following without argument. The 
> study of the mind starts with such facts as that humans have beliefs, while 
> thermostats,
> telephones, and adding machines don't.


In one sense yes, in another no. Searle's point is overly simplistic here but 
it does reflect his apparent conviction that mind isn't reducible to underlying 
constituents that are, themselves, not mind-like.


> If you get a theory that denies this point you have produced a
>  counterexample to the theory and the theory is false. One gets the 
> impression that people in AI who write this sort of thing think they can get 
> away with it because they don't really take it seriously, and they don't 
> think anyone else will either. I propose, for a moment at least, to take it 
> seriously. Think hard for one minute about what would be necessary to 
> establish that that hunk of metal on the wall over there had real beliefs, 
> beliefs with direction of fit,
> propositional content, and conditions of satisfaction;


But there is no evidence from that quote alone that McCarthy says anything like 
this! If beliefs are part of what we mean by "consciousness" (as I think we can 
agree they are), then there will be levels of belief observable on a continuum 
of entities. My cat, now departed I'm sorry to say, had beliefs, but they were 
nothing like mine. She had no beliefs about the wider world or abstractions 
such as numerical quantities or equations or word meanings, though she 
certainly believed I meant her no harm and that the dog down the block did. 
Lower animals have even less complex beliefs than my cat had, and so forth. On 
such a view it is not necessarily wrong to speak of the way in which a 
thermostat operates as a form of belief (though, frankly, I wouldn't bother to 
say any such thing).


> beliefs that had the possibility of being strong beliefs or weak beliefs; 
> nervous, anxious, or secure beliefs; dogmatic, rational, or superstitious 
> beliefs; blind faiths or hesitant cogitations; any kind of beliefs. The 
> thermostat is not a candidate. Neither is stomach, liver, adding machine, or 
> telephone. However, since we are taking the idea seriously, notice that its 
> truth would be fatal to strong AI's
> claim to be a science of the mind.


No, it wouldn't! Not if consciousness is a system property -- a claim that is, 
in any case, crucial to the "strong AI" thesis.


>  For now the mind is everywhere. What we wanted to know is what distinguishes 
> the mind from thermostats and livers. And if McCarthy were right, strong AI 
> wouldn't have a hope of telling us that.
>
> -gts
>

I presume this has all been Searle speaking and not you, right Gordon? Because 
if not, it would explain some of the rather egregious mistakes in some of the 
text above!

Look, getting back to Searle's response to the System Reply, the answer to this 
is pretty clear! It's that the consciousness of the system depends on the kind 
of system it is, the things it is specced to do. And that is why it could make 
sense to speak of understanding and beliefs as being evidenced by less 
sophisticated, less complex systems, i.e., we are talking about more primitive 
or less developed versions of the systems and sub-systems in question.

Back to the main issue: The man in the room is not the system, nor is the man 
in whom the room is instantiated (assuming that were even possible). The system 
is the range of functions being performed in a certain way. The hardware is 
only relevant insofar as it is capable of running the system(s) in question.

SWM

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/
