[Wittrs] Re: Further Thoughts on Dennett, Searle and the Conundrum of Dualism

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Thu, 01 Apr 2010 14:11:59 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, Gordon Swobe <wittrsamr@...> wrote:

> >> If the man truly has no bigger role in the experiment
> >> than you give him credit for; that is, if he plays only the
> >> role of a cog in the machinery implementing so-called rote
> >> processes, then he would fail to understand the English
> >> version of the stories.
> > > -gts

> > He isn't doing the same thing in both cases. In one he is
> > reading the material while in the other he is following
> > rules for matching symbols. The point is to ask what it
> > means to read and understand vs. reading for the purpose of
> > symbol matching.

> The man tries to understand the symbols in both cases, as evidenced by the 
> fact that he succeeds in the English case. And he has exactly the same sort 
> of resources available to him in both cases, including his intelligence 
> from hundreds of billions of neurons.

His role in the CR is to perform the processes that enable responses that look 
like there is understanding behind the activities of the CR, even if there is none.

That the man doesn't understand the meanings of the symbols is the point of the 
demonstration, i.e., that a UTM can, in principle, appear to have understanding 
when nothing of the kind is present, and this is supposed to lead us to the 
conclusion that nothing of the kind (understanding) COULD be present in the CR.

(I have argued elsewhere why it doesn't, but that's beyond the immediate point 
here, which is just that the man's role in the CR is to play a CPU. So I will 
merely allude to this larger point to keep the overall issue in focus.)

> But according to your argument the man does not actually attempt to use his 
> full cognitive capacities to try to understand the symbols in the Chinese 
> case. According to you, he exists only as you say as a "cog in the machinery 
> implementing rote processes" and not as a full-fledged cognitive system whose 
> job it is to try to understand the symbols.

Yes, the point of the argument is that the CPU cannot, in principle, understand 
because the man, in performing what a CPU does, doesn't understand. It's 
irrelevant whether he is trying to understand or not. The point is we would not 
count a symbol matching operation as an example of understanding the meanings 
represented by the symbols.

The man's understanding, IN THIS CONTEXT, is irrelevant to whether the CR (a 
system of certain operations being performed by the CR's constituents, which 
include the man qua CPU) understands.

> In that way you get the CRA wrong. You refute a strawman.
> -gts

You misunderstand the CR and the CRA that Searle develops from it. It is NOT 
about the man; it IS about the system the CR represents (which includes the man).

If it were only about the man, you wouldn't need the CR qua system in the 
picture at all, and you wouldn't have Searle arguing from this about what 
computational process-based systems (computers) can or cannot actually do.

