[Wittrs] Re: Further Thoughts on Dennett, Searle and the Conundrum of Dualism

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Mon, 29 Mar 2010 21:10:54 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, Gordon Swobe <wittrsamr@...> wrote:

> The idea of "rote responses" comes from your imagination.

You mean the CR is doing more than a "rote response"? What do you think it is 
doing then if it doesn't understand the information it is processing? What do 
you think the man qua CPU is doing when he matches one squiggle to a squoggle 
according to a set of rules, a task any computer could perform without knowing 
it is doing anything at all???
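That purely rote matching can be sketched in a few lines; the symbols and the rule book below are hypothetical placeholders, not anything from Searle's original paper:

```python
# A minimal sketch of the Chinese Room's rule-following: the operator
# matches each incoming symbol ("squiggle") to an outgoing symbol
# ("squoggle") via a rule book, understanding neither one.
# The symbols and rules here are invented placeholders.

RULE_BOOK = {
    "squiggle-1": "squoggle-A",
    "squiggle-2": "squoggle-B",
    "squiggle-3": "squoggle-C",
}

def chinese_room(symbol: str) -> str:
    """Return the rule book's output for an input symbol.

    No step here inspects the symbol's meaning; it is pure lookup,
    which is exactly the point of the thought experiment.
    """
    return RULE_BOOK[symbol]

print(chinese_room("squiggle-2"))  # squoggle-B
```

Nothing in the lookup depends on what the symbols mean, which is why the man can execute it perfectly while understanding none of it.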

> The man has *full cognitive capacity* while he implements the syntactic 
> program(s). He uses his full capacities in an attempt to understand the 
> symbols both in English and in Chinese.

His cognitive capacity is irrelevant except insofar as it allows him to act 
like a mindless machine.

> He succeeds in English, (proving beyond any doubt that he does not exist 
> merely as a cog in the machinery implementing rote responses, as in your 
> bogus theory).

No one denies he succeeds in following his English instructions, though they 
needn't be in English, nor need he follow them in English. He could be an Urdu 
man following Urdu instructions, after all!

But the issue is that he doesn't understand the Chinese symbols he is 
responding to, even if it looks to an outside observer as if he does (the 
Turing Test). The rules he is following are designed to make it look like he 
understands, just as a cleverly programmed computer might appear to understand.

The computer needs no comprehension of the symbols to do what it does, and 
neither does the man. The man is thus a proxy for the machine. The fact that 
there is a contrast between what he can understand and what he can't is the 
point. What he can't understand involves the rote responding, the mechanical 
following of rules, etc.

Of course THAT is NOT what we think understanding consists of, and the 
conclusions we are asked to draw from the CRA hinge on that fact.

> But he fails to understand the symbols in Chinese.

Right. He is missing whatever it is that constitutes understanding. Now what do 
you think that might be? On this latter question pivots all the rest!

> Conclusion: If one wants to understand Chinese, one must do something besides 
> manipulate Chinese symbols according to rules of syntax. In other words, 
> syntax by itself is neither constitutive of nor sufficient for semantics. 
> A3=true. End of thought experiment.
> -gts

Another foot stamp, I see. Well, now we know and can agree that something more 
than mechanical symbol manipulation (which you decline to allow to be called 
"rote responding" -- so another term will just have to do, I suppose) is 
required for understanding.

You want to say that, since all the processes of the CR can ONLY be such 
mechanical symbol manipulation, your case is proved.

But that is the mistake, because here you have rejected the description of what 
the man-CPU does in the CR as "rote responding" in favor of a description that 
actually does double duty. That is, the "mechanical symbol manipulation" that 
involves matching an incoming squiggle to an outgoing squoggle describes both 
the basic CR capacity AND its functional performance. In other words, the same 
term (or whatever equivalent you finally settle on) is being deployed to 
describe TWO levels of CR activity.

As is well known, computers can do many very complicated things using their 
basic function which is often described as "mechanical symbol manipulation" 
and, while all those things are done via such manipulation, what they 
accomplish is not merely such manipulation. Computers can do more than match 
inputs to outputs via a look-up function as the CR, in its minimalist way, does.
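The contrast can be made concrete: the very same table-lookup primitive, once composed with state and iteration, computes something a single input/output match never could. The encoding and rule table below are my own illustration, not anything from the CR literature:

```python
# A sketch of how mechanical lookup, composed with state and iteration,
# yields more than single input/output matching: binary increment
# implemented purely as table-driven symbol rewriting.
# (Illustrative only; the table and encoding are invented.)

# Rewrite rules: (carry, bit) -> (new_bit, new_carry)
RULES = {
    (1, "0"): ("1", 0),
    (1, "1"): ("0", 1),
    (0, "0"): ("0", 0),
    (0, "1"): ("1", 0),
}

def increment(bits: str) -> str:
    """Add one to a binary string, scanning right to left and
    applying only table lookups at each step."""
    out, carry = [], 1
    for bit in reversed(bits):
        new_bit, carry = RULES[(carry, bit)]
        out.append(new_bit)
    if carry:
        out.append("1")
    return "".join(reversed(out))

print(increment("1011"))  # 1100  (11 + 1 = 12)
```

Each step is as "mindless" as the CR's matching, yet the composed system performs arithmetic; that is the sense in which what a computer accomplishes can exceed the bare primitive it is built on.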

Using such a basic mechanism to run complex algorithms enables them to produce 
images on screens, to read and offer complex behavioral responses to changing 
inputs (including starting and stopping other mechanisms), to convert sounds to 
images and images to sounds, to produce CGI effects, to calculate complex 
equations, to record and replay music, to construct models of real and 
hypothetical phenomena, etc., etc.

Now the point of the CR and the CRA is to show that the responding process 
described as the function of the CR is NOT what we mean by "understanding" 
(responding intelligently). Understanding is clearly a much more complex 
process than what the CR does, just as the production of the 3-D animation for 
the film Avatar is a more complex process than what the CR does. The point of 
the CR's critics is to note that understanding, as we find it in human minds 
(consciousnesses), is more complex than the rote responding of the CR, even if 
all the different things computers can do are built on the same kind of 
mechanical symbol manipulation that is so obviously manifest in the outputs of 
the CR.

Simply put, the computational function can be deployed to do vastly more 
complex operations than what the CR does, and consciousness is vastly more 
complex than what the CR does. Therefore it's absurd to presume that, merely 
because computational processing is a more limited function than understanding, 
computational processing cannot be deployed in a way that achieves 
understanding.
Thus the CRA, which claims that the limited nature of the underlying functions 
of the CR precludes the CR from doing something much more complicated, is wrong.

But, again, all of this depends on realizing that understanding qua 
consciousness may be describable as a system property rather than a bottom-line 
(irreducible) property of some processes but not others. This, however, is 
something you have yet to see and, if past results are indicative of future 
returns, you will not see it going forward.

So where to from here? Shall we just continue to argue an issue which, 
apparently, is more a matter of faith than reason for you?

