[Wittrs] Dennett on the Implicit Dualism of Searle's CRA (for Joe)

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Mon, 01 Feb 2010 03:54:43 -0000

Okay Joe, I had some time this evening to offer some support for my point 
that Dennett beat me to the punch in accusing Searle of dualism of the 
Cartesian variety.

I will provide some excerpts below in which he makes the Cartesian 
connection explicit. Note that in the following he is talking about Searle's 
CRA.

Starting on page 436:

". . . and while philosophers and others have always found flaws in his thought 
experiment [the CR] when it is considered as a logical argument, it is 
undeniable that its 'conclusion' continues to seem 'obvious' to many people. 
Why? Because people don't actually imagine the case in the detail that it 
requires."

[Here follow two pages of example and explication, using a hypothetical 
conversation between a computer and a person about a joke. Then, picking up 
on page 438, he goes on.]

"The fact is that any program that could actually hold up its end in the 
conversation depicted would have to be an extraordinarily supple, 
sophisticated, and multilayered system, brimming with 'world knowledge' and 
meta-knowledge and meta-meta-knowledge about its own responses, the likely 
responses of its interlocutor, and much, much more. Searle does not deny that 
programs can have all this structure, of course. He simply discourages us from 
attending to it. But if we are to do a good job of imagining the case, we are 
not only entitled but obliged to imagine that the program Searle is 
hand-simulating has all this structure -- and more, if only we can imagine it. 
But then it is no longer obvious . . . that there is no genuine understanding . 
. . Maybe billions of actions of all those highly structured parts produce 
genuine understanding in the system after all. If your response to this 
hypothesis is that you haven't the faintest idea whether there would be genuine 
understanding in such a complex system, that is already enough to show that 
Searle's thought experiment depends, illicitly, on your imagining too simple a 
case, an irrelevant case, and drawing the 'obvious' conclusion from it.

"Here is how the misdirection occurs. We see clearly enough that if there were 
understanding in such a giant system, it would not be Searle's understanding 
(since he is just a cog in the machinery, oblivious to the context of what he is 
doing). We also see clearly that there is nothing remotely like genuine 
understanding in any hunk of programming small enough to imagine readily -- 
whatever it is, it's just a mindless routine for transforming symbol strings 
into other symbol strings according to some mechanical or syntactical recipe. 
Then comes the suppressed premise: Surely more of the same, no matter how much 
more, would never add up to genuine understanding. But why should anyone think 
this is true? Cartesian dualists would think so, because they think that even 
human brains are unable to accomplish understanding all by themselves . . ."

[Recall my point that Searle's CRA hinges on an implicit form of substance 
dualism.]

Page 439:

"The argument that begins 'this little bit of brain activity doesn't understand 
Chinese, and neither does this bigger bit of which it is a part . . .' is 
headed for the unwanted conclusion that even the activity of the whole brain is 
insufficient to account for understanding Chinese. . . It is hard to imagine 
how 'just more of the same' could add up to understanding, but we have very 
good reason to believe that it does, so in this case we should try harder, not 
give up."

"Searle, laboring in the Chinese Room, does not understand Chinese, but he is 
not alone in the room. There is also the system, the CR, and it is to that self 
that we should attribute any understanding . . ."

"This reply to Searle's example is what he calls the Systems Reply. It has been 
the standard reply of people in AI from the earliest outings . . . but it is 
seldom understood by people outside AI. Why not? Probably because they haven't 
learned how to imagine such a system. They just can't imagine how understanding 
could be a property that emerges from lots of distributed quasi-understanding 
in a large system. . . ."


[Recall my point that this is about how consciousness can be conceived, how we 
can imagine it! I have been stressing that the inability to imagine it in the 
way Dennett proposes, or the unwillingness to do so, hangs on an implicit 
presumption that consciousness, or, in this case, understanding, cannot be 
reduced to more basic constituents that are not themselves instances of 
understanding. Searle's argument hinges on precisely this insistence: because 
there is no understanding to be found in the Chinese Room, no understanding is 
possible. Dennett notes that Searle basically underspecifies the CR, just as I 
have said, which is why the "Bicycle Reply" -- a tip of the hat again to Peter 
Brawley for this name -- is the right one: just as you can't build a bicycle 
and expect it to fly, you can't build a rote-responding device and expect it 
to be conscious.]

Dennett again:

". . . Searle begs the question. He invites us to imagine that the giant 
program consists of some simple table-lookup architecture that directly 
matches Chinese character strings to others, as if such a program could stand 
in, fairly, for any program at all. We have no business imagining such a simple 
program and assuming that it is the program Searle is simulating, since no such 
program could produce the sorts of results that would pass the Turing test, as 
advertised."

"Complexity does matter. . . ."

SWM
