[Wittrs] Re: Further Thoughts on Dennett, Searle and the Conundrum of Dualism

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Sun, 28 Mar 2010 00:58:09 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, Gordon Swobe <wittrsamr@...> wrote:
>
> "...in defending the Chinese room argument against Dennett, Searle bristles, 
> 'he misstates my position as being about consciousness rather than about 
> semantics' (p. 128), The Mystery of Consciousness."
>
> http://host.uniroma3.it/progetti/kant/field/chinesebiblio.html
>
> Exactly.
>

Here is a bit more text from the link you are quoting from (thanks for 
including it):

Referring to Searle's book The Mystery of Consciousness -

"This book is based on several consciousness-related-book reviews by Searle 
that were originally published in the New York Review of Books (1995-1997). 
Notably, it includes Daniel Dennett's reply to Searle's review of Consciousness 
Explained (and Searle's response) and David Chalmers' reply to Searle's review 
of The Conscious Mind (and Searle's response). Though in defending the Chinese 
room argument against Dennett, Searle bristles, "he misstates my position as 
being about consciousness rather than about semantics" (p. 128), The Mystery of 
Consciousness, ironically, features the Chinese room argument quite 
prominently; beginning, middle, and end."

In other words, he deploys his CRA in the context of critiquing others' positions on what consciousness is. So if the CRA isn't about consciousness, why bring it up? Hmmmm . . .

> You make the same mistake as Dennett, Stuart. The CRA shows that even if Zeus 
> came down from the clouds and handed us a conscious computer, that computer 
> still would not understand Chinese solely by virtue of running a formal 
> program.
>
>

Besides the fact that Searle deploys the CRA in making his case about 
consciousness in a book titled The Mystery of Consciousness, there is also this 
at the same link:

"Since the Chinese room argument is so 'simple and decisive' that Searle is 
"embarrassed to have to repeat it" (p. 11) - yet has so many critics - it must 
be we critics misunderstand: so Searle steadfastly maintains.  We think the 
argument is about consciousness somehow, or that it's 'trying to prove that 
"machines can't think" or even "computers can't think"' when, really, it's 
directed just at the 'Strong AI' thesis that 'the implemented program, by 
itself, is sufficient for having a mind' (p. 14). This 
oh-how-you-misunderstand-me plaint is familiar (cf. Searle 1984a, 1990a, 
1994) and fatuous. Searle takes it up again, in conclusion here, where he 
explains,

"'I do not offer a proof that computers are not conscious. Again, if by some 
miracle all Macintoshes suddenly became conscious, I could not disprove the 
possibility. Rather I offered a proof that computational operations by 
themselves, that is formal symbol manipulations by themselves, are not 
sufficient to guarantee the presence of consciousness. The proof was that the 
symbol manipulations are defined in abstract syntactical terms and syntax by 
itself has no mental content, conscious or otherwise. Furthermore, the abstract 
symbols have no causal powers to cause consciousness because they have no 
causal powers at all. All the causal powers are in the implementing medium. A 
particular medium in which a program is implemented, my brain for example, 
might independently have causal powers to cause consciousness. But the 
operation of the program has to be defined totally independently of the 
implementing medium since the definition of the program is purely formal and 
thus allows implementation in any medium whatever. Any system - from men 
sitting on high stools with green eyeshades, to vacuum tubes, to silicon chips 
- that is rich enough and stable enough to carry the program can be the 
implementing medium. All this was shown by the Chinese Room Argument. (pp. 
209-210)'

"Here it is all about consciousness, yet Searle bristled that Dennett 
'misstates my position as being about consciousness rather than about 
semantics' (p. 128).  Searle is right: I don't understand. Furthermore, if it 
all comes down to programs as abstract entities having no causal powers as such 
- no power in abstraction to cause consciousness or intentionality or anything 
- then The Chinese Room Argument is gratuitous. 'Strong AI,' thus construed, is 
straw AI: only implemented programs were ever candidate thinkers in the first 
place. It takes no fancy 'Gedankenexperiment' or 'derivation from axioms' to 
show this! Even the Law of Universal Gravitation is causally impotent in the 
abstract - it is only as instanced by the shoe and the earth that the shoe is 
caused to drop. Should we say, then, that the earth has the power to make the 
shoe drop independently of gravitation? Of course not. Neither does it follow 
from the causal powers of programs being powers of their implementing media 
(say brains) that these media (brains) have causal powers to cause 
consciousness 'independently' of computation. That brains 'might,' for all we 
know, produce consciousness by (as yet unknown) noncomputational means, I 
grant. Nothing in the Chinese room, however, makes the would-be-empirical 
hypothesis that they do any more probable (Hauser forthcoming)."

So, Gordon, as with Searle's claims that he is not a dualist, not even an 
implicit one, he is no more credible here in protesting that he is not 
arguing about whether computers can be conscious, when that is the only reason 
the CRA can have any significance at all (as Hauser rightly notes) and when 
even Searle deploys the CRA in his rebuttal of Dennett's claims in Dennett's 
book Consciousness Explained!

Searle has a high-profile reputation as a serious philosopher, not least 
because of the role his CRA has played in the field, but one has to say, more 
and more, that he looks like the emperor with no clothes when he has to stoop 
to such silly defenses. He should just recognize the flaws in the CRA and move 
on, but then that would call into question at least part of what his reputation 
is based on. What looked so compelling to him at first, and to many others 
(including me), can be shown to be rife with flaws, not least an equivocal 
usage of the phrase "does not constitute and is insufficient for" in the third 
premise and, of course, the implicitly dualist presumption that underlies the 
generalized claim the CRA aims to make.

This particular material at the site you linked us to suggests that Searle is 
fighting a rearguard, and mostly losing, action in defense of his CRA, except 
perhaps among those who just really want his conclusion to be true!

SWM
