[Wittrs] Re: Searle's CRA and its Implications

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Tue, 16 Mar 2010 14:05:35 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, Gordon Swobe <wittrsamr@...> wrote:

> What ever do you mean by "looking inside the CR, there is no consciousness 
> evident"?

Searle makes the point that 'nothing in the CR understands Chinese and the CR 
doesn't either'. Of course his proxy self (the man in the room) is conscious, but 
that consciousness is irrelevant to what the CR is, since the CR is intended to 
show how a mindless CPU does its work.

> When I look inside the CR, I see an intelligent, conscious and educated 
> Englishman. It just so happens that he has no understanding of Chinese and so 
> he cannot understand the meanings of the Chinese symbols.

Quite right, and it is the understanding of Chinese that serves as the proxy for 
consciousness in the CR; per Searle, nothing in the room understands Chinese.

> The man in the room has everything you want to ascribe to computers 
> (consciousness, education and intelligence -- whatever we may mean by those 
> words) and yet EVEN SO he cannot come to understand Chinese symbols from 
> manipulating them in the same way that computers do.

> It does not matter whether our man in the room represents the program (as in 
> the original CRA) or the room as a whole (as in Searle's reply to his 
> "systems" critics): the CRA illustrates Searle's third axiom that nobody and 
> nothing, no matter whether man or machine, no matter whether smart or dumb, 
> no matter whether conscious or unconscious, can get semantics from syntax.

That's the problem. Searle asserts that the CR shows this and it's apparent you 
agree. But I am arguing that it doesn't show anything of the kind unless you 
hold a particular idea of consciousness which is, finally, dualistic (something 
Searle explicitly rejects -- thus putting him in self-contradiction).

> Now you want to say the room is underspecked. Okay. Let us then consider the 
> room as a Cray computer. Explain why and how adding even more and even better 
> hardware would somehow disprove the fundamental axiom that 'syntax by itself 
> is neither constitutive of nor sufficient for semantics'.
> -gts

I can do that yet again, but at this point it's better to point to where this 
has been done before. Have you read the text I posted on this list, transcribed 
from Dennett's Consciousness Explained? He does a rather good job of it, on my 
view. If you haven't seen it (maybe it predates your joining us, though, if so, 
not by much), I'll hunt up the link and post it again. (By the way, the issue is 
not using a supercomputer; it's using a parallel processing system that runs 
MANY different processes performing many different functions in an interactive 
and simultaneous way.)
