[Wittrs] Re: Searle's CRA and its Implications - for Gordon Swobe

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Mon, 15 Mar 2010 03:58:27 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, Gordon Swobe <wittrsamr@...> wrote:

> --- On Sun, 3/14/10, SWM <wittrsamr@...> wrote:
>
> >> The CRA illustrates two concepts:
> >>
> >> 1) Software/hardware systems cannot by virtue of
> >> running formal programs obtain understanding of the meanings
> >> of the symbols they manipulate.
> >>
> >> and
>
> ...

> >
> > It does not illustrate the first. All it illustrates is
> > that such processes in isolation (as stand alone processes)
> > are not conscious and don't have understanding. . . .

<snip>

> The article I posted covers that objection -- the article from Scientific 
> American. Did you read it?
>
> -gts


Okay, I have read it. Which section do you think addresses my response above? I 
see he did cover the various replies, though he really only dealt with the 
Systems Reply, which would be the right one for addressing my comment here. And 
his response does hinge on internalizing the system and then proceeding to 
claim that he, Searle, the internalizer, still doesn't understand Chinese.

Well, of course not, as the system is now inside of him, so to speak (whereas 
before he was inside it). There is no reason to think that he would have to 
have access to it, any more than there is to think we must have access to 
everything going on in our minds (or brains, depending on how one wishes to put 
this).

The point I have made in these discussions is still not being dealt with, 
though: that understanding happens as a feature of a particular kind of system 
(one consisting of X processes doing Y things interactively).

That any individual process in the system has no understanding says nothing 
about what the system itself (or "larger" parts of it) is doing or is capable 
of doing.

I think the problem boils down to:

1) the conception of mind Searle holds, i.e., a conception that reflects the 
first-person ontology he talks about but which actually serves to mask a deeper 
commitment on his part to consciousness as being ontologically basic (the 
dualist presumption I have claimed lies embedded in his position); and

2) his failure to see (because of the conception of consciousness in #1 above) 
that the CR he has devised is underspecked, i.e., if understanding is a complex 
dynamic of representational relations extending through many layers of 
interlocked and interleaved pictures (which combine many different bits and 
sub-bits and megabits of information in linked connections) as I have 
suggested, then the CR would need to be running enough processes in tandem and 
interactively to replicate this same kind of event set.

Of course the CR isn't doing anything of the sort. It is a limited function 
"machine" engaged in rote translation and responding. But humans can do that 
without understanding, too, and everyone knows that the man in the room doesn't 
understand Chinese while he's following his instructions!
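
If it helps to put that contrast in rough code terms (this is only my own toy 
sketch, with invented symbol and rule names, and Python syntax chosen purely 
for convenience -- it is not anything Searle, Dennett, or the Scientific 
American article actually gives us), the CR as Searle specks it is doing no 
more than this:

# Toy sketch only: the CR as specked amounts to rule-following lookup,
# symbols in and symbols out, with nothing linking the symbols to anything
# beyond the rulebook itself. All names here are made up for illustration.

RULEBOOK = {
    "squiggle-1": "squoggle-7",
    "squiggle-2": "squoggle-3",
}

def chinese_room(symbol):
    # Match the incoming symbol against the rulebook and hand back the answer.
    # Nothing here ties "squiggle-1" to horses, meanings, or anything else.
    return RULEBOOK.get(symbol, "squoggle-0")  # canned reply if no rule applies

print(chinese_room("squiggle-1"))  # "squoggle-7", produced without understanding

That a lookup like this can produce the "right" outputs tells us nothing about 
understanding, which is just the point about rote translation above.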

As you ask, what is it that he's doing when he understands something? Well, 
Searle doesn't say. He just stipulates that the man understands how to follow 
his instructions (presumably in English) and doesn't understand what the 
symbols he's matching up mean.

Dennett (and others, like the Churchlands) does offer an account of what it 
means for the man to understand Chinese, though. And that account describes 
things that are well beyond the capacity of the CR, as Searle has specked it, 
to perform.

Why should that matter? Because, if understanding answers to the Dennettian 
account, then there is NO reason the CR cannot be more robustly specked to do 
the things the man is doing when he understands his instructions in his own 
language, or that he would be doing IF we were to say he understands Chinese.

Searle says it's about what syntax can be expected to accomplish. But the CR 
doesn't show that syntax cannot accomplish understanding at all. It only shows 
that the CR cannot.

Note the cartoon of the man staring at the picture of the Chinese word on the 
wall and thinking about a picture of a horse. He is doing that while the CR (we 
are expected to understand) is not, and so the cartoon tells us that the man 
understands what the CR doesn't.

But that doesn't mean no CR could do that. What if a more robustly specked CR 
were running enough of the programs needed to produce representations and 
associations of (linkages to and with other) representations through a 
sufficiently dense array of representational mappings, the way our brains 
apparently do? The way the man who visualizes a horse image does when he sees a 
certain Chinese symbol? Why shouldn't what the man does be just like what the 
CR can do, even if, in the case Searle gives us, it isn't doing it?
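
And, again only as a toy sketch with names of my own invention (LINKS, 
activate) rather than a model of the brain or of Dennett's account, here is the 
bare shape of what "a sufficiently dense array of representational mappings" 
might look like -- the linkages being exactly what the rote lookup above lacks:

# Toy sketch: each symbol sits in a web of other representations, and taking
# in a symbol spreads activation through that web. The entries are invented.

LINKS = {
    "horse-symbol": ["horse-image", "four-legs", "rider", "gallop"],
    "horse-image":  ["animal", "brown", "open-field"],
}

def activate(symbol, depth=2, seen=None):
    # Follow linkages out from a symbol, the way the man's horse image calls
    # up further associations; the density of such interlocked mappings is
    # what is doing the work on the account I have been suggesting.
    seen = set() if seen is None else seen
    if depth == 0 or symbol in seen:
        return seen
    seen.add(symbol)
    for linked in LINKS.get(symbol, []):
        activate(linked, depth - 1, seen)
    return seen

print(activate("horse-symbol"))  # the symbol plus what it is linked to

Scale that web up far enough, and run enough such processes interactively, and 
you have the kind of system I have been describing -- which the CR, as Searle 
specks it, is not.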

SWM

