[Wittrs] Re: Searle's CRA and its Implications

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Fri, 12 Mar 2010 19:21:02 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, "iro3isdx" <xznwrjnk-evca@...> wrote:

<snip>

>
> > 1) Minds (consciousness) have (has) semantics.
>
> > 2) Computers consist exclusively of syntax.
>
> > 3) Syntax does not constitute and is not sufficient for semantics.
>
> > 4) Therefore computers cannot have minds (be conscious)
>
> Okay, that's a pretty good summary of Searle's expressed view.
>

>
> > My response to the CRA is that, while #1 and #2 are unassailable if
> > we agree to accept the meanings of the terms as Searle presents them
> > (which we can for argument's sake), he is certainly mistaken on #3
> > because the CR does not demonstrate that as he claims.
>

> Wait just a moment.  You cannot do that.
>
> That Searle's Chinese Room argument fails to prove #3 only shows that
> #3 is unproved.  It does not show that #3 is "certainly mistaken".
>


Searle's CRA is not intended to prove #3, which is, in fact, one of its 
premises. It is intended to use those premises to reach a certain conclusion, 
namely that what the constituent processes cannot do in the configuration known 
as the CR, they cannot do in any other configuration consisting of nothing but 
the same processes, either.

Premise #3 is, of course, the tricky one. It asserts (in, as Searle 
has put it, a self-evident manner) that the constituents in the CR are not 
conscious and cannot conceivably be conscious.

But that is the mistake (or the really important one, anyway), because, as 
Dennett notices, Searle aims to get us to assume that whatever an isolated 
instance of the processes we find in the CR cannot do, no more complex 
arrangement of those same constituents can do, either.

That, of course, is where I am saying his implicit dualism is evident.


> If we take #3 as a hypothesis, then many people consider that
> hypothesis to be highly plausible.  As far as I know, there has  not yet
> been any refutation of #3.  So it seems to me that you are  overstating
> your case.


There are, indeed, other reasons for believing that #3 is true. But those 
reasons only refer to the particular kinds of processes we find in the CR, not 
to the idea of physical processes themselves.

That is, computational processes may, indeed, just be the wrong type of 
processes to produce consciousness (as people like Edelman and Hawkins claim, 
albeit for different reasons than Searle and than each other). But Searle's 
argument hinges on looking for the consciousness in the processes under 
consideration ("nothing in the room understands Chinese and the room doesn't 
understand Chinese either"). Of course, these processes are not conscious in
isolation. But there is no reason to think brain processes would be conscious 
in isolation either.

Searle invokes the abstractness of syntax, its lack of causal capacity, 
to sustain his claim. But as you yourself noted elsewhere, computational 
processes implemented in a computer are not just something abstract; they are 
something quite physical and real (and causal), and in this they are no different 
from brain processes, even if there ARE differences, some of which may be 
relevant to whether they are candidates for producing consciousness or not!

But discovering those differences and their relevance is an empirical 
undertaking, not a logical one, as Searle would have it. At least it is not a 
logical one based on the claim that the absence of consciousness in any of the 
constituents of the CR means that no possible configuration of those 
constituents could be conscious.


>  The most you can say is that Searle does  not actually
> establish #3, and therefore does not actually prove #4.
>
>


That, actually, is enough, since his point is to prove #4.

In fact, #3 depends on an intuition which only works if one harbors a picture 
of consciousness on which it cannot be a function or feature of some array of 
elements that aren't, themselves, conscious. Once you no longer have that picture 
of things, it no longer looks compelling to think that a complex 
system of mindless physical processes cannot have the features we associate 
with being conscious.


> > If the processes going on in brains are enough to produce
> > consciousness, why should such processes in computers not be (at
> > least, in principle)?
>
> Perhaps the processes in computers are not the "right" kind of
> processes.
>

Right. That is a reasonable question. But that isn't the reason Searle gives; 
it's his conclusion -- which he can't get to without #3 being accepted, and #3 
can only be accepted if you hold a picture of consciousness that says it is 
irreducible to anything not, itself, conscious.

>
> > Either way he is stuck in a dualist mode of thinking because he is
> > saying either that:
>
> > 1) Brains cause something ontologically separate (dualism) from
> > themselves; or
>

> Automobiles are made of atoms.  Automobiles cause motion and motion  is
> not made of atoms.
>

> Is belief in automobiles dualistic?
>
> Regards,
> Neil


Why would it be? Why would that matter to the question of whether the CR 
demonstrates what Searle thinks it does?

Recall a point I have often made, that consciousness is to the brain as turning 
(motion) is to the wheel. Nothing "dualistic" comes from that.

SWM

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/
