[Wittrs] Re: Searle's CRA and its Implications

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Fri, 12 Mar 2010 23:03:06 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, Gordon Swobe <wittrsamr@...> wrote:

> --- On Fri, 3/12/10, SWM <wittrsamr@...> wrote:

> > Premise #3 is, of course, the tricky one. It purports to
> > show (in, as Searle has put it, a self-evident manner) that
> > the constituents in the CR are not conscious and cannot
> > conceivably be conscious.
>
> Actually it purports to show exactly what it purports to show: that syntax by 
> itself does not give semantics.
>

But it doesn't show that IF semantics (i.e., recognition of meaning, understanding) is, in fact, one of the features we get by combining programs in operation (different syntactical operations doing different specific things interactively) in a certain kind of complex system. And that's my point. Searle's "showing" only works if one cannot conceive of mind being something like this. It hinges entirely on an intuition we have, but an intuition is no guarantee of truth.

Now if you want to say it shows that no individual instance of syntactical 
operations has, by itself, any of the features we associate with consciousness, 
I will not dispute that. But so what?

The issue is whether syntax can produce semantics, that is, whether non-conscious processes in operation can produce consciousness. And Searle's CR doesn't show that it can't, because the CR is underspecified: it does not represent such a complex system of multiple interacting operations at all.


> The thought experiment shows the truth of the premise: the man in the room 
> follows rules of syntax that tell him to output, for example, "squiggle" in 
> response to "squoogle". He cannot from following such syntactical rules come 
> to know the meanings of squiggles and squoogles.
>


No, he cannot. And neither can a CPU. And, it's not unreasonable to think, neither can individual brain events in isolation. You have to combine them to get the full effect. The same would be the case with the CR. Thus the CR's underspecification is fatal to the CRA, though the argument remains superficially convincing because it expresses a very deep intuition most of us have.


> This axiom stands on its own distinct from any considerations about 
> consciousness. No matter whether computers have consciousness or not, neither 
> they nor us can glean semantics from syntax.
>
> -gts
>
>

The point of the CRA is to draw a general conclusion about all types of combinations of syntactical operations from the example of the CR, but you can only do that if you think that consciousness cannot be made up of constituents that aren't themselves already conscious!

Just because individual computational processes running on a computer aren't conscious doesn't imply anything about a combination of them UNLESS you think that consciousness is irreducible to anything that isn't already, like itself, conscious. That's why Searle is, finally, a closet dualist.

SWM

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/
