[Wittrs] Re: Searle's CRA and its Implications

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Fri, 12 Mar 2010 22:37:07 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, Gordon Swobe <wittrsamr@...> wrote:

> --- On Fri, 3/12/10, SWM <wittrsamr@...> wrote:
>
> > Searle's CRA is well known, of course, though he has
> > presented it over the years in multiple and often varying
> > iterations. In generic terms, however, it goes like this:
> >
> > 1) Minds (consciousness) have (has) semantics.
> >
> > 2) Computers consist exclusively of syntax.
> >
> > 3) Syntax does not constitute and is not sufficient for
> > semantics.
> >
> > 4) Therefore computers cannot have minds (be conscious)
>

> Let's make sure we get Searle right here. He formalized his CRA argument with 
> axioms in an article for Scientific American circa 1990. It goes like this 
> (in his own words):
>
> A1) Programs are formal (syntactic)
>
> A2) Minds have mental contents (semantics)
>
> A3) Syntax by itself is neither constitutive of nor sufficient for semantics
>
> C1) Programs are neither constitutive of nor sufficient for minds
>
> Notice his use of "programs" and not "computers" as in your paraphrase. Also 
> notice the word "consciousness" does not appear in his axioms or conclusion. 
> Fine points, but relevant.
>

Yes, I've seen it written that way, too (as three axioms plus a conclusion, 
with "programs" in place of "computers") -- although I've also seen him do it, 
I believe, with the inclusion of a premise that computers consist of programs.


> The CRA shows that if the mind runs programs like a computer, it cannot get 
> semantics from syntax, even though it has consciousness. The man in the room 
> has consciousness, after all.
>

The man in the room isn't there as a conscious entity but as an automaton 
blindly following a list of rules. His consciousness isn't relevant, since he 
is acting in lieu of a CPU.


> Only later in his address to the APA did Searle address the question of 
> whether the brain actually is a digital computer. Two separate arguments!
>
> -gts
>
>

In his earlier formulations (in the eighties) he used "computers" rather than 
"programs". The choice between the two has nothing to do, however, with whether 
the brain is or is not a "digital computer" -- which, as you note, is a somewhat 
different question (though I wouldn't go so far as to characterize the digital 
computer analogy as a "separate argument").

His point in the CR is to simulate a computer in its functioning, i.e., in its 
computational operations. He aims to show, by doing this, that no matter how 
much understanding seems inherent in the behavior of this kind of set-up, 
anyone actually looking at what's going on inside wouldn't consider it to be 
understanding.

And frankly, I agree that the CR, as described, does not evidence any 
understanding. But that is because the CR is underspecced for understanding 
(it's only specced to simulate understanding via rote responding).

The reason it's underspecced is that Searle is caught in a dualist picture 
of mind, i.e., he cannot see how understanding (and other characteristics of 
consciousness) could be effected as features of a complex system of processes 
which, individually, have none of these features but, running together in an 
interactive way, do.

But if you think the different iteration of the CRA you've provided has any 
impact on the argument I've made, how so?

SWM
