[Wittrs] Re: Semantics, Meaning, Understanding and Consciousness

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Sun, 28 Mar 2010 14:36:11 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, Gordon Swobe <wittrsamr@...> wrote:

> --- On Sun, 3/28/10, SWM <wittrsamr@...> wrote:
>
> > As to the claim that "we will never know from the
> > experiment if programs cause consciousness", if that is
> > Searle's new position, then the whole CRA is rendered
> > pointless.
>
> Not so. The experiment serves to illustrate the third axiom of the CRA.
>

That's what I actually said at one point and you denied it. But okay, we can 
agree that the point of the CR is to demonstrate the third premise of the 
argument. The question then is: 1) does it do that, and 2) is it phrased in a 
misleading way?

I would say that it does not do #1, and that the answer to #2 is yes, it is (given the 
equivocal meaning of "does not constitute and is not sufficient for").


> After seeing the truth of the third axiom,

But the third premise isn't true. It is misleading and built on a faulty, suppressed 
premise about consciousness: that consciousness is not reducible to constituents 
that are not, themselves, conscious.


>we combine that knowledge with knowledge of the first two axioms and conclude 
>that programs don't cause what we mean by "having a mind", where it means 
>"having mental contents".
>

If "programs" only mean something abstract (as in unimplemented) then no one 
would claim otherwise and the CRA is pointless. But if we are speaking of 
implemented programs, then this is about computational processes running on 
computers and now the issue is no longer clear at all. Certainly the negative 
claim no longer stands because of the deficiencies found in the third premise.


> Had Searle not written the experiment for general audiences, he might have 
> stated C1 more clearly as "Programs are neither constitutive of nor 
> sufficient for intentionality."
>
> -gts
>

Searle himself was not clear earlier on, as he admits in The Mystery of 
Consciousness ('I don't know why I didn't see it before, but I didn't'), where he 
endeavors to replace his original CRA with a version that depends on the 
abstractness of programs and their consequent lack of causal power. But that is 
an even worse error.

Anyway, "programs" in the above formulation remains unexplicated by him. If all 
he means are lines of code or algorithmic steps in someone's head, then this is 
a specious argument. If he means programs running on computers, then it's about 
computers (as it should be, given what AI is really about) and in that case the 
argument falls flat on its face for all the reasons previously given.

SWM
