[Wittrs] Re: Further Thoughts on Dennett, Searle and the Conundrum of Dualism

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Sun, 28 Mar 2010 14:18:36 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, Gordon Swobe <wittrsamr@...> wrote:

> Here we see the logical structure of Searle's formal argument as given in his 
> article in Scientific American that I referenced earlier.
>
> (A1) Programs are formal (syntactic).
> (A2) Minds have mental contents (semantics).
> (A3) Syntax by itself is neither constitutive of nor sufficient for 
> semantics. (This is what the Chinese room experiment shows.)
>
> (C1) Programs are neither constitutive of nor sufficient for minds.
> (This conclusion should follow without controversy from the first three 
> axioms.)
>
> That concludes his negative argument.



Only in that iteration. As we've seen, he has put this in many different ways 
over the years. In this particular rendition he has narrowed his terms a lot 
and, from what we saw on that Hauser site you routed us to, he now seems to 
want to say he is only arguing about "semantics", not consciousness (despite 
the fact that he routinely deploys his argument against claims of computational 
consciousness). Hauser is right to take him to task for that.

Now let's consider what he means by "programs". Does he mean the lines of code 
written by programmers in some document or on some disc? Does he mean the 
algorithmic instructions the code carries? Something else?

If he means "programs" in the sense of certain ordered computational operations 
implementing an algorithm, then we are talking about computers, not merely 
programs. Moreover, no one argues that programs in the abstract sense as 
described in the above paragraph constitute or are sufficient for minds.

Brains are physical platforms on which certain operations (physical events) 
take place. Computers are also physical platforms on which certain operations 
(physical events) take place. It may be that there is something brains can do 
that computers can't (a la Hawkins and a la Edelman). But the CR doesn't show 
that and the CRA doesn't make the case for that.

A conclusion that "programs are neither constitutive of nor sufficient for 
minds" doesn't say anything about the possibility of implementing a mind on a 
computer, for a number of reasons. If it is just about programs in the abstract 
(see above), then it is trivially true, but so what? No one claims otherwise.

The claim of "strong AI" is that one can construct a conscious mind on 
computers, not in the abstract!

But if we take "programs" to mean computational processes running on computers, 
as Searle originally seemed to mean (his earlier iterations of the CRA were 
about computers!), then there are several problems here. The first is that the 
third premise is equivocal and so it deceives us. Its "neither constitutive of 
nor sufficient for" can be read in two ways. It is obviously true that syntax 
is NOT semantics and that whatever is syntax ("programs" as he is now putting 
it) cannot be taken to be semantics. That is, when you have an instance of 
syntax you don't have, ipso facto, an instance of semantics. There is a 
non-identity here which just reflects what we mean by the two terms. But the 
conclusion of the CRA (even the latest iteration), that "Programs are neither 
constitutive of nor sufficient for minds", insofar as "programs" is a stand-in 
for computational processes running on computers (the only interpretation that 
makes sense or has any application to real AI claims), can only mean something 
if the relation at issue is causal, not one of identity.

That is, a claim of non-identity (syntax does not equal minds) is not the same 
as, and does not imply, a claim of non-causality (syntax cannot cause minds)! 
But the third premise can be read either way, and because we are prompted to 
read it as a denial of identity, which IS obviously true, we are seduced into 
thinking that it is also true when read as a denial of causal possibility. 
But non-identity has no implication for non-causality.
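
To make the equivocation explicit (this is my own rough formalization, not 
anything Searle himself writes), the two readings of the third premise can be 
set side by side:

  $\forall x\,(\text{Syntax}(x) \rightarrow \neg\,\text{Semantics}(x))$   [denial of identity]

  $\forall x\,(\text{Syntax}(x) \rightarrow \neg\,\text{Causes}(x,\ \text{semantics}))$   [denial of causal sufficiency]

The first is true simply in virtue of what the two terms mean; only the second 
would license the conclusion about minds, and the CR gives us no independent 
reason to accept it.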


> From here he begins his positive argument for biological naturalism. He adds 
> another axiom and derives three more conclusions.
>


> (A4) Brains cause minds.
>
> (C2) Any other system capable of causing minds would have to have causal 
> powers (at least) equivalent to those of brains.
>
> (C3) Any artifact that produced mental phenomena, any artificial brain, would 
> have to be able to duplicate the specific causal powers of brains, and it 
> could not do that just by running a formal program.
>
> (C4) The way that human brains actually produce mental phenomena cannot be 
> solely by virtue of running a computer program.
>
>

C2 doesn't follow because "at least equivalent" is undefined; one could argue 
that some systems could cause a different kind of mind (missing some but not 
all features of mind as we have it).

C3 makes sense except that he has not shown with C1 that computers are, in 
fact, missing the "specific causal powers" in question, because we don't know 
what those powers are (so how can we know they're missing in computers?) and 
because the negative part of his argument leads to an unsustainable conclusion 
based on a logical equivocation in the third premise (non-identity does not 
imply non-causality).*

C4 goes back to (and depends on) the negative part of the argument, which has 
already been shown to be faulty (i.e., it contains an unsupported conclusion).

The CRA looks worse and worse the more Searle attempts to define and defend it.

SWM

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

* I won't even go into the deeper problem about Searle's dependence on a 
dualistic concept of mind in the case of the CR and its role as demonstrating 
the third premise.

