[Wittrs] Re: An Issue Worth Focusing On

  • From: "gabuddabout" <gabuddabout@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Sat, 08 May 2010 19:44:58 -0000


--- In WittrsAMR@xxxxxxxxxxxxxxx, Gordon Swobe <wittrsamr@...> wrote:
>
> --- On Wed, 5/5/10, gabuddabout <wittrsamr@...> wrote:
>
> > Now Gordon says he sees only a non-identity claim on
> > both sides of the premise while Joe avers that he sees
> > something entirely different from either identity or
> > causality being invoked by the use of "constitutes".
> >
> > They are playing a game with you or are in a serious state
> > of confusion.  Benighted or lying, again.
>
> Hey I resemble that remark!
>
> Seems to me that if Searle had wanted to make a causal claim in the third 
> premise, he would have done so, and that we should refrain from reading into 
> his words anything that he did not explicitly state. He italicized them after 
> all. One would presume he chose them carefully.
>
> Consider the claim "salt is neither constitutive of nor sufficient for 
> gunpowder." I take this to mean:
>
> 1) gunpowder contains no salt.
> 2) salt does not suffice for gunpowder.
>
> Strictly speaking it does not equate to a claim about the cause of gunpowder. 
>
> -gts


Hi Gordon (aka Stuart?  That would be a gas!),

On (1): Something can contain x even though x doesn't constitute it as a whole 
(read:  syntax does not constitute semantics).  On (2):  Suffice it to say 
your (2) doesn't suffice to grok Searle's "insufficient for" claim.  I'll just 
quote a reviewer below who gets the claim right, as I do.

Brief history:

The point of strong AI is to suggest that an S/H system differs from a nonS/H 
system, and that it might be in virtue of its computational (syntactical, in 
Searle's usage) properties that semantics is caused, i.e., syntax being 
sufficient _to cause_ semantics.  Note that "Stuart's" objection (that what is 
intended are RUNNING programs, not programs in Searle's allegedly benighted 
sense of the merely formal, i.e., syntactical) puts such S/H systems on a par 
with nonS/H systems.  But that would be to:

1.  Ultimately agree with Searle.

2.  Get strong AI wrong.

3.  Conflate S/H with nonS/H.

4.  Ultimately disagree with Searle for a reason that is not good: charging 
Searle with implicit dualism because he argues against strong AI, while 
conflating S/H with nonS/H so that, by transitivity, he would appear to be 
arguing against nonS/H systems as well.  That is the contradiction Stuart has 
tried to draw, but it rests on a conflation Searle does not make.

Here's a reviewer (R.U.R.) of _Views into the Chinese Room: New Essays on 
Searle and Artificial Intelligence_, 2002, ed. John Preston and Mark Bishop:

R. U. R. writes:

"The CRA is that: 1) Syntax is not semantics. 2) The implemented synatactical 
or formal program of a computer is not sufficient to generate semantics. 3) 
Minds have semantics. 4) Therefore, computers (so defined) are not minds/cannot 
think/do not understand because they are not sufficient to generate semantics."

So this reviewer sees the "insufficient for" claim as I do when I say it is 
meant as "insufficient to cause"--he says "insufficient to generate."  But if 
that doesn't suffice for you, maybe we can take a poll to find out what Searle 
meant?
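
Just to make the logical skeleton explicit: read that way, the reviewer's four 
steps go through as a simple modus tollens.  Here is a minimal sketch in Lean 
(the names Agent, IsComputer, IsMind, and HasSemantics are my own illustrative 
labels, not Searle's or the reviewer's); whether premise 2 is a causal claim 
is, of course, the very point in dispute:

    -- A propositional reading of the reviewer's four steps.
    section CRA

    variable {Agent : Type}
    variable (IsComputer IsMind HasSemantics : Agent → Prop)

    -- Premises 1-2: an implemented formal program, being mere syntax,
    -- is not sufficient to generate semantics.
    variable (p2 : ∀ a, IsComputer a → ¬ HasSemantics a)

    -- Premise 3: minds have semantics.
    variable (p3 : ∀ a, IsMind a → HasSemantics a)

    -- Step 4: therefore, computers (so defined) are not minds.
    theorem cra (a : Agent) (h : IsComputer a) : ¬ IsMind a :=
      fun hMind => p2 a h (p3 a hMind)

    end CRA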

Here's the full review:

By R.U.R.

[This review is from: Views into the Chinese Room: New Essays on Searle and 
Artificial Intelligence (Hardcover)]

"The Chinese Room Argument (CRA) has nothing to do with the speed of computers 
or any future developments in artificial intelligence (at least as understood as 
following from Turing). The CRA is a purely formal argument intended to refute 
the claim that computers (defined as Turing machines) can think, or can 
understand, or are minds solely by virtue of their formal description. (This 
claim is the essence of "computationalism," after Turing's original 
formulation.) The CRA is that: 1) Syntax is not semantics. 2) The implemented 
syntactical or formal program of a computer is not sufficient to generate 
semantics. 3) Minds have semantics. 4) Therefore, computers (so defined) are 
not minds/cannot think/do not understand because they are not sufficient to 
generate semantics.

For example, the concepts we employ to think and the words we use to speak have 
meanings. But there is nothing in computationalism as syntax that has any 
meaning whatsoever. Whatever meaning an implemented formal program has results 
from its being programmed or interpreted by us. Syntax (e.g., a computer 
program) has no causal powers. Whatever causal powers computers have (e.g., to 
fly airplanes) results from our programming and our assigning interpretations 
to the electrical charges inside a chip, not from the program in itself.

The chapters in Views Into the Chinese Room attack different aspects of the 
CRA. But they address it as an argument that stands or falls on the truth of 
the premises and the validity of the inference, not on engineering questions 
such as the speed of computers, which are irrelevant. Searle believes that 
there are, in fact, thinking machines -- we human beings are biological 
machines that think. And he believes that there also could be artificially made 
machines that think. The CRA is meant to show only that an implemented computer 
program by itself cannot generate mental content or semantic content.

For a clear explanation of the CRA, see chapter 15 of this book, by Stevan 
Harnad, the editor of The Behavioral and Brain Sciences, where Searle's 
original paper appeared twenty years ago. Do not rely on reviewers who do not 
understand the argument in the first place."


BGood

