[Wittrs] Re: An Issue Worth [Really] Focusing On

  • From: "gabuddabout" <gabuddabout@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Tue, 11 May 2010 23:05:11 -0000


--- In WittrsAMR@xxxxxxxxxxxxxxx, "SWM" <wittrsamr@...> wrote:
>
> I'll limit my responses to those remarks of yours, Budd, that actually seem 
> to warrant responses:


And I'll continue to explode the very bad analysis you think is justified but 
isn't.  I'll be giving reasons, for a change...

>
> --- In Wittrs@xxxxxxxxxxxxxxx, "gabuddabout" <wittrsamr@> wrote:
> <snip>
>
> > >
> > > 2) "Syntax doesn't constitute semantics (syntax doesn't make up 
> > > semantics) and thus to have an instance of syntax isn't sufficient to 
> > > have semantics (because syntactical constituents cannot combine to give 
> > > us semantics).
> > >
> > > Note that #2 is a claim of non-causality.
> >
> > Yes, but you still are trying to use the constitutes idea when insufficient 
> > is sufficient for "insufficient to cause."
>
>
> Searle could very easily have said that but he didn't.

You say this after allowing that that is what he possibly _meant_.  And, for 
the record, he actually said it, if you can read English.  Now let's say you're 
going to be stubborn and stick to what I think is a retarded analysis:

You are saying that the third premise yields either two identity claims or two 
noncausality claims.  Well, let's try it out:

1.  "Syntax is neither constitutive nor sufficient for semantics" becomes 
either:

2.  "Syntax is neither semantics nor is it semantics."

or

3.  "Syntax is neither sufficient to cause semantics nor sufficient to cause 
semantics."

My reading seems much saner:

4.  Syntax not being constitutive of semantics is a sort of nonidentity claim; 
but consider that drooling, too, is something that doesn't constitute playing 
chess.

5.  That syntax is "insufficient for" semantics is like saying that you can 
have syntax (indeed all the syntax in the world) and not necessarily have 
semantics--the CR thought experiment showed this.
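
To make the contrast between 4. and 5. explicit, here is one way to put the two 
claims side by side (the notation is my own gloss, not Searle's):

$$\text{Non-identity:}\quad \mathrm{Syntax} \neq \mathrm{Semantics}$$

$$\text{Insufficiency:}\quad \Diamond\,\exists x\,\big(\mathrm{Syn}(x) \wedge \neg\mathrm{Sem}(x)\big)$$

The first says that having syntax is not the same property as having semantics; 
the second says it is possible for something to have all the syntax you like 
and still lack semantics.  The CR is offered as a witness to the second claim, 
not as a restatement of the first.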

Caveat emptor:

To believe that the CR couldn't possibly show this is to equivocate between 
senses of syntax (is it computational properties or 1st order properties?).  If 
the latter, then one ends up with Searle's position that brute causality causes 
semantics in any case.  If you want the original strong AI thesis, then you 
have to admit that you have a skyhook doing work for you in the form of 
functional properties (read: formal properties = syntax in Searle's usage).



>On the other hand, he did make a point of saying of the third premise that it 
>is conceptually true when the only conceptually true claim in the third 
>premise is the non-identity assertion.


Again, you don't buy that his belief in the first premise is part of why he 
might think it (by now) a conceptual truth that the third premise is true?  
Well, let's see what he in fact said:

From the Scientific American article:

Searle writes:

[commenting on the third premise]

"At one level this principle [third premise] is true by definition.  One might, 
of course, define the terms syntax and semantics differently.  The point is 
that there is a distinction between formal elements [can you have the third 
premise in mind by now?], which have no intrinsic meaning or content, and those 
phenomena that have intrinsic content.  From these premises [all three of them] 
it follows that Conclusion 1.  _Programs are neither constitutive of nor 
sufficient for minds_.  And that is just another way of saying that strong AI 
is false" (27).



>
> But no matter.

But matter.  It matters that you don't even believe "no matter" here, which is 
why I showed above how retarded your analysis sounds.

> Let's agree that Searle really meant the causal denial to be what he was 
> asserting as his third premise. If so, and given that it is not conceptually 
> true and that he gives no argument for asserting its truth beyond his remark 
> about its conceptual truth -- which isn't relevant to the claim of 
> non-causality -- he now has a serious problem with his argument and that is the 
> main point I have been making.


But you are trying to do so in a vacuum that excludes the content of the first 
premise.  The third premise is stated as both a sort of nonidentity claim and a 
noncausality claim in light of the first premise.  You are simply hoping to 
forget about it.  But that would be pretty bad analysis.  Imagine analyzing an 
argument without sufficient attention to all the premises.

Now, if you want to dispute the truth of the first premise, go ahead.  You will 
end up with Searle's position in any case.  And if you want to sidestep that 
upshot, you go back to treating programs as both formal and as adding some 
causality to the system in virtue of, er, um, the patterns as such of formal 
symbol manipulation.  Searle would obligingly announce that he already 
considered and refuted that move, given that the formal qualities of programs 
add nothing to the system's 1st order causal properties.  And if they added 
something else, imagine a homunculus internalizing these formal properties and 
still not understanding Chinese.


Budd earlier, obviously:

> >  What you end up doing is putting more into the third premise than is 
> > stated.  There are two thoughts, and for the life of me I can't understand 
> > why you ever would have had such trouble as you seem to have.  So my first 
> > guess was that you were just playing a game.  It was later that I thought 
> > to give you a choice between that and outright inept interpretation.
> >
>
>
> I think you simply have trouble grasping what may seem like an overly subtle 
> point to you. No matter. I have made the case. You still insist you don't see 
> it but who am I to gainsay that? I take you at your word that you don't. Time 
> to move on, no?


What?  And accept what I think is a retarded analysis?  And after I show why it 
sounds retarded by spelling it out above in 2. and 3.?  It may be time to move 
on, Stuart, but it ain't because you have made anything other than a retarded 
analysis by deliberately forgetting that the first premise is part of the 
reason Searle says that the third is "[a]t one level ... true by definition."

By the way, that which is true by definition isn't necessarily a conceptual 
truth.  That quarter notes are counted "1 2 3 4" in 4/4 time is true by 
definition.  But is it a conceptual truth that quarter notes are counted 
"1 2 3 4"?  I mean, they are counted "1 and 2 and" when in 2/2 time, so...

Perhaps that was too subtle by half?



>
>
> > >
> > > Recall that Searle asserts that the premise in question is "conceptually 
> > > true".
> >
> > It is conceptually true that syntax is not semantics.
>
>
> The non-identity claim, yes.
>
>
> >  It is true as a matter of fact (first premise) that syntax is formal.  
> > Given all that, it is conceptually true that syntax is insufficient to 
> > cause semantics.
>
>
> No, there is nothing in the non-identity of syntax with semantics that 
> implies insufficiency of causation. To claim that you need some other reason 
> (either hard evidence or another logical claim, if there is one that will do).

Like the first premise?  I think so.  So yes.  And it is right there in the 
argument...
>
>
> > But perhaps it is not _just_ conceptually true after all.  So be it.
> > >
>
>
> In that case Searle, who asserts the conceptual truth of the entire first 
> premise, has it wrong, doesn't he?

Depends, "at one level," how one defines syntax and semantics, no?  Would you 
like to conflate syntax with physics?  Then you will end up with the spirit of 
Searle's position (1st order properties cause semantics as well as 
consciousness).

But is he just defining the first premise as a conceptual truth?  Actually, no, 
he isn't.  Notice what he says in the Sci. Am. article that (I hope) we're both 
considering:

"Axiom 1.  Computer programs are formal (syntactic).  This point is so crucial 
that it is worth explaining in more detail" (27).

He then goes on to explain how programs work.
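
For what it's worth, here is a toy sketch of what that explanation comes to 
(the sketch and all its names are mine, not Searle's): a program pairs 
uninterpreted tokens with other tokens by shape-matching alone, so nothing in 
the procedure ever touches what a symbol means.

# A toy illustration of "programs are formal (syntactic)".
# The rule book pairs uninterpreted input tokens with output tokens;
# lookup is pure shape-matching, with no access to meanings.
RULE_BOOK = {
    "squiggle": "squoggle",  # on seeing this shape, emit that shape
    "blotch": "scrawl",
}

def manipulate(token: str) -> str:
    """Apply a formal rule to a token; no semantics anywhere in here."""
    return RULE_BOOK.get(token, "?")

print(manipulate("squiggle"))  # -> squoggle, with zero understanding

Whether such rule-following could ever add up to understanding is of course the 
very point in dispute; the sketch only illustrates the formality claim, not its 
consequences.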

So you are doing a disservice to those who wish to get Searle right by putting 
words in his mouth.  Nowhere in the words that follow about the first premise 
does he call its content a conceptual truth.  But, to concede, once one 
understands why programs are formal, it does seem like a conceptual truth we 
can live with.  But it was arrived at by understanding how programs work.  I 
think it is an empirical truth that programs are formal, by the by.  But that 
wouldn't in any way diminish the upshot that strong AI is either incoherent or 
defined differently so as to amount to Searle's position...





>Nor does he give us other reasons for thinking it true (neither another 
>logical argument nor any claims based on empirical evidence).


Well, I say that you are speaking out of your arse and have perhaps forgotten 
exactly what he says in the article under consideration.  He gives reasons for 
saying programs are formal.  You, on the other hand, merely assert that he 
doesn't.  But that can only mean that you didn't even read the article.  Or 
you're just playing a game in which it can be shown that you're being 
retarded--like any great teacher who would rather teach than be worshipped.  If 
so, I worship you to the nines!



>Therefore the third premise is NOT shown to be true and, if it's not, it 
>cannot imply the conclusion that finally depends on it, i.e., that computers 
>(which consist of syntax in the form of implemented programs) cannot cause (in 
>Searle's sense) minds.

The conclusion depends as much on the first premise.  And there is a 
noncausality claim in both the first and the third.  So the conclusion doesn't 
follow from just one of the three premises; it follows from all three (see the 
sketch after the Scientific American quote above).  Your method is to analyze 
an argument without keeping all the premises in view at once.  Further, you 
can't even get the third one right.
>
> Therefore the CRA fails to prove its conclusions and is thus a failed 
> argument.

Doesn't follow.  I conclude that your analysis is deficient as an analysis of 
an argument because:

6.  It fails to appreciate all the premises.

7.  You are wrong to say above that Searle thought the first premise a 
conceptual truth.

8.  You are wrong to say that Searle offered no reasons for the first premise.

9.  Your analysis of the third premise comes out hilarious (see 2. and 3. 
above).

10. You think that syntax may be defined differently than Searle defines 
it--but the upshot of that would be to hold Searle's position, even though 
there is disagreement over whether such a position is compatible with strong 
AI.

11.  You wrongly conclude that there is a dualist upshot to Searle's position 
by deliberately refusing to entertain the actual reasons he gives (see 8., 
where you are wrong).

Upshot:

You can only conclude that Searle is not being fair to AIers, not that his 
position is a form of dualism.  Ironically, to the extent that you think he 
gets strong AI wrong when saying it is incoherent, it is coherent for you 
precisely because it amounts to Searle's actual position.

All you're doing is erasing a distinction Searle took care to make vis-à-vis 
S/H (software/hardware) systems and non-S/H systems.

And then you equivocate between such systems as if they were both composed of 
(thought of as) systems of 1st order properties when, ironically, the very 
virtue of strong AI was SUPPOSED to be that it offered a way of studying mind 
in terms of abstract patterns understood functionally and not just 
biologically.

[snip]

Stu:
> > > > Note that Searle's assertion that the third premise is conceptually 
> > > > true only applies to the reading in #1.

Budd:
> > Unless it is parasitic on the first premise..
> >

Stu:
> The first premise only asserts that "Computer programs are syntax (formal)".

But Searle does some 'splainin', contra your assertion that he doesn't.  That's 
an egregious bit of foolishness: to lie like that, or to pontificate without 
having read Searle quite that well.


> This doesn't imply anything causal because what things are does not, of 
> itself, imply anything about what they can do. (We need empirical information 
> about them to get at that.)

Right.  So it is an empirical hypothesis whether that which is formal can cause 
anything...  Searle notes that that is incoherent.  OTOH, wanna equivocate and 
end up sharing Searle's position?  He'll say that you just contradicted 
yourself in order to do so, or he'll note the upshot: that you are spelling out 
a position he is not arguing against.

So, final upshot:

Strong AI is a well-defined research project or a whore, er, I mean, not so 
well-defined...  To the extent that it differs from the study of non-S/H 
systems, it seems well-defined but in upshot incoherent, if one has to 
equivocate between functional properties and 1st order properties.  Once you do 
the successful equivocation, half of you is just spelling out Searle's 
position.

Searle notes that those who equivocate (as the systems repliers are doing) are 
contradicting their original claim or conceding his original point by changing 
their minds.

All's well that ends well!


Cheers,
Budd




=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/
