[Wittrs] Re: An Issue Worth [Really] Focusing On

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Wed, 12 May 2010 01:36:09 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, "gabuddabout" <wittrsamr@...> wrote:

> --- In WittrsAMR@xxxxxxxxxxxxxxx, "SWM" <wittrsamr@> wrote:
> >
<snip>

> And I'll continue to explode the very bad analysis you think is justified, 
> but is not.  I'll be giving reasons for a change..

> >
> > --- In Wittrs@xxxxxxxxxxxxxxx, "gabuddabout" <wittrsamr@> wrote:
> > <snip>
> >
> > > >
> > > > 2) "Syntax doesn't constitute semantics (syntax doesn't make up 
> > > > semantics) and thus to have an instance of syntax isn't sufficient to 
> > > > have semantics (because syntactical constituents cannot combine to give 
> > > > us semantics)."
> > > >
> > > > Note that #2 is a claim of non-causality.
> > >
> > > Yes, but you still are trying to use the constitutes idea when 
> > > insufficient is sufficient for "insufficient to cause."
> >

> >
> > Searle could very easily have said that but he didn't.
>
> You say this but after allowing that that is what he possibly _meant_.  And, 
> for the report on this bang, he actually said it if you can read English.  
> Now let's say you're going to be stubborn and stick to what I think is a 
> retarded analysis:
>

> You are saying that there are either two identity claims or two noncausality 
> claims gotten out of the third premise.  Well, let's try it out:
>
> 1.  "Syntax is neither constitutive nor sufficient for semantics" becomes 
> either:
>
> 2.  "Syntax is neither semantics nor is it semantics."
>
> or
>
> 3.  "Syntax is neither sufficient to cause semantics nor sufficient to cause 
> semantics."
>

You miss the point that, because each side can be read in two ways, the meanings 
elide: at any point one can think one is making two separate claims (which, of 
course, one is), except that both readings apply to both sides of the statement, 
hence the confusion.


> My reading seems to be much more sane:
>
> 4.  Syntax not being constitutive is a sort of nonidentity claim but consider 
> that drooling is also that which doesn't constitute playing chess.
>

???


> 5.  Syntax is "insufficient for" semantics is like saying that you can have 
> syntax (indeed all the syntax in the world) and not necessarily have 
> semantics--the CR thought experiment showed this.
>

Well it does, but that just means it shows the CR as a system doesn't give us 
what we normally mean by understanding. Once we realize the issue has to do 
with the system rather than the constituents of the system, what syntax is is 
no longer logically relevant (even if there may be empirical facts that 
separately render being syntax relevant to the issue of producing semantics).


> Caveat emptor:
>
> To believe that the CR couldn't possibly show this is to equivocate between 
> senses of syntax (is it computational properties or 1st order properties?).


A faux distinction as I've already pointed out multiple times.


>  If the latter, then one ends up with Searle's position that brute causality 
> causes semantics in any case.  If you want the original strong AI thesis, 
> then you have to admit that you have a skyhook doing work for you in the form 
> of functional properties (read: formal properties = syntax in Searle's usage).
>

If you're going to argue about the meaning of "strong AI" again, then we're 
back on the old merry-go-round! Computers, of course, are no less brute than 
brains or any other physical platform.

>
>
> >On the other hand, he did make a point of saying of the third premise that 
> >it is conceptually true when the only conceptually true claim in the third 
> >premise is the non-identity assertion.
>
>
> Again, you don't buy his believing the first premise as part of why he might 
> think it (by now) a conceptual truth that the third premise is true?  Well, 
> let's see what he in fact said:
>
> From the Scientific American article:
>
> Searle writes:
>
> [commenting on the third premise]
>
> "At one level this principle [third premise] is true by definition.  One 
> might, of course, define the terms syntax and semantics differently.  The 
> point is that there is a distinction between formal elements [can you now 
> have the third premise in mind by now?], which have no intrinsic meaning or 
> content, and those phenomena that have intrinsic content.  From these 
> premises [all three of them] it follows that Conclusion 1.  _Programs are 
> neither constitutive of nor sufficient for minds_.  And that is just another 
> way of saying that strong AI is false" (27).
>

He is again arguing that to have semantics you need semantics ("there is a 
distinction between formal elements, which have no intrinsic content") which is 
to suppose that understanding and the consciousness which has it is an 
irreducible ontological basic. It's the same mistake yet again.

Note that he refers to the consequence of "these premises", clearly indicating 
his recognition that the first premise isn't enough to yield the conclusion he 
is arguing for, contra your claim that it is.


>
>
> >
> > But no matter.
>
> But matter.  It matters that you don't even believe "no matter" here, which is 
> why I showed above how retarded your analysis sounds.
>

"Retarded"? Back to the old Budd then, eh? Well I guess I am becoming used to 
such sophomoric locutions from you. I've almost come to expect them.


> > Let's agree that Searle really meant the causal denial to be what he was 
> > asserting as his third premise. If so, and given that it is not 
> > conceptually true and that he gives no argument for asserting its truth 
> > beyond his remark about its conceptual truth -- which isn't relevant to the 
> > claim of non-causality, he now has a serious problem with his argument and 
> > that is the main point I have been making.
>
>
> But you are trying to do so as if in a vacuum that doesn't include the 
> content of the first premise.


Even he recognizes you need the three premises to reach the conclusions he is 
arguing for in that text you cite above. What is the sense of citing such 
things if you don't understand what he is saying when he says them?


> The third premise is stated with both a sort of nonidentity claim and a 
> noncausality claim in light of the first premise.


The first premise says nothing about non-causality, read it again! You are 
merely imputing that claim to it to cover for the gap we have discovered in the 
third premise!


>  You simply are hoping to forget about it.  But that would be pretty bad 
> analysis.  Imagine analyzing an argument without sufficient attention to all 
> the premises.
>

Imagine quoting Searle while not understanding his words!


> Now, if you want to dispute the truth of the first premise, go ahead.


I don't need to because it says nothing about the causal capabilities of the 
so-called syntax! Read the actual words of the premise.


> You will end up with Searle's position in any case.  And if you want to 
> sidestep that upshot,


I don't have to because that's not the upshot and your saying it is doesn't 
make it so.


> you go back to treating programs as both formal and as adding some causality 
> to the system in virtue of, er, um, the patterns as such of formal symbol 
> manipulation, which Searle would obligingly announce he already considered 
> and refuted

Are you channeling him for us here then, that you know what he would "announce" 
so readily?


> given that the formal qualities of programs add nothing to the system's 1st 
> order causal properties.


They don't have to because the computer, running whatever algorithm it has been 
programmed with, already has its own, perfectly physical causal properties!


> And if they added something else, imagine a homunculus internalizing these 
> formal properties and still not understanding Chinese.
>
>

> Budd earlier, obviously:
>
> > >  What you end up doing is putting more into the third premise than is 
> > > stated.  There are two thoughts, and for the life of me I can't 
> > > understand why you ever would have had such trouble as you seem to have.  
> > > So my first guess was that you were just playing a game.  It was later 
> > > that I thought to give you a choice between that and outright inept 
> > > interpretation.
> > >
> >
> >
> > I think you simply have trouble grasping what may seem like an overly 
> > subtle point to you. No matter. I have made the case. You still insist you 
> > don't see it but who am I to gainsay that? I take you at your word that you 
> > don't. Time to move on, no?
>
>
> What?  And accept what I think is a retarded analysis?


Sticks and stones and so forth . . .


>  And after I show why it sounds retarded by spelling it out above in 2. and 
> 3.?  It may be time to move on, Stuart, but it ain't because you have made 
> anything other than a retarded analysis by deliberately forgetting that the 
> first premise is part of the reason Searle says that the third is "[a]t one 
> level ... true by definition."
>

You don't understand the CRA or its premises, I'm afraid.


> By the way, that which is true by definition isn't necessarily a conceptual 
> truth.  That quarters are counted 1234 in 4/4 time is true by definition.  
> But is it a conceptual truth that quarters are counted 1234?  I mean, they 
> are counted "1 And 2 And" when in 2/2 time, so..
>
> Perhaps that was too subtle by half?
>

Stipulations are definitionally true but not conceptually true in the same 
sense, of course. What is conceptually true is what is discovered in the meaning 
of the concepts as used by a particular linguistic community. Merely assigning a 
meaning arbitrarily, or as part of a specialized language game, would not 
necessarily qualify. Nor did I claim it would.

>
>
> >
> >
<snip>


> >
> > >  It is true as a matter of fact (first premise) that syntax is formal.  
> > > Given all that, it is conceptually true that syntax is insufficient to 
> > > cause semantics.
> >

> >
> > No, there is nothing in the non-identity of syntax with semantics that 
> > implies insufficiency of causation. To claim that, you need some other 
> > reason (either hard evidence or another logical claim, if there is one that 
> > will do).
>
> Like the first premise?  I think so.  So yes.  But it is in the argument..
> >

It's not in the third premise or he wouldn't have had to introduce the third.

> >
> > > But perhaps it is not _just_ conceptually true after all.  So be it.
> > > >
> >
> >
> > In that case Searle, who asserts the conceptual truth of the entire first 
> > premise, has it wrong, doesn't he?
>
> Depends, "at one level," on how one defines syntax and semantics, no?  Would you 
> like to conflate syntax with physics?  Then you will end up with the spirit 
> of Searle's position (1st order properties cause semantics as well as 
> consciousness).
>

Too bad no one mentioned that to Searle vis a vis this Dennettian position, eh?


> But is he just defining the first premise as a conceptual truth?  Actually, 
> no, he isn't.  Notice what he says in the Sci. Am. article that (I hope) 
> we're both considering:
>
> "Axiom 1.  Computer programs are formal (syntactic).  This point is so 
> crucial that it is worth explaining in more detail" (27).
>
> He then goes on to explain how programs work.
>
> So, you are doing a disservice to those who wish to get Searle right by 
> putting words in his mouth. Nowhere in his following words about the first 
> premise does he call the content of the first premise a conceptual truth.


Right, he reserves that for the third premise!


> But, to concede, once one understands why programs are formal, it does seem 
> like a conceptual truth we can live with.


That's the problem: You presume something about syntax (that its not being 
semantics excludes it from bringing semantics about) because of a certain way 
of thinking about semantics that you share with Searle and other dualists. 
There's no getting around this. Unless and until you see this, you will remain 
in the same place re: this argument, i.e., completely unable to fathom the 
important distinction between a system level property and a property associated 
with some constituent(s) of the system.


> But it was arrived at by understanding how programs work.  I think it is an 
> empirical truth that programs are formal, by the by.


And, of course, it is a conceptual question as to what we mean by "formal" in 
this case.


>  But that wouldn't diminish in any way the upshot that strong AI is 
> incoherent or defined differently so as to amount to Searle's position..
>

Another reiteration of your core belief, i.e., your denial of the 
computationalist thesis of mind.

>
> >Nor does he give us other reasons for thinking it true (neither another 
> >logical argument nor any claims based on empirical evidence).
>
>
> Well, I say that you are speaking out of your arse


Ah, another fine and sophisticated Buddian locution. And a powerful argument to 
boot, no doubt!


> and perhaps forget exactly what he says in the article under consideration.  
> He gives reasons for saying programs are formal.


I've already stipulated to accepting the truth of his first two premises for 
the sake of this argument. Where have you been?


>  You, on the other hand, merely assert that he doesn't.


I assert that he is confused in his use of terms like "formal" and "syntax" but 
that for the sake of the discussion of the CRA I agree to his particular uses 
in this case. Old news, Budd. I guess you missed it . . . again!


>  But that can only mean that you didn't even read the article.


Oh for christ's sake, we read AND DISCUSSED the bloody Scientific American 
article only a short while back when Gordon posted and referenced it. How bad 
is your short term memory really? How many more times do you want me to go back 
and read it when you can't even understand the text you are citing from it?


> Or you're just playing a game where it can be shown that you're being 
> retarded--like any great teacher who would rather teach than be worshipped.  
> If so, I worship you to the nines!
>
>

I'll pass, thanks!

>
> >Therefore the third premise is NOT shown to be true and, if it's not, it 
> >cannot imply the conclusion that finally depends on it, i.e., that computers 
> >(which consist of syntax in the form of implemented programs) cannot cause 
> >(in Searle's sense) minds.
>
> The conclusion depends as much on the first premise.


Not the non-causal part of the claim which is the conclusion we're considering 
here.


>  And there is a noncausality claim in both the first and third.


I guess you figure that by endlessly repeating this, I will eventually agree 
that it's true?


>  And so the conclusion doesn't only follow from one of the three premises.  
> It follows from all three.


Of course! And the causal part is specifically supported by the claim in the 
third premise which can be read in more than one way!


> Your method is to systematically grok an argument without looking at all the 
> premises at the same time.  Further, you can't even get the third right.
> >


Sorry but this is really called for: "Oy!"


> > Therefore the CRA fails to prove its conclusions and is thus a failed 
> > argument.
>
> Doesn't follow.  I conclude that your analysis is deficient as an analysis of 
> an argument because:
>
> 6.  It fails to appreciate all the premises.
>

False.


> 7.  You are wrong to say above that Searle thought the first premise a 
> conceptual truth.
>

I never said that, you imputed it to me!

> 8.  You are wrong to say that Searle offered no reasons for the first premise.
>

Irrelevant, since the first premise isn't being challenged by me; the third 
premise is, and he offered no reason for the causal reading of it. Nor is your 
claim that the reason lies in the first premise (which he supports via an 
explanation of programming) even intelligible, since he would have had no reason 
to give us the third premise if its meaning were already built into the first!


> 9.  Your analysis of the third premise comes out hilariously (see 2. and 3. 
> above).
>

Another "Oy!" I'm afraid (apologies to Peter Brawley on analytic for borrowing 
his famous expletive).


> 10. You think that syntax may be defined differently than Searle so defines it


Of course it can or do you think that Searle is the final arbiter of linguistic 
usage?


> --but the upshot of that would be to hold Searle's position even though there 
> is disagreement whether such a position is compatible or not with strong AI.
>

Nonsense.

> 11.  You wrongly conclude that there is a dualist upshot to Searle's position 
> by deliberately refusing to entertain the actual reasons he gives (see 8., 
> where you are wrong).
>

See my response to your "8"!


> Upshot:
>
> You can only conclude that Searle is not being fair to AIers, not that his 
> position is a form of dualism.

False again. See my numerous past arguments.


> Ironically, to the extent that you think he gets strong AI wrong when saying 
> it is incoherent, it is coherent for you precisely because it amounts to 
> Searle's actual position.
>

False again unless you think Searle and Dennett are on the same side of this 
matter. And if you do, how come no one has yet told Searle? (I know this will 
do no good because a few posts from now you will be saying the same things 
again, just as if I never responded. Is this the Neverending Story then?)


> All you're doing is erasing a distinction Searle thought to make vis a vis 
> S/H systems and nonS/H systems.
>

A distinction you have gotten badly wrong and Searle may have as well!

> And then you equivocate between such systems as if they were both composed 
> (thought of) as systems of 1st order properties when, ironically, the very 
> virtue of strong AI was SUPPOSED to be that it offered a way of studying mind 
> given abstract patterns understood functionally and not just biologically.
>

It studies mind in terms of functionalities. That implies multiple 
realizability but it does not imply a belief in the essential abstractness of 
computer programs running on computers.

> [snip]
>
> Stu:
> > > > > Note that Searle's assertion that the third premise is conceptually 
> > > > > true only applies to the reading in #1.
>
> Budd:
> > > Unless it is parasitic on the first premise..
> > >
>

> Stu:
> > The first premise only asserts that "Computer programs are syntax (formal)".
>
> But Searle does some 'splainin' contra your assertion he doesn't.  That's an 
> egregious bit of foolishness, to lie like that or to pontificate without 
> having read Searle quite that well.
>

Where do you think I assert that he doesn't explain why he makes his claim that 
we see embodied in his first premise?

>
> > This doesn't imply anything causal because what things are does not, of 
> > itself, imply anything about what they can do. (We need empirical 
> > information about them to get at that.)
>
> Right.  So it is an empirical hypothesis whether that which is formal can 
> cause anything..


Except the idea that computer programs are "formal" doesn't apply in the matter 
of "implemented programs", a term Searle also uses! Now, I agree that he does 
think it applies, but that is one of his mistakes, i.e., to think that any 
computationalist is ever speaking about anything abstract, as in separate and 
apart from the platform on which it is implemented!

However, note that while it's an empirical question as to whether what we mean 
by "formal" adequately denotes what we mean by computer programs, there is also 
an important conceptual aspect here that cannot be discounted. Discovering what 
anyone means by "computer program" is certainly empirical. Discovering whether 
terms like "formal" or "syntax" adequately describe it is, however, conceptual.


>> Searle notes that that is incoherent.  OTOH, wanna equivocate and end up 
>> sharing Searle's position?  He'll say that you just contradicted yourself in 
>> order to do so or will note the upshot that you are spelling out a position 
>> he is not arguing against.
>

> So, final upshot:
>
> Strong AI is a well-defined research project or a whore, er, I mean, not so 
> well-defined..


When in doubt go potty-mouth!


> To the extent that it differs from the study of nonS/H systems, it seems 
> well-defined but in upshot incoherent if one is to have to equivocate between 
> functional properties and 1st order properties.


False or, worse, unintelligible!


> Once you do the successful equivocation, then half of you is just spelling 
> out Searle's position.
>

"Oy" again!

> Searle notes that those who might equivocate (as the system repliers are 
> doing), are contradicting their original claim or conceding Searle's original 
> point via changing their minds.
>
> All's well that ends well!
>
>
> Cheers,
> Budd

Searle is badly mistaken. The system repliers have it exactly right and your 
boy Searle just misses the boat entirely.

SWM

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/
