[Wittrs] Re: Reading the Third Axiom without the Equivocation

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Sun, 23 May 2010 02:24:54 -0000

Ah me, I know I shouldn't be doing this, but since you resurrected one of my old 
posts from an earlier list, I am interested to see what I had to say then, when 
I was newer to this Searle issue and somewhat less clear on the matter than I 
currently consider myself to be.

--- In Wittrs@xxxxxxxxxxxxxxx, "gabuddabout" <wittrsamr@...> wrote:
>
<snip>

> >
> > > If you don't distinguish what Searle is distinguishing, you are 
> > > conflating what Searle is calling "syntax" with physics.
> > >
> > > This would make the third premise:
> > >
> > > Substituting "physics" for "syntax":
> > >
> >
> > > 1.  Physics is neither constitutive nor sufficient for semantics.
> > >
> >


I am leaving some of your commentary in, even knowing that Sean has set this 
list up to reject posts that retain too much quoted material -- I will try to 
snip and skip enough to get this through the Wilsonian filter!

>
> Stuart (aka Sid Caesar) = >
>
> > And yet we know (or Searle at least would admit to knowing) that physics is 
> > the "cause" of semantics because brains, which are physical, are (in his 
> > lexicon, of course).
> >
> > Now what does this substitution say of the CRA?
>

> The issue is not what it says of the CRA.  That's your first false step.  
> But you got me to bite, so...buttons on your underwear.  The issue is what it 
> says about your conflation of what Searle means by syntax with physics.  YOU 
> are contradicting yourself by trying to have things both ways, i.e., strong 
> AI but with a description of a nonS/H system which latter Searle isn't 
> arguing against.  Searle says the contradiction is on the side of the 
> system repliers.


The question is whether a computationally based system, computer programs 
running on a computational platform, can be conscious without adding some ghost 
to the machine (or some other apparatus of what you call the non-S/H variety).


>  To the extent that, in your very lexicon, you believe the system repliers 
> have a point, the point actually being made is not one Searle is arguing 
> against.  I say this because there is the idea that Searle was arguing 
> against a strawman.  That's bs.


Denial isn't an argument.


>  So is Dennett's bs vis a vis Fodor arguing against strawmen.


Nor is pejoration.


>  Fodor simply shows that a lot of what connectionists think they got is 
> simply not what is got.  And some of it is shown not to be got given the 
> architecture.  I'll bet that some architecture is spelled out a bit too 
> computationally to be a candidate and that Fodor and Searle converge on 
> classical architecture being a noncandidate.
>

Nor is a summary of another's conclusions without the arguments that get us 
there. At most it's an appeal to authority, a classic logical fallacy.

> So, choices:
>
> 1.  Classical architecture with "enough layered programming."
>
> 2.  Connectionist architecture (still can be done by a universal Turing 
> machine)
>
> 3.  NonS/H architecture.
>
> In:
>

> http://groups.yahoo.com/group/Philosophy_and_Science_of_Language/message/7546
>
> You wrote (Aug. 18, 2005):
>

Yes, I recall it.


> > The question of strong AI is not whether
> > any particular
> > computer properly fitted with the right program can
> > produce a mind
> > with a mental life akin to ours but, rather, whether
> > there is some
> > configuration of computers and programming that can
> > do it.


This is a reference, of course, to the notion of a more robustly specced CR 
than Searle gives us, i.e., one with more things going on, albeit all performed 
by the same type of process as we find in the CR itself. The salient point 
being that we spec a Chinese city made up of many Chinese buildings containing 
many Chinese rooms all interconnected and performing a multitude of tasks using 
lots and lots of little guys with algorithms to follow like the one little guy 
in Searle's CR.


> > There is a
> > great difference between the two formulations but if
> > this is not
> > noticed, all sorts of confusions arise. The author
> > of Budd's post
> > proposed a simple example of computer plus program
> > and Searle did
> > that as well in the beginning.
> >
> > But in the course of our many discussions on this
> > list, we have long
> > since left that rather simplistic formulation behind
> > since we have
> > discussed not a Chinese Room, for instance, but a
> > Chinese building or
> > even a Chinese City and this, I think, has long been
> > forgotten in the
> > heat of these discussions.
>

> Ned Block showed that a consequence of functionalism was that a Chinese city 
> could/would be conscious if....  Put it this way, Stuart, you have been 
> confused for a long time.  Credit to you that you end up with Searle's 
> position anyway except you have now (maybe
> like some others) changed your mind vis a vis the system reply.


No, I have actually become more confirmed in my conviction that the system 
reply is basically right even if the system represented by the CR isn't 
conscious because it lacks the requisite understanding.

But this is not Searle's position or he would not have denied the Churchlands 
and Dennett! You seem unable to get this critical point: If Searle agreed with 
the position as you claim, he would not be denying it!!!


> Somehow, it is the system plus the computational properties.


No, the system (and we may think of everything as a system at some level) must 
just be the right kind, i.e., the kind that can perform the requisite tasks 
that brains are performing to produce consciousness.

There are no "computational properties" in any tangible sense, though one might 
say of a computer program that it has the property of being implementable on a 
computer and, when implemented, the further property of causing the computer to 
do certain things. In THAT sense we may say that there are "computational 
properties" but that is a rather innocuous sense and implies nothing in a 
physical sense about the computer program itself (because the program, at that 
level of consideration, IS purely abstract because it is just a set of ideas in 
some minds and encoded on various media). The fact that computer programs have 
this feature says nothing about their role as implemented processes on 
particular kinds of machines.

But why am I bothering, eh? This will simply roll off your back like water off 
a duck . . . again!


>  But computational properties are observer relative.

See above, again!

> There's nothing there but physics and how we can get to use physics to 
> manipulate syntax.


And there's nothing in brains but "physics" in precisely the same sense!


>  Add as much syntax as you want, and it don't mean a thing because it ain't 
> got that swing we're looking for.


Are we adding "syntax" or "physics" now? That Searle conflates them in his 
argument doesn't mean we must remain mired in that mistake! After all, you did 
acknowledge that Searle "coined" the idea of calling computer programs "syntax" 
and so, presumably, doesn't mean by "syntax" what we mean in other more 
familiar contexts for that term.


>  OTOH, if you're trying to swing weak AI, the CRA and later summary 
> statements (eight points at the end of the APA address) are not designed to 
> kill all that jazz.
>

The issue is whether computational platforms like computers running 
computational processes can be conscious (without adding any ghosts or other 
devices). That is what Searle meant by "Strong AI" though he does, admittedly, 
do a lot of wriggling in the course of his arguments. But, of course, he 
abandoned the CRA eventually even if some of his adherents, like you Budd, 
can't bring themselves to do it!


> _Now look_:
>
> >Now what does this substitution say of the CRA?
> On its face it looks absurd because we know that semantics, grasping or 
> imputing meaning to anything, is a mental occurrence and thus not 
> identifiable in the world as any kind of physical object.
>


> Then it wouldn't look absurd "on its face" would it?  You're making little 
> blunders all the time, it appears, along with the big one of conflating what 
> Searle calls "syntax" with physics.  I so love repeating that!
>

Even if you get my point totally wrong!

> >So how could we ever say of physics that it is constitutive or sufficient 
> >for semantics?
>
> Brains, but you were on a roll with major/minor cognitive dissonance vis a 
> vis getting Searle straight or well swung.
>

Brains are physical, i.e., are examples of "physics" just as Searle's computer 
programs are examples of "syntax". And we know that brains, i.e., "physics", 
cause "semantics" (consciousness qua understanding). So we should be able to 
say that "physics causes semantics" based on the empirical evidence and our 
understanding of what brains are. But then, in THAT sense of being physical 
entities, they are no different than computers! So why shouldn't computers be 
able to do what brains do in terms of the final outcome if they can be brought 
to the point of doing all the constituent tasks brains accomplish to get to 
their final outcome? Of course, this isn't an argument that computers CAN do 
it, only an argument that you can't dismiss the possibility on the grounds that 
the programs they are running are understood in an abstract sense in a 
particular context.

>
> > Yet, if we did not, we would be placing ourselves in a wholly dualist mode, 
> > insisting that whatever mind is, whatever understanding is, whatever it is 
> > to grasp or impute meaning, it had to be sourced in the non-physical. But 
> > this goes against what we know of how the world works and what Searle, 
> > himself, would say of how the world works. And Searle insists he's not a 
> > dualist.
> >
> > Again we are thrust, by Searle's reasoning (or rather he is) into 
> > contradiction!
>

> And you are unaware that it came from your attempt to find a flaw in the 
> third premise instead?????  How could you be that muddled?  You're not?  Then 
> you must be joking, Sid!
> >

The flaw I found is there despite the denials of some. But that is for each to 
judge on his or her own. You can lead a horse to water, you know . . .

> >
> > > So the upshot is that you are just wrong to see an equivocation in 
> > > Searle.  You create one by not distinguishing S/H from nonS/H systems.  
> > > And you get a ridiculous substitution instance for your effort.
> > >
> >
> > You just don't get the semantics of my point. Maybe it's a physical issue?
>

> Your point may well be that it is a physical issue.  And I've shown that 
> Searle isn't arguing where you think he is.


Well, you think you have shown it in any event. I will not deny you have tried.


>  And "the semantics of your point" reduces to your point.  You make your 
> point in a way that suggests you're not smarter than a
> fifth grader, that's all.


Ah yes, the old ad hominem again, when all else fails. At this point it's 
merely amusing.


>  You sneak in a possible difference that amounts to none and argue that the 
> difference Searle sees is a result of seeing consciousness in a certain 
> dualist way.  It is utter crap.
>


You simply deny and deny without making any kind of serious argument. Arguing, 
in the everyday sense, is not the same as making an argument in an academic or 
philosophical or most other professional senses. In arguing we call each other 
names, raise our voices, make gestures, etc. Like two drivers in an auto 
accident arguing over who is at fault. But that won't do here!


> Let's review that subtle distinction I pasted again.  In your words from 
> 2005.  See above.  Then see the dilemma above IT too.  Can't decide?


Seems pretty well put, to me, actually. What distinction or dilemma do you 
think you see? (I probably shouldn't be asking this as it's an invitation to 
continue this pointless discussion but what the hell . . .)


>  Here's how I decide it: 1. and 2. for weak AI, 3. for philosophy of mind and 
> possible AI of the type that really has that certain swing of semantics, 
> consciousness.  Don't like it?


It's unintelligible. If you were clearer perhaps I could give you a more 
substantive answer.


> Then you just might be a criteriologist like Dennett and conflate strong with 
> weak AI and may be happy to think weak AI is as good as it can possibly get


This just reflects your continued misunderstanding of Dennett's thesis and even 
Searle's notion of the distinction between the AIs.


> --and he is a zombie for so pretending to think (just kidding, even though 
> Jaron Lanier thinks he MUST be a zombie in his 2010 _You are not a Gadget_).
>
>

>
> > > Some conflate these by noting that anything can be given a computational 
> > > description.  Searle maintains that if one has a physicalist explanation 
> > > of something, adding a computational explanation doesn't add anything 
> > > significant to the explanation.  Of course, explaining how to simulate a 
> > > process on S/H is what some computational explanation is for.
> > >
> > > Cheers,
> > > Budd
> >
> > Your last point isn't relevant to the issue.
>
> It may very well be if you're a criteriologist or property dualist in need of 
> minor spanking.
>

You seem to have something of a fetish for that allusion!

>
> >I am not speaking of so broadly defining computation as to give "anything" a 
> >"computational description" but, rather, of whether the things we all agree 
> >are computers (and thus admit of a description in terms of executed 
> >computations) can be built to be conscious like ourselves (to have an 
> >understanding equivalent to what we mean by "understanding" when the term is 
> >applied to what we are and do).
>

>
> Well, are you conflating computation with physics or not?


I'm referring to what Searle himself called "implemented programs" in one of 
those citations we read here in the course of this ongoing debate. That he or 
you or both seem to have a problem keeping that idea in focus is a different 
question.


> If you are, you can try and make a mess for Searle but it isn't getting him 
> right.  And there is no upshot to a bad argument.
>

Which is Searle's problem and yours I'm afraid.

> >
> > Your persistent misstatement of the issue
>
> I've done a decent job showing you to be the one either confused or 
> persistently in joke mode such that if it is the latter, then at some point 
> one is going to call you on it while acknowledging you must have known a good 
> deal.  So, confused or Sid Caesar.  You pick.
>

And have I stopped beating my wife? (Te-dum-dum!)


> > in order to throw up this same old response is nothing more than making a 
> > strawman for yourself so you can pretend to have refuted the claim that 
> > Searle's CRA is wrong.
>
> I'm only talking about _your_ inept claim, after all; you know, the one that 
> led to the ridiculous substitution instance you practically welcomed later in 
> order to do your schtick.


If you're going to make such statements you ought to back them up with 
specifics. Otherwise it's just ad hominem verbiage.


>  As far as a platonic claim (THE claim) that it is wrong, I'll ask God about 
> that.


Let me know what he tells you.


> Searle still thinks it's good and doesn't need to hear from God about it.  He 
> just thinks he doesn't even need the CRA.
>

There are hundreds, maybe thousands of philosophers around and they all think 
they're right while they hold a given position. But since most hold positions 
in conflict with the others, some or all of them are bound to be wrong even if 
they "think" they're right. So why would we consider Searle's thinking he's 
right, per your information, evidence that he is?


> To the extent that Searle himself said he was mistaken, he said he was 
> mistaken in thinking that the hypothesis could even be true or false.  It is 
> an incoherent hypothesis (given what we know of how
> programs work).


So the CRA, which argues that computers can't do consciousness because they are 
only syntax in action and syntax can't make the stuff of consciousness, is 
replaced by an argument that computers can't even do syntax! Thus it's not that 
syntax can't do understanding but that computers can't do syntax. Curiouser and 
curiouser! And even more of a reach!


> It is only natural that you would argue with him saying that this newer take 
> is worse than the older.


Because, of course, it is.


>  But do note that the upshot of what you think is possible is just Searle's 
> position.



You mean he now supports Dennett's thesis? But wait! He introduced the new 
argument in The Mystery of Consciousness in which he denied Dennett's thesis! 
So how could it be that my position, which is in sync with Dennett's, is really 
just another version of Searle's? Does Searle somehow miss the fact that he's 
sympatico with Dennett after all? Shouldn't somebody have told him???

Intriguingly, no matter how many times I point out this painfully obvious fact, 
you NEVER acknowledge or respond to it. It's as if it represents a blank spot 
for you, something that isn't processing. Well, maybe that's really what's going 
on here!


>  Just because he doesn't conflate computation and physics doesn't mean that 
> one who does has essentially a different position.  One in name only maybe.  
> But what good is that?  A way to make monkey shines with Searle's perfectly 
> good sense? I don't think so.
>
>

Do you think that Dennett has figured out yet that all he is proposing is 
warmed over Searle? What a travesty that the two have debated this so keenly 
when they're both really on the same side! Ah, the vicissitudes of professional 
philosophy!!!

> > But as we have seen here, Searle himself recognized he was wrong vis a vis 
> > the CRA (see his introduction to his new argument in The Mystery of 
> > Consciousness). After all, why go to the trouble of a new argument if the 
> > old would have done?
> >
> > SWM
>

> That's a good question.  That you think it is telling is cool.  You will find 
> that I think it is telling in a different way than you do, however.  Today, 
> he says he was mistaken in thinking he needed an argument like the three step 
> proof with two independent clauses available in the third premise of one of 
> the summary statements of his study of how programs work in the real world.  
> That he was mistaken doesn't mean that he now doesn't think they were good
> arguments, however.


How do you know? After all, for years and years he stood by the CRA as his 
prime argument and then, suddenly, after years of criticism, he replaced it! 
Why replace an argument that is correct?

>  The thesis of strong AI is incoherent in a way that the thesis of weak AI 
> (as Searle distinguishes these) is not.


An assertion that X is incoherent isn't an argument that it is, nor is it 
support for a claim that he was right in claiming it!


> Tell that to Neil too since he once tried to argue that Searle would have 
> been more honest or more thoughtful (I forget which was
> the adjective offered) to have simply argued against weak AI.


I don't speak for Neil and am not always sure I fully understand his positions.


>  But he didn't.  And neither did he conflate weak and strong AI as Neil 
> proposes we ought to.
>

I don't recall Neil proposing that. I think his argument is that, at bottom, 
there's not a lot of difference. I don't agree with that in the context of 
Searle's argument however. I think Searle is right to make the distinction when 
he does it right, though sometimes he seems to fudge and get lost in the same 
old ambiguities.

> How could you be so bad at Searle that you don't see what I see and end up 
> agreeing with Searle anyway without realizing it?
>

How could you imagine that Searle doesn't realize that Dennett really agrees 
with him, or that Dennett doesn't?

> Since I think anybody who has been paying attention can see, I've been the 
> more accurate as far as Searle interpretation goes.


I doubt that.


>  I'll allow that Gordon, Neil and Stuart are all smarter than myself in real 
> life.


We don't know that, Budd. And if you really have shown all these errors that I, 
at least, have been accused of making with regard to Searle, then I must not be 
very bright at all!


>  I mean, how dumb does one have to be if there is enjoyment in arguing with a 
> shoe?  (Still working on my Yiddish and still think to this day that Searle 
> isn't a sort of Marrano of reason who deliberately distorts philosophical 
> usage to help us see beyond conceptual dualism (aka: property dualism).  
> Hint:  Anybody accusing Searle of property dualism may be doing so just 
> because he's alive--upshot, everybody has to be one.  But that would be a 
> kind of fifth
> gradish lack of distinction and difference.


Here you list into incoherence and I have no idea what your point is. Maybe it 
has to do with the fact that I'm just not intelligent enough to see it though!


>  Prolly better to see exactly what he's driving at.  It won't bode well for 
> "epistemology as queen of philosophy" though.
>
> I hope to have shown that your joke can only go so far as to show that you 
> are confused--which would be the more funny if you weren't.
>

Is this a case of the pot calling the kettle a color, only not realizing the 
kettle's not a kettle at all but something entirely different, in which case 
our poor pot is cooked?

> So, Sid Caesar (who did a bit by asking and answering what jazz was), I'll 
> end this whole thread with a quote from Aug. 18, 2005:
>
> > I actually found Budd's last post on Searle very
> > useful and pretty
> > thorough, hitting the main points of Searle's
> > thinking and
> > explicating them quite effectively. I have great
> > respect for Searle
> > and often find myself in agreement with him, though
> > I continue to
> > think he got his strong AI argument wrong.
>

Couldn't have said it better myself. Oh wait, I did!

>
> Cheers,
> Budd  (What is jazz?)

Thanks for the resurrection, Budd, but I'd have thought you could find something 
a bit more intriguing than what you settled on excerpting. After all, back then 
I was, admittedly, still feeling my way in trying to interpret Searle and 
understand why I thought he was so badly mistaken.

SWM

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/
