[Wittrs] Re: Searle's Revised Argument -- We're not in Syntax anymore!

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Thu, 27 May 2010 23:24:23 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, "gabuddabout" <wittrsamr@...> wrote:
<snip>

> Nice response, Sid!
>
> Now, joking aside, how do you propose to argue for dualism being implicit in 
> the CRA?  I'm arguing that you have to mischaracterize Searle in order to 
> have a shot.  And I'm saying, hoarsely by now, "No way, Wilber, er, Sid 
> Caesar."
>

I already have, a gazillion times. Why beat the same dead horse when he's too 
dead to feel anything anyway?


> The answer is by conflating what Searle distinguishes as S/H and nonS/H 
> systems.  One of your efforts amounted to the bald assertion that such a 
> distinction is pointless.  I suppose I can simply baldly assert that it has a 
> point.  We can go back and forth saying "No it doesn't" and "Yes it does" 
> (have a point).  Instead of that, how about reasons why or why not?
>

I gave reasons for why it's pointless. That was more than a "bald assertion." 
If you missed the reasons, I can't help that. It's that same dead horse again, 
I guess.

> Once you collapse this distinction, then it is up in the air whether you are 
> holding strong AI or Searle's biological naturalism.  That's why Peter 
> tried to pin you down.  You waffle so badly that your position may be 
> Searle's in upshot because you don't distinguish between S/H and nonS/H.  But 
> then later you appear to distinguish by saying that maybe sophisticated 
> enough S/H may ex hypothesi cause semantics and consciousness.   Well, 
> sophisticated how?  Complex physics or complex software?  The point is that 
> it doesn't matter how complex the software is because it adds only a formal 
> character to the system.  So at this point you may want to emphasize the 
> physicality of the complexity--but the physical complexity is one thing and 
> the physical plus computational complexity is really just the former.
>


Yeah, yeah, I know, S/H, non-S/H, agreeing with Searle, getting him wrong, 
yada, yada. Nothing I could say would budge you in a million years so why 
bother?


> Software adds nothing, no matter how much you got.  So you redefine what 
> software is about by saying it is physical because running on a physical 
> platform.  And on and on.
>

And on . . . same horse and just as dead.

> To the extent that you want to hold what Searle calls strong AI in the form 
> of a research project, you collapse the above distinction in order to suppose 
> strong AI is a physicalism which Searle attempts to refute, er, later, 
> confute because incoherent, given that there is no amount of evidence one can 
> accumulate to the effect that one is discovering computation intrinsically in 
> the physics.
>


Why are you still going on about this? Didn't we end it? (No one says we 
"discover computation in the physics". You are simply fixated on a claim that 
is irrelevant.)


> The question whether the brain is a digital computer is found to be 
> incoherent.
>

No one is making that claim either! Computationalism qua "strong AI" holds 
that brains may actually work in a way that is analogous with how computers 
work and that, therefore, computers, if they can be programmed to perform the 
same tasks brains perform (and there is no reason to presume a priori that they 
can't), can be made conscious in the way brains are. But perhaps this is 
already too many words because I have found that the more I write in response 
to you the more you seem to fixate on your claims. Short, terse responses may 
be better!


> At this point, since you have collapsed the distinction, you can "derive" a 
> contradiction in Searle--he is denying that some physical systems can 
> possibly cause semantics/cons. while arguing that only physical systems can.  
> But that doesn't follow if you're going to be characterizing Searle's claims 
> in terms of the reasons he gives.
>

I've already examined his reasons on this list.

> Put another way, one can attempt to (lamely because omitting Searle's 
> reasons) argue that Searle is trying to show the impossibility of a 
> physicalist hypothesis (earlier CRA) as well as (today) the incoherence of 
> such.  But the such for Searle is redescribed as a bona fide physicalism 
> whereas Searle sees it as infected by a residual behaviorism which ironically 
> can be read as a form of dualism since computation as well as information 
> processing don't name natural kinds.
>


It doesn't matter what "Searle sees it" as. What matters is the argument he or 
you, on his behalf, can make!


> Then the monkey-shine upshot is that of course Searle's view implies implicit 
> dualism because he is arguing against a physicalist hypothesis of how a 
> system (computational system that uses physics to run) may cause semantics 
> and consciousness.
>
> I see that as clear as day.
>

Must be a foggy clear day though!

> But your conclusion doesn't follow if you understand the exact reasons why 
> Searle argues against strong AI.
>

Whatever.

> So your method is to leave out Searle's reasons in order to argue for your 
> claim.
>


I've addressed his reasons with you a gazillion times.


> My argument against your handling of Searle involves exposing your insistence 
> on leaving out Searle's reasons for his argument against strong AI.
>

The reasons don't stand up and just reiterating them in terms of "Searle thinks 
this" or "Searle says that" is not to make a case for them!

> Anyone who mischaracterizes a position in order to argue against it is either 
> ignorant of the position or is just playing word games because one can 
> manufacture ambiguity as they please.
>

And anyone who is still missing the same points after three or four lists and 
several years of discussion . . . oh well, let's leave it at that!

> But your attempt to do that with the third premise amounted to a failure to 
> read English


I guess I confused it with the Chinese, thinking I was in the CR rather than 
the ER!


> which just as well may have been the upshot of treating the premises of the 
> CRA without the benefit of an adequate grasp of the target article which 
> inspired the summary CRA.
>


You never managed to show any inadequacy in my grasp though. Merely repeating 
it, mantra-like, is not to make any kind of a case, of course.


> If you want to argue that the CR is underspecced and designed only to do rote 
> translation, I'm going to argue that you are missing the point of the CRA.
>

Too bad all you can do is repeat the same refrain instead of actually giving 
reasons.


> The point is simple.  In fact, it is so simple that the only way to argue 
> against it is to put the systems reply in play.  But once you do that, you 
> are collapsing the S/H / nonS/H distinction or not.
>

False.

> If not, then you have to argue that the formal qualities of programs add 
> brute causality to the system.


Nope, not if the distinction is irrelevant which it is.


>  This is confused and amounts to mischaracterizing exactly how programs 
> actually work.
>

Nope.

> If so, then the systems reply is just a plea for the idea that technology may 
> be able to get done what the brain gets done, whether by similar types of 
> causes or different types of causes which will meet what Searle calls his 
> "causal reality constraint" which is not met by any possible S/H system that 
> is a system whose software is separable from the hardware.
>


Searle asserts, via his CRA, that no computational system can do it alone 
(meaning solely in virtue of being a computational system). The System Reply 
shows why that makes no sense since understanding, if it is understood as a 
system level feature, is not dependent on the nature of any particular 
constituent element of the system. Any system that can perform the same tasks 
in the same way as the brain can, conceivably, be conscious -- absent 
additional information showing that brains have or do something that is unique 
to the making of consciousness.


> The upshot is that your critique of Searle may in fact suppose that which he 
> is not arguing against.
>


Oy!

> OTOH, it may suppose that what he thinks can't pass a causal reality 
> constraint is a strawman never endorsed by anyone.

You mean like his idea of what computationalists really mean by computer 
programs being conscious?

>  But that would be to forget Hibbard, right?


Bill long ago took himself out of the discussion. He suggested I was wasting my 
time debating this with you and some others but I'm just a glutton for 
punishment.


>  And Dennett too?  Maybe not.  You see where I'm going with this?


The same place you're always going, I'm sure!


>  If Dennett is going to talk in terms of complexity, is it just brutish 
> complexity or is the complexity defined in terms of complex software such 
> that what he has in mind is a case of S/H?


The distinction in this case is absurd. Brains are physical. Computers are 
physical. Brain events are physical. Events in computers are physical. The idea 
that there is some magical something called "software" added in which has no 
causal powers is utter nonsense. No one in the AI field thinks software that 
isn't implemented can accomplish anything so this is about "implemented 
software" which, of course, means computational processes running on computers.



>  Is he going to waffle and say that there is no distinction between S/H and 
> nonS/H worth making when the software is sufficiently complex?


See my point above (though it probably won't do any good).


> If so, then the system Dennett has in mind is no longer an S/H system.


So why does Searle still deny it based on his CRA and the later argument? If 
Dennett's conscious computational system is really acceptable to Searle, why 
doesn't he just say so and move on?


> And that is consistent with Searle's position.


See immediately above.


>  If not, then where does, say, Dennett think the CRA mistaken?


We've been all over that, too! Where have you been these past months? Dennett 
thinks the point is that the CRA assumes dualism when it argues that "more of 
the same" cannot do what less of the same cannot do. Dennett argues that the 
reason the CR isn't conscious is that it is underspecced, it's the wrong 
kind of system, inadequate to the assigned task!


>  Turns out he has a problem with the second premise.  But that's because he's 
> so flippantly pragmatist as to be an eliminativist given Wittgensteinian 
> criteriology.
>

Whatever. You're not going to get this no matter how we cut it!

> So, to end, just as you have a problem with Searle's definition mongering 
> right in the first point of Searle's APA eight-point summary, so will I point 
> out that the whole project of strong AI is premised on the definitional 
> behaviorism of Wittgensteinian criteriology a la Dennett.
>

Which, if yours is a true claim, still says nothing about whether Wittgenstein 
or Dennett is right! Where's your argument that they aren't? You need to move 
past name dropping.


> It won't be lost on some when Searle points out a homunculus fallacy endemic 
> to strong AI.
>

This is like claim dropping now, naming arguments or claims without laying them 
out. It's not unlike your name dropping!


> I suppose part of the reason for that is definition mongering willy-nilly.  
> But anybody can play that game.  No one wins and everything stays the same.
>

Whatever.

> An example of definition mongering:
>
> A rock has a low-grade form of consciousness because consciousness is to be 
> defined in the form of computation.  And since computers have a decidedly 
> higher grade form of computation going on compared to rocks, then even such 
> things as hand calculators are more conscious than rocks.
>

The Dennettian thesis says complexity matters, remember? Not everything is 
conscious. That would be panpsychism, which your precious Searle would never 
agree to. Well, neither is it consistent with anything Dennett says or I have 
said!


> But really.
>
> And maybe the above caricature misses the point about just how purely 
> physical we are to think of complex software.
>
> Perhaps.

Good, you seem to be at least marginally aware of the problem with your 
argument!


>  But then one might argue that Searle should have understood programs better 
> than he does when arguing that they are made to perform abstract syntactical 
> symbol manipulation.
>
> If so, he would be wrong about computers.  And that's all he would be wrong 
> about.
>

And about what they can do, of course. Oops!


> Unless one wants to do some definition mongering a la Wittgensteinian 
> criteriology which amounts to Dennett's research proposal, on one hand, or 
> Hacker's thesis that it is incoherent to think brains cause consciousness.
>

Do you seriously think this is to make a case for what you are saying?

> Now, there is not one thing I am confused about above.
>

And you've told us so, so there!

> But what I can't prevent is ignorant chatter about Searle in a form where his 
> reasons are omitted.
>

You have still failed to make a case for his argument, presumably by presenting 
and arguing for his reasons! But I guess you figure that, if maybe you say it 
enough times, readers here will think you have done that!

> The cool thing is that I have shown above how there is an ambiguity in 
> Stuart's notion of programs which allows for his thought to harbor Searle's 
> biological naturalism (which leaves AI wide open) while he gets to critique 
> Searle on other occasions where he omits Searle's reasons.
>
> Cheers, you crazy diamond!
>
> Budd
>

Really cool, Budd! No doubt about it.

SWM
