[Wittrs] Re: Bogus Claim 4: Searle is Refuted by Redefining 'Understanding'

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Tue, 20 Apr 2010 01:02:57 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, "gabuddabout" <wittrsamr@...> wrote:

>
> Stuart writes:
>
> > The CR shows only what a CR can do and the CRA, in drawing a general 
> > conclusion about all possible Rs from the CR, depends on a concept of 
> > understanding that holds that for understanding it to be present in an R 
> > like the CR it must be identifiable in some process (operation) of that R.
>
>
> But Searle is NOT drawing a general conclusion about all R's.  To think that 
> he is, you must be, again, conflating rooms defined functionally with rooms 
> defined, instead, in terms of 1st order physical properties only.  This is a big 
> mistake and you should feel free to stop making it.
>

If Searle's CRA says nothing about computers generally and what they can do vis 
a vis consciousness, but only addresses whether a rote response device like the 
CR is conscious, then it's a pointless exercise. No one thinks consciousness is 
just rote behavior, not even your nemesis Dennett.


> Also, Searle is not denying that functional properties are supervenient on 
> the physical when making his points given the CRT, which is functionally 
> equivalent to a Turing machine, pace Gordon and Stuart....  :-)
>

Then the claim that computer processes running on computers lack real world 
causal capability is empty. You can't have this particular cake and eat it, too.


> The point is that a "machine" defined (in part) by functional properties is 
> not a machine proper (proper machines being machines which bottom out in 1st 
> order physical properties while computers bottom out in both 1st and 2nd 
> order properties).
>

If the CRA isn't about whether computers can be conscious if done right, then 
it's about nothing of interest and its use as an argument against "strong AI", 
the thesis that computers can be conscious based on programming, is pointless.


> The ignoratio elenchi, then, is to argue that since Searle denies the causal 
> efficacy of "machines" which have 2nd order properties as a defining feature 
> (software of whatever complexity), he must be
> denying a form of physicalism.


What machine exists that doesn't have physical instantiation? The machine in 
your mind?


>  Well, functionalism was introduced in the first place as an alternative to 
> type physicalism.  Searle is a sort of type physicalist, though not 
> chauvinistic, while functionalism is not chauvinistic but allows for false 
> positives a la the Turing test which has been shown to be insufficient as a 
> test for either semantics or consciousness via the CRT.
>

This isn't about "false positives" but about whether the "strong AI" thesis is 
viable.

> The way out for the functionalist/strong AIer, Dennett style, is to insist on 
> Wittgenstein's criteriological account for what is supposed to count for 
> the mental (it is behavior after all).

No.


>  And to insist that if Searle finds a problem with it (i.e., that the Turing 
> test allows for false positives), then we should change our notion of 
> understanding to fit the possible data.
>

The notion of what we think understanding consists of hinges on what we think 
mind is.

Really, these discussions seem to go round in endless circles as linguistic 
usages are constantly being shifted by the players. At this stage one has to 
ask whether there is any value in continuing!


> This is why Dennett refers to Fodorian insistence on intentional realism as 
> Granny's campaign for safe science.  Isn't it the other way around, though?  
> Isn't it not just chic but part of the new "cognitive" science that it is 
> eliminativist or a bit whorish, given that everything under the sun can have 
> a computational
> description


This is irrelevant since no one who endorses computationalism (Searle's "strong 
AI") is using "computer" in THAT sense!


> which allows for everything under the sun to be conscious/unconscious to some 
> degree?


Just as irrelevant. That is NOT the claim of computationalism cum "strong AI".


> So we can interpret ourselves as having no respectable form of scientific 
> intentionality or simply redefine the notion of intentionality so as to apply 
> to thermostats.  That's what I would call pretty safe science (but cognitive 
> science?) and uninformative as to the mental for sure.
>

Depends on how one understands "mental" and similar terms, which is what this 
is all finally about, isn't it?

>
>
> Stuart continues:
>
> > Obviously, if it's [understanding--Budd] a system level property then the 
> > real problem lies with the system represented by the CR, not with the 
> > constituents from which it is assembled.
>

I should have dropped either "it's" or "understanding". My error.


>
> Notice that Stuart is getting Searle correct here but perhaps doesn't know 
> it.  Searle's point is that "yes indeed there is a
> problem with the system represented by the CR!"

If it's only about the CR and nothing more, then this is a pointless argument. 
NO COMPUTATIONALIST IN THE AI FIELD THINKS THAT A SYSTEM LIKE THE CR WOULD 
QUALIFY AS INTELLIGENT OR CONSCIOUS OR AS HAVING UNDERSTANDING!


> The problem is that on the criteriological account of 
> understanding/consciousness, the Turing test is shown to be susceptible to 
> false positives.  So as a test it is insufficient.
>
>

Repetitive.

> Stuart then writes:
>
> > It's [the issue about whether Searle is contradicting himself as Stuart has 
> > claimed or whether Dennett's redefinition of intentionality is going to fly 
> > by the Wittgensteinian criteriological notion of mind as behavior--Budd] 
> > more than definitional, it's conceptual.
>
>
> Here I side with Joe who apparently understands that for Dennett it is 
> definitional but for Stuart it is conceptual.


It's just as conceptual for Dennett. To define is to propose a description of 
the meaning and stipulate to it. That is not what Dennett does because he is 
addressing what we actually mean via our ordinary designations of 
consciousness. He is offering a way that what we say in ordinary parlance about 
this can be understood without invoking a notion that presumes something beyond 
the physical.


>  But there is no conceptual problem for Searle here--Stuart gets to make one 
> up given that he doesn't understand what motivates the CR to begin with

I expect I understand it a good deal more than you do, Budd.


> (the distinction between systems essentially described in partial functional 
> terms from systems (like brains) which bottom out entirely in 1st order 
> properties which happen to cause consciousness, really.


I'm sorry but this isn't very clear, Budd.


>  It is a distinction he refuses to make so that he can continue with the line 
> that there is some conceptual difference between Dennett and Searle vis a vis 
> intentionality/understanding/consciousness.
>

Nor is this.

> But there is not really.


Not really what? A conceptual difference? This just shows you have missed the 
entire point of my argument, I'm afraid. Moreover, rather than rebut it by 
showing why there is no real conceptual difference, contra my claim, you once 
again fall back on raw denial. Well, okay, so you assert something contrary to 
what I've presented. Can you offer reasons for denying it as I have offered 
reasons for claiming it?


>  And if there is, it is partly due to a criteriological account which is 
> definitional and not necessarily conceptual.
>
> Cheers,
> Budd
>

Well, is there or isn't there? Are you saying both are the case but either way, 
even if they are contradictory claims, not only are they both right but either 
way I must be wrong, albeit for contradictory reasons?

SWM

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/
