[Wittrs] Re: Searle's CRA shows nothing

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Wed, 17 Mar 2010 13:29:33 -0000

Budd, before I start my day, I'm going to take a flyer here and respond to some 
of the things you've offered (in hopes we can do this civilly and even make 
some progress):

--- In Wittrs@xxxxxxxxxxxxxxx, "gabuddabout" <wittrsamr@...> wrote:
<snip>

>
> Stuart sees (for at least six years and running) Searle's critique of 
> functionalism to be the product of seeing consciousness as a nonprocess-based 
> sort of thing.  Well, it is obvious that he knows that Searle thinks 
> consciousness is caused by brain _processes_.
>

Yes.

> So, when Stuart (and you?) think(s) he's caught Searle not understanding the 
> systems reply, he fails to understand that Searle is pointing out a 
> distinction between those "systems" which do heavy lifting through 
> second-order properties (UTM's, both serial and PP) and those that are a 
> product of first-order properties (brains).
>


I think this concept you've been presenting, of a distinction between 
"properties" and, especially, "second-order properties," needs some 
clarification. What do you mean by the term? And what do you think Searle means 
by it?

As you know, I have argued in the past that Searle sometimes seems to confuse 
the idea of programs in abstraction (the algorithms we have in mind when 
writing or thinking about a program, or which are memorialized in the 
notations we use to write them down) with the actual operation of a machine, 
like a computer, running them (going through the steps memorialized in the 
written program). Insofar as we are talking about computational processes 
running on computers (which is the only thing we can be talking about here), we 
are discussing physical events in a machine, events that are no less real (and 
causal) than whatever events are happening in brains.

And THAT is the ONLY relevant comparison here. No one means, nor has ever 
meant, an abstract set of steps described in some language or other when 
considering whether a computationally based mind can be built.


> Stuart (and maybe yourself) don't/doesn't see that Searle makes the above 
> distinction in arguing against computational functionalism (of even the PP 
> kind).  Or he (and you) simply don't buy the distinction.
>

Speaking for myself, I guess "don't buy" it is as good a description as any.

> But one can't go from Searle's buying the distinction to a claim that he must 
> be a dualist.
>


That isn't the move I make. My claim has to do with the CRA itself and what is 
required to reach its conclusion.


> I can prolly get at the heart of the mischief in the following way:
>
> 1.  Computers (including PP given the Church-Turing thesis) are said to be 
> physical when one doesn't want to admit that their heavy lifting is a matter 
> of second-order properties--the second-order properties are what functional 
> properties amount to, as in "computational properties."  Such properties are 
> subject to Kim's "causal exclusion" argument about functional properties.
>


Again, computational processes (programs) running on computers are as physical 
and causal as any other processes running on any other physical platform.


> 2.  Computers are said to get at the abstract nature of mind via the abstract 
> nature of functional properties.
>

No one is claiming abstract causation.


> Searle thinks that 2. amounts to both strong AI and weak AI (or just weak AI 
> if one insists on not drawing a distinction) being noncandidates for a theory 
> of mind.
>

For Searle "strong AI" is the thesis that one can use computers to replicate 
what brains do and so produce real minds, real consciousness like brains do. 
"Weak AI" is the thesis, on his view, that we can use computational technology 
to simulate what brains do but that a simulated mind will no more be conscious 
than a simulated hurricane will be wet.

> Others think weak AI is as good as it gets and if one argues against 
> something that is as good as it gets, then one MUST be dualist!
>

No.

> But that doesn't follow because weak AI (or strong) is really not as good as 
> it gets.
>

What do you mean by "as good as it gets"?

<snip>

>
> The real reason the Turing test is not a sufficient test is because it will 
> allow false positives given that some purely functional systems may pass 
> it--but such systems are not machine enough to be candidates for theories of 
> mind (barring eliminativism and other positions actually properly called 
> conceptual dualist positions).
>

The CRA isn't about whether the Turing test is a sufficient test but about what 
we need to "see" in the mix to agree that a mind (understanding) is present.

> Supposedly, weak AI just sweeps the problem (sometimes called the hard 
> problem) under the rug and opts for a dissolution a la Dennett's intentional 
> stance which is motivated by strong AI considerations.
>
> When Dennett's strategy is painted as the only game in town, then any 
> argument against it, again, might be painted dualist or motivated by a 
> conception of mind that is dualist.  This doesn't follow.


This misses the point because it has nothing to do with the reason I've 
asserted that Searle is implicitly dualist in his argument.



>  One can argue with Searle's proposed distinction between strong/weak AI and 
> his distinction between machines and what are not really machine enough 
> (functional systems).  But that obviously falls far short of making any 
> plausible claim as to Searle's closet dualism.  It is rhetoric, pure and 
> simple.
>

Read the actual argument I've made rather than contenting yourself with a 
characterization of it ("rhetoric, pure and simple") without referencing its 
terms.

SWM

<snip>

>
> Cheers,
> Budd

