[Wittrs] Re: Searle's CRA shows nothing

  • From: "gabuddabout" <gabuddabout@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Tue, 16 Mar 2010 22:40:08 -0000


--- In WittrsAMR@xxxxxxxxxxxxxxx, "iro3isdx" <wittrsamr@...> wrote:
>
>
> --- In Wittrs@xxxxxxxxxxxxxxx, "gabuddabout" <wittrsamr@> wrote:
>
>
> >> As I read through this thread, I get the impression that some people
> >> do not understand the systems reply.
>
>
> > My reaction was to think they either contradicted themselves or
> > changed the subject into a subject that Searle wasn't arguing
> > against.
>
> The Systems Reply is a completely natural reply for AI people to make.
> It is how they are thinking about things (such as how to handle
> semantics).  So, no, they did not change the subject into something
> Searle was not arguing against - that is, unless Searle wasn't actually
> arguing against AI.
>
> Regards,
> Neil



Hi Neil,

They (some AIers) want to do with functional properties (computers of whatever 
type) what can be done with a system composed of first-order properties 
(animal/human brain).

I think Brandom is thinking along these lines.

It should be pointed out that Searle doesn't argue against weak AI. He thinks 
anything we can understand can be simulated.

Now, if one agrees with Searle that simulation is not the same thing as 
emulation, one can still be a weak AIer and not feel fazed by any of Searle's 
arguments about strong AI.

On the other hand, there are those who think weak AI is all one can reasonably 
hope to get when the goal is a theory of mind.

Stuart has seen (for at least six years and running) Searle's critique of 
functionalism as the product of viewing consciousness as a nonprocess-based 
sort of thing.  Well, it is obvious that he knows that Searle thinks 
consciousness is caused by brain _processes_.

So, when Stuart (and you?) think(s) he's caught Searle not understanding the 
systems reply, he fails to understand that Searle is pointing out a distinction 
between those "systems" which do their heavy lifting through second-order 
properties (UTMs, both serial and PP) and those that are a product of 
first-order properties (brains).

Stuart (and maybe you) doesn't see that Searle makes the above distinction in 
arguing against computational functionalism (even of the PP kind).  Or he (and 
you) simply don't buy the distinction.

But one can't go from Searle's buying the distinction to a claim that he must 
be a dualist.

I can probably get at the heart of the mischief in the following way:

1.  Computers (including PP, given the Church-Turing thesis) are said to be 
physical when one doesn't want to admit that their heavy lifting is a matter of 
second-order properties--the second-order properties are what functional 
properties amount to, as in "computational properties."  Such properties are 
subject to Kim's "causal exclusion" argument about functional properties.

2.  Computers are said to get at the abstract nature of mind via the abstract 
nature of functional properties.

Searle thinks that 2. amounts to both strong AI and weak AI (or just weak AI if 
one insists on not drawing a distinction) being noncandidates for a theory of 
mind.

Others think weak AI is as good as it gets, and if one argues against something 
that is as good as it gets, then one MUST be a dualist!

But that doesn't follow, because weak AI (or strong) is really not as good as 
it gets.

But there remains a problem for Searle which makes his position look mysterian:

If his position is about getting at a theory in the same way as the germ theory 
of disease, then what one gets first is correlation.

Everyone agrees that there must be correlation between brain processes and 
consciousness/thought.

The trouble is that it is hard to get over that hump by getting at causes.  I 
think Colin McGinn is a mysterian simply because he doesn't believe it is 
possible ever to get causation from even high correlation.  I suppose Searle 
thinks it is the best we can do.  He argues that it is better than 
nonstarters, the nonstarters being computational theories.  Not that he argues 
against their utility for simulation.  But one simply has to follow 
Wittgenstein and arrive at the Turing test as a bona fide test, even if no 
computer thus far has passed such a test.  Argue that this test is not 
sufficient and you're going to court all the trouble Neil has in store for me!

The real reason the Turing test is not a sufficient test is that it will allow 
false positives, given that some purely functional systems may pass it--but 
such systems are not machine enough to be candidates for theories of mind 
(barring eliminativism and other positions properly called conceptual dualist 
positions).

Supposedly, weak AI just sweeps the problem (sometimes called the hard problem) 
under the rug and opts for a dissolution a la Dennett's intentional stance, 
which is motivated by strong AI considerations.

When Dennett's strategy is painted as the only game in town, then any argument 
against it, again, might be painted as dualist or as motivated by a conception 
of mind that is dualist.  This doesn't follow.  One can argue with Searle's 
proposed distinction between strong and weak AI and his distinction between 
machines and what are not really machine enough (functional systems).  But that 
obviously falls far short of making any plausible claim as to Searle's closet 
dualism.  It is rhetoric, pure and simple.

So, perhaps Popper would have a field day with Searle and promote his critique 
of induction.  But wouldn't that be a critique of the best we can do?

Is the best the enemy of the good?

But you see, the rhetoric gets cheap.


Cheers,
Budd



