[Wittrs] Re: Dancing Dualisms: Searlean Moves and Cartesian Moves--P's @ K's

  • From: "gabuddabout" <gabuddabout@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Sat, 27 Mar 2010 20:12:29 -0000


--- In WittrsAMR@xxxxxxxxxxxxxxx, "iro3isdx" <wittrsamr@...> wrote:
>
> --- In Wittrs@xxxxxxxxxxxxxxx, Gordon Swobe <wittrsamr@> wrote:
>
> > "Why Searle *Is* a Property Dualist" by Edward Feser
>
> I found Feser's argument quite unpersuasive.
>
> > "Why I am Not a Property Dualist" by John Searle
>
> Searle's argument was also quite unpersuasive.
>
> I'm inclined to think that Searle's "biological naturalism" is
> a variety of vitalism.


One could turn the tables, however, by arguing as follows:

1.  Computational properties are functional properties.

2.  Functional properties are second-order properties.

3.  Second-order properties don't cause anything.

4.  Computer programs qua programs are defined functionally.

5.  Some claim that computers may cause consciousness in virtue of running 
programs (strong AI).

Ergo 6., strong AI is a form of vitalism, given the distinction between 
first-order properties and functional properties.  It would be magical indeed!  
Or else one just conflates computational properties with first-order 
properties, thus losing the very idea of what computational properties are 
good for (symbol manipulation, which allows my typing to be processed among 
oodles of awesome apps).  A toy sketch below illustrates the point.
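
Here's the sketch, in Python (names and example mine, purely illustrative): 
the very same abstract program--"add two numbers"--realized by two 
structurally different bits of symbol manipulation.  The computational 
property "computes addition" is a second-order property of whatever realizes 
it; nothing at this level of description picks out a first-order physical 
mechanism.

def add_arithmetic(a: int, b: int) -> int:
    # Realization 1: native machine arithmetic.
    return a + b

def add_unary(a: int, b: int) -> int:
    # Realization 2: symbol manipulation over unary strings ("|||" is 3).
    return len("|" * a + "|" * b)

# At the program level of description, both count as the same computation:
for impl in (add_arithmetic, add_unary):
    assert impl(2, 3) == 5

Run it on silicon, beer cans, or anything powerful enough to sustain the 
program level of description; the functional description stays the same.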

How about weak AI?

1.  Weak AI does not propose that consciousness (or semantics) happens in 
virtue of running programs alone.

2.  But some would want to treat weak AI (or strong AI) in the form of PP 
(parallel processing) as being more brain-like (see the sketch after this 
argument).

3.  Assume that weak AI relies on computational properties in order to work.  
Then it too may not serve a naturalistic philosophy of mind in any literal way.

Ergo 4., weak AI as a theory of mind is a form of vitalism.  (Not QED, but 
merely overwhelmingly persuasive, if one grants the distinction between first- 
and second-order properties and grants that computation gets done by the 
latter in the form of symbol manipulation, powered by any source powerful 
enough to sustain the abstract program level of description.  Or else one 
doesn't know enough about how computers work....)
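
And a companion sketch for premise 2 (again mine, purely illustrative): 
making a computation parallel changes the implementation, not the 
program-level (functional) description.  So PP doesn't escape the 
second-order-property argument just by being more brain-like.

from concurrent.futures import ThreadPoolExecutor

def square(x: int) -> int:
    return x * x

data = [1, 2, 3, 4]

# Serial realization.
serial = [square(x) for x in data]

# Parallel realization of the very same abstract computation.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(square, data))

# The parallelism lives at the implementation level; the computational
# (functional) description is unchanged.
assert serial == parallel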

The options (and options within the options):

1. Of course, maybe for some, weak AI is the best we can do for modelling the 
mind.  Searle avers it is enormously useful for that, just as it is for 
modelling other sorts of things.  But it isn't supposed to be a model of the 
brain if the essence of weak AI is necessarily a matter of functional 
properties.  This is why computation gets conflated with physics by those who 
want to make strong or weak AI _look_ like the only plausible game in town for 
a theory of mind--assuming one wants a good one, OR assuming one wants to 
dissolve the issue by changing the subject, as in: "Hey, let's think of mind 
computationally for a change!  And who cares if it doesn't really work, 
because, after all, mind is too mysterious to be a subject for science 
anyway!"  I don't necessarily want a theory of mind while playing tennis, 
though I wouldn't assume it would hurt my game if we had a good one.

2.  The research program Searle prefers for getting at how brains cause 
consciousness is modelled on the germ theory of disease: first find NCCs 
(neurobiological correlates of consciousness), then hunt for causal mechanisms.

And now for the explanatory gap.  One may want to argue that induction, however 
strong and however difficult it is to define in computational terms, will never 
yield the efficient causal mechanisms of the brain because, perhaps, we have to 
go to the quantum level, where things get weird--perhaps real brains have to be 
physically connected to some higher-than-4-D spacetime, for all anyone knows.

I'm supposing we have to live with these options:

1.  Strong and weak AI can't be good theories of mind because they are fleshed 
out in second-order properties--vitalism.

2.  No matter how close we inductively get in correlating brain processes 
with bona fide conscious events, there will always be an explanatory gap.

The upshot, though, is that there being (perhaps forevermore) an explanatory 
gap doesn't amount to property dualism--unless you say so.  But anybody can 
play that game.  And no one ever wins.

My take is that one can't argue that Searle's preferred method for discovering 
how brains cause consciousness is necessarily a method that implies vitalism.

Anyone who does think that, though, especially while preferring strong or weak 
AI, is like the pot in the pot/kettle story, given that functional properties 
don't cause anything.

Ergo, Searle need not be a property dualist.  Sticks and stones.  But it seems 
that property dualism (first- and second-order properties, mind you) is endemic 
to both strong and weak AI.  But we don't mind property dualism in this case, 
because the second-order properties are what is necessary for understanding how 
to create programs in the first place.  And then we can talk as if the 
computational processes are physical, given the hardware, and have it both ways.

Searle need not be a property dualist given what he writes.

And the AIers need not think they are doing philosophy of mind.

I don't suggest I have proved any of that, though!  :-)


Cheers,
Budd







=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/
