[Wittrs] Real and Fake Issues [WAS: What Is Ontological Dualism?]

  • From: "gabuddabout" <gabuddabout@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Sat, 10 Apr 2010 19:47:05 -0000

Joe writes:

> > Searle's point is that neither the man in the CR nor the CR as a
> > system acquires the subjective experience of understanding just by
> > manipulating symbols according to rules.

Neil writes:

> Searle's argument establishes the first (though that was never a real
> issue).  It fails to establish the second.

Well, if you understand the rules as syntax/computation/functional 
properties/second-order properties (all equivalent for our purposes here), 
then it is out of the question that these can cause anything.  For 
non-property dualists, first-order properties cause things, whether 
finkishly or not.
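
For concreteness, here is a minimal sketch (Python; the rule book and 
symbols are invented placeholders, not anyone's actual program) of what 
rule-governed symbol manipulation amounts to.  Nothing in it trades on the 
meaning of the symbols--it is shape-matching all the way down:

    # A toy "Chinese Room": pure shape-matching against a rule book.
    # The rule book is invented for illustration; the program pairs
    # input shapes with output shapes and nothing more.

    RULE_BOOK = {
        "你好吗": "我很好",    # if you see this shape, emit that shape
        "谢谢": "不客气",
    }

    def room(input_symbols: str) -> str:
        """Return whatever the rule book pairs with the input shape."""
        return RULE_BOOK.get(input_symbols, "请再说一遍")

    print(room("你好吗"))   # "我很好" -- produced with zero semantics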

OTOH, if you want to conflate functional properties with first-order 
properties, then, assuming Searle would too, Searle wouldn't argue against 
strong AI.  And he doesn't argue against weak AI.  And there is a 
distinction to be made, despite your claim that there isn't.  Strong AIers 
believe(d) that "right program" can do the work of "right biological 
explanation of what the brain is doing functionally."  Searle _shows_ that 
one could have all the functional properties in place such that ex 
hypothesi a system passes a Turing test but still doesn't understand.  
Strong AI is thus refuted (if one doesn't conflate functional properties 
with first-order properties).  If you want to conflate them, that is your 
bag; the resulting position is probably indistinguishable from weak AI, 
which is about behavior regardless of semantics and consciousness.  Indeed, 
if one is an eliminativist, then one perhaps wouldn't want or need a 
distinction between strong and weak AI.  In any case, there is no analogy 
between Searle's refutation of strong AI as he defines it and your analogy, 
on which his argument comes out the same as arguing that CDs or DVDs can't 
work.  They work via first-order properties.  Functionalism vis-a-vis 
philosophy of mind was supposed to be about a computational level spelled 
out in "properties" that were neither intentional nor simply first-order 
physical properties--why else was it a (potentially) new program for 
philosophy of mind, whether one was a behaviorist or....

I think Searle is right to simply dismiss claims that his thought 
experiment involves a real-world impossibility--a la the homunculus not 
being able to manipulate the symbols fast enough, or our having to design a 
slow enough Turing test.  The point is that computation simply doesn't name 
a type of first-order property.  The fact is that the first-order 
properties of electricity are used in such a way that the program-level 
description does its lifting via second-order properties, which are 
abstract; otherwise we would say that "right program" and "right efficient 
cause" are synonymous.  But they aren't, so Searle's argument can't be 
turned into the absurd claim the Churchlands and you are making, namely 
that some known working system would be shown not to work via Searle's 
argument.  The cases aren't analogous.
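
To make the "right program" vs. "right efficient cause" point concrete, 
here is a hedged sketch (Python; the example is mine, not Searle's).  One 
and the same program-level description--"adds two numbers"--is realized by 
two different procedures, and the shared description is silent about which 
causes do the work in either case:

    # One program-level description ("adds two numbers"), two different
    # realizations.  Sameness at the program level is abstract: it says
    # nothing about the first-order causes (transistors, marbles,
    # neurons) doing the actual work in a physical implementation.

    def add_native(a: int, b: int) -> int:
        return a + b                  # realization 1: built-in arithmetic

    def add_by_counting(a: int, b: int) -> int:
        for _ in range(b):            # realization 2: repeated increment
            a += 1
        return a

    assert add_native(2, 3) == add_by_counting(2, 3)  # same "right program"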


Neil writes:

> I claim only that if we stop focussing on the CPU, we will see that
> nothing at all is proved about whether the system as a whole has
> subjective experience.

Depends.  We can see that functional properties (the computational 
properties of a computational system) are second-order properties.  No 
amount of computation per se is going to be a candidate for causing 
anything via first-order properties if it is defined functionally.  Ergo, 
no argument is necessary.  OTOH, if you are going to say that computational 
properties as such can be thought of as potentially first-order properties, 
one will wonder why we would want to call them "computational processes" in 
the first place.  In the latter scenario one might as well admit that one 
is not in disagreement with Searle about the issue of first-order 
properties causing, say, semantics or consciousness.  One may, if honest, 
simply want to say that Searle's notion of computation is too simplistic.  
Searle might respond that the notion of computation, for some, is so 
ill-defined that it applies to every first-order property under the sun.  
In that case, computationalism doesn't name an independently motivated 
research project focussed on the study of the mind.


Joe writes:

> > the so-called 'Systems Reply' does not even attempt a response to
> > Searle's point: there is no subjective experience of understanding
> > Chinese in the CR.


Neil writes:

> The Systems Reply is responding only to Searle's claim that he has
> disproved that there could be intentionality.


Searle distinguishes first- and second-order properties, so....  But if you 
don't want to distinguish them, then see above.


> The AI folk do recognize
> that a positive claim of achieving intentionality will require
> experimental demonstration, and they acknowledge that they have not
> achieved that.


The AI thesis (strong or weak, as a form of philosophy of mind, mind you) 
would be unfalsifiable given eliminativism, and would seem to allow for 
false positives via the Turing test, as in Searle's CR.  Ergo, AI in the 
S/H form (S/H = software running on hardware) is absurd as good philosophy 
of mind; but AI in some non-S/H form is not ruled out by Searle.  Now, 
quick: are brains S/H or non-S/H?


Cheers,
Budd

