[Wittrs] Re: formal arguments/thought experiments/1st and 2nd order props

  • From: "gabuddabout" <gabuddabout@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Thu, 15 Apr 2010 22:57:49 -0000

Gordon writes:

> > Because A1) programs contain nothing but symbols and rules for manipulating 
> > them based on their shapes, and because A2) conscious minds understand the 
> > meanings of symbols, and because A3) no agent can understand what a symbol 
> > means based on only its shape, programs cannot cause conscious minds.
> >
> > -gts


Stuart responds:

>"Changes nothing. However it does reveal some further problems with the CRA:

1) "Programs", in the way AI uses the term, means processes running on
computers, and so the symbols that are being "manipulated" are part of physical
events and thus as causal as what is going on in brains. This points up
the idiosyncratic reading Searle often gives his premise about programs being
syntax, to wit, that they are abstract and so without causal efficacy. (Once you
talk about them as physical events they are no more devoid of causal efficacy
than the events going on in brains.)"


And I diagnose:

Yes, once you talk of functional properties as first-order properties, you 
entertain Searle's biological naturalism! (Which is open to weak AI as well as 
to nonbrain systems possibly causing consciousness by getting over some as-yet 
unidentified causal hump, something second-order properties in principle can't do, 
programs qua programs being entirely second-order regardless of the first-order 
properties of the hardware.)  And all the while you forget that programs were 
originally thought by some to offer explanations of first-order properties (brains) 
as if they were functional ones (computations).

There is a distinction to be made between brains and computers that you 
consistently fuzz up, with the upshot that you get computers and brains 
wrong at the same time.

Functional properties in the form of computations on the syntactical structure 
of thoughts supervene on the physical all right, so you are right.  But Searle is 
not considering a computational view of what mechanisms the brain uses for 
manipulating thoughts; instead, as far as the CRA goes, he is considering 
whether it could make any sense to say that semantics could be derived from 
(caused by) symbol manipulation/programs in virtue of their being programs 
(second-order properties).

The answer is that functional properties (programs qua programs) are 
noncandidates given what they are.  And I could quote an expert on this.  Let 
me quote Jaron Lanier from his new book, _You Are Not a Gadget: A Manifesto_:

"The antihuman approach to computation [my "grok" here on "antihuman approach 
to computation here is that some think computations are semantically autonomous 
when that makes no sense--Budd] is one of the most baseless ideas in human 
history.  A computer isn't even there unless a person experiences it.  There 
will be a warm mass of patterned silicon with electricity coarsing through it 
[hurray for those first-order properties I went on about six freaking years ago 
for Stuart's benefit for a longish nine months to no apparent effect--Budd], 
but the bits don't mean a anything without a cultured person to interpret them. 
 [new parag]  This is not solipsism.  You can believe that your mind makes up 
the world, but a bullet will still kill you.  A virtual bullet, however, 
doesn't even exist unless there is a person to recognize it as a representation 
of a bullet.  Guns are real in a way that computers are not" (26-27).

The diagnosis is that what you have in mind is correct even from Searle's point 
of view.  What you don't understand is exactly why Searle thought programs to 
be formal to begin with.

Again and again, you get to argue that there is a fuzzy boundary between first 
and second-order properties.  Searle draws a sharper distinction.

Indeed, the very idea of software seems unavailable to you if you persist in 
confusing first-order physical causation with some sort of "computational 
help," as if this help is spelled out in first-order terms as well.

You lose the very idea of computation when conflating first- with second-order 
properties, to the point that your "philosophy" can't (even in principle) 
explain how either brains or computers work--at least not how brains might 
cause consciousness.  After all, even Searle agrees that weak AI might be 
possible in simulating the brain a goodly deal, pace Penrose of course, without 
one thereby having a philosophy of mind, pace Dennett.

If you want to be an eliminativist, then you might take Neil's line that Searle 
ought to have been more honest in denying the possibility of weak AI.  But 
Searle doesn't deny such.  Here's where Neil is also not drawing a distinction 
that Searle does, namely, between a noncomputational theory of perception and a 
computational one.

What if one were to show that both approaches were not competing vis-à-vis 
behavior?  Neil suggested that Searle's position is supposedly in competition 
with (what Neil calls, anyhow) weak AI.  Well, Searle is not a behaviorist, if 
that's what he means.

Then one might be satisfied with weak AI as a goodly philosophy of mind.

But Searle will insist that simulation is not the same as duplication.

Weak AI is not really motivated to discover how brains cause consciousness or 
how there may in fact be iconic representations a la Fodor.  Cf.  Fodor: "The 
Given Returns with a Vengeance"  (I might have gotten the title slightly wrong).

Searle does break with Wittgensteinian tradition when offering theories of 
Intentionality and thought experiments to the effect that what may seemingly 
pass as good science about x is perhaps not really about x after all.

For Searle, the research program of biological naturalism is not in conflict 
with weak AI or with AI in general.  He is just misunderstood as though it were.

To explain why it is not the case, we have to review carefully what he did in 
fact say.  First shot--he makes principled distinctions between those things 
that are candidate minds and those that are not, and also between first- and 
second-order properties.

It's okay that Stuart would like to fuzz up the distinctions.  And it may be 
because Searle is right in thinking that computation is in the mind of the 
beholder, given that it is not the same thing as (nor can it cause) any 
first-order properties of the physical world, including brains.

I think Stuart runs with that and is suggesting that, even if computations are 
in the mind of the beholder, they are causally supervenient on the physical.  I 
think Searle agrees with him!


Cheers,
Budd




