[Wittrs] Re: Bogus Claim 5: Searle Contradicts Himself

  • From: "gabuddabout" <gabuddabout@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Wed, 21 Apr 2010 23:15:17 -0000

Stuart writes:

"And, of course, since we DON'T KNOW WHAT BRAINS DO to achieve consciousness, 
we don't know that computers qua computers are actually missing whatever 
capabilities brains have. That is, we don't know that it isn't computational 
processes running on a platform with sufficient capacity to sustain it after 
all that's needed!"

Hi Stuart,

Real quick now:

What we know about brains:

By abductive inference:  Brains cause consciousness via 1st order physical 
properties of brains.

Computers are defined as systems of both 1st and 2nd order properties.  There's 
the program level of description in 2nd order properties and the electricity 
(1st order property) making the programs run, regardless of the type of hardware, 
provided the hardware can sustain the program.

Computations are 2nd order properties.

Searle denies that any amount of 2nd order properties (and any combination of 
1st and 2nd order properties where the 2nd order properties are supposed to be 
causal and not just functionally defined) can get over the causal hump 
necessary for causing consciousness.

If you don't like the distinction, your position is still no different from 
Searle's claim that 1st order physical properties cause consciousness.  Unless 
it is another view in disguise, as we may or may not find out below.

If your position is really motivated by Dennett, then you might have the same 
problem as Wittgenstein and Hacker and Dennett.  To wit:  we have no possible 
criteriological account of just what is necessary for brains to cause 
consciousness, ever.  So, the quietist approach is to allow that weak AI may be 
as good as it gets for philosophy of mind.

Pragmatically speaking, Searle is not denying that it is useful to investigate 
weak AI--we want good robots.

Upshot:  what is denied to Searle is the empirical project of looking for 
correlates of consciousness.  That amounts to denying a research project on a 
priori/criteriological grounds that rule out Searle's biological naturalism, 
given the position sometimes referred to as conceptual dualism.

Searle is not a property dualist:  mental properties are physical properties and 
we know already going in that brains get it done.

The reason Searle argues against the notion of computation doing any 1st 
order property lifting is that computation doesn't name a natural kind.

So, any research project about how a system defined in computational 
terms can cause consciousness is based on an equivocation/conflation between 1st 
and 2nd order properties.

To say that the brain causes consciousness via information processing trades on 
the same equivocation.

Upshot:  Searle's position is not contradicted by anybody who will conflate 
computation and 1st order physical properties.

It's just that Searle doesn't make this mistake.

OTOH, if one is okay with what Searle calls a mistake here, one can make a 
prima facie case for Searle arguing that some systems (brains) cause 
consciousness via 1st order properties while other systems (remember to 
conflate now) don't/can't cause consciousness given SIMILAR or IDENTICAL 1st 
order properties.

To the uninitiated, such a prima facie case may seem makeable, thus showing up 
a contradiction in Searle.

The best Stuart can do is forget about the distinction Searle makes between S/H 
systems and non-S/H systems.

But if one does, there is a good chance that the position one is selling is 
Searle's anyway.

Unless they really mean (strong AI) that computations in concert with hardware 
(read BOTH 1st and 2nd order properties in concert) may, ex hypothesi, cause 
consciousness.

But note that the very idea of "causing consciousness" is something Dennett 
doesn't even bother with from a scientific point of view.  His view is that it 
makes no sense (similarly with Hacker and other conceptual dualists) to think 
we can scientifically find NCCs and then mechanisms for causing ontological 
subjectivity.

Ontological subjectivity simply isn't in the cards from Dennett's point of 
view--that's why he's an eliminativist.

For Searle, ontological subjectivity is a fact (just pinch yourself).  Ergo, 
there must be a story to tell about how it is caused.  And brains are where 
it's at, not computers.  Or one can conflate and argue for Searle's physicalist 
position while insisting one has a problem with it via a misrepresentation of 
Searle--Stuart's longish story.

Again, with Searle you have two noncompeting research programs.

For Dennett, they may look to be competing because one is eliminativist and the 
other is not.

Now Searle wonders just how someone as intelligent as Dennett could have gotten 
himself into such a pickle as to deny the second premise, of all premises.

That's the premise Dennett denies, ironically (that minds have semantic 
contents).

But one wouldn't know this from reading Stuart.  Searle found out by reading 
Dennett's _Consciousness Explained_ (or dissolved).

Just sayin'.

But to be nice, the real rift has to do with Dennett buying into Wittgenstein's 
criteriological account of mind while Searle actually shows that such a view 
may amount to a machine passing a Turing test without the relevant semantics.

The above will seem impossible to show for Dennettians because, after all, 
there is only behavior and if it walks and quacks like a duck....


Cheers,
Budd
