[Wittrs] Re: Dennett's paradigm shiftiness--Reply to Stuart

  • From: "gabuddabout" <gabuddabout@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Wed, 03 Mar 2010 21:46:05 -0000

Stuart writes:

"My contention is that you habitually miss the points I am making."

All I can do is try to respond intelligently by invoking what I know to be 
common knowledge of functionalism as opposed to type-type identity theory.

So far, the points you have made include the whopper that Searle holds a type 
of dualism.

Here is your argument:

1.  Computers are physical.

2.  Something physical must cause/realize semantics/consciousness.

3.  Functionalism is a form of physicalism.

4.  Functionalism is about physical processes.

5.  Searle argues that functionalism operates at too abstract a level to be 
counted among the possible physical candidates (like brains) which may cause 
consciousness.

Ergo 6.  Searle must presuppose a form of dualism, because he supposes that 
brains cause consciousness differently than functional systems do, and 
functional systems are the only physicalist game in town these days.

Ergo 7?  Perhaps Searle doesn't understand functionalism?


Functionalism as a form of physicalism need not be denied by Searle: the 
inputs, the outputs, and the electricity are all physically realized in order 
to run computer programs.

So when Searle argues against strong AI, he is not denying that computers 
need electricity to run programs.  He is commenting on the thesis that 
knowing which programs might pass a TT (Turing Test) settles anything in the 
philosophy of mind, for he shows a case where a functional system passes a TT 
without there being any semantics.  It is here that I recommend we pursue his 
target article very closely.

In short, we can be fooled by functionalism up to a point.  But the thought 
experiment (the Chinese Room, CR) allows us to see that functional systems 
are also understood as systems of second-order properties.  And second-order 
properties, functional properties included, don't cause anything.  If one 
argues that they do, then one is either getting functionalism wrong or simply 
focussing on the heat and noise generated by the hardware while it runs the 
software.

So a dilemma for Stuart:

Either functionalism is about second-order properties or it is not.

If it is, then functional properties cause nothing, and functionalism cannot 
supply the causal story for consciousness.

If it is not, then he doesn't get functionalism right and simply conflates 
functionalism with type-type physicalism.

Today, the holy grail is weak AI, and THAT hasn't been solved either, not 
that this counts as a critique of its very possibility.

It is Penrose who will deny even the possibility of weak AI, while Searle 
does not.
And Searle actually believes that it is possible to create artificial 
intelligence of the conscious sort since we know that brains can do it and they 
are machines.

Incredibly, Stuart has to argue that Searle doesn't know that his critique of 
strong AI (computational functionalism) amounts to a form of dualism.

But Stuart's mistake is to misconstrue the very heart of what computational 
functionalism is.

To the extent that he equates such with type-type physicalism, he gets it wrong.

If he only knew what he was talking about, then he would cease conflating 
functionalism with type-type physicalism.

Indeed, if he knows he is conflating functionalism with type-type 
physicalism, he also knows that he is in effect not able to argue for 
Searle's being a dualist.  This is because, while Searle may (according to 
Stuart) be mistaken about functionalism (he's not; Stuart is, but let that 
pass for now), being mistaken about functionalism doesn't necessarily amount 
to a form of dualism.

That would be so if functionalism were the only physicalist game in town.

Strong AI marked a fun research project that could be carried out without 
needing to do human brain research.

Stuart likes to mention PP because it is more brain-like.

Well, let him do this and share Searle's view that any physical system that 
causes consciousness has to get over a certain causal threshold as a system.

What remains is whether anybody is going to buy either that Searle or I don't 
know what functionalism is, or whether Stuart is still talking about 
functionalism when describing PP (parallel processing).

Again: either PP is functionalism or it is not.

All PP can be done serially on a UTM (universal Turing machine).
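The step from PP to a UTM rests on a standard computational fact: whatever a 
parallel network computes can be reproduced, state for state, by a serial 
machine that updates one unit at a time from the old state vector.  A minimal 
sketch (the toy "network" and the function names are my own, purely 
illustrative):

```python
def parallel_step(units, weights):
    """Compute every unit's next state 'simultaneously': read all old
    states, then produce all new states at once (the PP picture)."""
    return [sum(w * s for w, s in zip(row, units)) for row in weights]

def serial_step(units, weights):
    """The same update computed one unit at a time on a serial machine.
    Because each unit reads only from the OLD state vector, the result
    is identical to the parallel version."""
    new = []
    for row in weights:
        acc = 0.0
        for w, s in zip(row, units):
            acc += w * s  # accumulate this unit's input serially
        new.append(acc)
    return new

# A toy three-unit network: both schedules yield the same next state.
state = [1.0, 0.5, -0.25]
W = [[0.2, 0.1, 0.0],
     [0.0, 0.3, 0.1],
     [0.5, 0.0, 0.2]]

assert parallel_step(state, W) == serial_step(state, W)
```

The point of the sketch is only the equivalence of the two schedules; 
nothing here bears on whether either schedule produces semantics.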

The CR is UTM equivalent.

It shows the possibility of the TT issuing false positives.

Ergo, functionalism is not a sufficient test for the mental.

Unless one is an eliminativist and denies the second premise (that minds have 
semantic contents) a la Dennett.

But that's just nuts.  Just pinch yourself!

Not that I've necessarily proved anything.  I just laid out some options in 
the form of whether Searle or Stuart is going to teach better about the 
subject of computational functionalism.

Of course, once one changes the subject via the systems reply, one has gone off 
the reservation of just what Searle means by strong AI.  He admits to focussing 
only on strong AI in the target article.

Ergo, anybody who takes pains to focus on something other than his target 
article (and the premises of the CRA are best understood when one understands 
the target article, not to say motivated by it...) just might be holding 
Searle's position while attempting to critique it.

And Searle _must_ be a dualist if not a functionalist?  That depends, for 
functionalism may itself be described in dualist terms (Stuart defines 
functionalism as type-type identity or he doesn't; let him figure it out and 
write a nice paper):

Is the computer doing things brutishly, or do we need to program it?

I'll end with an old canard:

"On a clear disc one can seek forever!"

