[Wittrs] Re: Are We Actually Going to Discuss Searle's Reasoning Now?

  • From: "gabuddabout" <gabuddabout@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Tue, 20 Apr 2010 22:36:59 -0000

Stuart writes:

"His [Searle's--Budd] argument purports to make a case that they 
[computers--Budd] can't [cause consciousness/semantics--Budd] because of what 
they are,
i.e., they are abstract in nature, just syntax as he likes to say. But computers
are no more abstract, no more syntactical, than brains as far as we know."


I guess this is a case of Stuart dumbing himself down.  Doesn't he know that 
brains contain absolutely no logic gates while computers do?  It is precisely 
that difference Searle has in mind for the first premise, which contains an 
implicit noncausality claim:  programs are formal.  For years Stuart has 
thought that the noncausality claim must come from the third premise.  
That would be a mistake.  It would also be part of a long, stupid shell game.

Secondly, Searle isn't arguing against weak AI.

The strong AI claim Searle takes issue with is the claim that getting the 
right behavior via programs gives us the right explanation of this or that 
brain modality--the program itself serving AS the explanation.

Searle argues that at best we just get a robot without semantics or 
consciousness.  And that is hard enough--the modern holy grail, even.  I'll bet 
some conflate that with a good philosophy of mind.  And some just might be 
Wittgensteinian criteriologists who have dumbed down what it ought to mean to 
have a good philosophy of mind, as the good Fodor will argue--not with a 
knockdown argument, but with enough plausibility to make a convincing 
prima facie case against aprioristic attempts to define a field before 
honestly investigating it.

It is then argued that the something extra Searle is after in a good 
philosophy of mind is such as can't be had via behavior alone.

But that's all that is there.

But that's not all there is there.

Once one accepts Wittgenstein's criteriological approach to mental concepts, 
one will view Searle's insistence on the irreducible nature of semantics and 
consciousness as a (potential) thing of the past.

Ironically, much thought has gone into such a position!

Anyway, here's how Stuart's argument is supposed to work:

1.  Computers are physical.

2.  Searle argues against strong AI.

Ergo 3.  Searle can't be a physicalist.


The reason the above is unsound is that computers are systems defined 
essentially (in part, of course) in second-order functional terms.

Searle is merely arguing that systems so defined are not candidates for causing 
mind.

But notice that the myth of the computer has dumbed down our intentionalist 
notions to the point that even the mere behavior of a thermostat has a degree 
of intentionality, merely because it can be described in the form of inputs 
and outputs.  Such descriptions can apply to anything under the sun--but we 
don't use this sort of description when we have bona fide explanations, UNLESS 
THE SYSTEM UNDER DESCRIPTION JUST IS THAT SORT OF THING, I.E., A COMPUTER.

So there is a conflation of first-order descriptions of physical systems per 
se with descriptions of systems that, in order to work, are not really 
composed entirely of first-order properties.

The clue is the logic gates.

So my conclusion is that Stuart doesn't distinguish S/H (software/hardware) 
systems from non-S/H systems.  Indeed, on one misreading of computers as 
non-S/H systems, Stuart is in perfect agreement with Searle.

Recently there has been talk of some sort of distinction between system 
properties (Searle) and process properties (Dennett).  Was this a joke or what?

Anyway, it is funny how one can argue a point through ignorance as well as 
Stuart does.  It is an ignoratio elenchi.  Or he's been lying while appearing 
as dumb as he sounds.  Six years ago he had the same choice.


Cheers,
Budd



=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/
