[Wittrs] Who beat Kasparov?

  • From: Gordon Swobe <gts_2000@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Tue, 16 Mar 2010 14:15:48 -0700 (PDT)

--- On Tue, 3/16/10, SWM <wittrsamr@xxxxxxxxxxxxx> wrote:

>> According to his own theory of mind, Dennett has no
>> conscious intentionality of the kind that Searle affirms,
>> i.e., he has no mental contents. 
> That is absolutely not so. I have seen some react to his
> claims by supposing that but it's just not true. He
> recognizes that we have experience and sensation and feeling
> and thought and everything else everyone else who thinks
> about it recognizes. His dispute is with those who then
> elaborate from this to a claim that there are some special
> "mental properties" that ARE these things. 

You say it's not so, but then you confirm my suspicions: Dennett dismisses the 
reality of mental phenomena. We can describe his philosophy as eliminativist. 
From an eliminativist's perspective, everyone looks like a dualist.

As you may recall, in 1997 IBM pitted a chess computer named Deep Blue against 
the world champion Garry Kasparov. Kasparov lost. Did you know Dennett believes 
Deep Blue actually beat Kasparov at chess -- not Deep Blue's designers at IBM? 

I can understand how one might use such an anthropomorphism in casual 
conversation, but Dennett states it as a philosophical truth. He sees no 
important difference between the mind of a chess computer and the mind of a 
conscious human. He assigns personhood to a computer that no one credited with 
even weak AI, much less strong AI. Ridiculous, I think.

In any case...

> However, as I have already pointed out, what you are
> referring to as the "third axiom" (which Searle used to call
> a premise in earlier iterations) is, insofar as it is taken
> as an axiom, beyond argument because that's what axioms are.
> You either accept it or you don't. So what kind of argument
> do you think should be marshalled against it?

If Dennett believes that "more of the same" (more syntactic operations over 
formal elements with more CPUs, or whatever) will lead to the man or computer 
obtaining semantics from syntax, then he should show us how that magic will 
happen. He should give us a logical argument or another thought experiment -- 
something with teeth -- to counter Searle's CRA.

Instead he just says that more of the same could give rise to genuine 
understanding (a hand-waving argument), and calls Searle a dualist for 



Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/
