[Wittrs] Re: Further Thoughts on Dennett, Searle and the Conundrum of Dualism

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Sun, 04 Apr 2010 17:23:50 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, Gordon Swobe <wittrsamr@...> wrote:

> --- On Tue, 3/30/10, gabuddabout <wittrsamr@...> wrote:

> > That's why it is difficult to know exactly what Dennett's
> > "intentional stance" amounts to.  Is it really
> > intentionality if it operates at a level below the intentional
> > level, where we can mean things when we say them?
>
> In his essay "Analytic Philosophy and Mental Phenomena", Searle points out 
> the incoherency of Dennett's theory about the intentional stance:
>

> "Also, when I adopt the intentional stance is that supposed to be intrinsic 
> or not? Do I really have an intentional stance or is just a case of adopting 
> an intentional stance to my intentional stance? If the former, we are left 
> with intrinsic intentionality; if the latter, it looks like we are in a 
> vicious regress." - Searle
>

> My position with Stuart is that Dennett's functionalism entails a form of 
> eliminativism, and that eliminativism is motivated by fear of (a false 
> understanding of) dualism.
>

Yes, Searle makes arguments, whether we agree with them or not. My position 
with Gordon is that he doesn't, but instead relies on assertions and 
imputations of motive, which aren't relevant when considering the merits of 
an argument.


> Certainly Dennett's views qualify as eliminativist with respect to 
> qualia/qualities of experience -- I think even Dennett would admit as much -- 
> and they seem to amount to eliminativism (or obscurantism) with respect to 
> intentionality also.
>

I believe he has agreed that he is "eliminativist" where things like "qualia" 
are concerned. But what's your point? That he IS eliminativist in some sense? 
Okay, so what? The issue is what's wrong with that position, aside from the 
fact that you don't cotton to it, Gordon.


> In that same essay, Searle quotes Dennett admitting that people feel pains, 
> while Dennett also argues that computers cannot experience pain because pains 
> (and other qualia) do not technically exist. Dennett then has the audacity to 
> ask his readers to believe that he has not contradicted himself.
>
> -gts
>

I don't know offhand the text you are alluding to, so it's a little hard to 
comment on whether he has really "contradicted himself" as you allege (we 
can't judge that from your paraphrases alone). In any case, it would depend on 
what we mean by "pain," "feel," and "consciousness," wouldn't it?

For instance, if consciousness is just an array of certain system properties, 
then where do we draw the line? Was my cat (rest in peace!) conscious and, if 
so, was she conscious as I am, or did she occupy a different point on the 
continuum on which humans, cats, lizards, frogs, fish, snails, spiders and so 
forth are situated? And if she occupied only a different point on that common 
continuum, might we not want to say that some entities on the continuum of 
consciousness have more features than others?

Thus it would make sense to say of a computer that had only some of the 
features we generally associate with being conscious that it was conscious in 
some ways but not in others.

Presumably, if one could build a conscious machine, one would be building in 
many of the system properties we think of as part and parcel of our 
consciousness. It might even be possible (perhaps even likely) that we could 
give such a machine a way of feeling pain. If it's all about the system 
properties and the "machinery" for producing them, then why not a 
computational, machine-based system as well as a naturally occurring organic one?

But to see this you have to break with the idea that consciousness must be a 
bottom-line irreducible thing (either an ultimate stuff of some sort or an 
ultimate property associated with certain physical events but not others, as 
some have preferred to put it).

I don't know the text of Dennett's you are alluding to (nor have you even given 
us Searle's direct text making that allusion), but on the face of it, it seems 
perfectly reasonable to agree that machines don't feel pain as of now, and also 
to hold that a machine could be built with enough of the features we associate 
with being conscious to call it conscious without expecting that it must also 
feel pain to qualify as conscious.

After all, even if we equipped a computationally operating machine with sensory 
apparatuses and gave it the ability to perceive and recognize its perceptions, 
and to do things with those perceptions such that we might fairly say that it 
knows what it is "seeing", there is no reason to think its "mental" life would 
have to be precisely like ours. What would be important is that it had the same 
functionalities.

The fact that it is built of different stuff and perhaps operates in different 
ways, etc., would only make its consciousness different from the kind of mental 
lives we have. It wouldn't, thereby, keep it from being conscious UNLESS you 
start by restricting consciousness to the precise things brains do in the way 
brains do them. But that would be odd, since even Searle acknowledges that we 
cannot say that only brains could be a source of consciousness. He agrees that 
machines could, in principle, be built to be conscious if they can be found to 
do what brains do (his issue is that he thinks computers don't do what brains 
do and that that closes the door on them). He further agrees that conscious 
aliens from outer space could come to earth even if we discovered that there 
was nothing inside their skulls (or whatever passed for that) but a nondescript 
"green slime."

Everything, Gordon, finally depends on breaking the presumptive intuition that 
consciousness is an irreducible something in the universe and seeing it as a 
system property instead.

SWM
