[C] [Wittrs] Digest Number 181

  • From: WittrsAMR@xxxxxxxxxxxxxxx
  • To: WittrsAMR@xxxxxxxxxxxxxxx
  • Date: 22 Mar 2010 19:33:21 -0000

Title: WittrsAMR

Messages In This Digest (25 Messages)

1.1.
Dualism Cooties: Ontologically Basic Ambiguity: Causality From: Joseph Polanik
1.2.
Re: Dualism Cooties: Ontologically Basic Ambiguity: Causality From: SWM
2.1.
Dualism Cooties: Ontologically Basic Ambiguity: The Problem From: Joseph Polanik
2.2.
Re: Dualism Cooties: Ontologically Basic Ambiguity: The Problem From: SWM
3.1.
Dualism Cooties: Ontologically Basic Ambiguity: Basicality From: Joseph Polanik
3.2.
Re: Dualism Cooties: Ontologically Basic Ambiguity: Basicality From: SWM
4.1.
Dualism Cooties: Ontologically Basic Ambiguity: Sourcing From: Joseph Polanik
4.2.
Re: Dualism Cooties: Ontologically Basic Ambiguity: Sourcing From: SWM
5a.
Dualism Cooties: Postulates of EPD (Essential Property Dualism) From: Joseph Polanik
5b.
Re: Dualism Cooties: Postulates of EPD (Essential Property Dualism) From: Gordon Swobe
6.1.
Re: Ontologically Basic Ambiguity: Mode of Existence From: Gordon Swobe
6.2.
Re: Ontologically Basic Ambiguity: Mode of Existence From: SWM
6.3.
Re: Ontologically Basic Ambiguity: Mode of Existence From: Gordon Swobe
6.4.
Re: Ontologically Basic Ambiguity: Mode of Existence From: SWM
6.5.
Re: Ontologically Basic Ambiguity: Mode of Existence From: Gordon Swobe
6.6.
Re: Ontologically Basic Ambiguity: Mode of Existence From: Gordon Swobe
6.7.
Re: Ontologically Basic Ambiguity: Mode of Existence From: SWM
6.8.
Re: Ontologically Basic Ambiguity: Mode of Existence From: SWM
6.9.
Re: Ontologically Basic Ambiguity: Mode of Existence From: Gordon Swobe
6.10.
Re: Ontologically Basic Ambiguity: Mode of Existence From: SWM
7.1.
Searle's CRA vs his Biological Naturalism From: Gordon Swobe
7.2.
Re: Searle's CRA vs his Biological Naturalism From: SWM
8.
Human autobiography From: void
9a.
Re: (no subject) Gibberish and NonGibberish From: gabuddabout
10.1.
Re: Who lost to Deep Blue? From: gabuddabout

Messages

1.1.

Dualism Cooties: Ontologically Basic Ambiguity: Causality

Posted by: "Joseph Polanik" wittrsamr@xxxxxxxxxxxxx

Mon Mar 22, 2010 2:14 am (PDT)



SWM wrote:

>Joseph Polanik wrote:

>>SWM wrote:

>>>And, indeed, when it comes to claims of causality, even he agrees
>>>that one can causally reduce the features of consciousness to
>>>whatever it is brains do. However, he stumbles when he makes a
>>>distinction by confusing causal reduction (which possibility he
>>>affirms) with what he calls ontology when, in fact, the very issue at
>>>hand, causal reduction, IS one of ontological reduction.

this claim asserts an identity between 'causal reduction' and
'ontological reduction'; and, that's questionable. Searle denies it. my
position is that, with respect to subjective experience or
consciousness, a causal explanation does not even count as a causal
reduction let alone an ontological reduction.

what makes you assume that a causal reduction is necessarily an
ontological reduction?

Joe

--

Nothing Unreal is Self-Aware

@^@~~~~~~~~~~~~~~~~~~~~~~~~~~@^@
http://what-am-i.net
@^@~~~~~~~~~~~~~~~~~~~~~~~~~~@^@

==========================================

Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/

1.2.

Re: Dualism Cooties: Ontologically Basic Ambiguity: Causality

Posted by: "SWM" wittrsamr@xxxxxxxxxxxxx

Mon Mar 22, 2010 5:58 am (PDT)



--- In Wittrs@yahoogroups.com, Joseph Polanik <jPolanik@...> wrote:

> SWM wrote:
>
> >Joseph Polanik wrote:
>

> >>SWM wrote:
>
> >>>And, indeed, when it comes to claims of causality, even he agrees
> >>>that one can causally reduce the features of consciousness to
> >>>whatever it is brains do. However, he stumbles when he makes a
> >>>distinction by confusing causal reduction (which possibility he
> >>>affirms) with what he calls ontology when, in fact, the very issue at
> >>>hand, causal reduction, IS one of ontological reduction.
>

> this claim asserts an identity between 'causal reduction' and
> 'ontological reduction'; and, that's questionable. Searle denies it.

Is it questionable that there is an ontological causal reduction because Searle denies it, or is it questionable that consciousness reduces to brains and Searle denies it?

> my
> position is that, with respect to subjective experience or
> consciousness, a causal explanation does not even count as a causal
> reduction let alone an ontological reduction.
>

I deny it. That has about the same implication, of course, as anything anyone else denies, including Searle.

> what makes you assume that a causal reduction is necessarily an
> ontological reduction?
>
> Joe
>

It involves reducing one thing to another thing (saying X is just Y); it's ontological when you reach a point "below" which you can find no more "things" to reduce it to, i.e., when you're scraping the explanatory bottom and you're down to whatever "things" exist without anywhere further to reduce.

SWM


2.1.

Dualism Cooties: Ontologically Basic Ambiguity: The Problem

Posted by: "Joseph Polanik" wittrsamr@xxxxxxxxxxxxx

Mon Mar 22, 2010 2:42 am (PDT)



SWM wrote:

>The fact that there is subjectness and objects, the fact that there is
>awareness and that of which we are aware, does not mean that we have
>two ontological basics.

I agree that the fact that, besides physical objects such as mountains
and mole hills, there is the experiencing I and its experiences does not
necessarily mean that there are two ontologically basic *substances*.

would you agree that Searle only recognizes one ontologically basic
substance? if so, then the issue at hand can be stated very simply,
thus:

[1] you classify Searle as a Cartesian dualist despite acknowledging
that he recognizes only one ontologically basic substance.

[2] a Cartesian dualist is an interactive substance dualist; meaning,
that the human individual is composed of a mortal physical body and an
immortal non-physical soul which interact.

[3] no one on any of the mailing lists on which you have peddled this
nonsense has ever been able to understand how the CRA exposes the
presumption of, or demonstrates the conclusion that, there is an
immortal non-physical soul within the human individual.

do you even understand the problem?

Joe



2.2.

Re: Dualism Cooties: Ontologically Basic Ambiguity: The Problem

Posted by: "SWM" wittrsamr@xxxxxxxxxxxxx

Mon Mar 22, 2010 6:16 am (PDT)



--- In Wittrs@yahoogroups.com, Joseph Polanik <jPolanik@...> wrote:

> SWM wrote:
>
> >The fact that there is subjectness and objects, the fact that there is
> >awareness and that of which we are aware, does not mean that we have
> >two ontological basics.
>
> I agree that the fact that, besides physical objects such as mountains
> and mole hills, there is the experiencing I and its experiences does not
> necessarily mean that there are two ontologically basic *substances*.
>
> would you agree that Searle only recognizes one ontologically basic
> substance?

I would say that Searle doesn't address this because he doesn't speak in these terms; however, I would agree that Searle is at least a de facto physicalist in that he thinks the world is largely explainable in physical terms. He even thinks we can say that brains cause minds. But then he gets into this idea of minds representing a first-person ontology rather than the third-person ontology of the observable physical world. Here I think he stumbles into confusion, for all the reasons I've previously given.

> if so, then the issue at hand can be stated very simply,
> thus:
>

> [1] you classify Searle as a Cartesian dualist despite acknowledging
> that he recognizes only one ontologically basic substance.
>

I classify him as someone who is ontologically a dualist despite his denials. And I consider being an ontological dualist to involve thinking the same way about minds as classical substance dualists of whom Descartes is the paradigm in the Western philosophical tradition, making Searle a dualist in the Cartesian way -- despite his denials. And I say this because of the implications of his Chinese Room Argument (the CRA), not because of his own affirmative claims, of course.

> [2] a Cartesian dualist is an interactive substance dualist; meaning,
> that the human individual is composed of a mortal physical body and an
> immortal non-physical soul which interact.
>

Descartes was an ontological dualist (what you call, with traditional philosophy, a "substance dualist") and, indeed, was the first thinker in Western philosophical tradition to explicitly espouse and defend this position. As such this kind of dualism is generally ascribed to him though it has antecedents that precede him and it developed separately in other cultural traditions.

For the record, AGAIN, I do not suggest that in being ontologically dualist, Searle is an avowed follower of Descartes or that he subscribes to the full panoply of Descartes' various doctrines. I only say that he shares with Descartes a certain understanding of mind.

> [3] no one on any of the mailing lists on which you have peddled this
> nonsense has ever been able to understand how the CRA exposes the
> presumption of or demonstrate a conclusion that there is an immortal
> non-physical soul within the human individual.
>

Even if no one in the world understood it, that would be no evidence it is wrong. Nevertheless, I'm certain there are some who have grasped it, either in whole or in part. However, when I went back to find that text from Dennett to meet your earlier challenge, I was gratified to see that, in fact, he is making the very points I've made, albeit in different language and with less attention to the logic of the claims. But in the end, what he says amounts to the same as what I've said on the matter, namely that the CR is underspecced and dualism is implicit in the CRA, because it takes someone with that way of thinking about mind to agree with the argument's conclusions.

> do you even understand the problem?
>
> Joe
>
>

Well, if you don't get my argument, you obviously don't understand it. And if my argument is right, it addresses "the problem"; in which case, if you don't understand it, you wouldn't be much of a judge of whether I understand it.

I don't think your failure to understand, therefore, provides much in the way of evidence for its being wrong or even incomprehensible.

SWM


3.1.

Dualism Cooties: Ontologically Basic Ambiguity: Basicality

Posted by: "Joseph Polanik" wittrsamr@xxxxxxxxxxxxx

Mon Mar 22, 2010 2:49 am (PDT)



SWM wrote:

>Joseph Polanik wrote:

>>my claim is that, in the phrase 'ontological basics', 'basics' is
>>functioning syntactically as a noun; but, that this is misleading.
>>semantically, 'basics' is an adjective.

>"Basic" is an adjective. "A basic" is a noun.

in colloquial speech where a sentence doesn't always express a complete
thought, 'basic' sometimes appears in the position where a native
speaker of english expects to find a noun; meaning that, 'basic'
sometimes looks like it is functioning syntactically as a noun.

nevertheless, 'basic' is semantically an adjective. this is shown by the
following procedure.

>>when we ask 'ontologically basic what?' we can recover the implicit
>>noun subject of the phrase 'ontologically basic [noun here]'.

>Whatever underlies what we encounter in our experience and what we
>encounter in our experience is very broad.

how does that vague generality clarify the syntactic/semantic confusion
as to basicality?

in your use of 'ontological basic', are there any circumstances under
which one may not ask 'ontologically basic *what*?' to recover the true
noun subject of the phrase?

Joe



3.2.

Re: Dualism Cooties: Ontologically Basic Ambiguity: Basicality

Posted by: "SWM" wittrsamr@xxxxxxxxxxxxx

Mon Mar 22, 2010 6:27 am (PDT)



--- In Wittrs@yahoogroups.com, Joseph Polanik <jPolanik@...> wrote:

<snip>

> >"Basic" is an adjective. "A basic" is a noun.
>
> in colloquial speech where a sentence doesn't always express a complete
> thought, 'basic' sometimes appears in the position where a native
> speaker of english expects to find a noun; meaning that, 'basic'
> sometimes looks like it is functioning syntactically as a noun.
>
> nevertheless, 'basic' is semantically an adjective. this is shown by the
> following procedure.
>

> >>when we ask 'ontologically basic what?' we can recover the implicit
> >>noun subject of the phrase 'ontologically basic [noun here]'.
>
> >Whatever underlies what we encounter in our experience and what we
> >encounter in our experience is very broad.
>

> how does that vague generality clarify the syntactic/semantic confusion
> as to basicality?
>

We don't always know what we are referring to when we're referring to it. Think of the ordinary English usage of "substance". The guy who steps on something gooey and foul-smelling looks down at his shoe and says, "yikes, what's that substance on my shoe?" and then he checks to see if it's some kind of relatively inoffensive stuff or something else. Once he knows what it is, he no longer needs to call it a "substance" but can just refer to it by the term for it, as in "yikes, it's dog shit".

In the case of "a basic" I am saying that we get to a point where we have no names for things and, indeed, no way of even discerning what's there. Nevertheless we presume something is. So now we're stuck linguistically. If we speak of "substance" at this level, it's misleading because modern physics tells us the universe at its deepest levels isn't substance-like. That doesn't mean we can't speak about such things, even if only in an abstract (non-specific therefore non-concrete) way. In the past Western philosophers have used "substance" whereas today it's better, on my view, not to for the reason already given.

> in your use of 'ontological basic', are there any circumstances under
> which one may not ask 'ontologically basic *what*?' to recover the true
> noun subject of the phrase?
>
> Joe
>
>

Sure. What is "ontologically basic" is whatever it is that underlies the physics of this universe.

SWM


4.1.

Dualism Cooties: Ontologically Basic Ambiguity: Sourcing

Posted by: "Joseph Polanik" wittrsamr@xxxxxxxxxxxxx

Mon Mar 22, 2010 2:53 am (PDT)



SWM wrote:

>Whatever underlies what we encounter in our experience and what we
>encounter in our experience is very broad. So the question is whether
>some of what we encounter is traceable to one source and other things
>we encounter are traceable to another.

how many ontologically basic sources does Searle posit, in your opinion?

>Or whether everything can be explained in terms of the same source. Do
>we need to posit one or two sources for the full range of phenomena
>(including minds) that we encounter in the universe?

how many ontologically basic sources does Searle posit, in your opinion?

Joe



4.2.

Re: Dualism Cooties: Ontologically Basic Ambiguity: Sourcing

Posted by: "SWM" wittrsamr@xxxxxxxxxxxxx

Mon Mar 22, 2010 6:30 am (PDT)



--- In Wittrs@yahoogroups.com, Joseph Polanik <jPolanik@...> wrote:

> SWM wrote:
>
> >Whatever underlies what we encounter in our experience and what we
> >encounter in our experience is very broad. So the question is whether
> >some of what we encounter is traceable to one source and other things
> >we encounter are traceable to another.
>
> how many ontologically basic sources does Searle posit, in your opinion?
>

Searle doesn't speak in terms of ontological basics. However his concept of mind, as revealed in the CRA, implies that mind is ontologically distinct from whatever is ontologically basic to the physics of the universe. Therefore it implies at least one other ontological basic. Since Searle doesn't explicitly claim this however, or agree to it, and, since he seems to stray from it in his discussion of brains causing consciousness, he is in self-contradiction.

> >Or whether everything can be explained in terms of the same source. Do
> >we need to posit one or two sources for the full range of phenomena
> >(including minds) that we encounter in the universe?
>
> how many ontologically basic sources does Searle posit, in your opinion?
>
> Joe
>

Searle doesn't speak in terms of ontological basics. However his concept of mind, as revealed in the CRA, implies that mind is ontologically distinct from whatever is ontologically basic to the physics of the universe. Therefore it implies at least one other ontological basic. Since Searle doesn't explicitly claim this however, or agree to it, and, since he seems to stray from it in his discussion of brains causing consciousness, he is in self-contradiction.

SWM


5a.

Dualism Cooties: Postulates of EPD (Essential Property Dualism)

Posted by: "Joseph Polanik" wittrsamr@xxxxxxxxxxxxx

Mon Mar 22, 2010 4:15 am (PDT)



SWM wrote:

>jPolanik wrote:

>>[as to the classification of Searle], the fact that the brain can
>>cause both measurable effects and experienceable effects suggests that
>>Searle could be a property dualist.

>Property dualism may be rightly ascribed to his explicit positions if
>Walter and others are right about what it entails

I will let Walter speak for himself.

meanwhile, I will list what I consider to be the postulates of EPD
(Essential Property Dualism); and, I invite comment as to whether Searle
can be classified as an implicit/explicit Essential Property Dualist.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Postulates of EPD (Essential Property Dualism)

in the spacetime metric there are objects of mass-energy. [these objects
are called physical objects.]

physicalism is the proposition that properties are attributed to
physical objects to explain how physical objects cause effects
(phenomena). [these properties are called physical properties.]

there are two classes of effects, experienceable phenomena and
measurable phenomena. [the terms of this phenomenological dualism can go
by various names.]

the aim of physicalism within the philosophy of consciousness is to
explain *how* physical objects cause experienceable phenomena.

there is a set of physical properties of physical objects that cause
experienceable phenomena.

there is a set of physical properties of physical objects that cause
measurable phenomena.

these two property sets are not co-extensive.

all of the properties in each property set are properties of physical
objects. [ie. in traditional jargon, EPD is a substance monism.]

Joe



5b.

Re: Dualism Cooties: Postulates of EPD (Essential Property Dualism)

Posted by: "Gordon Swobe" wittrsamr@xxxxxxxxxxxxx

Mon Mar 22, 2010 5:23 am (PDT)



--- On Mon, 3/22/10, Joseph Polanik <wittrsamr@freelists.org> wrote:

> meanwhile, I will list what I consider to be the postulates
> of EPD (Essential Property Dualism); and, I invite comment...
...

> there is a set of physical properties of physical objects
> that cause experienceable phenomena.

Property dualism as I use the term involves NON-physical properties of matter that cause experienceable phenomena.

-gts


6.1.

Re: Ontologically Basic Ambiguity: Mode of Existence

Posted by: "Gordon Swobe" wittrsamr@xxxxxxxxxxxxx

Mon Mar 22, 2010 4:49 am (PDT)



--- On Sun, 3/21/10, SWM <wittrsamr@freelists.org> wrote:

> Since the CRA is Searle's argument and no one else's, and
> Searle defends it and obviously finds it convincing, it's
> pretty clear that Dennett is saying that Searle shares the
> Cartesian dualist conception of mind

Yes, clearly Dennett wants his readers to conclude that Searle believes in
ghosts, but notice that he does not actually say it. He *insinuates* it in
a rather underhanded manner.

> because that's what it takes to swallow the argument.

Dennett seems to have convinced you of such, but once again, we should
consider the CRA as a simple logical argument independent of any
considerations about philosophy or even about consciousness:

Because A1) programs are formal (syntactic) and because A2) minds have
mental contents (semantics) and because A3) syntax by itself does not give
semantics, it follows that C1) programs don't cause minds.

Simple, clean, straightforward. The conclusion follows from the premises,
and I see no hidden premises.

As Budd pointed out, Searle observes that Dennett denies A2). As an
eliminativist, Dennett denies the reality of mental contents, and here I
think we see the *true* reason he rejects the CRA. This also explains why
he sees no important difference between Kasparov and Deep Blue.
Intentionality has no objective reality in Dennett's philosophy, so Deep
Blue's lack of it does not matter.

Like all eliminativists and many other materialists, Dennett seems to
believe, wrongly, that he must deny the reality of mental states to avoid
the stigma of Cartesian dualism. He does not realize that his denial of
the mental in favor of the physical presupposes an acceptance of the false
Cartesian mind/matter dichotomy. He accepts the false Cartesian world-view
in order to oppose it, and then calls Searle the dualist.

-gts


6.2.

Re: Ontologically Basic Ambiguity: Mode of Existence

Posted by: "SWM" wittrsamr@xxxxxxxxxxxxx

Mon Mar 22, 2010 6:58 am (PDT)



--- In Wittrs@yahoogroups.com, Gordon Swobe <wittrsamr@...> wrote:

> --- On Sun, 3/21/10, SWM <wittrsamr@...> wrote:
>
> > Since the CRA is Searle's argument and no one else's, and
> > Searle defends it and obviously finds it convincing, it's
> > pretty clear that Dennett is saying that Searle shares the
> > Cartesian dualist conception of mind
>
> Yes clearly Dennett wants his readers to conclude that Searle believes in
> ghosts, but notice that he does not actually say it. He *insinuates* it in
> a rather under-handed manner.
>

He doesn't say Searle believes in ghosts because that's not his position. He says Searle's argument requires a Cartesian dualist's conception of mind to be considered valid in its conclusion.

He is not being underhanded, as you put it, to make that case. He is merely doing what philosophers do, noting what he sees as a problem in another's argument. You should not personalize this or try to cast aspersions on someone merely because they are arguing against a position you support, especially since you accuse others (Dennett) of doing just that!

> > because that's what it takes to swallow the argument.
>
> Dennett seems to have convinced you of such,

I will repeat what I have said here many times before: I did not come to my position from reading Dennett. I came to it after reading several books by Searle and mulling over his CRA. Only then did I finally read Dennett's Consciousness Explained and realized that he was seeing the same problems I saw in Searle's argument.

So my argument does not depend on Dennett though I generally defer to him because:

1) He got there first;

2) He said more about it and probably said it better; and

3) He has more credibility in the philosophical community and so his opinion counts for more than mine.

But my arguments are my own and often depart from Dennett's positions. For instance, I think he overstates the case for opposing the idea of qualia.

> but once again, we should
> consider the CRA as a simple logical argument independent of any
> considerations about philosophy or even about consciousness:
>

> Because A1) programs are formal (syntactic) and because A2) minds have
> mental contents (semantics) and because A3) syntax by itself does not give
> semantics, it follows that C1) programs don't cause minds.
>

Except that your A3 is not so simple as it seems, nor is it trivially true, as Searle has sometimes claimed. Because A3 fails, the conclusions fail. And A3's failure is traceable to a conceptual mistake in the construction of the CR, which leads to its being underspecced.

>
> Simple, clean, straightforward. The conclusion follows from the premises,
> and I see no hidden premises.
>

That doesn't mean it's not there. After all, if it's "hidden", it wouldn't be easy to spot. The point, again, has to do with WHY Searle asserts that the CR demonstrates that syntax is not sufficient for semantics.

>
> As Budd pointed out, Searle observes that Dennett denies A2).

But Dennett doesn't deny thoughts, ideas, emotions, feelings, beliefs, memories, knowledge, etc., so Searle is wrong to think that Dennett denies mental contents. He does deny certain ways of thinking about these things, but we (and Searle) should not confuse a denial of the efficacy of certain terms with a denial of the phenomena.

> As an
> eliminativist, Dennett denies the reality of mental contents, and here I
> think we see the *true* reason he rejects the CRA.

Can you construct and offer here for consideration his "true" argument then?

> This also explains why
> he sees no important difference between Kasparov and Deep Blue.
> Intentionality has no objective reality in Dennett's philosophy, so Deep
> Blue's lack of it does not matter.

Intentionality is not a thing in the brain in Dennett's philosophy, no. That is quite right. Does that mean it has "no objective reality"? Well, if having that kind of reality means only things we can point to somewhere, then I guess it does. But I wouldn't put it the way you have and I don't recall Dennett ever doing so. On the other hand I recall extensive textual material in Consciousness Explained in which Dennett addresses the reality of his subjectivity in terms of particular experiences he has.

Your comment above, however, on the question of intentionality and Deep Blue misses the point I've repeatedly made about this in related posts. I can't tell if you are simply ignoring my comments on the subject or have forgotten or failed to see them. In any of these cases, I don't see a value in repeating them again. However, note that everything you say above about the intentionality question has already been dealt with in my prior responses.


>
> Like all eliminativists and many other materialists, Dennett seems to
> believe, wrongly, that he must deny the reality of mental states to avoid
> the stigma of Cartesian dualism.

He does not deny their reality. That is a total misreading of him. He explains their occurrence in terms that go beyond their immediate appearance to us.

> He does not realize that his denial of
> the mental in favor of the physical presupposes an acceptance of the false
> Cartesian mind/matter dichotomy. He accepts the false Cartesian world-view
> in order to oppose it, and then calls Searle the dualist.
>
> -gts
>

I thought you said, above, that he only "insinuates it", lacking the courage or whatever to say it outright?

First, he does not deny the mental, he offers a way of accounting for it in terms that are physical.

Second, by doing so he eliminates the need to presume dualism (two ontological basics) to explain the occurrence of minds in the universe.

Third, to think one cannot account for consciousness in physical terms, that one needs something beyond the physical to explain its occurrence IS dualism.

So, fourth, you cannot say that by denying a need for dualism he is affirming dualism. Or, rather, you can say it because you can say anything you like I suppose, but just saying it 'don't make it so.'

SWM


6.3.

Re: Ontologically Basic Ambiguity: Mode of Existence

Posted by: "Gordon Swobe" wittrsamr@xxxxxxxxxxxxx

Mon Mar 22, 2010 8:29 am (PDT)



--- On Mon, 3/22/10, SWM <wittrsamr@freelists.org> wrote:

> [Dennett] doesn't say Searle believes in ghosts because that's not
> his position. He says Searle's argument requires a Cartesian
> dualist's conception of mind to be considered valid in its
> conclusion.

To have a "Cartesian dualist's conception of mind" entails believing in
ghosts! We inherited from Descartes the idea of the "ghost in the
machine".

I must concur with Joe that it makes no sense to consider Searle a
Cartesian, or even to believe that Dennett's insinuations to that effect
have substance. Looks to me like nothing more than subtle name-calling by
a philosopher who cannot offer a legitimate argument to refute the third
axiom of the CRA, except to wave his hands and say "Perhaps understanding
would happen in the CR if it had 'more of the same'." I don't consider that an argument.

Dennett needs to refute the default position - the null hypothesis - which states that more of the same will lead to more of the same. And he needs to do this *without* begging the question of whether the human brain exists as a computer.

-gts


6.4.

Re: Ontologically Basic Ambiguity: Mode of Existence

Posted by: "SWM" wittrsamr@xxxxxxxxxxxxx

Mon Mar 22, 2010 9:10 am (PDT)



--- In Wittrs@yahoogroups.com, Gordon Swobe <wittrsamr@...> wrote:

> --- On Mon, 3/22/10, SWM <wittrsamr@...> wrote:
>
> > [Dennett] doesn't say Searle believes in ghosts because that's not
> > his position. He says Searle's argument requires a Cartesian
> > dualist's conception of mind to be considered valid in its
> > conclusion.
>
> To have a "Cartesian dualist's conception of mind" entails believing in
> ghosts! We inherited from Descartes the idea of the "ghost in the
> machine".
>

The point is not that Searle ASSERTS a belief in ghosts or that anyone accuses him of doing so but that his conception of mind is consistent with a belief in ghosts in the machine, even though he doesn't explicitly make any such claim or acknowledge that such a claim can be found in the CRA.

So Dennett doesn't accuse Searle of "believing in ghosts" because to believe in something (in a case like this) is to espouse a claim asserting its existence.

This is a verbal problem arising from the distinction between implicit and explicit.

> I must concur with Joe that it makes no sense to consider Searle a
> Cartesian, or even to believe that Dennett's insinuations to that effect
> have substance. Looks to me like nothing more than subtle name-calling by
> a philosopher who cannot offer a legitimate argument to refute the third
> axiom of the CRA, except to wave his hands and say "Perhaps understanding
> would happen in the CR if it had 'more of the same'." I don't consider that an argument.
>

It is if you see the distinction between thinking of understanding as a property of a system rather than as a property of a process.

And that IS Dennett's argument. But I suppose it can look like "hand waving" (a favorite pejorative on lists like these when the writer doesn't like or agree with the claim of another) if one doesn't see the progression of reasons invoked.

Since I've already laid it out, I won't bother to do so again in any detail as it's unlikely to do any more good now than before. But suffice it to say that Dennett's argument hinges on the idea that consciousness (its full range of features) can be understood as features of a complex system.

Searle's CRA only works if consciousness CANNOT be understood in this way, but THAT has the effect of denying that consciousness can be broken down into constituents of itself which are not, themselves, conscious. And THAT is to say consciousness is ontologically basic, a bottom line feature of the universe not dependent for its occurrence on physical phenomena.

Of course Searle ALSO asserts that consciousness is caused by brains, i.e., by a certain kind of physical phenomenon. So either he is saying that it is a caused ontological basic, or, if consciousness is in fact reducible to brain processes, then it isn't basic in this way. In that case he cannot draw the conclusion from the CR that he does with the CRA, namely that certain physical processes in machines (computational processes running on computers) cannot do what brain processes do BECAUSE, on our consideration of them, we can see that they are not conscious in any of their constituent parts.

> Dennett needs to refute the default position - the null hypothesis - which states that more of the same will lead to more of the same. And he needs to do this *without* begging the question of whether the human brain exists as a computer.
>
> -gts
>

He doesn't need to refute any such thing. He only has to show that Searle's argument hinges on a conception of consciousness which is not a given, i.e., that consciousness may very well be explainable as a system property rather than as the process property the CRA presumes it to be.

Searle's argument depends on it being the case that consciousness IS a property of some physical processes (brain processes) but not of others (computer processes).

But there is no evidence that that is the case and it is hardly self-evident, contra Searle's own claims, since it could be the case that consciousness is a system property. Once we see there is at least one other possibility, there's no reason to simply accept Searle's argument.

Moreover, since Searle's argument ONLY addresses one way of understanding consciousness, it can have no implications for the possibilities that relate to a different way of understanding consciousness.

To put some flesh on this theoretical skeleton: Just because the constituents of the CR cannot "cause" (Searle's term here, of course) consciousness in the CR is no reason to assume they cannot do it in some other configuration (a more robustly specced CR, i.e., one with more processes doing more things in an integrated/interactive way).

As I have long said, in the end this comes down to competing conceptions of consciousness, of mind. That is why it is so important to see what Searle's conception really means and where it takes us.

SWM


6.5.

Re: Ontologically Basic Ambiguity: Mode of Existence

Posted by: "Gordon Swobe" wittrsamr@xxxxxxxxxxxxx

Mon Mar 22, 2010 10:08 am (PDT)



--- On Mon, 3/22/10, SWM <wittrsamr@freelists.org> wrote:

> The point is not that Searle ASSERTS a belief in ghosts or
> that anyone accuses him of doing so but that his conception
> of mind is consistent with a belief in ghosts in the
> machine, even though he doesn't explicitly make any such
> claim or acknowledge that such a claim can be found in the
> CRA.
>
> So Dennett doesn't accuse Searle of "believing in ghosts"
> because to believe in something (in a case like this) is to
> espouse a claim asserting its existence.
>
> This is a verbal problem arising from the distinction
> between implicit and explicit.

In other words you think Searle has an unconscious belief in ghosts, or that he believes in ghosts but keeps it a secret. lol.

-gts


6.6.

Re: Ontologically Basic Ambiguity: Mode of Existence

Posted by: "Gordon Swobe" wittrsamr@xxxxxxxxxxxxx

Mon Mar 22, 2010 10:33 am (PDT)



It seems you keep forgetting that the man in the room has everything that
anyone could ever hope for computers to have, yet still he cannot get
semantics from syntax.

In Searle's reply to his systems critics, the man internalizes the program
and BECOMES a complex system... nay even better than that... he becomes a
complex system WITH CONSCIOUSNESS AND INTELLIGENCE, yet still he cannot, by
virtue of implementing a program, understand what the symbols mean.

-gts


6.7.

Re: Ontologically Basic Ambiguity: Mode of Existence

Posted by: "SWM" wittrsamr@xxxxxxxxxxxxx

Mon Mar 22, 2010 10:48 am (PDT)



--- In Wittrs@yahoogroups.com, Gordon Swobe <wittrsamr@...> wrote:

> --- On Mon, 3/22/10, SWM <wittrsamr@...> wrote:
>
> > The point is not that Searle ASSERTS a belief in ghosts or
> > that anyone accuses him of doing so but that his conception
> > of mind is consistent with a belief in ghosts in the
> > machine, even though he doesn't explicitly make any such
> > claim or acknowledge that such a claim can be found in the
> > CRA.
> >
> > So Dennett doesn't accuse Searle of "believing in ghosts"
> > because to believe in something (in a case like this) is to
> > espouse a claim asserting its existence.
> >
> > This is a verbal problem arising from the distinction
> > between implicit and explicit.
>

> In other words you think Searle has an unconscious belief in ghosts, or that he believes in ghosts but keeps it a secret. lol.
>
> -gts

In other words I think Searle's position on what consciousness is is finally no different than the position of those who believe in ghosts. They think of mind, in the final analysis, in the same way. I have long said that I have no quarrel with dualism per se and even think it could be true but for that to be seen to be the case we would need some evidence, including something like a publicly verifiable occurrence of ghosts, OR an inability to explain the occurrence of minds in any other way.

Of course many believe in ghosts without evidence so believing in them doesn't require that. On the other hand, Searle wouldn't say he believes in ghosts any more than he'd say he's really a dualist. But this isn't about what he says but what his ideas imply.

SWM


6.8.

Re: Ontologically Basic Ambiguity: Mode of Existence

Posted by: "SWM" wittrsamr@xxxxxxxxxxxxx

Mon Mar 22, 2010 10:59 am (PDT)



--- In Wittrs@yahoogroups.com, Gordon Swobe <wittrsamr@...> wrote:

> It seems you keep forgetting that the man in the room has everything that
> anyone could ever hope for computers to have, yet still he cannot get
> semantics from syntax.
>

But the point is to show what a computer can do, so what the man has is irrelevant. His job in the room is to pretend to be a mindless CPU and go through certain rote steps like a CPU would. What he understands is irrelevant except with regard to his ability to follow the rote steps (which we all agree even a computer can do).

He can be daydreaming, thinking about the chocolate ice cream cone he just ate, or he can even be trying to decipher the squiggly lines. As long as none of this interferes with the process steps he is performing (as they wouldn't interfere with the computer performing them), he is doing what's required to make the CR do its work.
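The rote role described here can be sketched in a few lines. This is a hypothetical illustration only (the rulebook entries and names below are invented for the example, not Searle's own): the operator's whole procedure is lookup, with no access to meaning.

```python
# Toy sketch of the rote symbol-matching described above: the "operator" plays
# the mindless CPU, matching input symbols to output symbols by rule.
# The rulebook entries are invented placeholders, not Searle's actual examples.

RULEBOOK = {
    "你好": "你好！",              # greeting -> greeting back
    "马是什么？": "马是一种动物。",  # "What is a horse?" -> "A horse is an animal."
}

def operator(symbols: str) -> str:
    # Whatever the operator happens to be thinking about (daydreams, ice
    # cream) is irrelevant; only the lookup step matters to the room's output.
    return RULEBOOK.get(symbols, "？")  # default: an uncomprehending "?"

reply = operator("你好")
```

Nothing in this procedure represents meaning, which is the point of the rote-CPU role; the dispute is over whether scaling such steps up into a richer, integrated system could ever change that.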

> In Searle's reply to his systems critics, the man internalizes the program
> and BECOMES a complex system... nay even better than that... he becomes a
> complex system WITH CONSCIOUSNESS AND INTELLIGENCE, yet still he cannot, by
> virtue of implementing a program, understand what the symbols mean.
>
> -gts

Yes, I've seen that reply. It says nothing about the system itself, just the man. So now you have the system inside the man where before the man was inside the system. But if the issue is what is the system, itself, capable of, the man's understanding remains irrelevant because HE isn't THE SYSTEM in either case.

When he's inside he's just a component of the system. When it's inside him he's all the machinery (the hardware, as it were). But he's never the system.

More important is to look at the points I've made about what is implicit in the CR and in Searle's CRA and to understand what it means to be a system property rather than a process property. That's the real reason why Searle's response to the System Reply is beside the point.

SWM


6.9.

Re: Ontologically Basic Ambiguity: Mode of Existence

Posted by: "Gordon Swobe" wittrsamr@xxxxxxxxxxxxx

Mon Mar 22, 2010 11:21 am (PDT)



> His job in the room is to pretend to be a mindless CPU and go through
> certain rotes steps like a CPU would.

No, his job as the system, OR as the man in the system, is to try with all
his might and with all his resources to understand what those darned Chinese symbols mean.

He can't understand them, not even when he contains the entire system. Neither he nor anything in him understands the symbols, because there exists nothing in the room that does not exist in him, and nothing in the room can understand the symbols.

Even if strong AI=true, a strong AI system could not understand Chinese symbols solely by virtue of implementing a program for manipulating them according to syntactic rules.

Think about that Stuart. Even if.

-gts


6.10.

Re: Ontologically Basic Ambiguity: Mode of Existence

Posted by: "SWM" wittrsamr@xxxxxxxxxxxxx

Mon Mar 22, 2010 11:53 am (PDT)



--- In Wittrs@yahoogroups.com, Gordon Swobe <wittrsamr@...> wrote:
>
> > His job in the room is to pretend to be a mindless CPU and go through
> > certain rotes steps like a CPU would.
>
> No, his job as the system, OR as the man in the system, is to try with all
> his might and with all his resources to understand what those darned Chinese symbols mean.
>

No. Where do you think Searle says anything about his trying to understand Chinese as being a feature of the system? Does a CPU "try" to understand Chinese? If it did, we would say it already has intentionality, one of the features we consider an aspect of what we mean by consciousness, in which case the question of whether the CPU understands would be moot!

> He can't understand them, not even when he contains the entire system. Neither he nor anything in him understand the symbols, because there exists nothing in the room that does not exist in him and nothing in room can understand the symbols.
>

His trying to understand Chinese is irrelevant. He is just playing the role of a rote mechanism, matching symbol to symbol! I'm sorry but this is a complete misreading of the Chinese Room thought experiment.

As to the relevance of there being anything in the room that understands, well that is precisely my point vis a vis the third premise of the CRA! The fact that there is nothing in the room (no constituent or constituent process of the system) that understands is NOT a demonstration that those constituents or constituent processes (depending on how we characterize this) cannot understand when combined in the right way.

> Even if strong AI=true, a strong AI system could not understand Chinese symbols solely by virtue of implementing a program for manipulating them according to syntactic rules.
>
> Think about that Stuart. Even if.
>
> -gts
>

I have thought about it, Gordon. That's why I hold the position I now hold. This really hinges on what it means to understand. If understanding is just making connections in different ways, connections that build various pictures and connect them through a network of associated links, then this is nothing a computer could not accomplish, though it would have to be a computer with massive capacity for receiving and retaining information and for performing different functions with that information.

Think of the man in that cartoon looking at the Chinese character for horse and thinking about a horse. What is he doing? He's picturing certain features to himself. He's recalling an image, in a recall process that starts with a link between the abstract symbol and some particular retained mental image and which then branches out in ways the cartoon itself cannot show.

For instance, along with the mental picture of the horse he presumably has some other things, e.g., bits of knowledge about what horses are (mammals, four legged, long faces, rideable, sweaty when you run them, herbivores, etc.) which are associated with that picture, as well as what he has learned in more indirect ways about horses (e.g., in terms of history: mankind tamed them, they came from a smaller animal called eohippus, they were initially brought to North America by the conquistadors, the American Indians learned to ride those that had gone wild and became great plains warriors, Alexander the Great rode a horse called Bucephalus, etc., etc.)

The links are, in principle, endless, and to the extent we share a bunch of them in varying degrees we have common understandings of what the symbol(s) for horse and the word "horse" mean.

Ask someone a general question about horses and you'll get different answers, but some will be the same, and to the extent you get a decent number of answers you share with the person you questioned, there will be common understanding.

So what then is understanding? It's being able to take any input and place it into a network of these associations. There is no reason in principle, that I can see, that a computer could not do this given enough capacity and the right programming (to enable all these different pictures to be built, retained and accessed/used). And, if so, then what we call understanding is nothing more than a feature or property of such a complex system.
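The conception just described can be sketched as a toy data structure. This is a hypothetical illustration (all class, method, and data names below are invented): "understanding" a symbol is retrieving its web of associations, and common understanding is measured as the overlap between two such webs.

```python
# Toy sketch of understanding as placement in a network of associations, per
# the paragraphs above. Names and data are invented for illustration only.

class AssociativeNet:
    def __init__(self):
        self.links = {}  # symbol -> set of associated facts/images

    def learn(self, symbol, *associations):
        self.links.setdefault(symbol, set()).update(associations)

    def place(self, symbol):
        # "Understanding" an input = retrieving its network of associations.
        return self.links.get(symbol, set())

    def shared_understanding(self, other, symbol):
        # Degree of common understanding: overlap of the two association webs
        # (Jaccard similarity), echoing the point about shared links above.
        a, b = self.place(symbol), other.place(symbol)
        return len(a & b) / len(a | b) if (a or b) else 0.0

alice, bob = AssociativeNet(), AssociativeNet()
alice.learn("horse", "mammal", "four-legged", "rideable", "Bucephalus")
bob.learn("horse", "mammal", "four-legged", "herbivore")
overlap = alice.shared_understanding(bob, "horse")  # 2 shared of 5 total links
```

On this picture, the "understanding" is nowhere in any single link or lookup step; it is a property of the whole network, which is the system-property point at issue.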

The fact that we cannot look at any of the component processes of such a system and say aha, here is the understanding in this particular stand-alone process does NOT preclude our finding it in a combination of such processes.

Searle's CRA depends on the notion that the understanding MUST be a feature of a particular constituent process and completely disregards the possibility that it might be a systemwide function.

SWM


7.1.

Searle's CRA vs his Biological Naturalism

Posted by: "Gordon Swobe" wittrsamr@xxxxxxxxxxxxx

Mon Mar 22, 2010 6:17 am (PDT)



--- On Sun, 3/21/10, SWM <wittrsamr@freelists.org> wrote:

> But the CRA is drawing conclusions about what causes
> understanding (as in 'brains cause consciousness' - Searle),

No, I think you/Dennett conflate Searle's CRA with his argument for biological naturalism.

The CRA tells us almost nothing about what actually causes semantics/consciousness/intentionality. We must consider the CRA as a *negative* argument: it shows simply that formal programs do not and cannot cause minds. Period, end of argument.

Now then after we accept the conclusion of the CRA then Searle has
something interesting to say about brains and consciousness. The
conclusion of the CRA acts as a premise in that new and different
*positive* argument for biological naturalism.

-gts


7.2.

Re: Searle's CRA vs his Biological Naturalism

Posted by: "SWM" wittrsamr@xxxxxxxxxxxxx

Mon Mar 22, 2010 7:24 am (PDT)



--- In Wittrs@yahoogroups.com, Gordon Swobe <wittrsamr@...> wrote:

> --- On Sun, 3/21/10, SWM <wittrsamr@...> wrote:
>
> > But the CRA is drawing conclusions about what causes
> > understanding (as in 'brains cause consciousness' - Searle),
>
> No, I think you/Dennett conflate Searle's CRA with his argument for biological naturalism.
>
> The CRA tells us almost nothing about what actually causes semantics/consciousness/intentionality. We must consider the CRA as a *negative* argument: it shows simply that formal programs do not and cannot cause minds. Period, end of argument.
>

The CRA conclusion is about what can't cause consciousness, you're right on that. I was using "what causes" more generally, as a way of referring to the question of causation.

Since Searle DOES take the position that brains cause consciousness IN RELATION to his argument that computer programs cannot BECAUSE of what they are, as seen in the fact that they don't cause it in the CR, he is clearly addressing questions of causation here, which is the point I was making. That is, the CRA is about the causal question insofar as it's about what can't cause it.

Finally, the CRA fails for a number of reasons (already laid out) but the main problem lies in the failure to distinguish between what can be called a system property and what can be called a process property. That is, just because the constituent elements in the CR are not, in themselves, conscious (give no evidence of being conscious) doesn't mean that some combination of them cannot be.


> Now then after we accept the conclusion of the CRA then Searle has
> something interesting to say about brains and consciousness. The
> conclusion of the CRA acts as a premise in that new and different
> *positive* argument for biological naturalism.
>
> -gts
>
>

But the CRA is wrong, so we cannot accept its conclusion.

And what he has to say about brains and consciousness is not interesting because he really says nothing more than what most of us already think we know, namely that brains are the source of consciousness in some as yet unidentified way. He makes no effort to suggest how (rightfully noting that's the job of science, of course) or to consider the implications of ascribing a causal role to one physical platform, the brain, while denying it to another, the computer.

Searle does try to get around this by arguing that what happens on computers is merely abstract and so without causal power. But that's absurd since it is physical events that implement abstract programming just as it's physical events in brains that implement the DNA coding that underlies them. Searle's claims about abstractions are the real distraction.

SWM



8.

Human autobiography

Posted by: "void" rgoteti@xxxxxxxxx   rgoteti

Mon Mar 22, 2010 7:24 am (PDT)



Symbols and signs accumulate in the human brain with the help of sound. Now this mixture is moving, i.e., what we call thought or thinking. In fact different symbols never mix, hence words are attached with feeling, experience, and sensation. Now this mixture is able to produce pictures which in turn cause feeling.
Description is holding the picture. The more one speaks, outward or inward, the more one gets ideas, which are the strength for ideals. The mechanism of additions and deletions enables description to frame more pictures. Thus knowledge accrues in the head as the learned.
The learned creates his own world of fantasy. The so-called experienced is a mixture of sound, symbol, and experience as impressions.
The intellect thus framed is human perception, which is curved, indirect. Since the intellect is the sight, it sees its own pictures in the space created by itself for its entertainment. This is driving the human machine from one end to the other.
This to-and-fro movement continues if one is not aware of the linguistic mechanism in its totality. Otherwise the so-called human automaton keeps moving around in its stuffed content of consciousness.

thank you
sekhar

9a.

Re: (no subject) Gibberish and NonGibberish

Posted by: "gabuddabout" wittrsamr@xxxxxxxxxxxxx

Mon Mar 22, 2010 12:03 pm (PDT)





--- In WittrsAMR@yahoogroups.com, "iro3isdx" <wittrsamr@...> wrote:
>
>
>
>
>
> --- In Wittrs@yahoogroups.com, "gabuddabout" <wittrsamr@> wrote:
>
> What was that gibberish? Or at least it looked like gibberish on the yahoo page.

Hi Neil,

I have no idea why that happened. If induction be reliable, I would think it was neither my fault nor yahoo's, since most messages go off swimmingly.

I conclude nevertheless that I have no idea why that happened.

By the way, I can agree with you that the CR thought experiment falls short of a more rigorous proof of, say, something supposed to be learned empirically.

Its value, nevertheless, is the one I tried to point out amidst the unintended gibberish. The AIers will be helpful or not in elucidating just what they are about.

From recent discussion I've gleaned that there are two noncompeting research programs in the form of weak AI (how do we get artifacts to do what we want, and what programs are necessary for exactly that) and biological naturalism a la Searle, such that we look for "neurobiological correlates of consciousness" (NCCs) with our mind meters, say, in order to find actual causal mechanisms later.

If the debate becomes one in which it is claimed that these two research programs are in conflict, one trying to argue that the other is incoherent, then it will be a matter for some independent thinking (or just reading the literature starting with Searle's target article).

Stuart used to argue that Searle was attempting to kill a possible research program by a logical argument. But, really, Searle doesn't argue against weak AI. And as far as myself, you, and Gordon know, weak AI is the holy grail these days and Searle doesn't argue a priori against its possible success, though I recall you have expressed a bit of reservation as to its possible success--and it may be the same sort of reservation Putnam has in mind considering the problem of abductive reasoning on the part of any AI system. Feel free to comment. The above comment is inspired by Jeff Buechner's _Godel, Putnam, and Functionalism: A New Reading of Representation and Reality_, 2007, wherein he challenges Putnam's arguments against computational functionalism. My sources are indeed fair and balanced!


I've recently learned that it is Hacker and Bennett who try to argue that the very thesis of Searle's biological naturalism is incoherent, i.e., that it is a mereological fallacy to suggest that the brain causes consciousness.

Credit to Stuart who, as far as I can tell, never ever endorsed Hacker's attempt to kill Searle's research program.

Credit to me for explaining why these research programs are not necessarily in conflict.

They can be made to appear that they are in conflict, though, if one is selective in their description of each.

My suggestion, again, is that they are noncompeting research programs.

Anyone arguing otherwise is itching for a debate.

I welcome the debate while believing it to be an unnecessary one if both parties are on the square--it may be that one party simply prefers to study a particular subject matter. And if so, so be it. But if they get too chatty on other matters, they may welcome some critical censure, it is to be hoped.

Foe, er, for example:

From a review of Hacker:

"In the case of Searle, Bennett and Hacker find much with which they agree. Cartesian dualism, behaviorism, identity theory, eliminative materialism and functionalism are all rejected, and rightly so. Searle advocates "biological naturalism," the view that consciousness is a biological phenomenon, a proper subject of the biological sciences (p. 444). Bennett and Hacker serve up no objection here. It is when Searle claims that "mental phenomena are caused by neurophysiological processes in the brain and are themselves features of the brain" (Searle, Rediscovery, p. 1) that Bennett and Hacker demur. Searle's claim commits the mereological fallacy discussed earlier. Brains are no more conscious than they are capable of taking a walk or holding a conversation. True, no animal could do either of these things without a properly functioning brain. But it is the person, not the brain, that engages in these activities."

Cheers,
Budd


10.1.

Re: Who lost to Deep Blue?

Posted by: "gabuddabout" wittrsamr@xxxxxxxxxxxxx

Mon Mar 22, 2010 12:33 pm (PDT)



Hi Josh,

Your responses are brilliant in that I can tell you have enough command of the issues to warrant such a compliment.

I see why earlier, though, I thought you completely lacked any command at all! And guess why? It's because you are so bald!

I refer you to my reply to Neil today.

I simply see weak AI and Searlean biological naturalism as noncompeting research programs.

Wanna debate this?

Looks like you might.

For example, you write:

> I'm dubious about semantics, but you know what I do like? Intentionality! Though I'm sure you won't like the (deflated) intentionality that I like.<

Prolly not, because you speak with forked tongue. You can't be dubious about semantics given your successful speech act about what you like, compared to what you don't like, as well as what you admittedly don't know the meaning of (semantics, syntax), about which you earlier expressed a bout of skepticism.

Now, if you want your deflated sense of intentionality such that thermostats are intentional systems, then you might end up with a research program that is not necessarily about philosophy of mind per se. It would be about how to get artifacts to behave via computer programs. That's fine. Searle doesn't argue against that research program.

If you want, though, you may argue some incoherence in Searle.

But to do that, you can't be so breezy while demanding an exacting lucidity.

I had no idea that Stuart was inspired by your baldness to offer detailed reasons for the baldness!

Two questions:

1. Can computation be discovered to be a physical process?

2. Is it assigned willy nilly?

One could answer 'no' to the first and 'no' to the second without courting incoherence.

How about other possibilities like 'yes' and 'yes', 'no' and 'yes', as well as 'yes' and 'no'?

Cheers,
Budd

--- In WittrsAMR@yahoogroups.com, "jrstern" <wittrsamr@...> wrote:
>
> --- In Wittrs@yahoogroups.com, "gabuddabout" <wittrsamr@> wrote:
> >
> > I will briefly reply to both Josh and Gordon below.
> >
> >
> > --- In WittrsAMR@yahoogroups.com, Gordon Swobe <wittrsamr@> wrote:
> > >
> > > --- On Fri, 3/19/10, jrstern <wittrsamr@> wrote:
> > >
> > > > Can you show us who ever said syntax *is* either
> > > > constitutive or sufficient for semantics?
> > >
> > > Stuart seems to think it is.
> >
> > Anyone willing to fudge efficient causality with computation is one who may think syntax may be sufficient for semantics. Most don't come out as baldly as Stuart.
>
> Well that would be me, not Stuart.
>
> But I'm not sure what you (or anyone) mean by syntax, or semantics.
>
> I'm pretty clear on causality, but only by disclaiming Humean skepticism. And I'm very clear on computation, though nobody else is.
>
>
> > Stuart simply and openly conflates computation with physics and doesn't understand that it is a fudge to baldly state the premise that "Computers are physical machines."
>
> Again, that would be me. I believe Stuart actually qualifies all of these statements. I support them pretty much straight on.
>
>
> > The fudge-fest way is one way of implying it without being explicit about it.
>
> Please, make it explicit, I support explicit.
>
> Do you have a response, if these are stated explicitly, other than outright rejection by appealing to Searle quotations as final authority - no matter how much they are inconsistent or incoherent by my lights?
>
>
> > See below on a certain baldness in Dennett. Indeed, content similarity is all one gets with parallel processing a la Dennett plus Paul Churchland. But it is shown that it can't do the work of content identity by Fodor in "All at Sea in Semantic Space: Churchland on Meaning Similarity."
>
> http://olddavidhume.rutgers.edu/tech_rpt/MeaningSim46.PDF
>
> There are about nine issues here, but I don't see how they relate.
>
>
> > But it is also difficult to make content identity metaphysically respectable nowadays. So be it. There is still much trouble for the other side too. Externalism may appear to some to imply that one's iphone is literally part of their mind. "Supersizing the Mind" and Fodor's review--google this and see what the famously funny Fodor has to say on this topic just for fun.
>
> Ditto.
>
> http://www.lrb.co.uk/v31/n03/jerry-fodor/where-is-my-mind
>
> I like Fodor's internalism, which he calls Cartesianism, either not meaning dualism or embracing dualism, I'm not quite certain which.
>
> But I don't think Fodor has ever been entirely happy with computation, in part because he has never understood it, never simply tried to make all the physicalist basis for it explicit and sufficient. He derides the Churchlands for trying to do so, but they famously try to do so only by avoiding the Turing model of computation and the "symbolic" approaches to computation, IMHO again "fudging" the basic issues.
>
>
> > Funny enough, it turns out that Dennett baldly (well, maybe considerably late in the pages of _Consciousness Explained_) dismisses the second premise. Cf. Searle's review of Dennett.
>
> None of them are beyond debate.
>
>
> > I suppose that such baldness is treated as philosophical anathema for those who aren't so Wittgensteinian that their philosophy is expressed as a sort of joke with breezy airs of seriousness. This just may account for a very bold title of one of Fodor's more recent papers: "Having Thoughts: A Brief Refutation of the Twentieth Century."
>
> http://www.nyu.edu/gsas/dept/philo/courses/mindsandmachines/Papers/havingconcepts.pdf
>
> I cannot quite follow what you are saying about this.
>
> I'm afraid I completely part company with Fodor here, and also in his latest book, LOT 2, where he says that "procedural semantics" are the worst thing in the world.
>
> Well, if one is going to believe as I do in computation as physical and causal, and if one is going to DISBELIEVE as I do in Fodor's innateness of ideas (like 'horse' and 'carburetor'), then what turns out to work, is pretty much just the kind of "procedural semantics" as Fodor disclaims in the book and in this article. Jerry, if computers are going to turn out to be useful devices in this world, even as computers and not as minds, some account has to be given of how that happens. And if that account just also happens to completely vitiate Searle's idea of what computation is, well, so much the worse for Searle, unless you can wish away the billion or so computer workstations, not to mention smart phones and the like, that we all seem so attached to these days.
>
>
> > Further, I wonder if anybody ever thought to create an analogue of the CR by calling something the HR (human room). We put an humunculus inside the HR and, lo, the poor guy can't make heads or tails out of brute physical happenings such that physics also is insufficient for semantics!
>
> And so much for that HR story.
>
> Or so much for semantics.
>
> I'm dubious about semantics, but you know what I do like? Intentionality! Though I'm sure you won't like the (deflated) intentionality that I like.
>
>
> > Pretend that even the reader doesn't know a language, though, for it to work!
>
> I am playing with an English Room (ER) in which a speaker of English is given questions in writing, and responds to them.
>
>
> > Happy spring March madness!
>
> None of my local schools nor alma mater are in it this year, so what the heck.
>
> Go Lakers!
>
> Josh
>
>
>

