[C] [Wittrs] Digest Number 96

  • From: WittrsAMR@xxxxxxxxxxxxxxx
  • To: WittrsAMR@xxxxxxxxxxxxxxx
  • Date: 5 Jan 2010 11:07:01 -0000

Title: WittrsAMR

Messages In This Digest (21 Messages)

Messages

1.1.

Consciousness and Quantum Mechanics

Posted by: "Joseph Polanik" wittrsamr@xxxxxxxxxxxxx

Mon Jan 4, 2010 3:47 am (PST)



Cayuse wrote:

>Joseph Polanik wrote:

>>J wrote:

>>>That where we choose to draw the boundary is arbitrary relative to
>>>the existing maths is not to deny that there is a boundary nor yet is
>>>it to draw the boundary at the consciousness of the observer. Rather,
>>>it is to show that the (current) maths leave such matters undecided.

>>>The parenthetical insertions of "current" allude to developments
>>>subsequent to von Neumann's text, such as the study of quantum
>>>decoherence, which may yet indicate a non-arbitrary way of drawing
>>>such a boundary. Or rather, if I understand correctly, how seemingly
>>>classical behavior can occur with no such boundary.

>>I think the latter description of the impact of decoherence theory is
>>the more accurate.

>>Hey Joe (Hendrix, anyone?), speaking of decoherence theory, why would
>>anyone choose to discard the idea that any interaction at all will
>>result in the reduction of superposed states in favor of the idea that
>>only conscious experience will do so?

from what I gather, there is disagreement among physicists as to whether
decoherence solves the measurement problem.

"Decoherence does not generate actual wave function collapse. It only
provides an explanation for the appearance of wavefunction collapse. The
quantum nature of the system is simply "leaked" into the environment. A
total superposition of the universal wavefunction still occurs, but its
ultimate fate remains an interpretational issue. Specifically,
decoherence does not attempt to explain the problem of measurement.
Rather, decoherence provides an explanation for the transition of the
system to a mixture of states that seem to correspond to those states we
perceive as determinate. Moreover, our observation tells us that this
mixture looks like a proper quantum ensemble in a measurement situation,
as we observe that measurements lead to the "realization" of precisely
one state in the "ensemble". But within the framework of the
interpretation of quantum mechanics, decoherence cannot explain this
crucial step from an apparent mixture to the existence and/or perception
of single outcomes." [http://en.wikipedia.org/wiki/Quantum_decoherence]

this wikipedia article cites http://arxiv.org/pdf/quant-ph/0312059v4
[Decoherence, the measurement problem, and interpretations of quantum
mechanics by Maximilian Schlosshauer] which opens with one set of quotes
from each side of the dispute and closes with this assessment:

"We may therefore regard collapse models and decoherence not as mutually
exclusive alternatives for a solution to the measurement problem, but
rather as potential candidates for a fruitful unification."

>Seems to me that we don't need to entertain the idea that
>Schrodinger's cat is both alive and dead until we open the box, if it
>is interacting with air molecules all the time.

possibly true. the one thing that decoherence is clearly good for is
explaining why large objects remain in existence as objects even when
not being observed by humans.
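the distinction drawn in that wikipedia passage (decoherence suppressing interference without ever selecting an outcome) can be sketched numerically. the toy Python calculation below is purely my own illustration, and the Gaussian phase-kick model it assumes comes from me, not from von Neumann or the cited article: averaging a superposed qubit over random environmental phases drives the off-diagonal element of its density matrix toward zero while the diagonal probabilities stay fixed at 0.5 each, and nothing in the arithmetic selects which single outcome occurs.

```python
# toy dephasing model (illustrative only): the qubit starts in the
# superposition (|0> + |1>)/sqrt(2), whose density matrix has diagonal
# entries 0.5 and off-diagonal ("interference") entries 0.5. each
# interaction with the environment is modeled as a random Gaussian
# phase kick; averaging over many kicks shrinks the off-diagonal element.
import cmath
import random

random.seed(0)

def dephased_coherence(sigma, n_samples=100_000):
    """Return the off-diagonal density-matrix element after averaging
    over Gaussian phase kicks with standard deviation `sigma`."""
    total = sum(cmath.exp(1j * random.gauss(0.0, sigma))
                for _ in range(n_samples))
    return 0.5 * total / n_samples

for sigma in (0.0, 1.0, 3.0):
    c = dephased_coherence(sigma)
    # the diagonal entries remain exactly 0.5 each: a classical-looking
    # mixture, but no single outcome is ever selected by the math.
    print(f"phase spread {sigma}: |off-diagonal| = {abs(c):.3f}")
```

for spread 0 the off-diagonal element stays at 0.5 (full interference); as the spread grows it decays toward zero (roughly as 0.5 * exp(-sigma^2 / 2)), which is exactly the "appearance of collapse" the quoted passage describes.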

Joe

--

Nothing Unreal is Self-Aware

@^@~~~~~~~~~~~~~~~~~~~~~~~~~~@^@
http://what-am-i.net
@^@~~~~~~~~~~~~~~~~~~~~~~~~~~@^@

==========================================

Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/

1.2.

Re: Consciousness and Quantum Mechanics

Posted by: "SWM" wittrsamr@xxxxxxxxxxxxx

Mon Jan 4, 2010 8:12 am (PST)



<snip>

Joe Polanik wrote:

> then, I will undertake to show that your mechanistic,
> Dennett-consistent theory of consciousness can't possibly be true unless
> the von Neumann Interpretation of QM is wrong.
>

Well, yes, that's what I had asked you to do when you raised this and what I thought you were doing.

> >>>On that score I would say you haven't yet made the case.
>
> well, let's see where we are. the first step in making my case consists of establishing that the von Neumann Interpretation of QM is dualistic.
>
> unfortunately, ...
>
> >>when I presented the von Neumann Interpretation of QM (which is as
> >>overtly dualistic as one can get without actually plagiarizing from
> >>Descartes' scrapbook), you resist the suggestion that the von Neumann
> >>Interpretation is incompatible with your mechanistic, Dennett-based
> >>theory of consciousness.
>
> >What the "f" are you talking about?
>

> Stuart,
>
> do you understand that the von Neumann Interpretation of QM is
> dualistic?
>

> suspend any belief you may have that I have tweaked or added to or
> subtracted from it; and, just answer that one question.
>
> hint: it's a yes or no question.
>
> Joe

What I have seen, so far, is your claim that von Neumann's thesis (I,II,III) converts into your thesis (1,2,3) by making his II (all the physical instrumentalities of observation) into your 2 (whatever is phenomenal). This latter is certainly dualistic and may well be a fair interpretation of von Neumann though I don't know that it is.

Is von Neumann dualistic in the way you present him? As I've noted, one can recognize an observer in the mix that is reflected in the I,II,III division without presuming that the observer is not physically derived (which is what dualism must be about). Dualism in the philosophical sense (as used by Descartes and even Chalmers, though that is a little more ambiguous) implies that the observer is something separate from the rest of physical reality, an added element in the mix that is not reducible to the rest of the mix.

But in THAT sense one need not read the von Neumann thesis you presented as requiring a dualist account (whether he was personally a dualist in this sense or not). Merely to recognize a dichotomy between observed and observer is NOT dualism. Certainly Dennett acknowledges an observer, a subjective standpoint (which is what he sets out to explain). THAT isn't and doesn't imply dualism.

Sometimes it seems to me, on lists like this, that the term, "dualism", is often confused with any suggestion of a dichotomy. Is there a dichotomy between observed and observer in our ordinary usages? Yes. Is there a dichotomy between subject and object? Yes, again. But the mere existence of such dichotomies does not imply that there is a fundamental divide at an ontological level (in terms of what underlies the two dichotomous elements in any set of pairs).

Dualism, in a philosophical sense, is not simply the claim that there are two elements in a group, two divisions in a given classification. It's the claim that there is a fundamental divide, into two distinct and separate elements, that will be found in whatever it is that underlies everything else.

But the fact that the von Neumann thesis tells us that there is an observer and an observed (and that the observer causes an effect in the observed) says nothing about whether the observer is grounded in something that is, at some ultimate level, different than the observed.

My question to you was to ask you to show why (as you have claimed) von Neumann's thesis, if true, implies a Dennettian model of consciousness must be false. What you have provided, when everything else is boiled away, is a claim that von Neumann's thesis of a separate observer having a causal impact on observed phenomena at the quantum level is a claim about the ontological independence of the observer.

My response, re: this, is that there seems to be nothing in the von Neumann thesis (re: categories I, II, and III), as you have so far presented it, that implies or requires that the observer be considered to be ontologically distinct from the observed phenomena (that is, that the observer be thought of as being derived from something other than whatever it is the observed phenomena is derived from).

Your entire argument appears to hinge on your claim that "consciousness" means an "abstract I" (von Neumann's term) that you tell us has the nature of something like Kant's "transcendental I", an unperceived perceiver, and that this is implied in von Neumann's formulation.

Certainly, such a picture of "consciousness" IS dualist in the classical philosophical sense. But I am suggesting to you that, whatever von Neumann's actual opinion on this subject (and I make no claims to know that), the picture you have given of a world consisting of things in categories I, II, and III, DOES NOT IMPLY ANY SUCH THING.

One can recognize that there is an observer and an observed without assuming each is grounded in an ontologically distinct realm of being. Of course, Dennett's thesis is intended to tell us how we get a subject in what we take to be an otherwise objective universe. It does not deny that there are subjects and, in not doing so, it recognizes that we have these kinds of dichotomies. The point is to see that the mere existence of such a dichotomy does not imply dualism.

One other important thing. Elsewhere YOU note that this is about competing conceptions of what we mean by the term "consciousness" and I agree. However, merely recognizing that there is a subjective standpoint (and that we occupy it), does not imply your concept of an "abstract I" qua unperceived perceiver. The point of Dennett's explanation is to show how we can account for such a subjective standpoint WITHOUT positing (assuming that there is) anything like this unperceived perceiver except as a kind of abstraction, a certain kind of useful fiction.

So you cannot argue against a Dennettian model by claiming that the feature it isn't accounting for (which is what such an argument must identify) is a concept it claims not to need. That's not an argument against this viewpoint, it's just a denial. What you need to do is identify a feature we all agree must be recognized as present in consciousness and show how THAT feature is not accounted for by the model.

SWM

P.S. I have ratcheted down my participation here and perhaps will do so even more, going forward. If you do address the concerns I've raised I will try to find time to reply. But only if it's done civilly. I have no desire to expend my energy in rancour.


1.3.

Re: Consciousness and Quantum Mechanics

Posted by: "Cayuse" wittrsamr@xxxxxxxxxxxxx

Mon Jan 4, 2010 8:12 am (PST)



Joseph Polanik wrote:
> Cayuse wrote:
>>> Hey Joe (Hendrix, anyone?), speaking of decoherence theory, why
>>> would anyone choose to discard the idea that any interaction at all
>>> will result in the reduction of superposed states in favor of the
>>> idea that only conscious experience will do so?
<snip>
> this wikipedia article cites http://arxiv.org/pdf/quant-ph/0312059v4
> [Decoherence, the measurement problem, and interpretations of quantum
> mechanics by Maximilian Schlosshauer] which opens with one set of
> quotes from each side of the dispute and closes with this assessment:
>
> "We may therefore regard collapse models and decoherence not as
> mutually exclusive alternatives for a solution to the measurement
> problem, but rather as potential candidates for a fruitful
> unification."

Coupled with Zurek's notions of envariance, einselection,
and quantum Darwinism, decoherence looks like a serious
contender to me. I was wondering what role would be left
for consciousness in QT given this approach.


1.4.

Re: Consciousness and Quantum Mechanics

Posted by: "SWM" wittrsamr@xxxxxxxxxxxxx

Mon Jan 4, 2010 8:24 am (PST)



--- In Wittrs@yahoogroups.com, Joseph Polanik <jPolanik@...> wrote:

<snip>
>
> >As for the theory I subscribe to, it isn't Dennett-based, it is
> >Dennett-consistent.
>
> okay; then, I will undertake to show that your mechanistic,
> Dennett-consistent theory of consciousness can't possibly be true unless
> the von Neumann Interpretation of QM is wrong.
>

Well, yes, that's what I had asked you to do when you raised this and what I thought you were doing.

> >>>On that score I would say you haven't yet made the case.
>
> well, let's see where we are. the first step in making my case consists
> of establishing that the von Neumann Interpretation of QM is dualistic.
>
> unfortunately, ...
>
> >>when I presented the von Neumann Interpretation of QM (which is as
> >>overtly dualistic as one can get without actually plagiarizing from
> >>Descartes' scrapbook), you resist the suggestion that the von Neumann
> >>Interpretation is incompatible with your mechanistic, Dennett-based
> >>theory of consciousness.
>
> >What the "f" are you talking about?
>
> Stuart,
>
> do you understand that the von Neumann Interpretation of QM is
> dualistic?
>

You say above I "resist the suggestion that the von Neumann Interpretation is incompatible with your mechanistic, Dennett-based theory of consciousness" when, of course, THAT was precisely the reason I was intrigued by your comment and asked you to explain the von Neumann interpretation and how it implies that a mechanistic model is wrong. What I have seen, so far, is your claim that von Neumann's thesis (I,II,III) converts into your thesis (1,2,3) by making his II (all the physical instrumentalities of observation) into your 2 (the phenomenal). This latter is certainly dualistic and may well be a fair interpretation of von Neumann though I don't know that it is. Is von Neumann dualistic in the way you present him? As I've noted, one can recognize an observer in the mix without presuming that the observer is not physically derived.

Dualism in the philosophical sense (as used by Descartes and even Chalmers) implies that the observer is something separate from the rest of physical reality, an added element in the mix that is not reducible to the rest of the mix.

In THAT sense one need not read the von Neumann thesis you presented as dualist (whether he was personally or not), i.e., recognizing a dichotomy between observed and observer is NOT dualism. Even Dennett acknowledges an observer, a subjective standpoint (which is what he sets out to explain). But THAT isn't and doesn't imply dualism.

SWM

> suspend any belief you may have that I have tweaked or added to or
> subtracted from it; and, just answer that one question.
>
> hint: it's a yes or no question.
>
> Joe


1.5.

Re: Consciousness and Quantum Mechanics

Posted by: "gabuddabout" wittrsamr@xxxxxxxxxxxxx

Mon Jan 4, 2010 2:48 pm (PST)





--- In WittrsAMR@yahoogroups.com, Sean Wilson <whoooo26505@...> wrote:
>
> Bud:
>
> Could you do me a favor from now on and delete the portion of the other person's message that you don't need? This message below (which I have only referenced) has a large segment below your signature that should be cut. We have a block-quoting rule in the system, which was apparently escaped by the way your mail client sent the mail. I stress the need to remove as much as possible so the message board isn't such a hassle to read. The policy is 25 lines per your thought, so it should be pretty easy to follow. The real point is to delete below your signature. After your last comment, get rid of the stuff below.
>
> Regards and thanks. 

I think that is a terrible rule. Allowing the remainder lets third parties see what WASN'T responded to, and lets them see the whole post to which a reply is given.

Turns out that your post here is worse than simply not responding.

Cheers,
Budd

Ps. It shouldn't be a hassle to simply quit reading after the signature!

> Dr. Sean Wilson, Esq.
> Assistant Professor
> Wright State University
> Personal Website: http://seanwilson.org
> SSRN papers: http://ssrn.com/author=596860
> Discussion Group: http://seanwilson.org/wittgenstein.discussion.html
>
>  
> ----- Original Message ----
> From: gabuddabout <gabuddabout@...>
> To: wittrsamr@...
> Sent: Sat, January 2, 2010 3:14:35 PM
> Subject: [Wittrs] Re: Consciousness and Quantum Mechanics
>
>
>


1.6.

Re: Consciousness and Quantum Mechanics

Posted by: "gabuddabout" wittrsamr@xxxxxxxxxxxxx

Mon Jan 4, 2010 2:55 pm (PST)





--- In WittrsAMR@yahoogroups.com, "iro3isdx" <wittrsamr@...> wrote:
>
>
> --- In Wittrs@yahoogroups.com, "gabuddabout" <wittrsamr@> wrote:
>
>
> > Philosophy is easy. The hard problem is called such for a reason.
>
> Philosophy is mostly a compendium of fairy tales. The hard problem is
> hard because it does not fit with the accepted plot lines.
>
> Regards,
> Neil

I'm confused. Are you offering that anything you say is to be interpreted as part of a fairy tale? Or are you advancing a thesis as to why the hard problem doesn't fit with the accepted plot lines of a fairy tale?

Or are you wanting to hedge what you say by allowing that as serious as you can get, you are always with the fairy tale ace in the hole such that when the dirt flies you can say, "Just kidding!"?

Thanks to the group for all the amazing replies to my riff!

Cheers,
Budd



1.7.

Re: Consciousness and Quantum Mechanics

Posted by: "Sean Wilson" whoooo26505@xxxxxxxxx   whoooo26505

Mon Jan 4, 2010 3:16 pm (PST)



.. ok Bud. Thanks for the input. The experiment with you was in good faith. Sorry to have thrown you out. At least the Analytic list has the norms you prefer.

Regards.   

SW


1.8.

Re: Consciousness and Quantum Mechanics

Posted by: "J" wittrsamr@xxxxxxxxxxxxx

Mon Jan 4, 2010 5:05 pm (PST)



JP,

I wrote:
> >Saying that the mathematics work wherever we choose to draw the
> >boundary is not equivalent to saying that consciousness is necessary
> >for collapsing the wave function.

you replied:
> perhaps not; but, that's why there are different interpretations of QM

You say "perhaps not". Are you suggesting that they might be equivalent claims? Are you unsure?

And when you say, "that's why...", are you saying that the non-equivalence between the two claims is why there are different interpretations? Are you saying that the uncertainty about their equivalence is why there are different interpretations? Are you saying that the first claim, viz. "the mathematics work wherever we choose to draw the boundary", is why there are different interpretations?

I am leaning toward that last reading of your remark. But there were other interpretations of quantum mechanics before von Neumann's demonstration of that point, so that can't be right either.

>
> given the collapse postulate, saying that the mathematics work
> wherever we choose to draw the boundary makes it necessary to find

"Necessary" here means something like "otherwise the theory is incomplete" (in some sense).

Different scientists have different requirements. They also vary in their scruples concerning the legitimacy of various questions.

> something else, something outside (I + II), to cause the collapse of
> the wave function during a measurement. von Neumann postulated that
> this was the abstract I, a term for which 'consciousness' is generally
> substituted.

Certainly, others have posited such a thing, e.g. Stapp. Did von Neumann? He used that expression ("abstract ego") in the course of presenting various possible ways of characterizing experimental situations. Does that in itself count as a postulate?

>
> you could choose to deny the collapse postulate (as in the Many
> Worlds Interpretation); but, then you'd have branching universes to
> contend with.

(I find that way of putting things somewhat amusing. As far as I can tell, the biggest objection to Everett's view is that it involves a seemingly extravagant claim, albeit one with no obvious observable consequences. We wouldn't have to "contend" with them at all! That's part of the problem.)

> until all but one interpretation is ruled out empirically, you get to
> pick your poison.

Or decline to drink.

>
> >It may suggest that for some readers and obviously some have
> >developed the argument in that direction. But if he did believe such
> >a thing himself, von Neumann was far more circumspect in admitting
> >such a belief, so it is disingenuous to credit (or blame) him for
> >such a view.
>
> >And Nick Herbert is a popularizer, albeit a rather good one. But in
> >saying that "von Neumann's world is entirely quantum", he grossly
> >oversimplifies.
>
> oversimplifies, how?

For the reasons stated above. To take one possible way of characterizing the experimental situation that von Neumann describes and call it "von Neumann's world" is to overlook the subtlety and restraint von Neumann himself showed.

Moreover, even if we take Stapp's view and call it von Neumann's, then the world described certainly is not "entirely quantum". The abstract ego is part of the world, else it could not interact with the world by causing collapses!

I wrote of:
> >...the study of quantum decoherence, which may yet indicate a
> >non-arbitrary way of drawing such a boundary. Or rather, if I
> >understand correctly, how seemingly classical behavior can occur
> >with no such boundary.

you replied:
> I think the latter description of the impact of decoherence theory is
> the more accurate.

That's my suspicion, though I confess my grasp of these issues is not what I might like.

> >...as it stands, various interpretations of quantum mechanics are
> >underdetermined by the theory. They are philosophical positions, not
> >scientific theories.
>
> I disagree;

Note that I wrote "as it stands". I grant that such things can and do change.

Atomic theory.

Was Democritus engaged in metaphysics? Or science? (setting to one side the anachronistic nature of applying such a distinction to his case)

Was Boyle? Was Dalton?

Brown, Desaulx, Einstein, and Perrin were clearly doing science.

But is there a clear boundary between the cases? We could choose to draw one for a particular purpose.

And does "atom" mean the same thing in these various cases and in subsequent work? There are various points of overlap, but also significant differences between the various conceptions. So the connection between what Democritus was doing and later developments is not straightforward.

Still, who would dispute that the ideas of Democritus would later have scientific significance?

> although, it's difficult to get alternate theories to make competing
> predictions for a technologically feasible experiment.

It seems to me that "technological feasibility" would be too high a bar to set for the distinction I would like to make. And your points about EPR, Bell's inequality, and experiments by Alain Aspect and others are well taken on that score.

On the other hand, I have serious reservations about saying, e.g. "Of course Everett's view has experimental consequences: we could easily imagine finding ourselves in a setting where Mr. Spock has a goatee and gives a Nazi-style salute. We just don't currently have the technology!"

And what possible observation could support or falsify the claim that consciousness is necessary for the wave function to collapse?

> anyone is free to 'shut up and calculate'. doing so might have
> resulted in less conflict between the followers of Copernicus and the
> Roman Catholic Church; but, as it turned out, the math that better
> predicted the behavior of the world better described the world.

It's not clear to me what you're saying here.

The Copernican system was more economical in its description and it enabled more efficient calculations. As far as accuracy goes, we have to go to Kepler and Brahe before we can start making distinctions there. But the picture was unquestionably more fruitful, as the discoveries of Kepler, Galileo, and Newton demonstrate.

And...?

>
> yes, assuming that the earth revolved around the sun simplified
> astronomical calculations by getting rid of some of the epicycles
> required by the Ptolemaic system; and, as it turned out, the earth
> does in fact revolve around the sun.

We certainly now regard it as right to say that the earth revolves around the sun. And wrong to say the sun revolves around the earth. We have embraced the picture for all sorts of reasons.

But has science settled the matter?

Our most successful theory about such matters rejects the idea that there is any privileged frame of reference. According to General Relativity, it is just as legitimate to describe the earth as stationary as it is to describe the sun that way.


1.9.

Consciousness and Quantum Mechanics

Posted by: "Joseph Polanik" wittrsamr@xxxxxxxxxxxxx

Mon Jan 4, 2010 5:10 pm (PST)



>>so there is really no excuse for you to conflate my taxonomy of
>>reality types with von Neumann's division of the world.

>I'll say it again. You said (and I'll just quote you from your own
>words in this very post, above) that you:

>"undertook to show that [SWM's] mechanistic, Dennett-based theory of
>consciousness can't possibly be true unless von Neumann is wrong."

>To which I responded by asking you what von Neumann's thesis
>specifically claims. Apparently you gave me a hybrid Polanik/von
>Neumann thesis which is NOT what I asked about and not what was in
>question, given your very specific claim (repeated above).

>Now this dispute looks to be more smokescreen than substance, at this
>point. My interest, my only interest in this, has to do with whether
>you have found a sound criticism of a Dennettian model of
>consciousness. Instead of answering THAT question you have now
>sidetracked us with your indignation over whether I should have known
>that you were really always hybridizing the argument from the
>beginning.

>So I'll tell you what, I'll stipulate to that. I shall agree that I
>should have known what you had in mind from the beginning and that I am
>guilty of not having read your past offerings closely enough to have
>been cognizant of the actual nature of your claim, i.e., that it was
>less about von Neumann (despite your repeated references to him!) than
>it was about Polanik's take on him. All right? Feel better?

what you should have known from the beginning is that von Neumann did
not employ my taxonomy of reality types for any purpose whatsoever.

indeed, von Neumann died almost 50 years before my first post about my
taxonomy of reality types.

Joe


1.10.

Consciousness and Quantum Mechanics

Posted by: "Joseph Polanik" wittrsamr@xxxxxxxxxxxxx

Tue Jan 5, 2010 2:57 am (PST)




SWM wrote:

>Joseph Polanik wrote:

>>Stuart, do you understand that the von Neumann Interpretation of QM
>>is dualistic?

>What I have seen, so far, is your claim that von Neumann's
>thesis (I,II,III) converts into your thesis (1,2,3) by making his II
>(all the physical instrumentalities of observation) into your 2 (the
>phenomenal).

Stuart, focus!

my claim is just the opposite of what you say it is.

I claim that von Neumann did not use my taxonomy of reality types in his
analysis of the measurement problem. he did not convert the formula
expressed in his notation, (I + II) | III, into a formula that, when
expressed in the notation I use for reality types, includes reality type
2. I don't make that conversion either. Indeed, I specifically told you
"reality type 2 does not appear in von Neumann's formula". [my post of
2009-12-31 - 01:47 PM, msg #3686 in the Yahoo group archive]

have I dispelled your confusion on this point? let's check.

Stuart, do you understand that the von Neumann Interpretation of QM is
dualistic? if so, do you understand the von Neumann Interpretation to be
dualistic in the Cartesian sense (interactive substance dualism) or in
the Chalmersian sense (substance monism with property dualism) or in
some other sense?

Joe


2.1.

Re: SWM and Strong AI

Posted by: "J" wittrsamr@xxxxxxxxxxxxx

Mon Jan 4, 2010 8:23 am (PST)



SWM,

> Neither short nor sweet, J.

Obviously, the excerpt wasn't short. I was referring to my own remarks, which were indeed brief. And in going out of my way not only to give you the reference but to excerpt a good deal of it so that you could read it in context, as you requested, I was doing you a favor.

A clarification of something misunderstood in our past exchanges: I am not in the least an admirer of Searle. I find him wrong-headed on a number of points. I am also not much of a defender of the Chinese Room Argument.

When I said that I was indicating my disagreement with your reading of Searle and of Strong AI so that you would stop bringing it up, I should have made clear that I wanted you to stop bringing it up WITH ME. If you want to criticize Searle (or anyone else) that's your business and I don't care either way. But in bringing the topic into our exchanges, you seemed to be trying to engage me in your ongoing discussion and I wanted you to know that you shouldn't look to me for support on that subject (since I think you misread him) or for debate (since I see debating you on the topic as completely pointless).

Now, I am trying to be civil here, but I would like you to consider the following:

1. You clearly discuss "dualism" quite a lot. You clearly think it's an important issue. And the tenor of your remarks suggests you think it a problematic position.

2. You've been discussing Strong AI and the Chinese Room Argument on various lists since at least 2004, as a search of Google Groups will confirm.

3. The primary source article for the Chinese Room Argument contains an explicit characterization of Strong AI as evincing dualism, as the excerpt I provided demonstrates.

4. By your own admission, you were "unaware" that Searle had said that.
> > > I am unaware that Searle ever called "strong AI" dualism so if
> > > you can provide the citation, I'd be interested to read it in
> > > context.

Now, try to look at this from the perspective of a third party. Can you not see why your reading here might be seriously questioned, given your obvious and ongoing interest in both the Searle arguments and the matter of dualism and your failure to register how his remarks relate the two?

I am really at a loss to see how, given 1-3, you could have been "unaware", as per 4. It is difficult for me not to either seriously question your ability to read attentively or to question your sincerity in challenging people to show you the reference.

But perhaps there are more charitable ways of interpreting your behavior and I am just failing to see them. Just as there are more charitable ways to interpret my own reluctance to engage with you on this topic.

(I suspect that the idea that I refuse to do so simply because of some inability is simply laughable to anyone who has engaged with me at length on other topics.)

In any case, perhaps you can see why this sort of thing might make someone reluctant to see any point in trying to pursue discussion with you of these topics.

Your attempt to distinguish your "understanding of Searle's position" from your "recollect(ion of) some particular statement(s)" or of "his precise verbiage" and your attempt to distinguish the fact that you "didn't recollect something he said in the course of making his argument" from whether you "have misunderstood Searle's claims and his argument", are interesting moves. I find them highly suspect, though I suppose we could unpack them and justify them. I've wasted more than enough time with this though.

When it comes down to it, the central problem that seems to plague your various readings is this: where Searle asks, "But could something think, understand, and so on SOLELY IN VIRTUE of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a SUFFICIENT CONDITION of understanding?"(emphases mine), you regularly ignore the "solely in virtue..." and "sufficient condition..." phrases.

You're also obviously unwilling or unable to see that you do this. So there's really no point in continuing with the discussion.

JPDeMouy

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/

2.2.

Re: SWM and Strong AI

Posted by: "SWM" wittrsamr@xxxxxxxxxxxxx

Mon Jan 4, 2010 10:07 am (PST)



--- In Wittrs@yahoogroups.com, "J" <wittrsamr@...> wrote:

> SWM,

> > Neither short nor sweet, J.
>
> Obviously, the excerpt wasn't short. I was referring to my own remarks, which were indeed brief. And in my going out of my way to not only give you the reference but to excerpt a good deal of it so you could read it in context as you requested, I was doing you a favor.
>

It wasn't pertinent to your claim; it just amounted to providing some text with certain explicit statements by Searle that I had already said I didn't recall.

> A clarification of something misunderstood in our past exchanges. I am not in the least an admirer of Searle. I find him wrong-headed on a number of points. I am also not much of a defender of the Chinese Room Argument.
>

> When I said that I was indicating my disagreement with your reading of Searle and of Strong AI so that you would stop bringing it up, I should have made clear that I wanted you to stop bringing it up WITH > ME.

Insofar as you enter a discussion with me, and the matter seems to me to touch on Searle, I will bring it up as I deem it needed. There is no reason for you to enter such discussions in the first place, though you have several times inserted yourself into ongoing discussions of mine with others on this subject. That's okay with me, but you have used those opportunities to pick fights rather than have a serious exchange. While I suppose that can't be helped in these kinds of forums, if you're going to pick fights, you ought to back up the things you say.

Until now, all you have done is indicate you view my reading and critique of Searle with some measure of contempt. You're free to do that, of course. But then you ought to say what about it is wrong -- something I see you finally begin to do below and which I will address there.

> If you want to criticize Searle (or anyone else) that's your business and I don't care either way. But in bringing the topic into our exchanges, you seemed to be trying to engage me in your ongoing discussion and I wanted you to know that you shouldn't look to me for > support on that subject

What makes you think I was looking to you for support on any subject? Where did that come from?

>(since I think you misread him) or for debate (since I see debating you on the topic as completely pointless).
>

Yes, you have said that, and that is your prerogative, as it is anyone's to make such claims. And it is my prerogative to ask you for specifics. But you don't have to offer any. You don't have to do anything here, in fact, except satisfy Sean's understanding of what is acceptable posting. However, I have pointed out that making charges without backing them up is not a very intellectually respectable way to proceed, though there are no rules here aimed at ensuring intellectual respectability.


> Now, I am trying to be civil here, but I would like you to consider the following:
>
> 1. You clearly discuss "dualism" quite a lot. You clearly think it's an important issue. And the tenor of your remarks suggest you think it a problematic position.
>

Actually, my critique of Searle does not hinge solely on the claim that he is an implicit dualist, as you would know if you had read the history of these discussions, as you implied you had. I have noted that implicit dualism is one issue, but that Searle also makes logical errors in the original syllogistic argument (though they are somewhat subtle), which have nothing to do with dualism, and that his later argument hinges on a confusion about what computer programs are understood to be in the context of AI. Moreover, I have said time and again that I am not arguing for or against dualism, merely pointing out that an implicit presumption of dualism underlies Searle's idea about consciousness, even though he explicitly denies being a dualist (despite the charge of many of his critics -- not just me -- that he actually is one).

> 2. You've been discussing Strong AI and the Chinese Room Argument on various lists since at least 2004, as search of GoogleGroups will confirm.
>

Quite true. Probably earlier than that, too.

> 3. The primary source article for the Chinese Room Argument contains an explicit characterization of Strong AI as evincing dualism, as the excerpt I provided demonstrates.
>

The Chinese Room Argument was initially put forth by Searle back in the early eighties and appears in numerous books he published over the years since then, including Minds, Brains and Science (the Reith Lectures); Language, Mind and Society; Consciousness and Language; The Mystery of Consciousness; and some others that don't immediately come to mind. It also appeared in numerous papers of his over the years. Searle changed the formulations of the argument over the years as well. The way it's presented in the Reith Lectures, for instance, is not how it appears later on, nor does it match a very early paper of his that I saw. Note that I don't claim to remember Searle's precise statements every step of the way. I am interested in his argument: first the CRA, and later the revised argument about unintelligibility, which replaced the CRA as his main attack on computationalism.

> 4. By your own admission, you were "unaware" that Searle had said that.

Yes, I said that right up front. That didn't mean I was denying he said it. It meant I did not recall his explicit statements linking AI to dualism. Having seen the text, I think it is clearly a confusion because it misunderstands the computationalist claim or dualism or both. However, I find myself in agreement with him elsewhere, in another paper we reviewed on Analytic, where he said that the only dualism that means anything is the kind that reduces, at bottom, to substance dualism. I think he is right about that and it strikes me that he was wrong in linking AI to dualism in the text you provided since supposing that the mental can be produced on platforms other than brains is nothing like the kind of dualism Searle asserts is the only kind of dualism that counts.

Now I don't know the date of the paper you excerpted that from and I don't recall your providing it. But it pays to recall that Searle has, over the years, altered some of his claims, positions and expositions. Prolific writer that he is, that is not surprising. So perhaps he no longer would make that claim about AI and dualism in light of the paper in which he asserted that the only real dualism reduces, on analysis, to substance dualism.


> > > > I am unaware that Searle ever called "strong AI"
> > dualism so if you can provide the citation, I'd be
> > interested to read it in context.
>
> Now, try to look at this from the perspective of a third party. Can you not see why your reading here might be seriously questioned, given your obvious and ongoing interest in both the Searle arguments and the matter of dualism and your failure to register how his remarks relate the two?
>

As noted above, Searle has a long history of rhetoric on the subject with lots of variations over the years. I do not claim to recall every word or every formulation he ever used. Nor do I claim to be infallible in my recollections generally. I am interested in his arguments. You stated that my understanding of his arguments was flawed and yet you declined to say what was flawed about it (below I note you start to address that though). His argument against "strong AI" doesn't hinge on a claim that it is a form of dualism.

> I am really at a loss to see how, given 1-3, you could have been "unaware", as per 4.

It's a matter of core points and recollections. His argument against "strong AI" hinges on what he claims cannot be found in his CR scenario, intentionality. It does not hinge on a claim that strong AI is dualistic.

Searle claims that nothing in the CR understands Chinese because there is no intentionality, therefore intentional intelligence, a necessary feature of any ascription of consciousness, is lacking. He then goes on to say that its lack in the CR shows that it would always be lacking in anything constructed with the same constituents as the CR, i.e., on any computational platform.

There are plenty of arguments against this, the most sensible, in my view, being the connectionist reply, which hinges on a sometimes insufficiently explicated notion that the reason intentionality is absent in the CR is not because of the nature of the constituents of the CR, as Searle asserts, but because it hasn't been specked in in the first place.

Note that the CR has only one basic function going on: rote translation (conversion of inputs in one set of symbols to outputs in another). But nobody thinks that that is what consciousness is. That is, the brain is not a rote translating machine like this and no one argues that it is, not even AI researchers!
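Purely as a toy illustration of the "rote" symbol manipulation being described here (my own sketch, not anything from Searle or the AI literature; the symbol names are invented), the single function attributed to the Room can be pictured as a bare lookup from one set of symbols to another, with nothing in the process that understands anything:

```python
# Hypothetical rulebook: maps input symbol strings directly to output
# symbol strings. Following these rules is all the "room" ever does.
RULEBOOK = {
    "squiggle-squiggle": "squoggle",
    "squoggle-squiggle": "squiggle",
}

def chinese_room(symbols: str) -> str:
    """Return whatever output the rulebook dictates for the input symbols.

    No meaning is consulted anywhere; the mapping is pure rule-following.
    """
    return RULEBOOK[symbols]

print(chinese_room("squiggle-squiggle"))
```

The point of the sketch is only that such a mapping, however large the rulebook, involves one basic function; nobody on either side of the debate claims the brain works like this.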

The connectionist reply to Searle's CRA proposes that a system built of many different R's, performing a much broader range of functions than just rote translation, and linked together, would qualify as conscious if they were performing the right functions (as with Dehaene's brain model).

And this, finally, brings us to the dualism problem. One has to either see consciousness as separate and distinct from other things or as just an _expression_ of other things that aren't, themselves, conscious.

If one sees consciousness as the latter, then the connectionist model makes sense. If one doesn't, then it doesn't because one is still looking for the consciousness in the underlying mix as Searle is. It is this attachment to an idea of consciousness that conceives it as irreducible to constituents that aren't conscious that is, essentially, dualist.

I have pointed out that when Searle claims that the CR isn't conscious because none of its constituents are, he is relying on a picture of consciousness that presumes consciousness is irreducible and that this is in conflict with certain of his claims: 1) that he is not a dualist; and 2) that brains cause consciousness (because, if they do, they must do so in a physical way unless he wants to presume they do so as agents bringing something new and irreducible into the world which IS dualism).

> It is difficult for me not to seriously question either your ability to read attentively or your sincerity in challenging people to show you the reference.
>

As I said, you referred to my past alleged mistakes in understanding Searle's argument. That there were some things Searle said in the past that I didn't recall is not evidence of THAT claim. If you think my understanding of his argument is wrong or my counterclaims are, then say where and show it. Pointing out that I have forgotten certain statements that I already said I had forgotten isn't any kind of support for your claim.

> But perhaps there are more charitable ways of interpreting your behavior and I am just failing to see them. Just as there are more charitable ways to interpret my own reluctance to engage with you on this topic.
>

This isn't an issue of considering "more charitable ways." You challenged my critique of Searle based on alleged things I had said in the past but declined to say what they were or what the critique I had made was that you were challenging. Above, in response to some of your comments here, I have begun to bring some of my past points re: this issue onto this list. You can begin to deal with them or not, as you like. But if you're serious about your claim that I don't have Searle right, you should support such claims here and now.

> (I suspect that the idea that I refuse to do so simply because of some inability is simply laughable to anyone who has engaged with me at length on other topics.)
>

The inability I referred to was an inability to find statements I had made that support your claim that you had seen statements of mine that indicated I don't understand the Searlean argument. I wasn't speaking of your intelligence which seems respectable enough. My reference had to do with asking you to specify, to support. If there are such past statements of mine that can be read, in context, in support of your argument, then you ought to be able to find them and present them here. If there aren't then you will be unable to.

> In any case, perhaps you can see why this sort of thing might make someone reluctant to see any point in trying to pursue discussion with you of these topics.
>

You started out "reluctant" albeit without offering anything to back it up.

> Your attempt to distinguish your "understanding of Searle's position" from your "recollect(ion of) some particular statement(s)" or of "his precise verbiage" and your attempt to distinguish the fact that you "didn't recollect something he said in the course of making his argument" from whether you "have misunderstood Searle's claims
> and his argument", are interesting moves.

"Moves"? It's quite obvious that, unless we have photographic memory (which I don't pretend to have), we will not recall everything someone has ever said, and this will be especially so in cases where someone has written voluminously and changed his ways of stating his case over the years.

> I find them highly suspect, though I suppose we could unpack them and justify them. I've wasted more than enough time with this though.
>
> When it comes down to it, the central problem that seems to plague your various readings is this: where Searle asks, "But could something think, understand, and so on SOLELY IN VIRTUE of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a SUFFICIENT CONDITION of understanding?"(emphases mine), you regularly ignore the "solely in virtue..." and "sufficient condition..." phrases.
>

Here you finally offer some kind of critique of my critique. Note that the issue comes down to what consciousness is. If a functionalist account is sufficient, then the right kind of computer (having sufficient capacity) and the right kinds of programming (performing the right functions) could succeed ("solely in virtue" of being that). But the issue must be whether such a functionalist account is sufficient. I have argued that we can give a full account of what we mean by "consciousness" in such a functionalist way and, if we can, then the Searlean CRA's flaws become evident.

To think a functionalist account can't be sustained, one has to presume that brains do something other than just perform certain processes in the right sort of way. While it is not impossible that they do, Searle has no account of what that might be, and his formulation that brains cause consciousness leaves us with one of two options: 1) they operate in a way that is analogous to what computers do, or 2) they act as a deus ex machina that brings something new into the world.

Certainly his CR offers no evidence that a more robustly specked system could not achieve what the CR, itself, cannot.

> You're also obviously unwilling or unable to see that you do this. So there's really no point in continuing with the discussion.
>
> JPDeMouy
>

I note you keep ending these missives by saying you don't intend to continue the discussion.

Anyway, if you think I have the CRA or the later argument wrong, feel free to say where. You have enough information from me now to enable you to move in that direction.

SWM


2.3.

Re: SWM and Strong AI

Posted by: "J" wittrsamr@xxxxxxxxxxxxx

Mon Jan 4, 2010 1:38 pm (PST)



SWM,

The paper from which the excerpt was taken was, as I originally indicated, "Minds, Brains, and Programs". I had not specified that this paper was from 1980, though I did refer to the paper as "seminal", which I thought would make clear that this was the original presentation of the argument.

Well, now I've made that explicit.

>
> Actually, my critique of Searle does not hinge on the claim
> that he is an implicit dualist...

I didn't say that it did. My point was that the issue of "dualism" seems to be of no small interest to you and that I would therefore have thought it likely that a reference to dualism in that paper would have stood out to you. However, pause before you debate this point because shortly I'll acknowledge myself why I was mistaken in that supposition...

The way it's presented in the Reith
> Lectures, for instance, is not how it appears later on, nor
> does it match a very early paper of his that I saw.

Ah, my guess is that the paper you mention here is the one I've called "seminal" and the "primary source". But if that paper is one you only vaguely remember seeing, then it's perfectly understandable that you might not recall the specific dualism reference in that paper.

My mistake was in supposing that you would have very closely studied the original paper. Now whatever the merits of treating that paper as of vital importance in scholarly matters, it is not necessarily what is required in a discussion like this.

Whether he made similar comparisons between "Strong AI" and "dualism" elsewhere, I do not recall. And I do not have the relevant papers at hand to check. So in the absence of that, I'd have to suppose that I was in error in expecting that you'd have been well aware of the dualism comparison.

I think he is right about that
> and it strikes me that he was wrong in linking AI to dualism
> in the text you provided since supposing that the mental can
> be produced on platforms other than brains is nothing like
> the kind of dualism Searle asserts is the only kind of
> dualism that counts.

This strikes me as a reasonable point and it even suggests to me that there may be good reasons not to expect that he'd have made the argument comparing "Strong AI" with dualism in later papers.

By the way, the "supposing that the mental can be..." is not the position of Strong AI. Strong AI is not the acknowledgment of a possibility. Strong AI is the assertion of an equivalence.

But I won't hang you on a clause that was part of a larger point with which I don't generally disagree.

So perhaps he no longer would
> make that claim about AI and dualism in light of the paper
> in which he asserted that the only real dualism reduces, on
> analysis, to substance dualism.
>

Perhaps so, yes.

> There are plenty of arguments against this, the most
> sensible, in my view being the connectionist reply which
> hinges on a sometimes insufficiently explicated notion that
> the reason intentionality is absent in the CR

You do grant that? I'll ask that more directly later on.

is not because
> of the nature of the constituents of the CR, as Searle
> asserts, but because it hasn't been specked in in the first
> place.

Could you elaborate on the contrast between "the constituents of the CR" and the claim that it hasn't "been specked (sic)"? What does it mean to say that it hasn't "been specked" but that it is lacking some relevant constituents?

(I believe the word you intend is "specced" or "spec'd", as it is used in engineering, "built according to spec" (specifications), rather than "spec houses" (where "spec" is "speculation") and certainly not "specked" as we would describe a drinking glass that needs rinsing. But correct me if I misread.)

>
> Note that the CR has only one basic function going on: rote
> translation (conversion of inputs in one set of symbols to
> outputs in another).

Giving answers to questions is not the same as translation.

But nobody thinks that that is what
> consciousness is.

"Thinking" and "being conscious" are two different things. The claim that thinking, properly understood, requires consciousness may be a claim that Searle has made (and I don't care to venture that far afield) but it is a separate claim.

That is, the brain is not a rote
> translating machine like this and no one argues that it is,
> not even AI researchers!

No but the claim that "thinking" should be ascribed to anything that gives the appropriate outputs given the appropriate inputs under appropriate testing conditions is a core tenet of much AI research, taking Turing's paper, "Computing Machinery and Intelligence", as a defining statement of the research program.

>
> The connectionist reply to Searle's CRA proposes that a
> system built of many different R's, performing a much
> broader range of functions than just rote translation, and
> linked together, would qualify as conscious if they were
> performing the right functions (as with Dehaene's brain
> model).

If the algorithm in question can be implemented on any Turing-equivalent architecture, then it can be implemented by the Chinese Room.

If it cannot be, if the specific hardware implementation is a relevant consideration, then the proposal is no longer Strong AI. Searle's argument then is no longer the Chinese Room Argument, per se.

"On the assumptions of strong AI, the mind is to the brain as the program is to the hardware, and thus we can understand the mind without doing neurophysiology." (from the same 1980 paper)

> I have pointed out that when Searle claims that the CR
> isn't conscious because none of its constituents are,

Are you saying that the Chinese Room is conscious now? Did you not earlier grant that it lacked intentionality? Are you saying that it is conscious but lacks intentionality? Or did I misread you before?

I'm getting that uneasy feeling.

he is
> relying on a picture of consciousness that presumes
> consciousness is irreducible and that this is in conflict
> with certain of his claims: 1) that he is not a dualist; and
> 2) that brains cause consciousness (because, if they do,
> they must do so in a physical way unless he wants to presume
> they do so as agents bringing something new and irreducible
> into the world which IS dualism).

I'm not going to address this, which is not to say I grant it. If it becomes relevant to the question of your interpretation of Strong AI and the Chinese Room Argument, I may take it up then.

As it stands, I've withdrawn the suggestion that your failure to recollect the dualism remark was an indicator of wider problems in your reading.

> > When it comes down to it, the central problem that
> seems to plague your various readings is this: where
> Searle asks, "But could something think, understand,
> and so on SOLELY IN VIRTUE of being a computer with the
> right sort of program? Could instantiating a program, the
> right program of course, by itself be a SUFFICIENT CONDITION
> of understanding?"(emphases mine), you regularly ignore the
> "solely in virtue..." and "sufficient condition..."
> phrases.
> >
>
> Here you finally offer some kind of critique of my
> critique. Note that the issue comes down to what
> consciousness is. If a functionalist account is sufficient,
> then the right kind of computer (having sufficient capacity)

If "the right kind of computer" and "capacity" mean nothing more than "the capacity to implement the algorithm", that's right.

If "capacity" means something more than Turing-equivalence, then that's wrong.

It's wrong as an account of classical functionalism and it's wrong as an account of the position Searle calls "Strong AI".

"(B)eing a computer with the right sort of program" is what is relevant here. He does not mention "being the right sort of computer with the right sort of program". Once you introduce the requirement that it be "the right sort of computer", the position ceases to be Strong AI! Once you add the requirement, then it is no longer "solely in virtue of..."


> and the right kinds of programming (performing the right
> functions) could succeed ("solely in virtue" of being that).

But you've added a requirement. And in so doing you are no longer describing the position of Strong AI.

> But the issue must be whether such a functionalist account
> is sufficient.

What you describe is no longer functionalism.

http://plato.stanford.edu/entries/functionalism/#MacStaFun
http://en.wikipedia.org/wiki/Functionalism_(philosophy_of_mind)

Connectionism is not functionalism. Connectionism was a reaction to functionalism.

I have argued that we can give a full account
> of what we mean by "consciousness" in such a functionalist
> way and, if we can, then the Searlean CRA's flaws become
> evident.

While the Chinese Room Argument likely has many flaws, your own arguments completely miss the point because what you're defending is no longer Strong AI. The CRA doesn't apply to what you're defending. Searle likely would object to what you're defending but on grounds other than the Chinese Room Argument.

>
> To think a functionalist account can't be sustained, one
> has to presume that brains do something other than just
> perform some processes in the right sort of way. While it is
> not impossible they do, Searle has no account of what that
> might be while his formulation that brains cause
> consciousness leaves us with either of two options: 1) they
> operate in a way that is analogous with what computers do or
> 2) they act as a deus ex machina that brings something new
> into the world.
>
> Certainly his CR offers no evidence that a more robustly
> specked system could not achieve

Requiring a "more robustly specked (sic) system" means going beyond Strong AI and beyond classical functionalism.

what the CR, itself,
> cannot.
>

Okay, are you now back to acknowledging that the Chinese Room does not think?

I'm getting that uneasy feeling again.


2.4.

Re: SWM and Strong AI

Posted by: "SWM" wittrsamr@xxxxxxxxxxxxx

Mon Jan 4, 2010 8:22 pm (PST)



--- In Wittrs@yahoogroups.com, "J" <wittrsamr@...> wrote:

<snip>

The way [the cra] is presented in the Reith
> Lectures, for instance, is not how it appears later on, nor
> does it match a very early paper of his that I saw.

Ah, my guess is that the paper you mention here is the one I've called "seminal"
and the "primary source". But if that paper is one you only vaguely remember
seeing, then it's perfectly understandable that you might not recall the
specific dualism reference in that paper.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If it's the same paper, I read it back around 2000, when my son, who was then taking a cognitive science course in college, brought it to me to help him parse it out. Unfortunately, my son didn't retain it after I gave it back to him, which is why I proceeded to read a number of Searle's books after that. The paper my son had brought me piqued my interest, especially because, while my initial reaction was to agree with it, I couldn't shake the feeling that the argument was flawed. It took me some time, and several of Searle's later books, to figure out why I thought so.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

My mistake was in supposing that you would have very closely studied the
original paper. Now whatever the merits of treating that paper as of vital
importance in scholarly matters, it is not necessarily what is required in a
discussion like this.

Whether he made similar comparisons between "Strong AI" and "dualism" elsewhere,
I do not recall. And I do not have the relevant papers at hand to check. So in
the absence of that, I'd have to suppose that I was in error in expecting that
you'd have been well aware of the dualism comparison.

I think he is right about that
> and it strikes me that he was wrong in linking AI to dualism
> in the text you provided since supposing that the mental can
> be produced on platforms other than brains is nothing like
> the kind of dualism Searle asserts is the only kind of
> dualism that counts.

This strikes me as a reasonable point and it even suggests to me that there may
be good reasons not to expect that he'd have made the argument comparing "Strong
AI" with dualism in later papers.

By the way, the "supposing that the mental can be..." is not the position of
Strong AI. Strong AI is not the acknowledgment of a possibility. Strong AI is
the assertion of an equivalence.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Searle's CRA is an argument against the possibility of what he calls "Strong AI". Note that he actually offered several different formulations of what he had in mind by "Strong AI" over the years (though he never wavered, as far as I recall, on "Weak AI"). I don't have an exact quote of what I take to be his most developed position, but basically it is the supposition that one can produce consciousness on a computer by running certain kinds of programs. Note that I see no difference between this formulation and your "assertion of an equivalence", i.e., that consciousness is just the running of certain kinds of programming on computers. I suppose we could argue over the nuances here, but it seems to me a not especially useful area of debate. The bottom line is this: if consciousness is like programs running on a computer (as Dennett describes it), then that in essence IS an assertion of equivalence. (I see below that you are focusing on something that I have seen others say vis a vis the hardware question, so I will address that further down rather than at this point.)

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

But I won't hang you on a clause that was part of a larger point with which I
don't generally disagree.

So perhaps he no longer would
> make that claim about AI and dualism in light of the paper
> in which he asserted that the only real dualism reduces, on
> analysis, to substance dualism.
>

Perhaps so, yes.

> There are plenty of arguments against this, the most
> sensible, in my view being the connectionist reply which
> hinges on a sometimes insufficiently explicated notion that
> the reason intentionality is absent in the CR

You do grant that? I'll ask that more directly later on.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Note that by "connectionist" I have in mind the Churchlands' Chinese Gymnasium Reply which holds that one needs more going on than the limited functionality of Searle's Chinese Room. Their example is that even if Searle is right about the Chinese Room, that would say nothing about a vaster network of such rooms linked together and doing many more things. They proposed a bigger room, i.e., a Chinese Gymnasium with many different processing clerks (processors) at many different desks all communicating with one another.

As I noted, this implies that the Chinese Room Searle devised is fundamentally underspecked. (I use that spelling because I find "spec'd" rather odd; but as long as we know what is meant, this should pose no problem if you prefer a different spelling.) In other words, if consciousness has intentionality, there is no reason to think intentionality is realized by automatic processing of certain input symbols from one list into output symbols taken from another list. Not only does this not look like anything intentional, there is no reason to suppose AI theorists and researchers think it does. Thus an R, as in the CR, that is only capable of this kind of rote responding (whether we focus on the translating, as Searle presents it, or assume a little more is going on, including matching up meanings via the symbol selection, as Searle generally implies) is not doing enough to be conscious; but a system of many CONNECTED CRs, each doing different kinds of things, could conceivably be conscious, contra the CRA's conclusion, if enough of the right kinds of things are being done.
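To make the "many connected rooms" picture concrete, here is a toy sketch of my own (the unit names, rulebooks, and wiring are all invented for illustration; nothing here comes from the Churchlands or Searle): several simple rule-following units, each performing one limited function, are linked so that each unit's output feeds the next. The claim under discussion concerns such a connected system as a whole, not any single rote unit.

```python
def perception_unit(raw: str) -> list[str]:
    # One "room": segments raw input into tokens, nothing more.
    return raw.split()

def association_unit(tokens: list[str]) -> list[str]:
    # Another "room": maps each token through its own small rulebook.
    rulebook = {"hello": "greeting", "rain": "weather"}
    return [rulebook.get(t, "unknown") for t in tokens]

def response_unit(categories: list[str]) -> str:
    # A third "room": selects canned responses for the categories.
    responses = {"greeting": "hi", "weather": "take an umbrella"}
    return "; ".join(responses.get(c, "?") for c in categories)

def gymnasium(raw: str) -> str:
    # The connected system: each unit alone does one rote job,
    # but the linked whole performs a broader range of functions.
    return response_unit(association_unit(perception_unit(raw)))

print(gymnasium("hello rain"))
```

The sketch is not, of course, an argument that such a network is conscious; it only illustrates the structural contrast between a single rote mapping and a system of connected units each doing different kinds of things.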

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

is not because
> of the nature of the constituents of the CR, as Searle
> asserts, but because it hasn't been specked in in the first
> place.

Could you elaborate on the contrast between "the constituents of the CR" and the
claim that it hasn't "been specked (sic)"? What does it mean to say that it
hasn't "been specked" but that it is lacking some relevant constituents?

(I believe the word you intend is "specced" or "spec'd", as it is used in
engineering, "built according to spec" (specifications), rather than "spec
houses" (where "spec" is "speculation") and certainly not "specked" as we would
describe a drinking glass that needs rinsing. But correct me if I misread.)

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

I meant the past tense of "spec", yes. The spelling is one I have historically used though I make no claim that it is standard or accepted. I just don't like "spec'd".

Above I have already offered such an elaboration vis a vis the Chinese Gymnasium Reply which is sometimes called the Connectionist Reply because it hinges on connecting multiple Chinese Rooms.

What is not specked into Searle's CR is whatever functions would need to be performed to provide intentionality and other features of consciousness. Over on the Analytic list and, later, on the Philosophy of AI list, which I occasionally frequent, I provided a much more detailed outline of the kinds of things (in a tiered way) that would have to be accomplished on this kind of functionalist conception of consciousness. (I'll address the question of functionalism further down where you address it.)

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> Note that the CR has only one basic function going on: rote
> translation (conversion of inputs in one set of symbols to
> outputs in another).

Giving answers to questions is not the same as translation.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

That's so, though Searle often describes it as translation. But you're right, more is involved, at least including matching meanings via the syntax of the rules of selection the processor follows blindly. Nevertheless, the point is that even if you widen the scope of what is going on by recognizing this extra aspect, it will still not include all that would need to go on to produce the features we generally recognize as being part of what we mean by "consciousness" (intentionality, awareness, understanding, etc.).

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

But nobody thinks that that is what
> consciousness is.

"Thinking" and "being conscious" are two different things. The claim that
thinking, properly understood, requires consciousness may be a claim that Searle
has made (and I don't care to venture that far afield) but it is a separate
claim.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

That is so but here aren't you doing what you said Bruce was doing, shifting from one thing to another when he segued from my point about the causes of wetness to the idea of feeling the wetness that is caused? My reference to "nobody thinks" was not an assertion that thinking is consciousness or vice versa.

As to the role of thinking in this per se, my guess is that there may be many types of things that may be classified as thinking but it is pretty clear that Searle wants to argue that a machine that operates by rote, however smart, cannot really be said to be thinking. Thus, to understand what is being asked and to respond with that understanding (what the CR as he has specked it cannot do) involves being able to think about the matter, to get the semantics being conveyed via the syntax.

I don't suggest we have an adequate account of what it means to think however. My only point in this is that Searle mistakenly claims that it is impossible that computers should be brought to have intentionality (a part of what we think of as thinking in ourselves) based on his Chinese Room scenario.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

That is, the brain is not a rote
> translating machine like this and no one argues that it is,
> not even AI researchers!

No but the claim that "thinking" should be ascribed to anything that gives the
appropriate outputs given the appropriate inputs under appropriate testing
conditions is a core tenet of much AI research, taking Turing's paper,
"Computing Machinery and Intelligence", as a defining statement of the research
program.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are many approaches in AI. And there are divergent ideas of what should count as thinking. My own opinion on this is that what we call thinking occurs on a continuum and that we are at one place along it and other entities in the animal kingdom at other points. I don't suggest there is a hard and fast line, a threshold that, once crossed, differentiates thinking from not thinking. But certainly, we often have in mind our kind of thinking when we use the term and I believe that is what Searle is thinking about in his argument, too.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

>
> The connectionist reply to Searle's CRA proposes that a
> system built of many different R's, performing a much
> broader range of functions than just rote translation, and
> linked together, would qualify as conscious if they were
> performing the right functions (as with Dehaene's brain
> model).

If the algorithm in question can be implemented on any Turing-equivalent
architecture, then it can be implemented by the Chinese Room.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The issue is whether a process-based system, running computational processes, can be conscious. THAT is Dennett's model and what is implied by the Chinese Gymnasium Reply to Searle. Searle considers both to be mistaken for the same reason that his Chinese Room cannot be conscious, because 'nothing in the Chinese Room' he has specked 'understands Chinese and the Chinese Room doesn't either'. (I use single quote marks rather than double here to indicate that this is a paraphrase and may not exactly replicate his words though I believe it is pretty close.)

By the same token, Dennett in Consciousness Explained reserved an appendix to address Searle's Chinese Room Argument from the perspective of his model.

I have heard many times that there is no difference between a massively parallel computer and a serial processor except, perhaps, for speed. On that basis it has been alleged by some that anything a parallel processor can do a serial processor can also do. And this is true enough in terms of running programs.

But speed does matter (as Hawkins notes when discussing how brains do their job) and a machine that ran all the processes in a serial way would be unlikely to produce, on our timeframe, the results we would recognize as conscious. Moreover, there is another gain with multiple Turing machines processing together: simultaneity of interaction. If the brain is massively parallel (and there is plenty of reason to think it is), then part of what's needed would be hardware that is also massively parallel, so that multiple processes could run simultaneously and interface with one another. (Think Dehaene's global neuronal network.)

I have also seen another benefit cited for parallelism, namely unpredictability. I would like to cite the source (a paper available on-line) but cannot now recall it, though it is something we read on Analytic when arguing the relative merits of parallelism. So speed, simultaneous interactivity and unpredictability are three gains for such a system, one that would be more robust than the CR as specked by Searle.
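The Turing-equivalence point above (a serial machine can do whatever a parallel one does, only spread out in time) can be sketched concretely. The following is a minimal illustration, not anything from Searle, Dennett, or the AI literature; the names ("worker", "round_robin") are my own invention. Each notional processor is a generator that yields after every unit of work, and a single serial loop interleaves them round-robin, so all the "parallel" work gets done, just not simultaneously:

```python
def worker(name, steps):
    """One notional processor: performs `steps` units of work,
    yielding control back to the serial scheduler after each unit."""
    for i in range(steps):
        yield f"{name}: step {i}"

def round_robin(workers):
    """Serially interleave the workers, one step each per pass, until
    all are exhausted: the serial simulation of parallel execution."""
    log = []
    active = list(workers)
    while active:
        still_running = []
        for w in active:
            try:
                log.append(next(w))
                still_running.append(w)
            except StopIteration:
                # This worker has finished; drop it from the rotation.
                pass
        active = still_running
    return log

log = round_robin([worker("A", 2), worker("B", 3)])
# Each worker's steps appear in order, interleaved with the others'.
```

What the sketch cannot capture, of course, is exactly the point at issue: the interleaved steps never actually coincide in time, so whatever gains depend on genuine simultaneity of interaction (as opposed to mere computational equivalence) are lost.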

Now the same individual on Analytic who used to argue that parallelism would make no difference (and who was referenced here by Budd) subsequently shifted his argument to say that if one were talking about parallelism then one was adding to the hardware (even though he never acknowledged the gains I reference above with parallel processing). He continued that, since Searle specifies that he is attacking the idea that consciousness is just programming on a computer, then by changing the computer (insisting on something more robust, with multiple processors running many different sub-systems in tandem) I was 1) no longer arguing against the CRA, because it isn't directed against that, and 2) actually arguing for Searle's view, because Searle did not deny that some form of AI was possible and that at some point we might figure out a way to build synthetic brains from machines.

This was an especially odd claim since Searle, as I have already noted, makes a point of arguing against Dennett's thesis and Dennett explicitly argues against the CRA. So if this is really in keeping with Searle's view, it looks like someone forgot to tell him.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If it cannot be, if the specific hardware implementation is a relevant
consideration, then the proposal is no longer Strong AI. Searle's argument then
is no longer the Chinese Room Argument, per se.

"On the assumptions of strong AI, the mind is to the brain as the program is to
the hardware, and thus we can understand the mind without doing
neurophysiology." (from the same 1980 paper)

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This strikes me as a misreading of Searle. As noted, Searle explicitly argues against Dennett's view on the basis of Searle's claim that programmed processes running on computers cannot be conscious. It's true that Searle does, over time, abandon his active defense of the CRA in favor of his later unintelligibility argument (which I think is even weaker than the CRA). But whichever argument is deployed, Searle makes no distinction between a more robustly specked system (including more processors in operation) and his CRA. In all cases he holds that because computational processes, as we find them in the CR, are manifestly NOT conscious, they could never be in any iteration, as part of any purely programming based system.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

> I have pointed out that when Searle claims that the CR
> isn't conscious because none of its constituents are,

Are you saying that the Chinese Room is conscious now? Did you not earlier
grant that it lacked intentionality? Are you saying that it is conscious but
lacks intentionality? Or did I misread you before?

I'm getting that uneasy feeling.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

What do you mean by "the Chinese room is conscious now"?

I am agreeing with Searle that the Chinese Room is NOT conscious and, contra the System Reply (in its simpler form), I am agreeing that it cannot be.

What I am arguing is that Searle is mistaken in drawing from that fact the conclusion that nothing relying solely on constituents that are on a par with the processes going on in the Chinese Room CAN be.

Here I want to acknowledge a poster on Analytic named Peter Brawley (with whom I was not always in agreement) who picked up on this and made the interesting analogy that, just as we would not build a bicycle and expect it to fly, so we should not expect the Chinese Room, specked as it is, to be conscious. I rather liked that point and took to calling this variant of the System Reply the Bicycle Reply in deference to Brawley's analogy.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

he is
> relying on a picture of consciousness that presumes
> consciousness is irreducible and that this is in conflict
> with certain of his claims: 1) that he is not a dualist; and
> 2) that brains cause consciousness (because, if they do,
> they must do so in a physical way unless he wants to presume
> they do so as agents bringing something new and irreducible
> into the world which IS dualism).

I'm not going to address this, which is not to say I grant it. If it becomes
relevant to the question of your interpretation of Strong AI and the Chinese
Room Argument, I may take it up then.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is a more complex argument, I'll grant, and it is relevant, but it can be left aside for now to avoid unnecessary complications to what we are talking about at this stage.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As it stands, I've withdrawn the suggestion that your failure to recollect the
dualism remark was an indicator of wider problems in your reading.

> > When it comes down to it, the central problem that
> seems to plague your various readings is this: where
> Searle asks, "But could something think, understand,
> and so on SOLELY IN VIRTUE of being a computer with the
> right sort of program? Could instantiating a program, the
> right program of course, by itself be a SUFFICIENT CONDITION
> of understanding?"(emphases mine), you regularly ignore the
> "solely in virtue..." and "sufficient condition..."
> phrases.
> >
>
> Here you finally offer some kind of critique of my
> critique. Note that the issue comes down to what
> consciousness is. If a functionalist account is sufficient,
> then the right kind of computer (having sufficient capacity)

If "the right kind of computer" and "capacity" mean nothing more than "the
capacity to implement the algorithm", that's right.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This assumes a single algorithm but AI does not proceed on a single algorithm nor do I think Searle believes it does. If he does he would be mistaken. (See Minsky's work, for instance -- and he is widely acknowledged as the father of AI.) Indeed, one of the arguments Hawkins deploys against classical AI is that the brain is too slow to operate as a computer does in accomplishing the same things and, therefore, AI must be mistaken in its approach since it aims to develop and run vast numbers of complex algorithms on computers in order to replicate the brain's functionality. The brain (or at least the cortex, his area of interest), Hawkins suggests, relies on a more elegant solution, a fairly simple algorithm that enables neurons to operate in an architectural array to match and alter patterns based on received inputs.

By "capacity", as I have said above, I mean the processing capacity to run multiple algorithms doing multiple things in tandem and in an interactive way. My pc, a merely serial machine, cannot achieve that but massively parallel processing systems could. Thus, the right kind of computer would be the kind with the capacity to run the right kind of algorithmic system. This is not about some single algorithm.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If "capacity" means something more than Turing-equivalence, then that's wrong.

It's wrong as an account of classical functionalism and it's wrong as an account
of the position Searle calls "Strong AI".

"(B)eing a computer with the right sort of program" is what is relevant here.
He does not mention "being the right sort of computer with the right sort of
program". Once you introduce the requirement that it be "the right sort of
computer", the position ceases to be Strong AI! Once you add the requirement,
then it is no longer "solely in virtue of..."

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

See my comments above re: the dispute between Dennett and Searle re: their competing conceptions of consciousness. Whether Searle mentions the "right sort of computer" or not, his dispute with Dennett shows he applies his argument to a massively parallel processing model, too. Moreover, even Searle realizes that one needs the right sort of computer. While telling us that everything can be described as a digital computer (which I consider a bit of a stretch), he acknowledges that not everything can run the same kinds of "programs". While he mainly reserves his notion of the "right sort" to claims about brains as being the right sort of organic machines to do the job, it is implicit in any idea of computers that capacity counts. When personal computers first came out, I recall buying my father a Timex which ran by being hooked up to a tape recorder as the memory source. It had about two bytes of memory as I recall. Now surely you wouldn't want to say that Searle wouldn't have recognized capacity as an issue in that case? Just the same, AI presumes a computer that is sufficiently capacious (and Dennett's thesis presumes massively parallel machines, an extra kind of capacity). All may be Turing Equivalent but if the issue is not the quality of the algorithm but the things we can do with algorithms, then capacity matters and is intrinsic to any idea of AI.

Searle does sometimes seem to speak as if he were addressing the issue of the quality of the programming, the algorithm itself (which is no doubt why his CRA seems convincing to him).
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

> and the right kinds of programming (performing the right
> functions) could succeed ("solely in virtue" of being that).

But you've added a requirement. And in so doing you are no longer describing
the position of Strong AI.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Yes, I've added robustness. Now if Searle's argument is against the possibility of realizing consciousness on a computational base, then robustness doesn't matter, as Searle seems to think when he rejects the Chinese Gymnasium Reply and Dennett's model.

Of course if robustness does matter then it is precisely this that vitiates the CRA as an argument. So one way out of this is to claim, as some have tried, that the CRA is not directed against these more robust system models. But Searle himself undermines this because he DOES reject these more robust models on the basis of his CRA (either the original one or his latter day variant).

Why should robustness matter? This comes down to the underlying conception of what consciousness (or the specific features of consciousness, e.g., intentionality) is. If it is some kind of irreducible property then the fact that it is absent in the constituent processes of the CR is compelling. But if it is, rather, a system property, a feature of some processes performing certain functions, then putting these non-conscious stand-alone algorithmically driven processes together in the right way is precisely what is required.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

> But the issue must be whether such a functionalist account
> is sufficient.

What you describe is no longer functionalism.

http://plato.stanford.edu/entries/functionalism/#MacStaFun
http://en.wikipedia.org/wiki/Functionalism_(philosophy_of_mind)

Connectionism is not functionalism. Connectionism was a reaction to
functionalism.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

I should be a bit clearer I suppose. By functionalism, I am not invoking any particular existing theory but, rather, the claim that this is about what the processes in question do, i.e., their function. I am aware that there is a whole literature addressing theories that are called functionalist but I am not aiming to align myself with any such orthodoxy. However, in reviewing the Stanford URL you provided I noticed this:

3.2 Psycho-Functionalism

". . . just as, in biology, physically disparate entities can all be hearts as long as they function to circulate blood in a living organism, and physically disparate entities can all be eyes as long as they enable an organism to see, disparate physical structures or processes can be instances of memory trace decay ? or more familiar phenomena such as thoughts, sensations, and desires ? as long as they play the roles described by the relevant cognitive theory."

This is in keeping with the notion of functions I am invoking, namely that consciousness (or its many features) may be best understood as so many various physical processes accomplishing particular things, i.e., functions.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

I have argued that we can give a full account
> of what we mean by "consciousness" in such a functionalist
> way and, if we can, then the Searlean CRA's flaws become
> evident.

While the Chinese Room Argument likely has many flaws, your own arguments
completely miss the point because what you're defending is no longer Strong AI.
The CRA doesn't apply to what you're defending. Searle likely would object to
what you're defending but on grounds other than the Chinese Room Argument.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

I suggest to you that if "Strong AI" has any meaning besides being Searle's pet bogeyman, it must accord with what actual AI researchers are engaged in. And what they are aiming to do is replicate the features we recognize as consciousness in ourselves on a computational platform, and for that they require machines with sufficient capacity. It is simply false to suppose that Searle was/is arguing against anything less because, if he were, his arrows would be without a real world target. I doubt he believes that of his argument even if some of his defenders do.

Again I refer you to his disputes with people like Dennett and the Churchlands.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> To think a functionalist account can't be sustained, one
> has to presume that brains do something other than just
> perform some processes in the right sort of way. While it is
> not impossible they do, Searle has no account of what that
> might be while his formulation that brains cause
> consciousness leaves us with either of two options: 1) they
> operate in a way that is analogous with what computers do or
> 2) they act as a deus ex machina that brings something new
> into the world.
>
> Certainly his CR offers no evidence that a more robustly
> specked system could not achieve

Requiring a "more robustly specked (sic) system" means going beyond Strong AI and beyond classical functionalism.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

That is completely mistaken. Searle repeatedly brings his arguments to bear against an array of different replies to his CRA and they often involve something more robust than the CR (i.e., they add things he excluded).

Note that if Searle's CRA ONLY applies to a limited system like the CR (a translating/response device) then so what? It has no implications beyond its own type. All AI efforts, that are not merely about modeling (as we might simulate a hurricane on a computer) or constructing expert systems to achieve human level or better decision drivers, are about systems with multiple algorithmically driven processes intended to replicate various levels of brain functionalities.

what the CR, itself,
> cannot.
>

Okay, are you now back to acknowledging that the Chinese Room does not think?

I'm getting that uneasy feeling again.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

I don't know what you are talking about here. I have repeatedly said the CR is underspecked and that that is a big part of its problem as a basis for drawing conclusions about computationally based efforts to replicate consciousness. Does the CR think? Not as we do or in terms of what we have in mind when we speak of thinking (though thinking could probably be defined in a more limited way for certain applications in which case it might be possible to speak of it as thinking). For the purposes of this discussion, it remains the case that the CR is not intentional and thus lacks at least one critical element we associate with thinking as we do.

SWM

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/

3.1.

Re: Wittgenstein and Theories

Posted by: "J" wittrsamr@xxxxxxxxxxxxx

Mon Jan 4, 2010 9:17 am (PST)



SW,

No rush!

Enjoy your holidays and time with your daughter.

I share your reservations about the value of "justified true belief" as a definition and about the value of such definitions generally.

My only concern was to question the idea that it should necessarily be stigmatized as a theory. That it can be faulted on other grounds when used in other contexts is another matter.

What is true in such a definition might better be captured by pointing out the queerness of saying, e.g. "I know it, though I have no grounds," "I know it but I don't believe it," or "I know it, even though it isn't true."

One might be able to imagine (very unusual) circumstances in which such statements might make sense. Certainly, we could agree that "know" would be being used in an odd way in these cases.

"I don't believe it: I know it!" This makes perfect sense, though another might say, "I don't just believe it..." Or is it more accurate to indicate inverted commas with the first, i.e. "I don't 'believe' it...", which then amounts to, "'Believe' is not the right right way to characterize matters."

Wittgenstein acknowledges that someone might use "know" to characterize a certainty but objects to philosophers doing this, so "I know it, though I have no grounds," may make sense in certain cases.

(This relates to a recent interest of mine: to what extent is Wittgenstein interested in reforming ordinary language, not as some general program, but only for purposes of philosophers wishing to avoid particular muddles? The use of "proposition" and the principle of bipolarity is arguably a case of this.)

> Definitions are only for people who have a "foreign language
> problem." Once you are plugged into the grammar, they are of
> no further use.

Oh, I certainly wouldn't go that far! But I would say that often debating definitions is quite pointless and "formalistic" (or "legalistic") in a decidedly bad way.

As Wittgenstein
> noted, one could make a philosophy (by playing games with
> sense) out of anything

Do you have a specific quotation in mind? Or do you mean that he demonstrated such?

-- what is wishing, intention, law,
> winning, fatherhood, courtesy, etc. etc. Knowledge is not
> special here (at least not to asking what it is).

I would note that discussions of "knowledge", "wishing", "intending", and "law", but not "winning", "fatherhood", or "courtesy", have relevance to examinations of logical questions in Frege and especially Russell, as well as in the Austrian tradition I had previously been emphasizing. While these topics may seem like tangents or like simple examples of how to apply his methods, they are actually quite central. And "winning" becomes central in light of his own leitmotif of games. So they are "special" in a certain sense. They relate to particular puzzles that vexed those concerned with logical problems similar to those with which he struggled.

(Many Wittgenstein students don't read Russell or Frege nearly enough.)

I'm glad you appreciated the information on Gettier. When I was younger, noticing how this man who seemed to be known for nothing else had inspired such a vast literature on the basis of a single short paper had really piqued my curiosity.

> 3. On the value of partial definitions, I'm not sure I
> completely agree. ... The
> key is to avoid traffic accidents in the language game. Not
> to give accounts of words outside of this end.

The idea that a grammatical investigation solely exists to avert a particular misunderstanding is a familiar one and not without merit. Certainly, there are reasons to read Wittgenstein that way. (It is one of the disagreements between Hacker/Baker and later Baker.) But there are also reasons not to. This would make a good subject for a separate thread.

Later in the week, I'll do a
> "Wittgenstein and definitions" mail, and maybe we can talk
> more about it.

I shall look forward to that. If the remarks are from _Lectures_on_the_Foundations_of_Mathematics_, you could point me to the page numbers and I could then post them, as I have the ebook.

>
> 4. I don't think we are seeing eye to eye on
> anthropology and "logic." I think the best way to get
> through that is to get to the level of example. Because
> saying philosophy is or is not logic, or is more cousin to
> anthropology than science, is not going to help until we
> actually see "philosophy" in action.

Again, is anthropology not a science?
>
> 5. You mentioned the Bouwsma book. Just got it and about 7
> other books for Christmas! Great stuff in there
> about seeing Wittgenstein as a prophet-like figure. (I'm
> going to write about that soon, too).

I'll look forward to that as well!

Bouwsma seems like
> a really great person.

He seems to have quite impressed Wittgenstein, a man not easily impressed. More for his honesty and character than logical acumen, I suspect, though at a certain point, getting what Wittgenstein has to say might be more a matter of character. (That's not quite right.)

Take care,
JPDeMouy

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/

4a.

Re: On Time

Posted by: "J" wittrsamr@xxxxxxxxxxxxx

Mon Jan 4, 2010 11:14 am (PST)



Cayuse, AB, SW,

These lectures are of some relevance to my previous discussion with Sean concerning the range of Wittgenstein's "transitional period". To Sean's way of thinking, Wittgenstein's work of 1929/1930 reflects a transitional period, but I am more inclined to describe 1929 until 1933 and the completion of "The Big Typescript" (first published in part as _Philosophical_Grammar_) as all "transitional". At the time, _The Blue_and_Brown_Books_ were the subject of dispute and since we both agreed that those were both clearly "later" Wittgenstein, our differences didn't much matter. But this lecture is from 1932/1933, so clearly sits in the disputed period.

I'll be commenting on the excerpts with this in mind.

http://www.marxists.org/reference/subject/philosophy/works/at/wittgens.htm

Suppose the log
> makes a bang on passing me. We can say these bangs are
> separated by equal, or
> unequal, intervals. We could also say one set of bangs was
> twice as fast as
> another set. But the equality or inequality of intervals so
> measured is entirely
> different from that measured by a clock.

We might compare this to listening to music and recognizing a pulse as regular or syncopated, steady or speeding up or slowing down. Also recognizing a rhythmic pattern as iambic, anapaestic, and so forth. Also that the two crotchets are followed by two quavers.

And the judgment of a trained musician or skilled listener may be quite precise.

(In working with electronic music or in analyzing musical recordings with a computer, we do use clocks of finer resolution than the clocks with which we typically keep time in our day to day lives. And so we can then speak of measuring these differences with clocks. But that is a special case. And it is not how we listen.)

A comparison: seeing that there are 3 glasses on the table ("just by looking", i.e. without counting them) and counting that there are 11 glasses on the bar.

Or: seeing that there are the same number of knives as forks on the table (each place setting has a knife and a fork) and counting that there are 36 knives and 36 forks in the drawer.

The phrase
> "length of interval" has its
> sense in virtue of the way we determine it,
and differs
> according to the method
> of measurement.

Notice the verificationism here which Sean noted as characteristic of his transitional period.

Compare: "Sameness of number and sameness of length" (section 21 of _Philosophical_Remarks_)

Compare: symptoms and criteria and family resemblances in the later work.

Compare: Part III of _Remarks_on_the_Foundations_of_Mathematics_, e.g.

44. Now if a proof is a model, then the point must be what is to count as a correct reproduction of the proof.

If, for example, the sign '| | | | | | | | | |' were to occur in a proof, it is not clear whether merely 'the same number'
of strokes (or perhaps little crosses) should count as the reproduction of it, or whether some other, not too small,
number does equally well. Etc.

But the question is what is to count as the criterion for the reproduction of a proof--for the identity of proofs.
How are they to be compared to establish the identity? Are they the same if they look the same?

I should like, so to speak, to shew that we can get away from logical proofs in mathematics.

> Hence the criteria for equality of intervals between passing logs and for equality of intervals measured by a clock are different.

Certainly!

> We cannot say that two bangs two seconds apart differ only in degree from those an hour apart,

Of course we can!

> for we have no feeling of rhythm if the interval is an hour long.

That we cannot apply the feeling of rhythm as a criterion in the hour long interval is not to say that we cannot apply the criterion of the stopwatch to both.

> And to say that one rhythm of bangs is faster than another is different from saying that the interval between these two bangs passed much more slowly than the interval between another pair.

These two expressions are characteristic of different criteria being applied. However, we do speak of "the rhythm of the seasons" and "the rhythm of the week". And these are not simply metaphors.

> Suppose that the passing logs seem to be equal distances apart. We have an experience of what might be called the velocity of these (though not what is measured by a clock).

Compare: recognizing a gesture as frantic or as nonchalant. Watching a slug or a worm speeding up or slowing down.

> Let us say the river moves uniformly in this sense. But if we say time passed more quickly between logs 1 and 100 than between logs 100 and 200,

(A quibble: the comparison here should be to the time that passed between logs 101 and 200. At least, I am pretty sure he's not intending to compare intervals between unequal numbers of logs!)

> this is only an analogy; really nothing has passed more quickly.

Anna, I want to emphasize this for you. Earlier he did speak of the interval between one set of bangs passing more quickly than the interval between another. And that does make sense. So, he's not denying an experience of duration here.

But in this case, given the uniform motion of the logs, if we speak of time passing more quickly, we are using an analogy. (It might express something like boredom or impatience during one interval or the other.) But we may become confused because it sounds like we are using two different criteria (the uniform motion of the logs and the sense of interval that passes between the logs) and arriving at different results!

In fact, the second criterion doesn't apply in this case but the _expression_ characteristic of using that criterion is still being used. Except that we speak of the passage of time itself rather than the passage of an interval. Hence, its use is being called an analogy.

(Calling this an analogy is a bad way of putting it and Wittgenstein would not have spoken quite this way later. Compare the discussion of days of the week being "fat" or "lean" in the PI. Compare also various discussions of how pictures are used.)

> To say time passes more quickly, or that time flows, is to imagine something flowing.

No, it is to use an _expression_ characteristic of one case applied to another. He would later distinguish between making a mental picture (imagining, a mental process having real duration) and making use of a picture. Identifying the picture someone is using - like identifying other sources of philosophical confusion - is more akin to psycho-analysis, but he is more circumspect than many psycho-analysts. "For only if he acknowledges it as such, is it the correct _expression_."

Of course, what he describes here is very probably a common source of philosophical confusion, a picture people commonly make and extend in various ways that lead to mental cramps.

> We then extend the simile and talk about the direction of time. When people talk of the direction of time, precisely the analogy of a river is before them.

By the time he wrote the part of BTS titled "Philosophy", he would be a lot more circumspect than he is here. Which is not to say that the generalization he makes here would not express a reasonable suspicion in many actual cases.

> "Time" as a substantive is terribly
> misleading. We have got to make the rules of the game
> before we play it.

Compare discussions of games and being bounded by rules on all sides.

> Discussion of "the flow of time" shows how philosophical problems arise. Philosophical troubles are caused by not using language practically but by extending it on looking at it.

This is akin to the "idling" and "on holiday" similes, but he would later speak of philosophical troubles arising from various sources, e.g. "craving for generality".

> We form sentences and then wonder what they can mean.

" Philosophers are often like little children who first scribble some marks on a piece of paper at random and now ask the grown-up 'what's that?'--It happened like this: The grown-up had often drawn something for the child &
said: 'this is a man', 'this is a house' etc. And now the child makes some marks too and asks: and what's this then?"

JPDeMouy

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/

4b.

Re: On When the New Wittgenstein Arrived (Again)

Posted by: "Sean Wilson" whoooo26505@xxxxxxxxx   whoooo26505

Mon Jan 4, 2010 9:08 pm (PST)



J:

1. Philosophical Grammar was the product of work he started in the Fall term of 1930. The ideas that show up in '33 were first being aired in '30 (or at least an early incarnation of them). I would not draw the line at the completion of PG; I would draw it when those ideas are swimming in the head. To really know the specifics of the development, we'd need to see lecture notes from the Michaelmas term in 1930 and compare them to Alice Ambrose and others. Are the 1930 notes around somewhere?

2. I don't have the Ambrose lecture book handy right now (it's at home), but the preface to that book indicates that the notes from 1932 are the most unreliable on her part, because she only had her own notes as a source. The other years had other sources. Finally, she does mention that the notes reflect only "her understanding" of what was said. (Also, some of the "notes" are actually reconstructions -- she went back and wrote out sentences for them. I think they ought to be regarded as a kind of testimony.)

[Tangent -- Still, my examination of those notes (which were from lectures around 1934) found them immensely helpful. I had never seen Wittgenstein presented so clearly before. However, one of the criticisms might be that the points have been made simple. I haven't read enough to form a judgment, but I do know that I was thoroughly enjoying what I was reading. I'll be getting back to that soon]. 
 
3. I'm not quite getting what you are saying on some of this. But I'm not looking at it closely (sorry if I'm off base).

> The phrase "length of interval" has its sense in virtue of the way we determine it, and differs according to the method of measurement.

I was thinking meaning is use, here.

> We cannot say that two bangs two seconds apart differ only in degree from those an hour apart,

I had understood this to say that one is a psychological estimate, the other isn't. This is the sense of interval. He's taking what are thought to be analytic ideas -- length, interval -- and showing that they have senses which are conveyed only "in action." 

4. Please remember, the point at which the caterpillar turns into a butterfly is not mine. I am repeating what I read in Monk. I'm obviously not saying -- and neither was Monk -- that there was not continued development (which there surely was). What I am saying is that the critical period when he broke from the Tractarian ideas (that he did break from) happened in late 1930. There is all sorts of historical conversation about it. Imagine an iceberg that cracks in half, with part II sailing away. That portion of the ice broke and fell away in late 1930. What broke it was the new vision. He spent the next several years trying to explicate, clarify and polish it.

Regards and thanks.     

Dr. Sean Wilson, Esq.
Assistant Professor
Wright State University
Personal Website: http://seanwilson.org
SSRN papers: http://ssrn.com/author=596860
Discussion Group: http://seanwilson.org/wittgenstein.discussion.html


5.1.

Relationship between brain and mind as conceptual convenience

Posted by: "Joseph Polanik" wittrsamr@xxxxxxxxxxxxx

Mon Jan 4, 2010 4:39 pm (PST)



BruceD wrote:

>Consider...

>The alternative, I'm proposing, is to view the relationship between
>brain and mind as a conceptual convenience that under certain
>circumstances can be expressed "causally", i.e., the alarm caused me to
>awake, and, at other times, deliberately, "I've trained my brain not to
>hear the alarm when I want to sleep in."

>Wittgenstein puts it this way: (PI, page 180)

>"It's like the relation: physical object -- sense impressions. Here we
>have two different language games and a complicated relationship
>between them. -- If you try to reduce their relations to a simple
>formula you go wrong."

no one denies that there are two vocabularies in use; and, except for a
few eliminative materialists, no one is trying to change that fact.

the question is how do we account for that fact?

perhaps the fact doesn't need accounting for. maybe we just have to
learn to live with the fact that we have two vocabularies that are not
interchangeable. this, you may recall, is merely predicate dualism.

but, if we try to account for predicate dualism, how would we do so? do
we postulate two sets of phenomena (eg measurable phenomena vs
experiencable phenomena)? two sets of properties (eg physical vs
mental)? or two substances (ie two type of 'stuff')?

Joe

--

Nothing Unreal is Self-Aware

@^@~~~~~~~~~~~~~~~~~~~~~~~~~~@^@
http://what-am-i.net
@^@~~~~~~~~~~~~~~~~~~~~~~~~~~@^@


7a.

Re: Wittgenstein, Translations & "Queer"

Posted by: "Sean Wilson" whoooo26505@xxxxxxxxx   whoooo26505

Mon Jan 4, 2010 9:10 pm (PST)



 
In the new 4th edition of Philosophical Investigations, certain changes were made to the translation of the German text. The reasons why are discussed in the preface. Although we have discussed this before, there is one that I wanted to discuss, but was uncertain about. Here it is:

"Anscombe translated seltsam and merkwurdig by 'queer.' We have translated seltsam by 'odd,' 'strange,' or 'curious,' and merkwurdig by 'remarkable,' 'strange,' 'curious' or 'extraordinary.' " (xiii).

I think the basic idea of the above sentence is to say this: nowhere in the book is the _expression_ "queer" used. For a while, I had wondered whether the use of the _expression_ "queer" in Wittgenstein's works was the result of translations of his German (which the above sentence suggests at least for PI). But recently, I have found an abundance of evidence that Wittgenstein used the word in ENGLISH when lecturing (and otherwise speaking). And now comes what in essence is a small and petty question, even at a theoretical level (I guess): to what extent is this particular judgment a "translation" or a re-write? I'm just concerned here that Anscombe translated it the way she heard it presented to her in 1942-45, which in her eyes overruled anything else.

Here's the issue. You have a strange speaker (Wittgenstein), even in his native language. He says "queer" to describe various things. He then uses the terms seltsam and merkwürdig for, presumably, those same sorts of things. One could take the position that Wittgenstein's relative inexperience with English caused him to use an _expression_ in a weird way ('queer' was queer, so to speak). And so the translators, who know English better, and who know the gist of the idea, change it accordingly.

But the other argument is this. If in the 1930s and '40s people sometimes said "queer," and if Wittgenstein picked up on and deployed this language play, what right or status do scholars claim to have to sanitize and speak for Wittgenstein under the rubric of "translation"? This seems almost to venture into copy editing or style revision. Aside from the fact that it strips away some glimpse of Wittgenstein's mannerisms -- which I would argue is always a bad thing to do -- it seems on the merits that one would have to have a rather lofty perch from which to start rewriting Wittgenstein's words.

Now, the truth is that this is such a small matter. I doubt anyone would really care (even Wittgenstein). But I do nonetheless wonder whether it is "correct" for scholars of English, German and Wittgenstein to be telling Wittgenstein, in effect, not to use "queer" when expressing his points in English, because the suggestion is that they are the governors of those points. I'm OK with that if it's a schoolboy. And I might be OK with it for people quite advanced in English/German. But I'm not OK if it's Wittgenstein, and I think Anscombe was right to translate it the way she heard it. (But I confess to being unsure as I say it!)

Yours arguing over nothing.         

Dr. Sean Wilson, Esq.
Assistant Professor
Wright State University
Personal Website: http://seanwilson.org
SSRN papers: http://ssrn.com/author=596860
Discussion Group: http://seanwilson.org/wittgenstein.discussion.html


7b.

Re: [C] Re: Wittgenstein, Translations & "Queer"

Posted by: "CJ" wittrsamr@xxxxxxxxxxxxx

Mon Jan 4, 2010 9:43 pm (PST)



Sean,

Why should you be unsure?

And it does not take a lofty perch -- nothing more than a cowardly spirit and academic tenure is required -- to become a willing and eager instrument of repression and to rewrite Wittgenstein's words... and indeed all of our words, every day in every way.

Clearly just political correctness run amok... and what better place for it to run amok than the academic sanctum, where cowardice of spirit is institutionalized to the highest degree... and indeed where the folks are "queerer" than most.

On Jan 5, 2010, at 12:10 AM, Sean Wilson wrote:

> it seems on the merits that one would have to have a rather lofty perch from which to start rewriting Wittgenstein's words.

