[C] [Wittrs] Digest Number 101

  • From: WittrsAMR@xxxxxxxxxxxxxxx
  • To: WittrsAMR@xxxxxxxxxxxxxxx
  • Date: 9 Jan 2010 23:26:35 -0000

Title: WittrsAMR

Messages In This Digest (25 Messages)

Messages

1.1.

Walter's kind remarks

Posted by: "J D" wittrsamr@xxxxxxxxxxxxx

Sat Jan 9, 2010 4:01 am (PST)



SW and WH,

Thanks to Walter for the kind remarks. They are much appreciated, especially since I was reluctant to even engage with the topic any further.

And a side note to Sean: if someone only goes by their initials, their sex really cannot be assumed. Especially not in a field with such figures as Anscombe, Diamond, Floyd, Moyal-Sharrock, and so forth. I'm just sayin'...

(The Internet permits ideas to be exchanged without regard to age, sex, race, status, and so forth. That's a good thing.)

JPDeMouy

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/

1.2.

Re: Walter's kind remarks

Posted by: "Sean Wilson" whoooo26505@xxxxxxxxx   whoooo26505

Sat Jan 9, 2010 10:21 am (PST)



(J)

... goodness you are right about that. I had assumed you were male. That, indeed, was a prejudice (even if you are). It's amazing how those things work.

I think there is some relationship between prejudice and "seeing as." In my bedroom, I have broad strips of cut garbage bags taped to the window to block out the sun in the morning. Because the tape job looks like a 4-year-old did it, there are cracks and so forth, and some rows double over onto others. In the morning, when I look at the window, the mix of dark hues (from the bags), cracks of light, and less dark shades creates interesting imagery. Each morning as I awake, I see different things. Different figures. It's as if my brain is on autopilot as to what the image will be in the half-awake state I'm in just after waking. It's quite fascinating. Sometimes there are demon-like things. Other times there are animals. Today, there was a postal package. It reminds me completely of the duck-rabbit and similar sorts of flexible imagery. It seems to me that the brain is doing it.

Anyway, in some sense, I can relate this to the idea of thinking you were male, while having no basis for such an assertion. It was a construction or a picture that developed. 

Amazing how prejudice works. 
 
My apologies to you if you are not -- and to about half the planet if you are.

Dr. Sean Wilson, Esq.
Assistant Professor
Wright State University
Personal Website: http://seanwilson.org
SSRN papers: http://ssrn.com/author=596860
Discussion Group: http://seanwilson.org/wittgenstein.discussion.html


1.3.

apologies, perceptions, and seeing as

Posted by: "J D" wittrsamr@xxxxxxxxxxxxx

Sat Jan 9, 2010 11:33 am (PST)





SW,

By my reckoning, apologies are quite unnecessary in any event. Apologies are for cases where one has been hurtful, whether deliberately or inadvertently. Anyone who would be hurt by an understandable assumption, made without even the slightest hint of ill will, is being too sensitive.

("Too sensitive" is often abused to excuse all manner of incivility and even bigotry but that doesn't mean it doesn't sometimes apply.)

Mine was more an observation than a complaint.

Are you familiar with the story about Leonardo getting ideas by staring at a water-stained wall and finding various images in it?

Oh nice! The excerpt in question is reproduced online!

http://www.mirabilissimeinvenzioni.com/ing_treatiseonpainting_ing.html

JPDeMouy

PS I'll be working on a reply to you on the transitional Wittgenstein sometime this weekend. Having done one lengthy reply on another thread, I am otherwise just making quick, brief replies at the moment.


1.4.

Re: apologies, perceptions, and seeing as

Posted by: "J D" wittrsamr@xxxxxxxxxxxxx

Sat Jan 9, 2010 11:47 am (PST)



Apologies are also necessary for inadvertently breaking a rule after one has been asked to follow that rule as a matter of courtesy. I thought I'd cut the post to which I was responding. Sorry about that.

I don't know if it's gone out yet, so here's that reply with other material cut.

By my reckoning, apologies are quite unnecessary in any event. Apologies are for cases where one has been hurtful, whether deliberately or inadvertently. Anyone who would be hurt by an understandable assumption, made without even the slightest hint of ill will, is being too sensitive.

("Too sensitive" is often abused to excuse all manner of incivility and even bigotry but that doesn't mean it doesn't sometimes apply.)

Mine was more an observation than a complaint.

Are you familiar with the story about Leonardo getting ideas by staring at a water-stained wall and finding various images in it?

Oh nice! The excerpt in question is reproduced online!

http://www.mirabilissimeinvenzioni.com/ing_treatiseonpainting_ing.html

JPDeMouy

PS I'll be working on a reply to you on the transitional Wittgenstein sometime this weekend. Having done one lengthy reply on another thread, I am otherwise just making quick, brief replies at the moment.


1.5.

Re: apologies, perceptions, and seeing as

Posted by: "Sean Wilson" whoooo26505@xxxxxxxxx   whoooo26505

Sat Jan 9, 2010 11:51 am (PST)



... no rush!

I've been looking at Philosophical Grammar myself, and have some thoughts about it. We'll exchange in a few days or so.
 
SW


2.1.

Re: kind remarks from Josh

Posted by: "J D" wittrsamr@xxxxxxxxxxxxx

Sat Jan 9, 2010 4:05 am (PST)



JRS,

Thanks. I take no small pride in being able to present a reading as separate from a position on the content, so your feedback on that score is encouraging.

Honestly, your questions do tempt me but I am reluctant to engage further on these topics. Some reasons are best left unspoken because they might be read as sniping and there's been more than enough of that. But I can say that such topics really aren't of great interest to me at the moment and there are several other topics I've intended to post about but have neglected in order to respond to topics already in play.

Concerning Hacker vis-a-vis Dennett, you ask

> But I wonder if you'd be as pure in your own theory.

I would deny having a theory, though I am sure you didn't mean that pejoratively. But whether Hacker's remarks (some of which strike me as useful insights, others of which seem to verge into dogmatism) can be stigmatized as "theory" is a tricky question. I mention this not to engage in a debate about Hacker's writings, but to foreshadow a topic I've been working on, related to how we distinguish between grammatical investigations and theories and whether the "therapy" metaphor is essential to this question.

I'll also just make the observation that my concern would be less whether the concept of "intentionality" is being reified and more whether a lot of different ideas, grammatical and psychological, are being run together. And then what one needs is not a theory but a grammatical investigation.

Have you read "Orrery of Intentionality"?

JPDeMouy



3a.

Re: Nietzsche

Posted by: "J D" wittrsamr@xxxxxxxxxxxxx

Sat Jan 9, 2010 4:07 am (PST)



> What? The guy is fair to
> Nietzsche too?

I certainly don't buy into the hype of much so-called Analytic philosophy regarding certain "Continental" thinkers. (By my way of thinking, Analytic philosophy ceased with Quine to be the dominant mode of philosophizing in the Anglo-American world, but that's a minority usage. Not that Analytic philosophy proper didn't have its hype...) I think Hegel is not as completely useless as he appears, I think Husserl was positively brilliant (in his earlier works especially), I'll even give Heidegger his due (though I loathe the man), and I think Schopenhauer, Nietzsche, and Kierkegaard were all profound thinkers.

(Apparently, there's only one individual with whom I am not fair.)

I've taken note of the references for such time as I am reading Nietzsche again. (Being involved with a Nietzsche enthusiast, it may be sooner than I expect.) Was there some particular relevance, or are those just papers you enjoyed and wanted to bring to our attention?

JPDeMouy


3b.

Re: Nietzsche

Posted by: "gabuddabout" wittrsamr@xxxxxxxxxxxxx

Sat Jan 9, 2010 1:30 pm (PST)





--- In WittrsAMR@yahoogroups.com, "J D" <wittrsamr@...> wrote:
>
> > What? The guy is fair to
> > Nietzsche too?
>
> I certainly don't buy into the hype of much so-called Analytic philosophy (By my way of thinking, Analytic philosophy ceased with Quine to be the dominant mode of philosophizing in the Anglo-American world, but that's a minority usage.

Right, "it" (I mean something of the type) may even have ended with Kant, things eternally repeating before starting and ending again!

>Not that Analytic philosophy proper didn't have its hype...) regarding certain "Continental" thinkers. I think Hegel's not as completely useless as he appears, I think Husserl was positively brilliant (in his earlier works especially), I'll even give Heidegger his due (though I loathe the man), and I think Schopenhauer, Nietzsche, and Kierkegaard were all profound thinkers.
>
> (Apparently, there's only one individual with whom I am not fair.)

You lost me on that one.

>
> I've taken note of the references for such time as I am reading Nietzsche again. (Being involved with a Nietzsche enthusiast, it may be sooner than I expect.) Was there some particular relevance, or are those just papers you enjoyed and wanted to bring to our attention?
>
> JPDeMouy

They are both simply damned good doctoral dissertations. Anybody interested in perspectivalism would do well to see Nietzsche from the perspective of an accepted dissertation, especially two which take ER to have been offered as a serious metaphysics (at least if one takes into account his unpublished notes) while also allowing for creative interpretation. Those two notions don't normally square in the same head, but, who knows, maybe there are those who can do justice to the notion taken as a literal metaphysical hypothesis without feeling threatened by it.

I imagine that the cost of each dissertation is well over what I paid some twenty years ago (University Microfilms International).

I end with an anecdote:

The Doors played the Ed Sullivan Show. Morrison and the band were asked to drop the word "higher," given its illegal overtones. Jim said it anyway. Confronted by Ed as well as the producers after the number, the band was told they would never play the Ed Sullivan Show again. Nietzsche enthusiast that Jim was, he sardonically quipped something to the effect of, "What do you mean? We already played your show!"

Cheers,
Budd


4.1.

Re: SWM and Strong AI

Posted by: "J D" wittrsamr@xxxxxxxxxxxxx

Sat Jan 9, 2010 4:12 am (PST)



SW,

First, let me say that I sympathize with your computer difficulties. It can be incredibly frustrating when such things happen. I've acquired the habit of generally working in a text editor - one that periodically saves drafts automatically as I proceed - rather than in a browser or mail client. That spares me many such difficulties. Perhaps such an approach might be helpful to you as well.

Now, given the difficulties you describe - having written a lengthy reply and lost it before proceeding to write another, less lengthy but hardly brief reply - the nature of your response seems all the more peculiar.

If in fact your view is that nothing I wrote addresses any of your actual views, that my presentation of Searle's argument is simply irrelevant to my allegations of your having misunderstood him, then couldn't you have saved yourself a great deal of typing? If not on the first go, at least on the second, didn't it occur to you to simply write, e.g.

"J, your presentation of Searle's position is flawless as far as I can see. But what of it? What does all of that have to do with me? Since I agree entirely with your interpretation, it obviously cannot show that I have misread him as you've alleged."

Think about it.

No.

Stop.

Seriously. Think about it.

How could I have answered that? I'd either have had to concede, e.g.

"SWM, you do? Then I suppose that I must have misread you before."

Or I'd have had to argue, e.g.

"Well, you're just saying this now but before you said..."

(followed by providing quotes of previous things you've posted, as you've insisted I do).

Now, insisting I do that, that I provide quotations, is but one of the things you did in your reply.

Doesn't your further argument at least suggest that my presentation of Searle's argument doesn't completely fit your own reading?

And doesn't that in turn suggest that my presentation wasn't entirely motivated by a misreading of your own remarks?

Otherwise, you could have saved yourself a lot of typing.

(And I'm not talking about civility here. The same strategy needn't have been polite. You could have said, e.g.

"Look J, I don't know what you're on about. Nothing you've presented Searle as saying disagrees with anything I've said about him, so if you want to claim otherwise, you need to put up or shut up: show me quotes that demonstrate where I disagree with your account or admit that all your talk has all just been blather and a waste of both our times.")

Again: think about this. Don't just dash off a reply. Think about why you argued with what I was saying (beyond saying, "Yes, but I never said otherwise!") rather than saving yourself all that work. If what I said really missed the mark so completely, then you must have wasted a great deal of time by responding with anything more than a few brief remarks.

I suspect you're not willing to give much thought to it, despite my suggestions (which were genuinely meant to be helpful, though I'm sure in light of previous conflicts, they came across as mere condescension). I suspect you're impatiently waiting to say, "It shouldn't matter how I ask. You made a claim. Back it up!"

And that's not an obviously unreasonable demand.

I could also imagine you turning my questions and suggestions around, e.g. "Why, J, have you wasted so much time arguing this point instead of just providing the quotations?" Now, that would be a way of evading the insight I'm trying to offer you, but it wouldn't make such a response any less warranted. (And besides, who's to say such an "insight" is worthwhile anyway? Me? I'm hardly an unbiased judge of that!)

(Sometimes it can help to cultivate an attitude of reflection. Taking time to think about wasting time may seem like itself a waste of time. But don't be so determined not to waste time that you rush to that conclusion.)

I had considered taking the approach of gathering quotations and even considered whether I should focus on recent posts from this board (presumably more relevant to your current views), older posts from this and other boards (supporting my claim that your misunderstandings have been longstanding) or a sampling from a variety of sources.

It occurred to me, however, that I would be engaging in a game of "gotcha!", that such an approach wasn't likely to be either helpful or civil (which I am sincerely trying to be), and that whatever I might present could always be explained away or debated in endless iterations of my readings of your readings...

And taking such a confrontational approach seemed liable to increase the chances that your response would be to dismiss anything I might offer as failing to meet the burden. After all, who is to judge that? You? Me? People with a grudge against you? People who really don't care and wish we'd both just drop it?

Reflections like these led me to think that perhaps the best way to proceed would be to assume good faith. If I patiently presented my reading of Searle's argument, emphasizing those points with which I believe it differs from your own, you would recognize those points of divergence on your own - without my needing to make matters more contentious by spelling them out.

Was I being naive? Might you instead just be disingenuous and refuse to acknowledge the relevance of what I was saying?

(But in that case, couldn't you be just as disingenuous if presented with quotations?)

Might you genuinely be forgetful, honestly not recalling your own arguments?

(But in that case, would there be any point to these discussions anyway since whatever point I might make would also be quickly forgotten?)

I made a couple of mistakes, even given these intentions. I still followed the urge to express a dismissive attitude, especially in the opening paragraph. That surely didn't help matters.

But I also made the mistake, in talking about possible misreadings, of being too easily read as accusing you of each of them.

I mean, when I emphasized a point or pointed out a way that he could be misread, what else could I have meant to suggest? Was I just tossing things out there to see if anything stuck?

No. Not exactly, anyway.

Setting that mistake aside (I'll return to it shortly), my hope was that you'd read what I'd written and recognize (whether or not you'd insist that I'd misread you or grant that you'd misread Searle) that I had understandable reasons to read Searle as I do and understandable reasons to take you as reading him differently.

(And what other possible standard could I be held to? "Proof" that you'd misread him? Think about that. If you're the arbiter of whether I've misread you - because generally, your avowals on that are authoritative - and the reading of Searle is the very matter in dispute, then "proof" could only be if I not only changed your mind but persuaded you to admit that you'd changed your mind. A pretty high standard and a pretty unlikely outcome. But I probably should have thought of that before being so dogmatically dismissive!)

But that's what is peculiar here. It seems as if you neither wholly agree with my reading of Searle nor grant that it has any relevance to your own position. But how can that be? Either you do agree with all of it, in which case your response could have been much more brief. Or you disagree with at least some of it, in which case some of it clearly is addressed to differences in our readings.

And in that case, addressing the points you argued in your last response to me would seem more appropriate than digging up older quotations.

But that's not what you've asked.

Peculiar.

There are other reasons I didn't go with the "quote mining" approach.

First, as I continued reviewing your past exchanges, it became increasingly apparent to me that you've spent a lot more time arguing against whatever you take Searle's position to be than you have explicitly stating what you actually think his position is.

(That's not a criticism, just an observation.)

Second, to the extent that you have offered bits of an interpretation here and there, there seems to be a lot of variation. Am I saying that you're imprecise? That you contradict yourself? That you refuse to focus on a clear statement of what you're criticizing?

Well, even if we say more charitably that you're less interested in "details" and more focused on the "big picture" (whatever that may be), which I suspect you'd grant, it still gives a reason it would be difficult to grab a few quotes and treat them as authoritative statements of what you believe Searle's arguments to have been.

Am I then retracting my initial charge that you've misread Searle's Chinese Room Argument and his statement of the position he calls "Strong AI"? I admit, I wish that I honestly could. It would make matters a whole lot easier. It should be clear enough that I am not above admitting to all sorts of mistakes and I suspect that ultimately there would be a lot less grief in doing that.

But my sense, in all that I have reviewed, that there are definite misunderstandings in your readings on these matters will not be shaken.

The difficulty in nailing down a clear statement of your interpretation means instead relying on your arguments and reconstructing from them what I take you to be attacking (or defending against). And these are what have given me that sense I cannot shake. But using that as evidence requires not only the quotations (easy enough) but an argument to demonstrate why I infer this reading rather than that from your use of a particular argument of your own. (And then, we'd argue about that...)

This point brings me back to the mistake of pointing out various possible misreadings and thereby giving the impression that I was accusing you of each of them. Inferring a reading on the basis of an argument leaves some ambiguity.

Finding that things you wrote weren't making sense, I found myself thinking, "Ah, maybe this is where he got confused. Or perhaps here..." But that came across as, "So you messed up here. And here. And here...", more accusatory than it actually was.

Having said all of that, I'll still go ahead with what you've requested. But do bear in mind my reservations about the value of this exercise.

I'll also address some of the remarks in your most recent reply that struck me as peculiar for taking issue with my reading while still denying that my reading addressed your own.

JPD, quoting Searle: "'Could a machine think?' The answer is, obviously, yes. We are precisely such machines."

JPD, commenting on Searle: (Here, I agree. For what that's worth. So, to read him as denying that a machine can think, be conscious, and so forth, is simply to misread him.)

SWM, responding: And where do you think I have ever read him in THAT way? If you are as familiar with my past remarks on the subject as you have suggested, you would know that I have often noted that Searle speaks of brains as organic machines and also that it may be possible to build machines some day that can do what brains do.

JPD NEW: I am aware that you have acknowledged and even emphasized the point in some discussions. But you've also insisted that Searle is a dualist.

SWM ARCHIVE: "Personally I think the really important flaw of the Chinese Room argument is that it must assume what it wants to conclude, namely that consciousness cannot be reduced to processes that aren't themselves conscious (a dualist, and thus somewhat suspect, presumption that Searle, himself, has been at pains to disavow)." http://groups.google.com/group/Wittrs/browse_thread/thread/ddfe303b20270aca/bf5b259a2180f49f

JPD NEW: But individual neurons firing and so forth are not conscious while he grants that we are. And we can be rightly described as machines, a usage he explicitly endorses.

JPD NEW: Oddly, you've insisted on a narrower usage of "dualism" when criticizing Searle's remarks equating Strong AI with dualism (and I don't quarrel with your objections there) but in doing so emphasize and endorse Searle's own remarks:

SWM ARCHIVE: "(H)e said that the only dualism that means anything is the kind that reduces, at bottom, to substance dualism. I think he is right about that and it strikes me that he was wrong in linking AI to dualism..." http://groups.google.com/group/Wittrs/browse_thread/thread/c21f820beca027ac/0ee5500b14419bea

JPD, quoting Searle: "'Yes, but could an artifact, a man-made machine, think?'

JPD, quoting Searle: "Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours, again the answer to the question seems to be obviously, yes. If you can exactly duplicate the causes, you could duplicate the effects. And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use. It is, as I said, an empirical question."

JPD, commenting on Searle: (Note how much he grants here. My own answer would be somewhat different, but that needn't concern us here. The fact is that he does grant the possibility that an artifact, a man-made machine, can think, be conscious, and so forth. He doesn't even limit this possibility to an artificial brain that operated on the same chemical basis. So, to read him as denying the possibility that a man-made machine can think, be conscious, and so forth, is again, a misreading.)

SWM, responding: Again, where do you think I have ever offered THAT reading of him? Once again, any such suggestion is a misreading of ANYTHING I've ever said on this subject and, if imputed to me as part of what you are arguing against, a classic strawman.

SWM ARCHIVE: "Searle argues, via the CRA, that we cannot achieve such consciousness in machines. " http://groups.google.com/group/Wittrs/browse_thread/thread/c21f820beca027ac/aaf92d02ab806997

JPD NEW: Now, does that mean you're inconsistent? Or to salvage consistency, should I assume you misspoke? Can you explain it away as not meaning what it appears to say? And so should we argue about that? Find more quotes? And how long should we pursue this pointless exercise?

JPD, quoting Searle: "'OK, but could a digital computer think?'

JPD, quoting Searle: "If by 'digital computer' we mean anything at all that has a level of description where it can correctly be described as the instantiation of a computer program, then again the answer is, of course, yes, since we are the instantiations of any number of computer programs, and we can think."

JPD, commenting on Searle: (I think this is a muddle, but again, that needn't concern us here. He doesn't deny that something that can be correctly described as the instantiation of a computer program can also be correctly described as thinking.)

SWM, responding: I agree that there is confusion here. Elsewhere he has suggested that even wallpaper can be described as a digital computer as I recall. If anything can, then the description loses its potency. Of course we are talking about certain very specific kinds of items when we use the term "digital computer" in ordinary language and we don't mean wallpaper or even thermostats (unless they are small scale computers as some, today, are).

JPD NEW: You and I may both quarrel with his usage, though I suspect for different reasons. Disputes about, e.g., the Löwenheim-Skolem theorem, Putnam's model-theoretic argument, and the relevance of counterfactuality in such matters would take us too far afield. Less far afield would be to note that Searle's way of speaking is too easily muddled with the separate but relevant issue of Turing-equivalence. But in any case, whether or not we accept his usage, it is his usage. And I repeat: he doesn't deny that something that can be correctly described as the instantiation of a computer program can also be correctly described as thinking. And while how "we use the term 'digital computer' in ordinary language" is narrower than his usage here, if we ascribe to him claims about what digital computers can or could do, we have to recognize his usage. Instead you say things like:

SWM ARCHIVE: "Think of Searle's Chinese Room argument, a logical syllogism he developed to make the case that computers cannot ever be intelligent in the conscious way that we are" http://groups.google.com/group/Wittrs/browse_thread/thread/9cf679e9673e0781/c226b1e348178023

JPD NEW: In a remark that was (oddly but not necessarily wrongly) presented as a correction to a quote from Searle himself about Strong AI, you wrote:

SWM ARCHIVE: "Actually, it's about a very particular kind of machine (computational machines) which may, or may not, be like brains in the relevant way. At this stage we just don't know, as even Searle notes (except that we know that computers and brains are made of different materials and operate differently). But Searle's actual argument (the CRA) amounts to a logical claim that computers must be excluded from the class of mind causing machines based on their nature but it is a nature whose similarity or difference to brains is an empirical (not a logical) question. And yet Searle purports to give us a logical argument that addresses what is finally an empirical matter." http://groups.google.com/group/Wittrs/browse_thread/thread/c21f820beca027ac/2bb6f52b12497262

JPD NEW: Now, whatever you may mean by "computational machines", you were correcting Searle on what constitutes the position of Strong AI (odd, as I said), presumably offering what you take to be the proper way to characterize that position. Using terms in a way that doesn't fit his own usage just adds to the oddness of it all. By Searle's usage, brains are computational machines! So this characterization of his position is just a complete muddle.

JPD NEW: Note his granting, "I am, I suppose, the instantiation of any number of computer programs." in "Minds, Brains, and Programs".

JPD NEW: Note also the remark I'd previously quoted, where Searle wrote, "And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use." So ascribing to him the view that what distinguishes brains from (other things he would describe as) computers is simply that they are "made of different materials and operate differently" is wrong. An extra-terrestrial's brain or a man-made artificial brain could be "made of different materials and operate differently" and he would still grant that they could be described as digital computers but he would not deny that they could be conscious.

JPD NEW: In an indirect ascription of a position and argument to Searle, you write:

SWM ARCHIVE: "So you are 'saying' that there is no possibility that science could someday produce a machine that has consciousness, has a mind?

And this is because why?

Minds are special and stand apart from what is physical (Chalmers, Strawson)?

Brains aren't computers (Edelman, Searle)?

Minds aren't based in physical processes (Searle, though he doesn't quite admit this because he acknowledges minds are produced by brains[!] though he is never quite willing to hazard a guess as to how)?" http://groups.google.com/group/Wittrs/browse_thread/thread/a9c267d4fa9eb17d/d9ec4a625395a207

JPD NEW: This is an example both of your imputing to him a denial that brains are computers and of a claim (which you "charitably" acknowledge "he doesn't quite admit", while insinuating that his not engaging in armchair neurobiology saddles him with the claim anyway) that "Minds aren't based in physical processes."

JPD, quoting Searle: "'But could something think, understand, and so on solely in virtue of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a sufficient condition of understanding?'"

JPD, commenting on Searle: (Note well: "solely in virtue" and "sufficient condition".)

SWM, responding: Noted. Where do you think I am saying otherwise? (Below we will have a chance to address this in more depth.)

JPD NEW: Every time you start talking about how the Chinese room is "specked" (sic) or start emphasizing "capacity" and parallel vs. serial processing, you go beyond "solely in virtue" by adding additional conditions.

JPD NEW: Now, I am not disputing the possibility that speed is relevant, though as a matter of computer science and based on firsthand experience building Beowulf clusters, doing benchmarks on multi-threaded applications, and so forth, I can tell you that equating "parallel" with speed is a very naive rookie mistake. But that's beside the point. Assume speed is relevant. Assume that parallelism (a different issue) is also relevant. I'll stipulate to those claims for purposes of this discussion. Still, once you do that, you are no longer saying "solely in virtue". (And do I really need to dig out quotes to prove that you do this?)
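JPD NEW: To put the speed point concretely (a toy sketch of my own, not anything from Searle or the AI literature): Amdahl's law gives the best-case speedup from parallelizing a program, and it shows why equating "parallel" with "fast" is naive. Adding processors helps only as much as the parallelizable fraction allows.

```python
def amdahl_speedup(parallel_fraction: float, n_processors: int) -> float:
    """Best-case speedup under Amdahl's law: the serial fraction
    (1 - p) is untouched no matter how many processors you add."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_processors)

# A program that is only half parallelizable never runs more than
# twice as fast, no matter how large the cluster:
print(amdahl_speedup(0.5, 4))      # 1.6
print(amdahl_speedup(0.5, 10**6))  # just under 2.0
```

So "parallel" buys you a bounded speedup at best; it is not a different kind of computation.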

JPD NEW: Now again there is the issue of benchmarking. It is true that some sort of capacity requirement is involved in being able to run "the right sort of program", if only a storage requirement. (The Universal Turing Machine has a tape of infinite length but real computer hard drives are not so blessed, not to mention the issues of swapping vs. RAM, CPU caches, and so forth.) But anyone who has run modern software on hardware generally deemed obsolete (not just whatever the new Windows is on a machine Microsoft has claimed is now fit only for a landfill because it's 5 years old, but running a newly released version of a UNIX kernel like Linux or NetBSD on hardware that was manufactured in the late 80s or early 90s) or who has run the software on a hardware emulator (where the hardware is itself modeled - subtly different from virtualization - by a program running on yet another platform, meaning that the emulated hardware will run dramatically slower than the host machine, though it might still be faster than the actual hardware being emulated would be) can attest to the fact that there is a huge difference between being able to run a program and finding it responsive. You're making the requirement one of responsiveness, which goes way beyond just being able to run the program. And in so doing, you clearly go beyond the "solely in virtue" requirement.

JPD NEW: For our purposes, I'm not saying it's an illegitimate question once that condition is added. I am just saying it's not the same question. And the issue here is how you've interpreted Searle.

JPD, quoting Searle: "Before countering this reply I want to digress to note that it is an odd reply for any partisan of artificial intelligence (or functionalism, etc.) to make: I thought the whole idea of strong AI is that we don't need to know how the brain works to know how the mind works. The basic hypothesis, or so I had supposed, was that there is a level of mental operations consisting of computational processes over formal elements that constitute the essence of the mental and can be realized in all sorts of different brain processes, in the same way that any computer program can be realized in different computer hardwares: On the assumptions of strong AI, the mind is to the brain as the program is to the hardware, and thus we can understand the mind without doing neurophysiology. If we had to know how the brain worked to do AI, we wouldn't bother with AI."

JPD, commenting on Searle: He then goes on to construct a scenario resembling the Chinese room in some respects, but whatever the merits of this argument, it is no longer the CRA and it is no longer addressed to Strong AI as he defines it.

SWM, responding: Note that my response to the CRA is not premised (and never has been premised) on this particular reply and I will note, in passing, that I agree with the view that that reply does not answer his argument.

JPD NEW: But you do emphasize analogies between putative parallelism in the brain and your emphasis on the importance of parallel processing in hardware. But by his lights (and since he's the one defining "Strong AI", his coy manner notwithstanding, his lights are the only lights that matter), "we don't need to know how the brain works to know how the mind works." So, in making such an argument, you go beyond Strong AI. (And again, do I really need to provide links and cutting and pasting, or do you acknowledge that you have made such arguments?)

JPD OLD: Does he sometimes criticize positions that do not fit his definition of Strong AI without taking the time to explicitly point that out?

JPD OLD: Yes, he does. Again, in the original essay, regarding the "Robot Reply", he doesn't explicitly spell out that this reply is no longer what he has defined as "Strong AI". He does point out the difference though and if you've followed closely, you'll see that the position does involve a departure from the position he's called "Strong AI".

SWM, responding: Now you proceed at great length to make this case over and over again below, to wit, that not every argument against Searle's CRA really speaks for or supports what Searle calls "Strong AI". And I addressed these in more specificity in my earlier reply. But to save time I will now stipulate to this and just note that MY argument against the CRA is not based on such a non-AI supporting argument but on a variant of the Chinese Gymnasium Reply (sometimes called the Connectionist Reply, though it is not always presented in quite the same way so even this has some variations to it).

JPD NEW: Connectionism, in relying on analogies with the putative functioning of the brain, is not Strong AI, for reasons shown above.

SWM, responding: My argument boils down to the one exemplified by Peter Brawley's analogy on the Analytic list, that you can't build a bicycle and expect it to fly. As such we can call it the Bicycle Reply for convenience. It is grounded in the claim that Searle has underspecked the CR. That is, real AI researchers do not think or claim that a rote responding device like the CR is conscious. What they presume is that more things are going on in consciousness than merely transforming symbols mechanically using look-up tables (or their equivalent) as happens in the CR. Thus their efforts are aimed at producing a computationally based system that has all the things needed.

SWM, responding: In a nutshell, the CR, as specked by Searle, doesn't have enough going on in it to qualify as intentionally intelligent (the proxy for consciousness in this case).

JPD NEW: This is very important. You and I share misgivings about the wide usage Searle gives to "digital computer". But you impute to him denials that a computer could be conscious (as shown above). How do you define "digital computer"? Or a better question, what do you think that a digital computer does in processing inputs and outputs, above and beyond "merely transforming symbols mechanically"? How exactly do you think digital computers work? And what future technology are you imagining, what will it do beyond "merely transforming symbols mechanically", and on what basis will it still be described as simply a digital computer?

JPD NEW: "Transforming symbols mechanically" is what digital computers do. (Or rather, mechanically transforming voltages which represent the symbols "0" and "1", which in combination and in turn represent other symbols, some of which, after various manipulations, become control voltages which drive various outputs.)
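JPD NEW: To make "transforming symbols mechanically" concrete, here is a toy sketch (mine, not Searle's, and the rule-book entries are hypothetical): a rule-table responder in the spirit of the Chinese Room, producing replies by pure lookup, with no interpretation of the symbols it shuffles.

```python
# A toy "Chinese Room": responses come from pure rule-table lookup.
# The mechanism never interprets the symbols it manipulates.
RULE_BOOK = {
    "你好": "你好！",            # hypothetical entries, for illustration only
    "你会说中文吗？": "会。",
}

def room(symbols: str) -> str:
    # Purely formal: match the input string, emit the paired output,
    # fall back to a default squiggle when no rule applies.
    return RULE_BOOK.get(symbols, "？")

print(room("你好"))  # prints 你好！ - produced without any understanding
```

Whatever else a digital computer does, it is this sort of formal matching and substitution, however elaborate.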

SWM, responding: The thesis of real world AI researchers is that they can use the same sort of operations as exemplified in the CR (Turing equivalent) to perform these other functions in an integrated way, as part of a larger system than the CR, and that THIS would be conscious. If "Strong AI" doesn't represent this claim, then it has nothing to do with the question of whether AI can achieve consciousness.

JPD NEW: Is performing "the same sort of operations" to be read as "transforming symbols mechanically" in order to "perform these other functions in an integrated way"? You say, "What they presume is that more things are going on," but is it more of the same?

JPD NEW: Whether it's "more of the same" but with, e.g. parallelization or it's some unspecified "something" beyond "merely transforming symbols mechanically", this argument clearly does go beyond "solely in virtue of".

JPD NEW: And if it is some unspecified "something" beyond "merely transforming symbols mechanically", which is what the "Bicycle Reply" suggests, then Searle's response to the "Many Mansions Reply" is appropriate:

JPD NEW, quoting Searle: "I really have no objection to this reply save to say that it in effect trivializes the project of strong AI by redefining it as whatever artificially produces and explains cognition. The interest of the original claim made on behalf of artificial intelligence is that it was a precise, well defined thesis: mental processes are computational processes over formally defined elements. I have been concerned to challenge that thesis. If the claim is redefined so that it is no longer that thesis, my objections no longer apply because there is no longer a testable hypothesis for them to apply to."

SWM, responding: Obviously the AI project, understood in this way, means capacity matters, which could involve more processors as well as faster processes, more memory, etc., all intended to enable the accomplishment of more tasks by the processes in the system. But note that the processors and the processing would be the same as you find in a CR type apparatus. Thus the "solely in virtue of" criterion is met (unless you want to so narrowly define THAT concept as to again reduce this to being just about a device with no more functionality than the CR).

JPD NEW: So, it is "more of the same". And yet the reference to "merely transforming symbols mechanically" then makes no sense, because that is what digital computers in the ordinary sense of the word actually do.

JPD NEW: Searle would reject this, and does so with the "Chinese Gymnasium" argument (which owes to Ned Block's argument, actually older than the Chinese Room Argument). If you think that the Chinese Gymnasium as a whole is conscious then... well, okay. That position is not Strong AI but it is still a position with which Searle would disagree. And the fact that he would offer the Chinese Gymnasium as a separate argument is an acknowledgement that the Chinese Room Argument might not be taken to address such a case. (Whether it actually does is another matter and this partly turns on the quite arbitrary decision of how to individuate different permutations of thought experiments. Since the argument goes by a different name, I defer to precedent.)

JPD NEW: In any case, you most certainly are going beyond the "solely in virtue of" in the original question. And no, that is not "just about a device with no more functionality than the CR", which would be a question-begging way to draw the distinction. It is about having the capacity to run "the right program". And I've elaborated on the practical issues of this above.

JPD OLD: Do philosophers whose positions do not qualify as "Strong AI" as Searle defines it still criticize the Chinese Room Argument?

JPD OLD: Yes. The examples above demonstrate this. And undoubtedly, there are other examples of positions that depart from "Strong AI" as Searle defines whose advocates would still take issue with the Chinese Room Argument.

SWM, responding: This was never in dispute between us so I am at a loss to see why you spend so much time on the issue.

JPD NEW: I emphasize it to forestall any argument that appeals to the fact that people holding various positions have criticized the Chinese Room Argument, in an attempt to prove that Strong AI must therefore be a wider position than I've indicated. And I actually emphasized various permutations of the relationship between different positions, different arguments, Strong AI, and the Chinese Room Argument. I don't think I was being quite as repetitive as you suggest.

SWM, responding: Note that Searle's CRA aims to prove that the thesis that consciousness can be achieved via computational processes running on a computer is impossible, not that it is unlikely, and my dispute is with THAT claim. It is NOT an effort to prove that, contra the CRA, "strong AI" is true. (Go ahead and check my historical postings if you don't want to take my word for it here.)

JPD NEW: I haven't said that you think Strong AI is true. I don't assume you do think such a thing.

JPD NEW: I do think that you've misstated what the Chinese Room Argument is meant to prove however. He was not making the claim that "the thesis that consciousness can be achieved via computational processes running on a computer is impossible." First of all, a thesis may assert something that is possible or impossible but what would it be for a thesis itself to be impossible? That it is nonsensical? He doesn't make that charge. So your way of putting this is a muddle. Such muddles are common in your posts, which was part of my reluctance concerning the "quote mining" approach. I can't just cut and paste a lot of what you say, I have to break it down.

JPD NEW: So, if you mean that he's denying the possibility that "consciousness can be achieved via computational processes running on a computer" (and I can't think of anything else you might reasonably mean) then that's wrong, given his own usage of "computer" (which we agree is problematic). Given his usage, the activities in our brains can be described as "computational processes running on a computer".

JPD NEW: But it would also be wrong because it would have him denying a possibility rather than an equivalence. You're right that he's not saying, "That's unlikely." But he's also not saying, "That will never happen!" (This is a mistake you seem to make fairly often and it pops up in various places but the fact that you make it here should suffice, sparing me having to "quote mine".) He's denying that instantiating the right program constitutes being conscious. And he's denying that instantiating the right program is a sufficient condition for being conscious.

JPD NEW: Where he does deny a possibility, it is in answering a question that includes the "solely in virtue of" clause. And he indicates that he intends this to be the same question as the "sufficient condition" question.

JPD NEW: He does unfortunately give too little emphasis to distinctions between conceptual and empirical questions, something endemic to post-Quinean philosophy, but "sufficient condition" makes it clear enough that he's not talking about, e.g. what is scientifically possible. A sufficient condition is one that, if satisfied, assures the truth of the statement for which it is a sufficient condition. He's talking about whether the inference from, "it instantiates the right program" to "it's conscious" is valid. (If it were valid, then being able to pass something like the Turing test, as a standard for determining whether the right program is being instantiated, would be proof of consciousness. Regardless of the hardware.) He's not talking about a claim like "it instantiates the right program, so it might just be conscious".
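JPD NEW: The logic of "sufficient condition" can be put schematically (my own gloss, nothing more): the Strong AI claim is that the conditional "instantiates the right program → conscious" holds in every case, so a single case with the antecedent true and the consequent false refutes it. That is what the Chinese Room is offered as.

```python
def implies(p: bool, q: bool) -> bool:
    """Material conditional: P -> Q is false only when P is true and Q is false."""
    return (not p) or q

# The "sufficient condition" claim says: wherever the right program is
# instantiated (P), the system is conscious (Q). The Chinese Room is
# presented as a case with P true and Q false:
print(implies(True, False))  # False: one such case refutes sufficiency
```

Note that refuting sufficiency says nothing about whether Q might hold for other reasons; that is exactly the distinction between denying an entailment and denying a possibility.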

JPD NEW: Now, we are getting into the area of your response that made a peculiar impression on me, as I mentioned near the start of this reply.

JPD OLD: Another example, from the original essay, would be what he calls the "Combination Reply". He acknowledges that the case described would be persuasive unless we looked "under the hood" (and again, I am not addressing the merit of this argument), but he says:

JPD, quoting Searle: "I really don't see that this is any help to the claims of strong AI, and here's why: According to strong AI, instantiating a formal program with the right input and output is a sufficient condition of, indeed is constitutive of, intentionality."

JPD OLD: Again, the fact that a philosopher presents a counter-argument to the Chinese Room Argument and the fact that Searle rejects that counter-argument do not demonstrate that the position they're debating qualifies as "Strong AI".

SWM, responding: The text you give us above does not reveal that he thinks it does not support "Strong AI". It merely says it fails to undermine the CRA.

JPD NEW: Of course it doesn't merely say that. He denies that the reply is any help to the claims of Strong AI, and the reason he gives for that denial isn't the counter-argument about looking under the hood (that comes later), but a restatement of Strong AI. How could he deem a restatement of Strong AI an explanation for why the "Combination Reply" doesn't help Strong AI, unless he considered the "Combination Reply" not to support Strong AI?

JPD NEW: He holds that it fails to undermine the Chinese Room Argument on the basis of his argument that we would no longer find the case persuasive once we looked "behind the curtain". But he holds that it fails to support Strong AI because of how the position of Strong AI is defined, viz. "instantiating a formal program with the right input and output is a sufficient condition of, indeed is constitutive of, intentionality." When you add the things that are added in the "Combination Reply", you are no longer treating "instantiating a formal program with the right input and output" as a sufficient condition. To put it in terms with which you are by now familiar, that reply ceases to fit the "solely in virtue of" clause.

JPD NEW: And your failure to see that reflects as much on your understanding of Strong AI (which was the point at issue) as any remarks I might retrieve from the archives. Hence, the peculiarity of it all.

SWM, responding: Note that the Connectionist Reply (as I have given it) is made up of the same internals as the CR and that is what this must finally be about for it to be about anything of significance at all. It's just that the system proposed by the Connectionist Reply has more going on in it and what is going on is doing so as part of an integrated system.

JPD NEW: I note again that there seem to be two claims here. "(M)ore is going on" seems to be something other than "more of the same" since you distinguish that from the emphasis on the "integrated system". Now is the "more" that is "going on" also more than "merely transforming symbols mechanically"?

JPD OLD: Isn't "Strong AI" then a straw man, if it's defined so narrowly that most people who argue with Searle don't count as "Strong AI"?

SWM, responding: A very important point. If all of Searle's responses were just to say "that's not what I mean by Strong AI" then we would have to conclude that his argument wouldn't be worth very much at all because he will be seen to have constructed a strawman claim which no one actually holds. But I see no reason to conclude that he has done that. Searle doesn't assert that the Chinese Gymnasium Reply isn't the sort of thing that he thinks the CRA denies, nor does he take that tack with Dennett's thesis and Dennett's is all about computational processes running on a computer (with the added fact being that the computer is conceived as a massively parallel processor, i.e., just what you would need to implement the Chinese Gymnasium).

JPD NEW: First, let me say that I've found previous references to this very odd. I haven't commented until now because it seemed unimportant. But now it has become unavoidable.

JPD NEW: There is no such thing as the "Chinese Gymnasium Reply" as a counter to the Chinese Room Argument. The Chinese Gymnasium is Searle's counter-argument to the "Connectionist Reply". Searle put it forward! So of course he wouldn't then say that it isn't something the Chinese Room Argument denies. That makes no sense whatsoever! And Searle thinks it utterly obvious that the gymnasium as a whole doesn't understand Chinese any better than any of the individuals in the gymnasium, because it doesn't even make sense to say that a building understands.

JPD NEW: Furthermore, your arguments are based on something you dismissed earlier, accusing me of repetitiveness. I obviously didn't repeat it enough: being "the sort of thing that he thinks the CRA denies" and being Strong AI are not equivalent.

SWM, responding: I repeat: If Searle's argument is only relevant to the limited system exemplified in the CR, then it has no potency because it applies to nothing but such very specific systems and AI researchers do not think that achieving computationally based consciousness is just a matter of building rote responding devices like the CR.

JPD NEW: Again, the relevance of the Chinese Room Argument and the scope of the position of Strong AI are separate questions. The argument was created to address Strong AI, but Searle has also elaborated and amplified it in response to other positions that do not fit the definition of "Strong AI". And sometimes he explicitly makes this distinction but other times he doesn't. I made this point already, citing relevant quotations. You ignored most of them, said they weren't at issue between us, and said I was being repetitive. And yet, here you ignore those points.

JPD NEW: What seemed peculiar is now seeming absurd but fortunately, I'll soon be done with this.

JPD OLD: First, suppose that it is. Searle would not be the first to offer a straw man and he would not be the last. That in itself is no reason to disregard the textual evidence that he did define the position he called "Strong AI" quite narrowly.

SWM, responding: There is no textual evidence I have seen that suggests he was only arguing about a very narrowly defined device like the CR because, if there were, he could not draw the broader conclusions he does draw from the argument about computers generally.

JPD NEW: First of all, the Chinese Room is not a device. Secondly, the Chinese Room is characterized by its Turing-equivalence, so I wouldn't call it "narrow".

JPD OLD: Second, we should consider the historical context. People have offered various responses that seek to evade the Chinese Room Argument, and in so doing, their positions sometimes no longer qualify as Strong AI. Would that be a demonstration that Strong AI was a strawman? Or could it be evidence that in raising the issue, he has forced others to reconsider their positions and to reject the position he's set out to criticize, whether they acknowledge it or not?

SWM, responding: Nor have I said anything different. If you are as familiar with my past remarks on these lists about this (as you initially suggested you were) you would know that I have expressed respect for Searle in general and even noted that he provided some useful insights into what we mean by consciousness through his CRA.

JPD NEW: The point was not to accuse you of disrespecting Searle (like it would matter to me if you had). The point is to answer the argument that if "Strong AI" is defined as narrowly as I have insisted, it amounts to a strawman. The point is that even defining "Strong AI" so narrowly, the arguments still have value.

JPD OLD: Third, the literature of the Turing test and on machine functionalism written prior to the publication of "Minds, Brains, and Programs" does show positions that could at least be mistaken for what he describes as "Strong AI". If his work has forced the authors of those works to clarify their positions, to make explicit that they are not advocating Strong AI but had merely been mistaken for such, then he has done a service.

SWM, responding: As I said above, I am in agreement with this so, if you think this is the crux of our disagreements here you have misread me again.

JPD NEW: I didn't say that it was anything like the crux. But it does address the argument that by reading "Strong AI" as narrowly as I have, I am reading Searle as presenting a strawman argument.

JPD NEW: And you most certainly have made such arguments. In fact, you made an argument like this just a few lines up, viz. "If all of Searle's responses were just to say 'that's not what I mean by Strong AI' then we would have to conclude that his argument wouldn't be worth very much at all because he will be seen to have constructed a strawman claim which no one actually holds."

JPDeMouy

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/

5.1.

Re: Consciousness and Quantum Mechanics

Posted by: "J D" wittrsamr@xxxxxxxxxxxxx

Sat Jan 9, 2010 5:20 am (PST)



According
> to General
> >Relativity, it is just as legitimate to describe the
> earth as
> >stationary as the sun.
>
> and your solution to the twin paradox is ... ?

First, as I'm sure you know, it's not actually a "paradox" in the strictest sense. But it might seem to suggest that we must favor one frame of reference over another. Einstein's own account, involving gravitational time dilation and the equivalence principle, doesn't do that however.

Now, on the legitimacy of broadly Ptolemaic or Copernican pictures, there is one way of characterizing matters that would favor the Copernican picture. That is this: of all the bodies in our solar system, the world-line of the sun as observed from a craft well beyond the solar system would deviate the least from a straight line in space-time.

If someone says that, then they've given a clear sense to the claim that one picture is more accurate.

And my point was never "shut up and calculate". I wish only to be clear about what sorts of questions are being asked, what sorts of points debated.

If an experimental basis for distinguishing between two views is offered then it ceases to be a dispute over two different ways of speaking. Though, as I noted with "atom", the question of whether a view is "the same" view after it has developed to the point that it has experimental consequences does not always have a straightforward answer.

Undoubtedly, entertaining questions that we may as yet have no idea how to answer has been shown in many cases to be a fruitful way to proceed in the long run. But fruitfulness means, in part, leading to theories that do have experimental consequences. So ultimately, such an answer to "shut up and calculate" presupposes the ultimate value of experimental consequences in assessing a theory.

JPDeMouy


5.2.

Re: Consciousness and Quantum Mechanics

Posted by: "Cayuse" wittrsamr@xxxxxxxxxxxxx

Sat Jan 9, 2010 6:58 am (PST)



Joseph Polanik wrote:
> Cayuse wrote:
>> Zurek shows how classical reality may emerge entirely within the
>> formalism of QT.
<snip>
> the measurement problem is still unresolved.

http://en.wikipedia.org/wiki/Quantum_Darwinism

"Along with Zurek's related theory of envariance, quantum Darwinism explains
how the classical world emerges from the quantum world and proposes to
answer the quantum measurement problem, the main interpretational challenge
for quantum theory. The measurement problem arises because the quantum state
vector, the source of all knowledge concerning quantum systems, evolves
according to the Schrödinger equation into a linear superposition of
different states, predicting paradoxical situations such as "Schrödinger's
cat"; situations never experienced in our classical world. Quantum theory
has traditionally treated this problem as being resolved by a non-unitary
transformation of the state vector at the time of measurement into a
definite state. It provides an extremely accurate means of predicting the
value of the definite state that will be measured in the form of a
probability for each possible measurement value. The physical nature of the
transition from the quantum superposition of states to the definite
classical state measured is not explained by the traditional theory but is
usually assumed as an axiom and was at the basis of the debate between Bohr
and Einstein concerning the completeness of quantum theory; perhaps the most
famous debate in the history of physics.

"Quantum Darwinism explains the transition of quantum systems from the vast
potentiality of superposed states to the greatly reduced set of pointer
states[1] as a selection process, einselection, imposed on the quantum
system through its continuous interactions with the environment. All quantum
interactions, including measurements, but much more typically interactions
with the environment such as with the sea of photons in which all quantum
systems are immersed, lead to decoherence or the manifestation of the
quantum system in a particular basis dictated by the nature of the
interaction in which the quantum system is involved. In the case of
interactions with its environment Zurek and his collaborators have shown
that a preferred basis into which a quantum system will decohere is the
pointer basis underlying predictable classical states. It is in this sense
that the pointer states of classical reality are selected from quantum
reality and exist in the macroscopic realm in a state able to undergo
further evolution.

"As a quantum system's interactions with its environment results in the
recording of many redundant copies of information regarding its pointer
states, this information is available to numerous observers able to achieve
consensual agreement concerning their information of the quantum state.
This aspect of einselection, called by Zurek 'Environment as a Witness',
results in the potential for objective knowledge."


5.3.

Consciousness and Quantum Mechanics

Posted by: "Joseph Polanik" wittrsamr@xxxxxxxxxxxxx

Sat Jan 9, 2010 9:21 am (PST)



Cayuse wrote:

>Joseph Polanik wrote:

>>Cayuse wrote:

>>>What need is there now to rope-in consciousness as the agent of
>>>state vector reduction?

>>the measurement problem is still unresolved.

>>to a physicist trying to explain measurement of quantum spin of
>>single particles, the fact that decoherence theory can explain why
>>Schrodinger's cat is always either alive or dead (but never both) is
>>of no use. the formalism still says that the particle is in a
>>superposition of up/down in between measurements.

>The particle is in a superposition of states between its interactions
>with other particles.

when measured, the particle is found to be in just one of those states.

some explanation of this fact is needed.

the collapse postulate explains why measuring a particle produces a
definite outcome by postulating that, during measurement, the wave
function collapses to a single definite value for the property being
measured.
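
The postulate can be illustrated with a minimal simulation (a sketch of my own; the amplitudes are invented for illustration, and the sampling follows the standard Born rule):

```python
import random

# A spin-1/2 particle in the superposition a|up> + b|down>, |a|^2 + |b|^2 = 1.
a, b = 0.6, 0.8   # example amplitudes: P(up) = 0.36, P(down) = 0.64

def measure(state):
    """Apply the collapse postulate: sample an outcome with Born-rule
    probability, then collapse the wave function onto that outcome."""
    amp_up, amp_down = state
    outcome = "up" if random.random() < amp_up ** 2 else "down"
    collapsed = (1.0, 0.0) if outcome == "up" else (0.0, 1.0)
    return outcome, collapsed

outcome, state = measure((a, b))
# After collapse the state is definite: re-measuring always repeats the
# same outcome -- the 'definite result' the postulate is meant to explain.
assert measure(state)[0] == outcome
```

Without the collapse step (or some substitute for it), nothing in the formalism picks out the single outcome that is actually observed.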

if you deny the collapse postulate, you need some other explanation for
why only one result is observed.

do you deny the collapse postulate? if so, how do you explain that
measuring a particle produces a definite outcome?

Joe

--

Nothing Unreal is Self-Aware

@^@~~~~~~~~~~~~~~~~~~~~~~~~~~@^@
http://what-am-i.net
@^@~~~~~~~~~~~~~~~~~~~~~~~~~~@^@

==========================================

5.4.

Re: Consciousness and Quantum Mechanics

Posted by: "Cayuse" wittrsamr@xxxxxxxxxxxxx

Sat Jan 9, 2010 9:25 am (PST)



Joseph Polanik wrote:
> Cayuse wrote:
>> Joseph Polanik wrote:
>>> to a physicist trying to explain measurement of quantum spin of
>>> single particles, the fact that decoherence theory can explain why
>>> Schrodinger's cat is always either alive or dead (but never both) is
>>> of no use. the formalism still says that the particle is in a
>>> superposition of up/down in between measurements.
>
>> The particle is in a superposition of states between its interactions
>> with other particles.
>
> when measured, the particle is found to be in just one of those
> states.
>
> some explanation of this fact is needed.

However you define measurement, it entails interaction, and the
particle is only in a superposition of states /between/ interactions.

==========================================

5.5.

Consciousness and Quantum Mechanics

Posted by: "Joseph Polanik" wittrsamr@xxxxxxxxxxxxx

Sat Jan 9, 2010 9:57 am (PST)



Cayuse wrote:

>Joseph Polanik wrote:

>>Cayuse wrote:

>>>Zurek shows how classical reality may emerge entirely within the
>>>formalism of QT.

>>the measurement problem is still unresolved.

>http://en.wikipedia.org/wiki/Quantum_Darwinism

this is a good article; but, not especially relevant to someone
questioning the von Neumann Interpretation.

try to think about this logically.

von Neumann claimed that there was no fixed boundary between the quantum
world and the classical world; and, showed that the entire universe
could be described quantum mechanically.

Schrodinger challenged that conclusion. we know intuitively that the cat
is always either dead or alive; it does not revert to a dead/alive state
whenever it is not being observed.

by explaining the emergence of an apparently classical world from the
quantum formalism, decoherence theory supports the 'all-quantum' aspect
of the von Neumann Interpretation.

Joe

--

Nothing Unreal is Self-Aware

@^@~~~~~~~~~~~~~~~~~~~~~~~~~~@^@
http://what-am-i.net
@^@~~~~~~~~~~~~~~~~~~~~~~~~~~@^@

==========================================

5.6.

Consciousness and Quantum Mechanics

Posted by: "Joseph Polanik" wittrsamr@xxxxxxxxxxxxx

Sat Jan 9, 2010 10:32 am (PST)



Cayuse wrote:

>Joseph Polanik wrote:

>>Cayuse wrote:

>>>Joseph Polanik wrote:

>>>>to a physicist trying to explain measurement of quantum spin of
>>>>single particles, the fact that decoherence theory can explain why
>>>>Schrodinger's cat is always either alive or dead (but never both) is
>>>>of no use. the formalism still says that the particle is in a
>>>>superposition of up/down in between measurements.

>>>The particle is in a superposition of states between its interactions
>>>with other particles.

>>when measured, the particle is found to be in just one of those
>>states. some explanation of this fact is needed.

>However you define measurement, it entails interaction,

not only is this claim untrue (try googling 'quantum measurement
interaction free'), it would be irrelevant to your case even if it were
true.
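
The 'interaction-free' result alluded to here is usually illustrated with the Elitzur-Vaidman bomb test. The following is my own sketch of the standard amplitude bookkeeping for a Mach-Zehnder interferometer (not taken from this thread): with no obstacle, interference makes one detector dark; placing a bomb in one arm makes that dark detector click a quarter of the time, certifying the bomb's presence without any photon having interacted with it.

```python
s = 2 ** -0.5   # amplitude through a 50/50 beamsplitter

def detector_probs(bomb_in_lower_arm):
    # After the first beamsplitter the photon is in a superposition of the
    # two arms: amplitude s in the upper arm, 1j*s in the lower arm.
    upper, lower = s, 1j * s
    absorbed = 0.0
    if bomb_in_lower_arm:
        absorbed = abs(lower) ** 2   # photon took the lower arm: bomb explodes
        lower = 0
    # The second beamsplitter recombines the arms (same convention as above).
    d1 = s * upper + 1j * s * lower  # amplitude at detector D1
    d2 = 1j * s * upper + s * lower  # amplitude at detector D2
    return abs(d1) ** 2, abs(d2) ** 2, absorbed

no_bomb = detector_probs(False)    # ~ (0.0, 1.0, 0.0): D1 is the dark port
with_bomb = detector_probs(True)   # ~ (0.25, 0.25, 0.5)
```

A click at D1 can only happen when the bomb is present, yet in that case the photon was not absorbed by it: the bomb's presence is 'measured' without interaction.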

>and the particle is only in a superposition of states /between/
>interactions.

if you are trying to show that decoherence explains the measurement
problem; then, what you need to show is that environmental 'monitoring'
is an interaction that is identical to or that entails a measurement.

Joe

--

Nothing Unreal is Self-Aware

@^@~~~~~~~~~~~~~~~~~~~~~~~~~~@^@
http://what-am-i.net
@^@~~~~~~~~~~~~~~~~~~~~~~~~~~@^@

==========================================

5.7.

Re: Consciousness and Quantum Mechanics

Posted by: "Cayuse" wittrsamr@xxxxxxxxxxxxx

Sat Jan 9, 2010 10:43 am (PST)



Joseph Polanik wrote:
> Cayuse wrote:
>> http://en.wikipedia.org/wiki/Quantum_Darwinism
>
> this is a good article; but, not especially relevant to
> someone questioning the von Neumann Interpretation.

I'm questioning the need to invoke consciousness
as the agent of state vector reduction.

==========================================

5.8.

Re: Consciousness and Quantum Mechanics

Posted by: "Cayuse" wittrsamr@xxxxxxxxxxxxx

Sat Jan 9, 2010 10:48 am (PST)



Joseph Polanik wrote:
> Cayuse wrote:
>> However you define measurement, it entails interaction,
>
> not only is this claim untrue (try googling 'quantum measurement
> interaction free'),

Interesting.

> it would be irrelevant to your case even if it were true.

Irrelevant to calling into question the role of consciousness in
state vector reduction?

>> and the particle is only in a superposition of states /between/
>> interactions.
>
> if you are trying to show that decoherence explains the measurement
> problem; then, what you need to show is that environmental
> 'monitoring' is an interaction that is identical to or that entails a
> measurement.

This is what Zurek does in his paper, without recourse to
consciousness as the agent of state vector reduction.
It's still not clear to me what need there is now to rope-in
consciousness as the agent of state vector reduction.
Can you clarify?

==========================================

5.9.

Consciousness and Quantum Mechanics

Posted by: "Joseph Polanik" wittrsamr@xxxxxxxxxxxxx

Sat Jan 9, 2010 12:13 pm (PST)



Cayuse wrote:

>Joseph Polanik wrote:

>>Cayuse wrote:

>>>http://en.wikipedia.org/wiki/Quantum_Darwinism

>>this is a good article; but, not especially relevant to
>>someone questioning the von Neumann Interpretation.

>I'm questioning the need to invoke consciousness as the agent of state
>vector reduction.

I know that you are; and, I've tried to clarify whether your questioning
is vacuous by asking whether you deny the collapse postulate.

obviously, if you deny that there is a collapse of the wave function;
then, you don't need to explain how that happens.

instead, you would need to explain how measuring a particle produces a
definite outcome without a collapse of the wave function.

so, once again, do you deny the collapse postulate?

Joe

--

Nothing Unreal is Self-Aware

@^@~~~~~~~~~~~~~~~~~~~~~~~~~~@^@
http://what-am-i.net
@^@~~~~~~~~~~~~~~~~~~~~~~~~~~@^@

==========================================

5.10.

Consciousness and Quantum Mechanics

Posted by: "Joseph Polanik" wittrsamr@xxxxxxxxxxxxx

Sat Jan 9, 2010 12:45 pm (PST)



J D wrote:

>And my point was never "shut up and calculate". I wish only to be clear
>about what sorts of questions are being asked, what sorts of points
>debated.

>If an experimental basis for distinguishing between two views is
>offered then it ceases to be a dispute over two different ways of
>speaking.

I reject the implied duality: either there is an experimental basis for
distinguishing two views or the dispute is merely about different ways
of speaking.

>Undoubtedly, entertaining questions that we may as yet have no idea how
>to answer has been shown in many cases to be a fruitful way to proceed
>in the long run. But fruitfulness means, in part, leading to theories
>that do have experimental consequences.

in some cases, philosophical positions that were based on 19th century
physics continue to linger long after physical theory changed radically.
for example, determinism for people was based on the idea of determinism
for particles, a theory now known to be scientifically untenable.

it is appropriate to challenge such deadwood even if the aim is not to
construct a better theory of physics.

Joe

--

Nothing Unreal is Self-Aware

@^@~~~~~~~~~~~~~~~~~~~~~~~~~~@^@
http://what-am-i.net
@^@~~~~~~~~~~~~~~~~~~~~~~~~~~@^@

==========================================

5.11.

Consciousness and Quantum Mechanics

Posted by: "Joseph Polanik" wittrsamr@xxxxxxxxxxxxx

Sat Jan 9, 2010 3:09 pm (PST)



Cayuse wrote:

>Joseph Polanik wrote:

>>if you are trying to show that decoherence explains the measurement
>>problem; then, what you need to show is that environmental
>>'monitoring' is an interaction that is identical to or that entails a
>>measurement.

>This is what Zurek does in his paper ... Can you clarify?

precisely where in his paper do you believe Zurek does that?

as far as I can tell, Zurek is running together his analyses of two
processes, both of which are called 'decoherence'.

one use of decoherence explains emergence of an apparently classical
reality. environmental monitoring transfers information about a
macroscopic object into its environment (*in this universe).

the other use of decoherence comes about when Zurek denies the collapse
postulate and develops his version of the Many Worlds
Interpretation.

notice that this second use of decoherence *is* an attempt to explain
the measurement problem. the collapse postulate is explicitly denied;
and, all but one of the decohering possibilities end up in alternate
universes each with its own copy of the observer whose measurement
caused the branching to take place.
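
the branching picture described here can be caricatured in a few lines (a toy model of my own; real many-worlds bookkeeping is the unitary evolution of the global wave function, not a list of labels):

```python
# Toy branching model: each 'measurement' splits every existing branch
# instead of collapsing it. Each branch keeps its Born-rule weight and its
# own record of outcomes -- its own copy of the observer.
def branch(worlds, p_up):
    new_worlds = []
    for weight, record in worlds:
        new_worlds.append((weight * p_up, record + ["up"]))
        new_worlds.append((weight * (1 - p_up), record + ["down"]))
    return new_worlds

worlds = [(1.0, [])]            # one world before any measurement
for _ in range(3):              # three successive spin measurements
    worlds = branch(worlds, p_up=0.5)

# Eight branches, each with a definite outcome history; no branch is ever
# discarded, and the total weight is conserved instead of collapsing.
assert len(worlds) == 8
assert abs(sum(w for w, _ in worlds) - 1.0) < 1e-12
```

the contrast with the collapse postulate is exactly the one at issue: collapse keeps one branch and discards the rest; denying collapse keeps them all.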

so we're back to square one.

as I said in my post of 01/03/2010 02:02 PM, if you deny the collapse
postulate; then, you end up with the MWI.

if you accept the collapse postulate, you have to explain what causes
the collapse during a measurement.

pick your poison.

Joe

--

Nothing Unreal is Self-Aware

@^@~~~~~~~~~~~~~~~~~~~~~~~~~~@^@
http://what-am-i.net
@^@~~~~~~~~~~~~~~~~~~~~~~~~~~@^@

==========================================

5.12.

Re: Consciousness and Quantum Mechanics

Posted by: "Cayuse" wittrsamr@xxxxxxxxxxxxx

Sat Jan 9, 2010 3:25 pm (PST)



Joseph Polanik wrote:
> Cayuse wrote:
>> I'm questioning the need to invoke consciousness as the agent of
>> state vector reduction.
>
> I know that you are; and, I've tried to clarify whether your
> questioning is vacuous by asking whether you deny the collapse
> postulate.
>
> obviously, if you deny that there is a collapse of the wave function;
> then, you don't need to explain how that happens.
>
> instead, you would need to explain how measuring a particle produces a
> definite outcome without a collapse of the wave function.
>
> so, once again, do you deny the collapse postulate?

And once again, I'm questioning the need to invoke consciousness
as its agent.

==========================================

5.13.

Re: Consciousness and Quantum Mechanics

Posted by: "Cayuse" wittrsamr@xxxxxxxxxxxxx

Sat Jan 9, 2010 3:26 pm (PST)



Joseph Polanik wrote:
> Cayuse wrote:
>> This is what Zurek does in his paper ... Can you clarify?
>
> precisely where in his paper do you believe Zurek does that?
>
> as far as I can tell, Zurek is running together his analysis of two
> processes. both of which are called 'decoherence'.
>
> one use of decoherence explains emergence of an apparently classical
> reality. environmental monitoring transfers information about a
> macroscopic object into its environment (*in this universe).

And without invoking consciousness.

> the other use of decoherence comes about when Zurek denies the
> collapse postulate and develops his version of the Many Worlds
> Interpretation.
>
> notice that this second use of decoherence *is* an attempt to explain
> the measurement problem. the collapse postulate is explicitly denied;
> and, all but one of the decohering possibilities end up in alternate
> universes each with its own copy of the observer whose measurement
> caused the branching to take place.
>
> so we're back to square one.
>
> as I said in my post of 01/03/2010 02:02 PM, if you deny the collapse
> postulate; then, you end up with the MWI.
>
> if you accept the collapse postulate, you have to explain what causes
> the collapse during a measurement.

When a particle interacts with another particle and the state vector is
reduced, would you claim consciousness for one of the particles (or
even both of them)? And if not, then where is consciousness implicated
in state vector reduction? It's still not clear to me what need there is
now to rope-in consciousness as the agent of state vector reduction.
Can you clarify?

==========================================

6.

Sound

Posted by: "void" rgoteti@xxxxxxxxx   rgoteti

Sat Jan 9, 2010 6:24 am (PST)



Stanford encyclopedia

Auditory perception reveals a new rich territory for philosophical exploration in its own right, but it also provides a useful contrast case to evaluate claims about perception proposed in the visual context. One of the most promising directions for future work concerns the nature of the relationships among perceptual modalities and how these relationships might prove essential to understanding perception itself. Recent philosophical work on auditory perception thus encourages an advance beyond considering modalities in isolation from each other.

We can ask questions about the relationships among modalities in different areas of explanatory concern. Worthwhile areas for attention include the objects, contents, and phenomenology of perception, as well as perceptual processes and their architecture. Crossmodal and multimodal considerations might shed doubt on whether vision-based theorizing alone can deliver a complete understanding of perception and its contents. This approach constitutes an important methodological advance in the philosophical study of perception.

thank you
sekhar

7.

Emotion

Posted by: "void" rgoteti@xxxxxxxxx   rgoteti

Sat Jan 9, 2010 6:29 am (PST)



Stanford encyclopedia

• emotions are typically conscious phenomena; yet
• they typically involve more pervasive bodily manifestations than other conscious states;
• they vary along a number of dimensions: intensity, valence, type and range of intentional objects, etc.
• they are reputed to be antagonists of rationality; but also
• they play an indispensable role in determining the quality of life;
• they contribute crucially to defining our ends and priorities;
• they play a crucial role in the regulation of social life;
• they protect us from an excessively slavish devotion to narrow conceptions of rationality;
• they have a central place in moral education and the moral life.

Most emotions target the outside world, but guilt and shame are exceptions, as they stem from introjected critical figures which target the self. In all cases emotions "color the world" and hence regulate beliefs and desires.

thank you
sekhar

8.1.

Re: Relationship between brain and mind as conceptual convenience

Posted by: "BruceD" wittrsamr@xxxxxxxxxxxxx

Sat Jan 9, 2010 12:50 pm (PST)




--- In Wittrs@yahoogroups.com, Joseph Polanik <jPolanik@...> wrote:

> no one denies that there are two vocabularies in use; and, except for
a
> few eliminative materialists, no one is trying to change that fact.

Thank you for helping me with this stuff. Often I sound more convinced
than I am. In any event, while I agree that "no one (here, to limit the
generalization) denies that there are two vocabularies", some (here), by
thinking of mind in a causal relation with brain, are in effect
transforming the vocabulary of reason into one of causation. This is the
way I read SWM. I'm not sure how I read you.

I'm not clear whether the vocabulary of quantum mechanics is comparable
to that of reason, or whether that is what you are suggesting.

> the question is how do we account for that fact?
> perhaps the fact doesn't need accounting for. maybe we just have to
> learn to live with the fact that we have two vocabularies that are not
> interchangeable. this, you may recall, is merely predicate dualism.

OK. Though I'm not quite at home with the phrase "predicate dualism."

> but, if we try to account for predicate dualism, how would we do so?

Since I'm not clear what predicate dualism means, I'm not clear how I'd
account for it. How do we account for all our language games other than
reflecting that we live a "complicated life." But perhaps we can make
some account.

> do we postulate two sets of phenomena

First, I take "postulate" to mean theorize, e.g., "dark matter." While
here I'm just recalling the everyday. I naturally describe my computer
as a machine, even if it talks to me, and my mime friend as only
impersonating a machine when he does so. Secondly, I'm not inclined to
call my computer or my friend a phenomenon.

> eg., measurable phenomena vs experiencable phenomena)?

Don't think so. For starters, everything is measurable, though
measurements differ in degree of observer agreement. Secondly, I
experience both the computer and my friend. Still, there may be
something here. Normally, we don't attribute "an inner life" to a
machine but we do so for a person. So the two vocabularies must derive
from this distinction. But I see no reason to conceive of...

> two sets of properties (eg physical vs mental)? or two substances (ie
two type of 'stuff')?

what I take to be ontological claims about "what is." I don't see that
we are in any position to say "what is."

I recognize that you have written at length about the application of
quantum theory to these matters. Could you Post a review that highlights
your position?

bruce

=========================================