[C] [Wittrs] Digest Number 99

  • From: WittrsAMR@xxxxxxxxxxxxxxx
  • To: WittrsAMR@xxxxxxxxxxxxxxx
  • Date: 8 Jan 2010 10:41:24 -0000

Title: WittrsAMR

Messages In This Digest (10 Messages)

Messages

1.1.

Consciousness and Quantum Mechanics

Posted by: "Joseph Polanik" wittrsamr@xxxxxxxxxxxxx

Thu Jan 7, 2010 3:39 am (PST)



J wrote:

>JPolanik wrote:

>>J wrote:

>>>Saying that the mathematics work wherever we choose to draw the
>>>boundary is not equivalent to saying that consciousness is
>>>necessary for collapsing the wave function.

>>perhaps not; but, that's why there are different interpretations of
>>QM

>You say "perhaps not". Are you suggesting that they might be equivalent
>claims? Are you unsure?

read the relevant passage from my last post:

>>given the collapse postulate, saying that the mathematics work
>>wherever we choose to draw the boundary makes it necessary to find
>>something else, something outside (I + II), to cause the collapse of
>>the wave function during a measurement.

it takes another step to get from here to the proposition that
consciousness causes the collapse.
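
for reference, the collapse postulate at issue is just the standard
textbook rule (stated here in bare form; nothing in it is specific to von
Neumann's chain argument):

\[ |\psi\rangle = \sum_i c_i\,|i\rangle \;\longrightarrow\; |k\rangle \quad\text{with probability } |c_k|^2 \]

the interpretive question is what, outside the quantum-described complex
(I + II), makes that discontinuous jump happen during a measurement.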

>>von Neumann postulated that this was the abstract I, a term for
>>which 'consciousness' is generally substituted.

>even if we take Stapp's view and call it von Neumann's, then the world
>described certainly is not "entirely quantum". The abstract ego is part
>of the world, else it could not interact with the world by causing
>collapses!

that might depend on what you mean by 'world'. suppose that all
mass/energy constituted the world. you then observe that there is an
interaction between spacetime and mass/energy. virtual particles emerge
from 'empty' space and influence actual particles; and, large masses
distort spacetime.

do you enlarge the world to include spacetime; or, do you conclude that
spacetime is just another physical object?

similarly, do we expand the world to include consciousness; or, do we
conclude that a consciousness is just another physical object?

>And what possible observation could support or falsify the claim that
>consciousness is necessary for wave function to collapse?

experimentally ruling out alternative theories as inadequate might help;
but, stronger evidence would be a correlation between events in
consciousness (ie some psychological variable) and collapse events.

>>anyone is free to 'shut up and calculate'. doing so might have
>>resulted in less conflict between the followers of Copernicus and
>>the Roman Catholic Church; but, as it turned out, the math that
>>better predicted the behavior of the world better described the
>>world.

>It's not clear to me what you're saying here.

>The Copernican system was more economical in its description and it
>enabled more efficient calculations.

that's what I'm saying. the Copernican system gave us more efficient
calculations AND a more accurate description of the world.

this pattern was repeated when Kepler simplified calculations even more.
by assuming that planetary orbits are elliptical he got rid of the last
of the epicycles; AND, as it turns out, planetary orbits *are*
elliptical.
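
for concreteness, the curve Kepler substituted for the stacked epicycles
is the standard ellipse in polar form (textbook orbital geometry, quoted
here only for reference):

\[ r(\theta) = \frac{a(1 - e^2)}{1 + e\cos\theta} \]

one short formula, where the epicyclic models needed circles piled on
circles to approximate the same path (a is the semi-major axis, e the
eccentricity).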

there is no reason to suppose that QM can only give us a means of
calculating outcome probabilities.

>Our most successful theory about such matters rejects the idea that
>there is any privileged frame of reference. According to General
>Relativity, it is just as legitimate to describe the earth as
>stationary as it is the sun.

and your solution to the twin paradox is ... ?
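
for reference, the arithmetic behind the paradox is the standard
proper-time formula of special relativity:

\[ \tau = t\sqrt{1 - v^2/c^2} \]

at v = 0.8c, a round trip lasting 30 years by earth clocks takes 18 years
by the traveller's clock; the asymmetry (only one twin turns around) is
exactly what a 'no privileged frame' account has to explain.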

Joe

--

Nothing Unreal is Self-Aware

@^@~~~~~~~~~~~~~~~~~~~~~~~~~~@^@
http://what-am-i.net
@^@~~~~~~~~~~~~~~~~~~~~~~~~~~@^@


2.

Knowledge in different schools

Posted by: "void" rgoteti@xxxxxxxxx   rgoteti

Thu Jan 7, 2010 5:14 am (PST)



What is knowledge? How is knowledge acquired? What do people know? How do we know what we know?
In Personal Knowledge, Michael Polanyi articulates a case for the epistemological relevance of both forms of knowledge (theoretical and practical); using the example of the act of balance involved in riding a bicycle, he suggests that the theoretical knowledge of the physics involved in maintaining a state of balance cannot substitute for the practical knowledge of how to ride, and that it is important to understand how both are established and grounded.
Belief is a subjective personal basis for individual behavior, while truth is an objective state independent of the individual.
Acquiring knowledge
The second question that will be dealt with is the question of how knowledge is acquired. This area of epistemology covers:
  • Epistemic distinctions, such as that between experience and the a priori, as means of creating knowledge.
  • The distinction between synthesis and analysis as means of proof.
  • Debates such as the one between empiricists and rationalists.
  • What is called "the regress problem".
A priori knowledge is knowledge that is known independently of experience (that is, it is non-empirical, or arrived at beforehand).
A posteriori knowledge is knowledge that is known by experience (that is, it is empirical, or arrived at afterward).
What do people know?
The last question that will be dealt with is the question of what people know. At the heart of this area of study is skepticism, with many approaches involving attempts to disprove some particular form of it.
Fallibilism
For most of philosophical history, "knowledge" was taken to mean belief that was true and justified to an absolute certainty. Early in the 20th century, however, the notion that belief had to be justified as such to count as knowledge lost favour. Fallibilism is the view that knowing something does not entail certainty regarding it.
The most important contribution made by the Nyaya school to modern Hindu thought is its methodology. This methodology is based on a system of logic that, subsequently, has been adopted by the majority of the other Indian schools, orthodox or not. This is comparable to how Western science and philosophy can be said to be largely based on Aristotelian logic.
Nyaya Logic
However, Nyaya differs from Aristotelian logic in that it is more than logic in its own right. Its followers believed that obtaining valid knowledge was the only way to obtain release from suffering. They therefore took great pains to identify valid sources of knowledge and to distinguish these from mere false opinions. Nyaya is thus a form of epistemology in addition to logic.
According to the Nyaya school, there are exactly four sources of knowledge (pramanas): perception, inference, comparison, and testimony. Knowledge obtained through each of these can, of course, still be either valid or invalid. As a result, Nyaya scholars again went to great pains to identify, in each case, what it took to make knowledge valid, in the process creating a number of explanatory schemes. In this sense, Nyaya is probably the closest Indian equivalent to contemporary analytic philosophy.
Sankhya
According to the Sankhya school, all knowledge is possible through three pramanas (means of valid knowledge)[8]:
Pratyaksha or Drishtam - direct sense perception,
Anumana - logical inference and
Sabda or Aptavacana - verbal testimony.
Sankhya cites two kinds of perceptions: Indeterminate (nirvikalpa) perceptions and determinate (savikalpa) perceptions.
Indeterminate perceptions are merely impressions without understanding or knowledge. They reveal no knowledge of the form or the name of the object. There is only external awareness about an object. There is cognition of the object, but no discriminative recognition.
For example, a baby's initial experience is full of impressions. There is a lot of data from sensory perception, but there is little or no understanding of the inputs. Hence they can be neither differentiated nor labeled. Most of them are indeterminate perceptions.
Determinate perceptions are the mature state of perceptions which have been processed and differentiated appropriately. Once the sensations have been processed, categorized, and interpreted properly, they become determinate perceptions. They can lead to identification and also generate knowledge.

LW
Wittgenstein observed, following Moore's paradox, that one can say "He believes it, but it isn't so", but not "He knows it, but it isn't so". [3] He goes on to argue that these do not correspond to distinct mental states, but rather to distinct ways of talking about conviction. What is different here is not the mental state of the speaker, but the activity in which they are engaged. For example, on this account, to know that the kettle is boiling is not to be in a particular state of mind, but to perform a particular task with the statement that the kettle is boiling. Wittgenstein sought to bypass the difficulty of definition by looking to the way "knowledge" is used in natural languages. He saw knowledge as a case of a family resemblance. Following this idea, "knowledge" has been reconstructed as a cluster concept that points out relevant features but that is not adequately captured by any definition.[4]

WIKIPEDIA

3a.

Re: [C] Re: On When the New Wittgenstein Arrived (Again)

Posted by: "Rajasekhar Goteti" rgoteti@xxxxxxxxx   rgoteti

Thu Jan 7, 2010 7:16 am (PST)



Also, see Monk on 325 -- the chapter on philosophy that was in the Big T but did not make it into PG. (Philosophy is confused by asking wrong questions -- like "what is time.") 

Here is where I think I fundamentally differ with you. He dictates the Big T in the summer of 32. But the thoughts were already there in 30/31. They just have to be polished and worked out. I think I'm taking a biographical look at this and you are looking at it legalistically (when documents are produced, etc.). Monk does say that as soon as he completed the Big T, he began making extensive revisions of it. And neither you nor I deny he still has work to do on it. You mentioned some things that he had to later clarify and formulate. I don't disagree with that.

But my Wittgenstein came to the earth in late 30. Like Jesus, he came to his students and friends first with "the word."  The date of birth is when the new ideas entered his head, not when he presents a formal document of them. 

I don't know really how much we are disagreeing. 

Regards.

Dr. Sean Wilson, Esq.
Assistant Professor
Wright State University
Personal Website: http://seanwilson.org
SSRN papers: http://ssrn.com/author=596860
Discussion Group: http://seanwilson.org/wittgenstein.discussion.html
Dear sir
Here real philosophy begins, I believe. Since the beginning is the end, as JK suggested.
thank you

sekhar

4.1.

Re: SWM and Strong AI

Posted by: "walto" wittrsamr@xxxxxxxxxxxxx

Thu Jan 7, 2010 9:39 am (PST)



Many thanks for that excellent overview, JPD. Clearly and handsomely expressed, covering all the bases, patient to a fault. It'd be separately publishable, I think, if so much of it hadn't already been published by Searle. Bravissimo!

(And I say all this as one who doesn't care too much for Hacker--though I admit I've read only one of his books).

Best,

W

PS: I thought non-member noise like this letter was supposed to end up in the non-moderated Wittrs-AMR only, not be cluttering up the pristine halls of wittrs. Based on the official description of AMR, I take it even Bud can post there.

Anyhow, I apologize in advance if my fawning drivel again ends up on a moderated, ostensibly members-only-posting list.

--- In Wittrs@yahoogroups.com, "J" <wittrsamr@...> wrote:
>
> SWM,
>
> For the record, since you seem to take an interest in such matters (though I'm sure you'll now say that you don't really care, even though you keep alluding to it), I do not cleave to Searle's views. Nor yet to Dennett's. If I were to be identified with any other philosopher's positions in matters even tangentially related to this, they would be those of Peter Hacker (best known as a Wittgenstein exegete), in the book he co-authored with neurophysiologist Maxwell Bennett, _Philosophical_Foundations_of_Neuroscience_, and in _Neuroscience_and_Philosophy:_Brain,_Mind,_and_Language_, in which Bennett and Hacker debate Searle and Dennett. Based on that, one might say that my own views are orthogonal to the debate between Searle and Dennett. There are fundamental differences between Hacker on the one side and Searle and Dennett on the other that make the differences between Searle and Dennett... I want to say "negligible", but that's not quite right. Suffice it to say, Searle and Dennett are on one side, Hacker and Bennett are on the other, and I am a lot closer to Hacker. And the disputes concern fundamental issues about the nature of philosophy as well as (what Hacker and Bennett take to be) conceptual confusions and misunderstandings common to many in this "field", including Dennett and Searle. I cannot imagine being persuaded to discuss these topics with you further (though obviously I've done equally foolish things already) for a variety of reasons, though it should suffice to say that it would complicate matters to no real benefit. And saying this much should suffice to address suggestions that I am somehow indignant that you would dare criticize Searle or somehow just uncomfortable with Dennett, suggestions that are simply irrelevant anyway.
>
> Now, I am going to take a different approach here.
>
> In Searle's paper, "Minds, Brains, and Programs", in which the Chinese Room Argument makes its first appearance, we find the following passage, reminiscent of a press conference by former US Secretary of Defense, Donald Rumsfeld, in which Searle poses and answers a series of questions. My own remarks will be in parentheses.
>
>
> "'Could a machine think?' The answer is, obviously, yes. We are precisely such machines."
>
> (Here, I agree. For what that's worth. So, to read him as denying that a machine can think, be conscious, and so forth, is simply to misread him.)
>
> "'Yes, but could an artifact, a man-made machine, think?'
>
> "Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours, again the answer to the question seems to be obviously, yes. If you can exactly duplicate the causes, you could duplicate the effects. And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use. It is, as I said, an empirical question."
>
> (Note how much he grants here. My own answer would be somewhat different, but that needn't concern us here. The fact is that he does grant the possibility that an artifact, a man-made machine, can think, be conscious, and so forth. He doesn't even limit this possibility to an artificial brain that operated on the same chemical basis. So, to read him as denying the possibility that a man-made machine can think, be conscious, and so forth, is again, a misreading.)
>
> "'OK, but could a digital computer think?'
>
> "If by 'digital computer' we mean anything at all that has a level of description where it can correctly be described as the instantiation of a computer program, then again the answer is, of course, yes, since we are the instantiations of any number of computer programs, and we can think."
>
> (I think this is a muddle, but again, that needn't concern us here. He doesn't deny that something that can be correctly described as the instantiation of a computer program can also be correctly described as thinking.)
>
> "'But could something think, understand, and so on solely in virtue of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a sufficient condition of understanding?'"
>
> (Note well: "solely in virtue" and "sufficient condition".)
>
> "This I think is the right question to ask, though it is usually confused with one or more of the earlier questions, and the answer to it is no.
>
> "'Why not?'
>
> "Because the formal symbol manipulations by themselves don't have any intentionality; they are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything. In the linguistic jargon, they have only a syntax but no semantics. Such intentionality as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output."
>
> (I consider this also to be a muddle. But that's not the point here. The point is that this answer is addressed to the preceding two questions and not to the three questions prior to them.)
>
> "The aim of the Chinese room example was to try to show this by showing that as soon as we put something into the system that really does have intentionality (a man), and we program him with the formal program, you can see that the formal program carries no additional intentionality. It adds nothing, for example, to a man's ability to understand Chinese."
>
> Now, continuing in a Rumsfeldian vein, I offer some questions and answers of my own:
>
> Is every position that Searle ever criticized therefore an example of the position he calls "Strong AI"?
>
> No. He even explicitly points this out in the original essay. Regarding the "Brain Simulator Reply", he wrote:
>
> "Before countering this reply I want to digress to note that it is an odd reply for any partisan of artificial intelligence (or functionalism, etc.) to make: I thought the whole idea of strong AI is that we don't need to know how the brain works to know how the mind works. The basic hypothesis, or so I had supposed, was that there is a level of mental operations consisting of computational processes over formal elements that constitute the essence of the mental and can be realized in all sorts of different brain processes, in the same way that any computer program can be realized in different computer hardwares: On the assumptions of strong AI, the mind is to the brain as the program is to the hardware, and thus we can understand the mind without doing neurophysiology. If we had to know how the brain worked to do AI, we wouldn't bother with AI."
>
> He then goes on to construct a scenario resembling the Chinese room in some respects, but whatever the merits of this argument, it is no longer the CRA and it is no longer addressed to Strong AI as he defines it.
>
> Does he sometimes criticize positions that do not fit his definition of Strong AI without taking the time to explicitly point that out?
>
> Yes, he does. Again, in the original essay, regarding the "Robot Reply", he doesn't explicitly spell out that this reply is no longer what he has defined as "Strong AI". He does point out the difference though and if you've followed closely, you'll see that the position does involve a departure from the position he's called "Strong AI".
>
> "The first thing to notice about the robot reply is that it tacitly concedes that cognition is not solely a matter of formal symbol manipulation, since this reply adds a set of causal relations with the outside world (cf. Fodor 1980)."
>
> In proceeding to reply to this, he calls his response "the same thought experiment", though in fact it is a variant. But let us grant that it is "the same". (And again, I am setting aside what I may think of the merits of the argument.) That would demonstrate that he regards the Chinese Room Argument (including this variant) as being able to address some cases that do not strictly count as "Strong AI".
>
> But whether he thinks that the Chinese Room Argument applies to cases that do not count as "Strong AI", it does not follow that he expects it to apply to every position he might oppose. Nor does offering some other position that is not addressed by the Chinese Room Argument but is also not a case of Strong AI count as a refutation of the Chinese Room Argument.
>
> The fact that Searle opposes a view is not evidence that he thinks that the Chinese Room Argument refutes it nor is it evidence that the view he opposes counts as "Strong AI" merely because it is something he opposes!
>
> Do philosophers whose positions do not qualify as "Strong AI" as Searle defines it still criticize the Chinese Room Argument?
>
> Yes. The examples above demonstrate this. And undoubtedly, there are other examples of positions that depart from "Strong AI" as Searle defines it whose advocates would still take issue with the Chinese Room Argument. For example, I take issue with the Chinese Room Argument and I don't advocate a position even remotely resembling "Strong AI"! But leaving that aside, I am sure there are many people who think it's just a bad argument. That doesn't prove that their positions count as "Strong AI" nor does it mean that they hold positions to which the Chinese Room Argument is even relevant!
>
> Another example, from the original essay, would be what he calls the "Combination Reply". He acknowledges that the case described would be persuasive unless we looked "under the hood" (and again, I am not addressing the merit of this argument), but he says:
>
> "I really don't see that this is any help to the claims of strong AI, and here's why: According to strong AI, instatitiating a formal program with the right input and output is a sufficient condition of, indeed is constitutive of, intentionality."
>
> Again, the fact that a philosopher presents a counter-argument to the Chinese Room Argument and the fact that Searle rejects that counter-argument do not demonstrate that the position they're debating qualifies as "Strong AI".
>
> Isn't "Strong AI" then a straw man, if it's defined so narrowly that most people who argue with Searle don't count as "Strong AI"?
>
> First, suppose that it is. Searle would not be the first to offer a straw man and he would not be the last. That in itself is no reason to disregard the textual evidence that he did define the position he called "Strong AI" quite narrowly.
>
> Second, we should consider the historical context. People have offered various responses that seek to distinguish their positions so as to evade the Chinese Room Argument and in so doing, their positions sometimes no longer qualify as Strong AI. Would that be a demonstration that Strong AI was a strawman? Or could it be evidence that in raising the issue, he has forced others to reconsider their positions and to reject the position he's set out to criticize, whether they acknowledge it or not?
>
> Third, the literature of the Turing test and on machine functionalism written prior to the publication of "Minds, Brains, and Programs" does show positions that could at least be mistaken for what he describes as "Strong AI". If his work has forced the authors of those works to clarify their positions, to make explicit that they are not advocating Strong AI but had merely been mistaken for such, then he has done a service.
>
> Now, Mr. Mirsky. I have patiently and carefully elaborated my reading of Searle's usage of "Strong AI" and its relationship to the Chinese Room Argument, I have considered various counter-arguments, and I have shown complete civility in doing so. I consider any obligation to you fully discharged. If you do not, I can only wonder what would satisfy you, short of my dishonestly saying that I'm somehow persuaded that you're right and I'm mistaken. The alternative is for me to engage in endless exchanges with you, addressing each and every point you might raise. I don't think my obligation extends that far.
>
> JPDeMouy
>


4.2.

Re: SWM and Strong AI

Posted by: "SWM" wittrsamr@xxxxxxxxxxxxx

Thu Jan 7, 2010 9:47 am (PST)



I just did a long response to this (wasting much of the morning) and lost it before finishing when my finger hit the wrong button. On the supposition that it will never appear here, I'll do it again albeit with a bit less detail.

--- In Wittrs@yahoogroups.com, "J" <wittrsamr@...> wrote:
>
<snip>

> Suffice it to say, Searle and Dennett are on one side, Hacker and Bennett are on the other, and I am a lot closer to Hacker. And the disputes concern fundamental issues about the nature of philosophy as well as (what Hacker and Bennett take to be) conceptual confusions and misunderstandings common to many in this "field", including Dennett and Searle. I cannot imagine being persuaded to discuss these topics with you further (though obviously I've done equally
> foolish things already)

I don't know what your problem is. It seems to be remarkably personal with you and me but I will leave that alone except to reference it in passing here.

> for a variety of reasons, though it should suffice to say that it would complicate matters to no real benefit. And saying this much should suffice to address suggestions that I am somehow indignant that you would dare criticize Searle or somehow just uncomfortable with Dennett, suggestions that are simply irrelevant anyway.
>

I don't recall saying that you, personally, are indignant at criticism of Searle (though I think some are indignant at any claim that suggests we might be nothing but organic machines and that you may well fit into that category -- however, I reserve judgment pending further statements of your own).

> Now, I am going to take a different approach here.
>
> In Searle's paper, "Minds, Brains, and Programs", in which the Chinese Room Argument makes its first appearance, we find the following passage, reminiscent of a press conference by former US Secretary of Defense, Donald Rumsfeld, in which Searle poses and answers a series of questions. My own remarks will be in parentheses.
>
>
> "'Could a machine think?' The answer is, obviously, yes. We are precisely such machines."
>
> (Here, I agree. For what that's worth. So, to read him as denying that a machine can think, be conscious, and so forth, is simply to misread him.)
>

And where do you think I have ever read him in THAT way? If you are as familiar with my past remarks on the subject as you have suggested, you would know that I have often noted that Searle speaks of brains as organic machines and also that it may be possible to build machines some day that can do what brains do.

What is the point of this "different approach" if it continues to fail to meet the one thing required, to back up what you have claimed?

> "'Yes, but could an artifact, a man-made machine, think?'
>
> "Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours, again the answer to the question seems to be obviously, yes. If you can exactly duplicate the causes, you could duplicate the effects. And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use. It is, as I said, an empirical question."
>

> (Note how much he grants here. My own answer would be somewhat different, but that needn't concern us here. The fact is that he does grant the possibility that an artifact, a man-made machine, can think, be conscious, and so forth. He doesn't even limit this possibility to an artificial brain that operated on the same chemical basis. So, to read him as denying the possibility that a man-made machine can think, be conscious, and so forth, is again, a misreading.)
>

Again, where do you think I have ever offered THAT reading of him? Once again, any such suggestion is a misreading of ANYTHING I've ever said on this subject and, if imputed to me as part of what you are arguing against, a classic strawman.

> "'OK, but could a digital computer think?'
>
> "If by 'digital computer' we mean anything at all that has a level of description where it can correctly be described as the instantiation of a computer program, then again the answer is, of course, yes, since we are the instantiations of any number of computer programs, and we can think."
>
> (I think this is a muddle, but again, that needn't concern us here. He doesn't deny that something that can be correctly described as the instantiation of a computer program can also be correctly described as thinking.)
>

I agree that there is confusion here. Elsewhere he has suggested that even wallpaper can be described as a digital computer as I recall. If anything can, then the description loses its potency. Of course we are talking about certain very specific kinds of items when we use the term "digital computer" in ordinary language and we don't mean wallpaper or even thermostats (unless they are small scale computers as some, today, are).

> "'But could something think, understand, and so on solely in virtue of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a sufficient condition of understanding?'"
>
> (Note well: "solely in virtue" and "sufficient condition".)
>

Noted. Where do you think I am saying otherwise? (Below we will have a chance to address this in more depth.)

> "This I think is the right question to ask, though it is usually confused with one or more of the earlier questions, and the answer to it is no.
>
> "'Why not?'
>
> "Because the formal symbol manipulations by themselves don't have any intentionality; they are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything. In the linguistic jargon, they have only a syntax but no semantics. Such intentionality as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output."
>

> (I consider this also to be a muddle. But that's not the point here. The point is that this answer is addressed to the preceding two questions and not to the three questions prior to them.)
>

And my comments on the CRA have to do with precisely that last question. You're imputing to me things I have never said or held to be the case here vis a vis Searle's argument in order to say that you are demonstrating my alleged misunderstanding. But if these aren't things I have said or held, how do they count as evidence I have got Searle wrong? (Do you have some evidence, some statements I have actually made, which you think show me saying such things?)

> "The aim of the Chinese room example was to try to show this by showing that as soon as we put something into the system that really does have intentionality (a man), and we program him with the formal program, you can see that the formal program carries no additional intentionality. It adds nothing, for example, to a man's ability to understand Chinese."
>

> Now, continuing in a Rumsfeldian vein, I offer some questions and answers of my own:
>
> Is every position that Searle ever criticized therefore an example of the position he calls "Strong AI"?
>
> No. He even explicitly points this out in the original essay. Regarding the "Brain Simulator Reply", he wrote:

>
> "Before countering this reply I want to digress to note that it is an odd reply for any partisan of artificial intelligence (or functionalism, etc.) to make: I thought the whole idea of strong AI is that we don't need to know how the brain works to know how the mind works. The basic hypothesis, or so I had supposed, was that there is a level of mental operations consisting of computational processes over formal elements that constitute the essence of the mental and can be realized in all sorts of different brain processes, in the same way that any computer program can be realized in different computer hardwares: On the assumptions of strong AI, the mind is to the brain as the program is to the hardware, and thus we can understand the mind without doing neurophysiology. If we had to know how the brain worked to do AI, we wouldn't bother with AI."
>

> He then goes on to construct a scenario resembling the Chinese room in some respects, but whatever the merits of this argument, it is no longer the CRA and it is no longer addressed to Strong AI as he defines it.
>

Note that my response to the CRA is not premised (and never has been premised) on this particular reply and I will note, in passing, that I agree with the view that that reply does not answer his argument.

> Does he sometimes criticize positions that do not fit his definition of Strong AI without taking the time to explicitly point that out?
>
> Yes, he does. Again, in the original essay, regarding the "Robot Reply", he doesn't explicitly spell out that this reply is no longer what he has defined as "Strong AI". He does point out the difference though and if you've followed closely, you'll see that the position does involve a departure from the position he's called "Strong AI".
>

Now you proceed at great length to make this case over and over again below, to wit, that not every argument against Searle's CRA really speaks for or supports what Searle calls "Strong AI". And I addressed these in more specificity in my earlier reply. But to save time I will now stipulate to this and just note that MY argument against the CRA is not based on such a non-AI supporting argument but on a variant of the Chinese Gymnasium Reply (sometimes called the Connectionist Reply, though it is not always presented in quite the same way so even this has some variations to it).

My argument boils down to the one exemplified by Peter Brawley's analogy on the Analytic list, that you can't build a bicycle and expect it to fly. As such we can call it the Bicycle Reply for convenience. It is grounded in the claim that Searle has under-specced the CR. That is, real AI researchers do not think or claim that a rote responding device like the CR is conscious. What they presume is that more things are going on in consciousness than merely transforming symbols mechanically using look-up tables (or their equivalent) as happens in the CR. Thus their efforts are aimed at producing a computationally based system that has all the things needed.
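
To make the look-up table point concrete, here is a toy sketch in Python
(mine, not anything from the AI literature; the names and table entries
are invented purely for illustration) of a CR-style rote responder:

# Toy sketch of a Chinese-Room-style rote responder: every reply is a
# mechanical table lookup, with no memory, integration, or understanding.
# The entries are invented placeholders, purely for illustration.

RULE_BOOK = {
    "ni hao": "ni hao",         # a greeting in, a greeting out
    "ni hao ma": "wo hen hao",  # "how are you?" -> "I am fine"
}

def chinese_room(symbols: str) -> str:
    """Transform input symbols into output symbols by rote lookup alone."""
    return RULE_BOOK.get(symbols, "qing zai shuo yi bian")  # "please say it again"

print(chinese_room("ni hao ma"))  # -> wo hen hao

However large the table grows, nothing further is going on in that loop,
and that, on the Bicycle Reply, is why the CR as specced was never a
candidate for consciousness in the first place.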

In a nutshell, the CR, as specced by Searle, doesn't have enough going on in it to qualify as intentionally intelligent (the proxy for consciousness in this case).

The thesis of real world AI researchers is that they can use the same sort of operations as exemplified in the CR (Turing equivalent) to perform these other functions in an integrated way, as part of a larger system than the CR, and that THIS would be conscious. If "Strong AI" doesn't represent this claim, then it has nothing to do with the question of whether AI can achieve consciousness.

Obviously the AI project, understood in this way, means capacity matters, which could involve more processors as well as faster processes, more memory, etc., all intended to enable the accomplishment of more tasks by the processes in the system. But note that the processors and the processing would be the same as you find in a CR type apparatus. Thus the "solely in virtue of" criterion is met (unless you want to so narrowly define THAT concept as to again reduce this to being just about a device with no more functionality than the CR).

Now I will snip away at least some of your repetitive stuff below (which I had replied to at some length in my earlier, uncompleted effort) for convenience.

<snip>

> Do philosophers whose positions do not qualify as "Strong AI" as Searle defines it still criticize the Chinese Room Argument?
>
> Yes. The examples above demonstrate this. And undoubtedly, there are other examples of positions that depart from "Strong AI" as Searle defines it whose advocates would still take issue with the
> Chinese Room Argument.

This was never in dispute between us so I am at a loss to see why you spend so much time on the issue.

> For example, I take issue with the Chinese Room Argument and I don't advocate a position even remotely resembling "Strong AI"! But leaving that aside, I am sure there are many people who think it's just a bad argument. That doesn't prove that their positions count as "Strong AI" nor does it mean that they hold positions to which the Chinese Room Argument is even relevant!
>

On this very list, Neil is an opponent of "Strong AI" but thinks Searle's CRA fails. I, myself, am not a supporter of "Strong AI" but only one who believes that it is possible, based on what I take to be a viable (because reasonable) conception of consciousness provided by people like Dennett.

Note that Searle's CRA aims to prove that consciousness achieved via computational processes running on a computer is impossible, not merely unlikely, and my dispute is with THAT claim. It is NOT an effort to prove that, contra the CRA, "strong AI" is true. (Go ahead and check my historical postings if you don't want to take my word for it here.)

> Another example, from the original essay, would be what he calls the "Combination Reply". He acknowledges that the case described would be persuasive unless we looked "under the hood" (and again, I am not addressing the merit of this argument), but he says:
>
> "I really don't see that this is any help to the claims of strong AI, and here's why: According to strong AI, instatitiating a formal program with the right input and output is a sufficient condition of, indeed is constitutive of, intentionality."
>

> Again, the fact that a philosopher presents a counter-argument to the Chinese Room Argument and the fact that Searle rejects that counter-argument do not demonstrate that the position they're debating qualifies as "Strong AI".
>

The text you give us above does not reveal that he thinks it does not support "Strong AI". It merely says it fails to undermine the CRA.

Note that the Connectionist Reply (as I have given it) is made up of the same internals as the CR and that is what this must finally be about for it to be about anything of significance at all. It's just that the system proposed by the Connectionist Reply has more going on in it and what is going on is doing so as part of an integrated system.

> Isn't "Strong AI" then a straw man, if it's defined so narrowly that most people who argue with Searle don't count as "Strong AI"?
>

A very important point. If all of Searle's responses were just to say "that's not what I mean by Strong AI" then we would have to conclude that his argument wouldn't be worth very much at all because he will be seen to have constructed a strawman claim which no one actually holds. But I see no reason to conclude that he has done that. Searle doesn't assert that the Chinese Gymnasium Reply isn't the sort of thing that he thinks the CRA denies, nor does he take that tack with Dennett's thesis and Dennett's is all about computational processes running on a computer (with the added fact being that the computer is conceived as a massively parallel processor, i.e., just what you would need to implement the Chinese Gymnasium).

I repeat: If Searle's argument is only relevant to the limited system exemplified in the CR, then it has no potency because it applies to nothing but such very specific systems and AI researchers do not think that achieving computationally based consciousness is just a matter of building rote responding devices like the CR.

> First, suppose that it is. Searle would not be the first to offer a straw man and he would not be the last. That in itself is no reason to disregard the textual evidence that he did define the position he called "Strong AI" quite narrowly.
>

There is no textual evidence I have seen that suggests he was only arguing about a very narrowly defined device like the CR because, if there were, he could not draw the broader conclusions he does draw from the argument about computers generally.

> Second, we should consider the historical context. People have offered various responses that seek to distinguish to evade the Chinese Room Argument and in so doing, their positions sometimes no longer qualify as Strong AI. Would that be a demonstration that Strong AI was a strawman? Or could it be evidence that in raising the issue, he has forced others to reconsider their positions and to reject the position he's set out to criticize, whether they acknowledge it or not?
>

Nor have I said anything different. If you are as familiar with my past remarks on these lists about this (as you initially suggested you were) you would know that I have expressed respect for Searle in general and even noted that he provided some useful insights into what we mean by consciousness through his CRA.

> Third, the literature of the Turing test and on machine functionalism written prior to the publication of "Minds, Brains, and Programs" does show positions that could at least be mistaken for what he describes as "Strong AI". If his work has forced the authors of those works to clarify their positions, to make explicit that they are not advocating Strong AI but had merely been mistaken for such, then he has done a service.
>

As I said above, I am in agreement with this so, if you think this is the crux of our disagreements here you have misread me again.

> Now, Mr. Mirsky. I have patiently and carefully elaborated my reading of Searle's usage of "Strong AI" and its relationship to the Chinese Room Argument, I have considered various counter-arguments, and I have shown complete civility in doing so. I consider any
> obligation to you fully discharged.

Your obligation was and is to back up your charges that I do not understand Searle's CRA based on remarks of mine you had allegedly seen via Google lists from the past few years. To date you haven't done that, leading me to believe you overstated your case, for whatever reason. Be that as it may, it's clear you and I have little personal rapport though you have indeed been civil in this last post. Perhaps it will last, perhaps it won't (as it frequently seems not to). At any rate, the only real obligation you ever had to me has NOT been discharged but I won't expect you to address it anymore if you are no longer making that claim.

> If you do not, I can only wonder what would satisfy you, short of my dishonestly saying that I'm somehow persuaded that you're right
> and I'm mistaken.

What I have seen here is that you have a complete misunderstanding of my position. I don't know where you have gotten this understanding from because you have resisted my requests that you post my statements, or links to my statements, that show that I hold such positions as you claim I hold. In fact, much of what you have said in response to me above assumes the exact opposite of things I have actually said on this list and elsewhere over the years. Since you seem reasonably intelligent I must ascribe your misreadings to either something intentional on your part or, perhaps, to your having been misled by something others may have said. (It's also possible that I haven't been clear enough, but if so, all you have to do is present the statements that are allegedly unclear so we can examine them.)

> The alternative is for me to engage in endless exchanges with you, addressing each and every point you might raise. I don't think my obligation extends that far.
>
> JPDeMouy
>
>

As I have repeatedly said, your only obligation is to back up the things you say, especially when they are directed in a pejorative sense at someone (in this case, me). Arguing against things I have never maintained and saying that therefore you have now discharged THAT obligation simply doesn't qualify.

But the truth is, I don't enjoy corresponding with you any more than you seem to enjoy doing so with me. However, note that I have never attacked you personally, alleged something about your views and then declined to back it up, or pretended to back up my claims about what you said by presenting a strawman case for your positions and then proceeding to knock them down; but you have done all of that with regard to me.

Be that as it may, I am prepared to continue to look in here periodically on the off chance you still may decide to forgo the smoke and mirrors game and actually present the statements I have allegedly made which, you have told us, represent my flawed view of Searle's CRA.

SWM


4.3.

Re: SWM and Strong AI

Posted by: "Sean Wilson" whoooo26505@xxxxxxxxx   whoooo26505

Thu Jan 7, 2010 10:04 am (PST)



Walter:

A couple of things.

1. There is great confusion over AMR and Commons. Let me try to dispel it. Imagine that you want to read Wittrs, but you don't use email. In such a scenario, there ARE NOT TWO LISTS. If, however, you get Wittrs by email -- which means you don't visit the message board (much) or the online archives -- then Wittrs Commons only receives a fraction of the mails. Which mails? The ones that are most Wittgenstein-relevant. And sometimes those that are not, if they are substantive and free of the telephone-conversation format. (This last rule tends to exclude all of that endless mind talk.) If you want to see what lands at Commons, go here: http://www.freelists.org/archive/wittrs/recent

2. Bud or you or anyone are free to send messages to Wittrs so long as you don't violate the block-quoting rule of 25 lines per thought, and trim the unnecessary portion of the message below your signature. This has to do with courtesy for the message board readers. Go look at any topic there. If you read downward, it's much more of a nuisance to have an unedited mass. So if you and Bud want to do that, be my guest.

[Note: I just approved your last mail even though it violated the quoting limit. That was my fault. I didn't see it when I sent it. Next time, I'll catch it.]

If you don't want to be on moderated status (where I approve posts), you can always join at freelists. That will make your posts go through automatically (the 25 line rule is policed by ecartis). For that, go here: http://www.freelists.org/list/wittrsamr

3. I think you've sucked up to JP enough now, don't you? Why not ask him on a date? ;)

Regards.

Dr. Sean Wilson, Esq.
Assistant Professor
Wright State University
Personal Website: http://seanwilson.org
SSRN papers: http://ssrn.com/author=596860
Discussion Group: http://seanwilson.org/wittgenstein.discussion.html


4.4.

Re: SWM and Strong AI

Posted by: "walto" wittrsamr@xxxxxxxxxxxxx

Thu Jan 7, 2010 10:51 am (PST)





--- In Wittrs@yahoogroups.com, Sean Wilson <whoooo26505@...> wrote:
>
> Walter:
>
> A couple of things.
>
> 1. There is great confusion over AMR and Commons. Let me try to dispel it. Imagine that you want to read Wittrs, but you don't use email. In such a scenario, there ARE NOT TWO LISTS. If, however, you get Wittrs by email -- which means you don't visit the message board (much) or the online archives -- then Wittrs Commons only receives a fraction of the mails. Which mails? The ones that are most Wittgenstein-relevant. And sometimes those that are not, if they are substantive and free of the telephone-conversation format. (This last rule tends to exclude all of that endless mind talk.) If you want to see what lands at Commons, go here: http://www.freelists.org/archive/wittrs/recent

Thank you. I *THINK* I've got it now.

>
> 2. Bud or you or anyone are free to send messages to Wittrs so long as you don't violate the block-quoting rule of 25 lines per thought, and trim the unnecessary portion of the message below your signature.

Sorry about that: I usually do trim. But that post of JPdM's was just so damn good, I was temporarily struck dumb.

>

> 3. I think you've sucked up to JP enough now, don't you? Why not ask him on a date? ;)
>

I don't know about a date, but if I ever start my own group--which I'm thinking of doing--I'd certainly invite him. Hmmmmm. What's even higher than "Lords"--"Golden Age Hawkman Group" maybe?

Best,

W


5.

I'll Second Walter's Compliments to JP Demouy

Posted by: "gabuddabout" gabuddabout@xxxxxxxxx   gabuddabout

Thu Jan 7, 2010 5:06 pm (PST)



I second Walter! Nicely done, JP DeMouy.

By coincidence, I just received today through interlibrary loan _Neuroscience and Philosophy: Brain, Mind and Language_, 2007.

It would also be way nice to get a feel for what you might think of an Italian author I just found out Searle endorses as having gotten his ideas most clearly understood in print:

Vicari, Giuseppe. _Beyond Conceptual Dualism: Ontology of Consciousness, Mental Causation, and Holism in John R. Searle's Philosophy of Mind_, 2008.

Cheers,
budd

6.

Comments and Questions on Stuart's Understanding/My Understanding

Posted by: "gabuddabout" gabuddabout@xxxxxxxxx   gabuddabout

Thu Jan 7, 2010 5:32 pm (PST)



Stuart writes:

"The thesis of real world AI researchers is that they can use the same sort of
operations as exemplified in the CR (Turing equivalent) to perform these other
functions in an integrated way, as part of a larger system than the CR, and that
THIS would be conscious."

Is not the CR equivalent to a universal Turing machine already? Can JP DeMouy add something here?

Stuart continues:

"If "Strong AI" doesn't represent this claim, then it
has nothing to do with the question of whether AI can achieve consciousness."

Searle claims in the target article that if you make the question a question not of strong AI but one of future technology, then he is not in disagreement. One has simply changed the subject. It would be smoke and mirrors to both change the subject and deny that one changed it.

Stuart continues:

"Obviously the AI project, understood in this way, means capacity matters, which
could involve more processors as well as faster processes, more memory, etc.,
all intended to enable more the accomplishment of more tasks by the processes in
the system."

Is it not true that anything that can be done by parallel processing can be done by serial processing? If there is to be a distinction here, is it really a computational distinction? If not, is it really something Searle is in disagreement with vis a vis the target article?
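
For what it's worth, the equivalence appealed to here can be shown with a
small sketch in Python (the tasks are made-up placeholders of my own): a
single serial loop can interleave the steps of any number of "parallel"
processes.

# Minimal illustration that serial processing can do whatever parallel
# processing does: one serial loop round-robins the steps of several
# "concurrent" tasks. The tasks themselves are invented placeholders.

def counter(name, n):
    for i in range(n):
        yield "%s: step %d" % (name, i)

def run_serially(*tasks):
    """Interleave generator-based tasks one step at a time, on one thread."""
    pending = list(tasks)
    while pending:
        for task in list(pending):
            try:
                print(next(task))
            except StopIteration:
                pending.remove(task)

run_serially(counter("A", 3), counter("B", 3))
# A: step 0, B: step 0, A: step 1, B: step 1, A: step 2, B: step 2

The timing may differ; the computed function does not, which is why any
distinction here does not look like a computational one.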

Stuart continues:

"But note that the processors and the processing would be the same as
you find in a CR type apparatus. Thus the "solely in virtue of" criterion is met
(unless you want to so narrowly define THAT concept as to again reduce this to
being just about a device with no more functionality than the CR)."

By "functionality" do you mean computational capacity or something more akin to brute force? Are you relying on parallel processing as having more "functionality" than serial processing?

Maybe JP DeMouy can help answer my questions too.

Cheers,
Budd

7a.

[C] Re: Re: Wittgenstein, Translations & "Queer"

Posted by: "gabuddabout" wittrsamr@xxxxxxxxxxxxx

Thu Jan 7, 2010 6:07 pm (PST)



What? The guy is fair to Nietzsche too?

Thank you, JP DeMouy!

Ref. (Doctoral Dissertations both):

Reeves, Sandra Junkin. _Eternal Recurrence: The Most Scientific of all Possible Hypotheses_, 1984.

Moles, Alistair(e?). _The Metaphysical Principles of Nietzsche's Cosmology_, 1984 (I think).

Cheers,
Budd

--- In WittrsAMR@yahoogroups.com, "J" <wittrsamr@...> wrote:
>
> Kirby,
>
> Thanks for the very interesting information regarding Kaufmann's translations and explanations of same.
>
> One quibble:
>
> (Nietzsche was
> > Austrian, like LW, wasn't proto-Nazi in any way -- would be Kaufmann's
> > brief on the guy).
>
> Surely not. Surely, Kaufmann would know that Nietzsche was born and died in Saxony, Prussia, that he studied at Bonn, that he taught in Switzerland, that after that he summered in Switzerland, but spent his winters in Italy and France on different occasions, but on no account was Austrian. He would know that Nietzsche had been a citizen of Prussia, a part of the German Confederation, but had that annulled to teach at Basel and was thenceforth officially stateless. He'd also know that Nietzsche insisted on his descent from Polish noblemen.
>
> Moreover, he'd know that Hitler himself was Austrian, so being Austrian would not preclude being a Nazi, proto- or otherwise.
>
> That Nietzsche ended his friendship (and his hero-worship) of Wagner on learning of the latter's anti-Semitism, considering such bigotry to be contrary to his overman ideal would be far more relevant as a brief way of dispensing with the "proto-Nazi" charge. (Similarly, his break with friend and editor, Ernst Schmeitzner, for similar reasons.)
>
> (And the history of his sister's selection and redaction of her brother's work on the basis of her own Nazi sympathies would serve in part to address why people might have taken him as such.)
>
> JPDeMouy
>
>

