[C] [Wittrs] Digest Number 104

  • From: WittrsAMR@xxxxxxxxxxxxxxx
  • To: WittrsAMR@xxxxxxxxxxxxxxx
  • Date: 11 Jan 2010 10:38:11 -0000

Title: WittrsAMR

Messages In This Digest (5 Messages)

1a.
Re: Essences versus Framework versus Causal From: jrstern
2a.
Re: Essences versus Framework From: Rajasekhar Goteti
2b.
Re: Essences versus Framework From: Rajasekhar Goteti
3.1.
Re: SWM and Strong AI From: SWM
3.2.
Re: SWM and Strong AI From: J D

Messages

1a.

Re: Essences versus Framework versus Causal

Posted by: "jrstern" wittrsamr@xxxxxxxxxxxxx

Sun Jan 10, 2010 9:47 pm (PST)



--- In Wittrs@yahoogroups.com, "J D" <wittrsamr@...> wrote:
>
> JRS,
>
> > But, just perhaps, the now common practices of computer
> > programming are beyond anything that Wittgenstein ever saw or
> > understood.
>
> I suspect he understood an amount that might surprise our contemporaries - from discussions with Turing, from Ramsey's exposing him to ideas from Peirce (such as the type/token distinction), whose ideas have been a big influence on computer science terminology, and from exposure to ideas in the Brentanian tradition, the Vienna Circle, et al., which have also influenced computer science.

Do you have much experience with computer programming?

I must point out that nobody then did, not even Turing.

> > Have you read Ruth Garrett Millikan?
>
> Not firsthand but only by way of references elsewhere. Is "Biosemantics" the term you were searching for?

No, I think it must be "proper function", although I was recalling
it as "proper type". Millikan is entirely teleological in her
philosophy. I find it quaint and entirely without merit, at least
without major reinterpretation. Again, if I was at home, I could
pull out notes and books. A quick Google to her site doesn't
answer it for me. It's quite ontological, and about as far from
Wittgenstein as one can get. I quite liked it for about a week
after first reading it, though I never for a moment accepted
the teleological aspects.

> > Now, is type/token significantly different from "grammar"? Well,
> > it's a question of the programming language grammar for such
> > things. Then what?
>
> I'm not sure I understand the question. The type/token distinction in semiotics can be used to draw distinctions we might wish to make in a grammatical investigation.

Can you really make a type/token distinction without using
language? And having made the distinction with language, is there
anything more to say about it than you already have?
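For concreteness, the distinction is easy enough to operationalize in programming terms; a minimal sketch, purely illustrative:

# Count word tokens vs. word types in a sentence.
# "Tokens" are individual occurrences; "types" are the distinct forms.
sentence = "the cat sat on the mat"

tokens = sentence.split()        # every occurrence counts: 6 tokens
types = set(tokens)              # duplicates collapse: 5 types ("the" once)

print(len(tokens), len(types))   # 6 5

Of course, doing even this much already uses language, which is just the point at issue.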

Josh

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/

2a.

Re: Essences versus Framework

Posted by: "Rajasekhar Goteti" rgoteti@xxxxxxxxx   rgoteti

Sun Jan 10, 2010 10:12 pm (PST)



If logic is hell, language is a bitch.       

Dr. Sean Wilson, Esq.

Assistant Professor

Wright State University

How true what you say.
An essay bearing on framework, extracted from the Stanford Encyclopaedia of Philosophy:

After having presented a number of paradoxes of self-reference and
discussed some of their underlying similarities, we will now turn to a
discussion of their significance. The significance of a
paradox is its indication of a flaw or deficiency in our understanding
of the central concepts involved in it. In the case of the semantic
paradoxes, it seems that it is our understanding of fundamental
semantic concepts such as truth (in the liar paradox and
Grelling's paradox) and definability (in Berry's and
Richard's paradoxes) that is deficient. In the case of the set-theoretic
paradoxes, it is our understanding of the concept of a
set. If we fully understood these concepts, we should be
able to deal with them without being led to contradictions. To
illustrate this, consider the case of Zeno's classical paradox on
Achilles and the Tortoise (see the entry
Zeno's paradoxes
for details). In this paradox we seem able to prove that the tortoise
can win a race against the 10 times faster Achilles if given an
arbitrarily small head start. Zeno used this paradox as an argument
against the possibility of motion. It has later turned out that the
paradox rests on an inadequate understanding of infinity. More
precisely, it rests on an implicit assumption that any infinite series
of positive reals must have an infinite sum. The later developments of
the mathematics of infinite series have shown that this assumption is
invalid, and thus the paradox dissolves. The original acceptance of
Zeno's argument as a paradox was a symptom of the fact that the
concept of infinity was not sufficiently well understood at the
time. In analogy, it seems reasonable to expect that the existence of
semantic and set-theoretic paradoxes is a symptom of the fact that the
involved semantic and set-theoretic concepts are not yet sufficiently
well understood.

http://plato.stanford.edu/entries/self-reference/

sekhar

2b.

Re: Essences versus Framework

Posted by: "Rajasekhar Goteti" rgoteti@xxxxxxxxx   rgoteti

Sun Jan 10, 2010 10:32 pm (PST)



Auditory Perception (first published Thu May 14, 2009)

Auditory perception raises a host of challenging philosophical
questions. What do we hear? What are the objects of hearing? What is
the content and phenomenology of audition? Is hearing spatial? How
does audition differ from vision and other sense modalities? How does
the perception of sounds differ from that of colors and ordinary
objects? This entry presents the main debates in this developing area
and discusses promising avenues for future inquiry. It discusses the
motivation for exploring non-visual modalities, how audition bears on
theorizing about perception, and questions concerning the objects,
contents, varieties, and bounds of auditory perception.

http://plato.stanford.edu/entries/perception-auditory/

sekhar

3.1.

Re: SWM and Strong AI

Posted by: "SWM" wittrsamr@xxxxxxxxxxxxx

Sun Jan 10, 2010 11:21 pm (PST)



Since you finally give some specifics, I'm going to skip your lengthy preamble in the interests of time and of sparing others here the burden of continuing to follow some of the ongoing gotchas. Suffice it to say I initially responded to all of that but, on noting halfway through that you finally get down to some brass tacks, I realized there was no reason to respond to all your initial caveats and hemming and hawing. (I've preserved my responses to those, though, in case they should be needed.)

--- In Wittrs@yahoogroups.com, "J D" <wittrsamr@...> wrote:
>
<snip>

>
> JPD, quoting Searle: "'Could a machine think?' The answer is, obviously, yes. We are precisely such machines."
>
> JPD, commenting on Searle: (Here, I agree. For what that's worth. So, to read him as denying that a machine can think, be conscious, and so forth, is simply to misread him.)
>

> SWM, responding: And where do you think I have ever read him in THAT way? If you are as familiar with my past remarks on the subject as you have suggested, you would know that I have often noted that Searle speaks of brains as organic machines and also that it may be possible to build machines some day that can do what brains do.
>
> JPD NEW: I am aware that you have acknowledged and even emphasized the point in some discussions. But you've also insisted that Searle is a dualist.
>

> SWM ARCHIVE: "Personally I think the really important flaw of the Chinese Room argument is that it must assume what it wants to conclude, namely that consciousness cannot be reduced to processes that aren't themselves conscious (a dualist, and thus somewhat suspect, presumption that Searle, himself, has been at pains to disavow)." http://groups.google.com/group/Wittrs/browse_thread/thread/ddfe303b20270aca/bf5b259a2180f49f
>
> JPD NEW: But individual neurons firing and so forth are not conscious while he grants that we are. And we can be rightly described as machines, a usage he explicitly endorses.
>

And my point in the above was that Searle is implicitly a dualist even while he is denying he is one (one needn't claim to be a dualist to actually be one, of course).

Why did I say that Searle is implicitly dualist? Because his denial that the CR can be conscious hinges on the idea that it is inconceivable that the operations occurring in a system like the CR can be intentional (because, as he notes, on examination 'nothing in the Chinese Room understands Chinese and the room doesn't understand Chinese either').

If intentionality is a core or essential or critical (take your pick) feature of what we mean by "conscious", its occurrence in the CR is necessary for the CR to be conscious. But there is no reason to assume that the combination of CR constituents "observed" in the CR cannot be intentional in any conceivable way, absent evidence that these particular constituents can't do it, unless one assumes from the outset that intentionality is irreducible to anything else. The failure to observe the requisite intentionality in the CR as he has specked it is not evidence, let alone logical proof, that no arrangement of the constituents of the CR can be intentional under any circumstance, i.e., in a differently configured R.

> JPD NEW: Oddly, you've insisted on a narrower usage of "dualism" when criticizing Searle's remarks equating Strong AI with dualism (and I don't quarrel with your objections there) but in doing so emphasize and endorse Searle's own remarks:
>
> SWM ARCHIVE: "(H)e said that the only dualism that means anything is the kind that reduces, at bottom, to substance dualism. I think he is right about that and it strikes me that he was wrong in linking AI to dualism..." http://groups.google.com/group/Wittrs/browse_thread/thread/c21f820beca027ac/0ee5500b14419bea
>

What's so odd with agreeing with him at times and disagreeing at others? I happen to think Searle is confused about this, probably best seen in his claim that consciousness has a first person ontology but a third person explanation. My guess is "ontology" needs some explanation here (it is possible to use that term in more than one way).

> JPD, quoting Searle: "'Yes, but could an artifact, a man-made machine, think?'
>
> JPD, quoting Searle: "Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours, again the answer to the question seems to be obviously, yes. If you can exactly duplicate the causes, you could duplicate the effects. And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use. It is, as I said, an empirical question."
>
> JPD, commenting on Searle: (Note how much he grants here. My own answer would be somewhat different, but that needn't concern us here. The fact is that he does grant the possibility that an artifact, a man-made machine, can think, be conscious, and so forth. He doesn't even limit this possibility to an artificial brain that operated on the same chemical basis. So, to read him as denying the possibility that a man-made machine can think, be conscious, and so forth, is again, a misreading.)
>

> SWM, responding: Again, where do you think I have ever offered THAT reading of him? Once again, any such suggestion is a misreading of ANYTHING I've ever said on this subject and, if imputed to me as part of what you are arguing against, a classic strawman.
>
> SWM ARCHIVE: "Searle argues, via the CRA, that we cannot achieve such consciousness in machines. " http://groups.google.com/group/Wittrs/browse_thread/thread/c21f820beca027ac/aaf92d02ab806997
>

Ah yes, I suspected this recent remark was what you had in mind. I took the liberty of taking the full quote on Friday and putting it aside so I can answer this by placing it into its original context:

http://groups.yahoo.com/group/Wittrs/message/3780

"The issue is not whether they are identical as in being precisely the same (A=A)
but whether they can achieve the same kind of results. No one says that a
computer is built like or performs its operations like a brain or that a machine
consciousness would have to be precisely like ours. For instance, relying on
different sensory apparatus, a machine of this type may have different
perceptions. What is at issue, rather, is: 1) whether all that our consciousness
is is such machine-like operations; and 2) whether, because of this, some
machines can be made to have a kind of functionality that includes the features
we recognize as consciousness in ourselves (e.g., intentionality, understanding,
etc.).

"Searle argues, via the CRA, that we cannot achieve such consciousness in
machines."

Note above the word "such" and see how it relates back to the points which address the questions of #1 and #2. In the last sentence "such" refers back to the description above and in #1 "such" links back to the ongoing point I had been making about understanding consciousness as so many computational processes running on computers.

When I wrote the text above, I broke the last sentence off from the rest so it would stand out. I should have realized that a cursory reading of it might lead some, like you, to miss the relation to the preceding lines and within the larger context of the ongoing discussion, i.e., that this was about a machine producing consciousness via computational processes.

> JPD NEW: Now, does that mean you're inconsistent? Or to salvage consistency, should I assume you misspoke? Can you explain it away as not meaning what it appears to say? And so should we argue about that? Find more quotes? And how long should we pursue this pointless exercise?
>

I agree it is misleading if taken out of context but it should be readily understandable in context or, if one is confused or has lost the context, a simple question as to what was meant would have elicited the response I have now given, i.e., that the reference is to computational processes running on computers, a la Dennett, the Connectionist Reply, etc.

>
> JPD, quoting Searle: "'OK, but could a digital computer think?'
>
> JPD, quoting Searle: "If by 'digital computer' we mean anything at all that has a level of description where it can correctly be described as the instantiation of a computer program, then again the answer is, of course, yes, since we are the instantiations of any number of computer programs, and we can think."
>
> JPD, commenting on Searle: (I think this is a muddle, but again, that needn't concern us here. He doesn't deny that something that can be correctly described as the instantiation of a computer program can also be correctly described as thinking.)
>
> SWM, responding: I agree that there is confusion here. Elsewhere he has suggested that even wallpaper can be described as a digital computer as I recall. If anything can, then the description loses its potency. Of course we are talking about certain very specific kinds of items when we use the term "digital computer" in ordinary language and we don't mean wallpaper or even thermostats (unless they are small scale computers as some, today, are).
>

> JPD NEW: You and I may both quarrel with his usage, though I suspect for different reasons. Disputes about, e.g., the Löwenheim-Skolem proof, Putnam's model-theoretic proof, and the relevance of counterfactuality in such matters would take us too far afield. Less far afield would be to note that Searle's way of speaking is too easily muddled with the separate but relevant issue of Turing-equivalence. But in any case, whether or not we accept his usage, it is his usage. And I repeat: he doesn't deny that something that can be correctly described as the instantiation of a computer program can also be correctly described as thinking. And while how "we use the term 'digital computer' in ordinary language" is narrower than his usage here, if we ascribe to him claims about what digital computers can or could do, we have to recognize his usage. Instead you say things like:
>

> SWM ARCHIVE: "Think of Searle's Chinese Room argument, a logical syllogism he developed to make the case that computers cannot ever be intelligent in the conscious way that we are" http://groups.google.com/group/Wittrs/browse_thread/thread/9cf679e9673e0781/c226b1e348178023
>

How do you think the above relates to the question of whether it makes sense to say, as Searle does, that because everything can be understood as a digital computer it isn't substantive (useful) to liken the brain to a digital computer for the purpose of trying to figure out how it works? (My comments about the logical syllogism aspect of his CRA go to my claim that there are some logical deficiencies in the formal syllogism that once constituted the core of his argument -- since changed, around the time he wrote The Mystery of Consciousness.)

> JPD NEW: In a remark that was (oddly but not necessarily wrongly) presented as a correction to a quote from Searle himself about Strong AI, you wrote:
>
> SWM ARCHIVE: "Actually, it's about a very particular kind of machine (computational machines) which may, or may not, be like brains in the relevant way. At this stage we just don't know, as even Searle notes (except that we know that computers and brains are made of different materials and operate differently). But Searle's actual argument (the CRA) amounts to a logical claim that computers must be excluded from the class of mind causing machines based on their nature but it is a nature whose similarity or difference to brains is an empirical (not a logical) question. And yet Searle purports to give us a logical argument that addresses what is finally an empirical matter." http://groups.google.com/group/Wittrs/browse_thread/thread/c21f820beca027ac/2bb6f52b12497262
>
> JPD NEW: Now, whatever you may mean by "computational machines",

Computers, or any machine relying on a computational platform for operation like our modern day computers. Note that, in referring to the "class of mind causing machines" as I did, I was clearly acknowledging that some machines, on Searle's view, cause minds -- a point you seemed to challenge me on earlier when you alleged that I was taking the position that Searle says no machine can cause mind.

> you were correcting Searle on what constitutes the position of Strong AI (odd, as I said),

In the above it looks like I was commenting on the way his argument works (logical vs. empirical). I have certainly corrected Searle on his reading of what AI (or "strong AI") actually entails insofar as he ascribes the term to the actual work of AI researchers. I do think he misreads the computationalist thesis at times. I also think he is insufficiently clear on his use of terms like "syntax", "semantics," "programs" and "sufficient for". I have dealt with all of these issues elsewhere. But none of this is to say Searle doesn't mean by "Strong AI" (or any other term) what he claims to mean. It is just to say that what he has in mind may not match ordinary usage, the usages of actual AI researchers, cognitive scientists, etc., and that, insofar as there is such a failure his usage is questionable. We can't stipulate meanings for others. Even Searle can't do that!


> presumably offering what you take to be the proper way to characterize that position, using terms in a way that doesn't fit his own usage just adds to the oddness of it all. By Searle's usage, brains are computational machines! So this characterization of his position is just a complete muddle.
>

Searle is on record as saying that brains are organic machines and that they are digital computers (of a sort), etc. But what does he mean by this? Searle does NOT say that brains just "are computational machines." He says that brains ARE machines that can do computation and that, on a certain level, everything can be seen as a digital machine, including brains and wallpaper. These are two different claims. In fact, he says, brains are not fully describable as computers. There is something else in them that is not to be found in computers, something biological, he argues, albeit as yet undiscovered. We know this something, whatever it is, is missing in computers because, when we consider the CR, we see there is no intentionality anywhere!

What does this boil down to? His claim is that consciousness must be something more than mere programs running on computers and that we know it must be something more because it is nowhere to be found in the CR, the archetypical computational processor (the TM). Thus he concludes that the absence of something in the CR is evidence for the inability of the CR's constituents to produce it (a logical non sequitur) and evidence, as well, that brains MUST have something else going on than what we see in the CR.

All of this hinges on a tacit dualist presumption, that intentionality isn't conceivable as a process-based system property (because, if it is so conceivable, none of his conclusions about this follow).

> JPD NEW: Note his granting, "I am, I suppose, the instantiation of any number of computer programs." in "Minds, Brains, and Programs".
>
> JPD NEW: Note also the remark I'd previously quoted, where Searle wrote, "And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use." So ascribing to him the view that what distinguishes brains from (other things he would describe as) computers is simply that they are "made of different materials and operate differently" is wrong. An extra-terrestrial's brain or a man-made artificial brain could be "made of different materials and operate differently" and he would still grant that they could be described as digital computers but he would not deny that they could be conscious.
>

You are confusing his claim that anything can be described as a digital computer with his claim that, whatever it is brains do, it must include something more than we find in computers running programs. The first claim is a trivial one and has to do with how we use the term as a descriptor. The second claim is intended as a substantive one whose truth we can discover by performing the Chinese Room thought experiment.

> JPD NEW: In an indirect ascription of a position and argument to Searle, you write:
>
> SWM ARCHIVE: "So you are 'saying' that there is no possibility that science could someday produce a machine that has consciousness, has a mind?
>
> And this is because why?
>
> Minds are special and stand apart from what is physical (Chalmers, Strawson)?
>
> Brains aren't computers (Edelman, Searle)?
>
> Minds aren't based in physical processes (Searle, though he doesn't quite admit this because he acknowledges minds are produced by brains[!] though he is never quite willing to hazard a guess as to how)?" http://groups.google.com/group/Wittrs/browse_thread/thread/a9c267d4fa9eb17d/d9ec4a625395a207
>

Note my point that he holds contradictory positions here! ("Searle, though he doesn't quite admit this"). The first reference to Searle reflects his view that brains do what they do by virtue of something more than anything computational and the second reference is to my point that Searle is implicitly dualist. If intentionality is not realizable in the CR because of the nature of the constituents of the CR, then he seems to hold that it must be because it cannot be conceived as reduced to such non-intentional constituents -- though he elsewhere asserts that brains cause consciousness in some as yet undiscovered way thus implying that some physical processes do something to bring intentionality about.

How do we reconcile these apparently discrepant positions? Well, if Searle is really a dualist then he must be saying that intentionality somehow springs full-blown into the universe (via the instigation of brains) as an irreducible property; if he isn't saying that, then he cannot sustain a conclusion that whatever the constituents of the CR cannot accomplish in the CR they cannot accomplish in any other R either, because there would then be no reason to assume that those constituents are constitutionally incapable of producing intentionality.

Thus he is in a bind. If he is a dualist about intentionality, then the CRA can be sustained. If he isn't, then there is no reason to think that, in principle, the CRA conclusion can imply anything for any other R. So Searle appears to want to have his cake and eat it. In the second reference above I am alluding to the fact that he is in this bind.

> JPD NEW: This is an example both of your imputing to him a denial that brains are computers and of a claim (which you "charitably" acknowledge "he doesn't quite admit" while insinuating that his not engaging in armchair neurobiology saddles him with the claim anyway) that "Minds aren't based in physical processes."
>

His idea that brains are digital computers is not meant to say that that is all they are. I happen to think that his view on this is a very confused one since he both says that everything is a digital computer (including brains, of course) and that computers cannot produce minds while brains can (see my excerpts from wikipedia below). Therefore the obvious fact is that in the relevant way under discussion here, he doesn't think brains are merely computers.

> JPD, quoting Searle: "'But could something think, understand, and so on solely in virtue of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a sufficient condition of understanding?'"
>
> JPD, commenting on Searle: (Note well: "solely in virtue" and "sufficient condition".)
>
> SWM, responding: Noted. Where do you think I am saying otherwise? (Below we will have a chance to address this in more depth.)
>
> JPD NEW: Every time you start talking about how the Chinese room is "specked" (sic) or start emphasizing "capacity" and emphasizing parallel vs. serial processing, you go beyond, "solely in virtue" by adding additional conditions.
>

As I have responded in the past: Searle himself takes Dennett's view to run counter to his CRA (The Mystery of Consciousness) and Dennett takes the same position (Consciousness Explained)! There is NOTHING in Searle's description of the CR or his CRA, and his related commentary, that excludes a more robust system from the conclusion of the CRA (i.e., nothing that allows that the CRA does not apply to such a system). If THAT were Searle's view, then the CRA would be pointless because it would not have any implication for computationalism as a thesis about consciousness beyond rote response devices. BUT NO ONE IN THE AI FIELD THINKS THAT THAT IS WHAT A SUCCESSFUL AI IMPLEMENTATION WOULD LOOK LIKE.


> JPD NEW: Now, I am not disputing the possibility that speed is relevant, though as a matter of computer science and based on firsthand experience building Beowulf clusters, doing benchmarks on multi-threaded applications, and so forth, I can tell you that equating "parallel" with speed is a very naive rookie mistake.

"Equating"? Certainly we can get more done in a limited time frame using parallel processing than serial processing operating at the same rate.

> But that's beside the point. Assume speed is relevant. Assume that parallelism (a different issue) is also relevant. I'll stipulate to those claims for purposes of this discussion. Still, once you do that, you are no longer saying "solely in virtue". (And do I really need to dig out quotes to prove that you do this?)
>

"Solely in virtue" must apply to the processes themselves, not to the number of things going on or the way they mix together. Why? Just look at Searle's take on Dennett's model and vice versa. Now it is possible to suppose that Searle really meant nothing more than what he presents as the CR. But, again, that would mean the CR does not have the general implication Searle repeatedly claims for it.

> JPD NEW: Now again there is the issue of benchmarking. It is true that some sort of capacity requirement is involved in being able to run "the right sort of program", if only a storage requirement. (The Universal Turing Machine has a tape of infinite length but real computer hard drives are not so blessed, not to mention the issues of swapping vs. RAM, CPU caches, and so forth.) But anyone who has run modern software on hardware generally deemed obsolete (not just whatever the new Windows is on a machine Microsoft has claimed is now fit only for a landfill because it's 5 years old, but running a newly released version of a UNIX kernel like Linux or NetBSD on hardware that was manufactured in the late 80s or early 90s) or who has run the software on a hardware emulator (where the hardware is itself modeled - subtly different from virtualization - by a program running on yet another platform, meaning that the emulated hardware will run dramatically slower than the host machine, though it might still be faster than the actual hardware being emulated would be) can attest to the fact that there is a huge difference between being able to run a program and finding it responsive. You're making the requirement one of responsiveness, which goes way beyond just being able to run the program. And in so doing, you clearly go beyond the "solely in virtue" requirement.
>
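The gap between being able to run a program at all and finding it responsive can be put in a toy timing comparison (purely illustrative; the "emulated" platform here is just an artificial per-step delay, not a real emulator):

import time

def run_program(steps, per_step_delay=0.0):
    # The same "program" on two platforms: native (no delay) vs. a crude
    # stand-in for emulated or obsolete hardware (a delay on every step).
    total = 0
    for i in range(steps):
        total += i
        time.sleep(per_step_delay)
    return total

start = time.perf_counter()
native = run_program(1000)
t_native = time.perf_counter() - start

start = time.perf_counter()
emulated = run_program(1000, per_step_delay=0.001)
t_emulated = time.perf_counter() - start

assert native == emulated          # both platforms can run the program
print(f"native {t_native:.3f}s  emulated {t_emulated:.3f}s")   # same result, far less responsive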

I see we clearly disagree on this score. Do you have some evidence that Searle would agree that his CRA does not apply to a computationalist implementation of a consciousness system (or programmed processes running on a computer) on a parallel processing system then?

At this stage all we have is you saying one thing and me another. That isn't going to settle this. I suggest you offer some evidence that Searle really does so narrowly construe his CR. (I have already referenced his comments in The Mystery of Consciousness and Dennett's comments in Consciousness Explained -- see his appendix concerning the Chinese Room.)

> JPD NEW: For our purposes, I'm not saying it's an illegitimate question once that condition is added. I am just saying it's not the same question. And the issue here is how you've interpreted Searle.
>

I presume the quote below is intended as evidence of this?

> JPD, quoting Searle: "Before countering this reply I want to digress to note that it is an odd reply for any partisan of artificial intelligence (or functionalism, etc.) to make: I thought the whole idea of strong AI is that we don't need to know how the brain works to know how the mind works. The basic hypothesis, or so I had supposed, was that there is a level of mental operations consisting of computational processes over formal elements that constitute the essence of the mental and can be realized in all sorts of different brain processes, in the same way that any computer program can be realized in different computer hardwares: On the assumptions of strong AI, the mind is to the brain as the program is to the hardware, and thus we can understand the mind without doing neurophysiology. If we had to know how the brain worked to do AI, we wouldn't bother with AI."
>
> JPD, commenting on Searle: He then goes on to construct a scenario resembling the Chinese room in some respects, but whatever the merits of this argument, it is no longer the CRA and it is no longer addressed to Strong AI as he defines it.
>

As I recall this referred to the brain simulator reply. But that isn't relevant to the question of a more robust CR because it relates to a claim that the CR is undone by the possibility of building a machine that replicates all the connections and operations of brains, cell for cell, brain part for brain part. This is a claim that it is at least theoretically possible to build an analog machine out of synthetic materials that could do what brains do and Searle concedes that if we can achieve the same causal events we can achieve the same causal effects.

Searle's CR is clearly not doing that because it's designed to be a "machine" that runs like computers, programmed operations performing pre-established processes. But its problem is that it is nothing but a rote responding device.

If consciousness involves thinking about things (intentionality), having awareness, understanding and the like, no one suggests that the CR is conscious because none of that is going on in it. The question is whether the processes being used by the CR can be deployed to produce these other features and thus be conscious.

The AI thesis is that, if we can replicate the various processes, via a programmed approach (not a brain reproduction approach), that go into features like intentionality, then there would be no reason why such a system would not also be intentional, REGARDLESS OF THE PHYSICAL PLATFORM OR THE MECHANICS INVOLVED. There would be no need to presume a connection for connection, neuron for neuron, brain component for brain component replication. All that is to be replicated are the processes, the things brains do (not the way they do them). Multiple realizability (consciousness can be realized on any platform that could do the same things brains do to achieve consciousness). The reason it is "Strong AI" (computationalism) is that the process(es), the mechanism employed, is computational in nature, i.e., algorithms running on machines through computational processes.
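As a toy illustration of multiple realizability (mine, purely illustrative), the same process, characterized by its input/output behavior, can be realized by quite different underlying mechanics:

# Two different "platforms" realizing the same process: reversing a sequence.
def reverse_by_recursion(items):
    # Realization 1: recursive definition.
    if not items:
        return []
    return reverse_by_recursion(items[1:]) + [items[0]]

def reverse_by_stack(items):
    # Realization 2: an explicit stack, popped item by item.
    stack, out = list(items), []
    while stack:
        out.append(stack.pop())
    return out

data = [1, 2, 3, 4]
# Different mechanics, identical behavior at the level of the process itself.
assert reverse_by_recursion(data) == reverse_by_stack(data) == [4, 3, 2, 1]

Whether consciousness is relevantly like such a process is, of course, exactly what is in dispute.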

> SWM, responding: Note that my response to the CRA is not premised (and never has been premised) on this particular reply and I will note, in passing, that I agree with the view that that reply does not answer his argument.
>
> JPD NEW: But you do emphasize analogies between putative parallelism in the brain and your emphasis on the importance of parallel processing in hardware. But by his lights (and since he's the one defining "Strong AI", his coy manner notwithstanding, his lights are the only lights that matter), "we don't need to know how the brain works to know how the mind works." So, in making such an argument, you go beyond Strong AI. (And again, do I really need to provide links and cutting and pasting, or do you acknowledge that you have made such arguments?)
>

Why wouldn't I acknowledge making such arguments? Rather, you need to provide links that show that Searle excludes from his CRA's conclusions computers working like those in Dennett's model. As I have stressed before, Searle and Dennett differ on the implications of Searle's CRA for Dennett's model. Searle does not say, well, Dennett's model isn't applicable to what I mean by the CRA, and he doesn't say that Dennett could be right. He says Dennett is wrong because of the implications of his CRA and that means that the Connectionist Reply (and especially the Bicycle variant) are applicable challenges to his CRA.

> JPD OLD: Does he sometimes criticize positions that do not fit his definition of Strong AI without taking the time to explicitly point that out?
>
> JPD OLD: Yes, he does. Again, in the original essay, regarding the "Robot Reply", he doesn't explicitly spell out that this reply is no longer what he has defined as "Strong AI". He does point out the difference though and if you've followed closely, you'll see that the position does involve a departure from the position he's called "Strong AI".
>
> SWM, responding: Now you proceed at great length to make this case over and over again below, to wit, that not every argument against Searle's CRA really speaks for or supports what Searle calls "Strong AI". And I addressed these in more specificity in my earlier reply. But to save time I will now stipulate to this and just note that MY argument against the CRA is not based on such a non-AI supporting argument but on a variant of the Chinese Gymnasium Reply (sometimes called the Connectionist Reply, though it is not always presented in quite the same way so even this has some variations to it).
>
> JPD NEW: Connectionism, in relying on analogies with the putative functioning of the brain, is not Strong AI, for reasons shown above.
>

There are a number of variants of connectionism that I have seen. But as presented by the Churchlands it is entirely about Turing equivalent processors and nothing else. What is different is that more is going on at the same time and there is interaction of these computational processes.

> SWM, responding: My argument boils down to the one exemplified by Peter Brawley's analogy on the Analytic list, that you can't build a bicycle and expect it to fly. As such we can call it the Bicycle Reply for convenience. It is grounded in the claim that Searle has underspecked the CR. That is, real AI researchers do not think or claim that a rote responding device like the CR is conscious. What they presume is that more things are going on in consciousness than merely transforming symbols mechanically using look-up tables (or their equivalent) as happens in the CR. Thus their efforts are aimed at producing a computationally based system that has all the things needed.
>
> SWM, responding: In a nutshell, the CR, as specked by Searle, doesn't have enough going on in it to qualify as intentionally intelligent (the proxy for consciousness in this case).
>
> JPD NEW: This is very important. You and I share misgivings about the wide usage Searle gives to "digital computer". But you impute to him denials that a computer could be conscious (as shown above).

What he does not deny is that machines might be conscious if we can discover whatever it is about brains that does consciousness and can implement this in them. But the fact that he sometimes agrees that we can call brains computers is not relevant to his claim that computers running programs can't be conscious.

> How do you define "digital computer"? Or a better question, what do you think that a digital computer does in processing inputs and outputs, above and beyond "merely transforming symbols mechanically"? How exactly do you think digital computers work? And what future technology are you imagining, what will it do beyond "merely transforming symbols mechanically", and on what basis will it still be described as simply a digital computer?
>

My reference to "transforming symbols mechanically" was not to programming but to the fairly mechanical process being carried out by the CR, i.e., receipt of a Chinese symbol and mindless conversion of it to another Chinese symbol without any comprehension going on. Now this is an important point. Human cells and organs in general perform certain functions mindlessly, too. In so doing they are following certain coded information carried in the genome. As such, this is their program code. In performing their functions, the cells, tissues, organs, etc., of the body act in what may best be described as a mechanical way, at various levels, albeit via the mechanics of organic dynamics not that of inanimate matter. (Of course, at bottom they are controlled by the same rules of operation as control inanimate matter, as far as we know, though they add something at the level of organic chemistry, biology, etc.) Now the question is what is it that the CR is doing when it receives one input and pulls and issues an output. Like the cells and other body parts it is following its instructions, its "genetic" coding but the process it is performing is a rote (mindless) process. The question is where does the mind come in? The CR trades on the idea that there is no mind (no intentionality) evident anywhere in the CR during the symbol transformation process. But if you open the brain, there isn't any of that there either. There are only various physical processes going on. These brain processes manage somehow to become intentionality, the experience of being a knowing subject (at least some of the time). The question then is how do the mechanical operations, visible in the CR as the transforming of one symbol to another, become intentional, become more than a merely mechanical operation? So perhaps now you can see that my reference to "transforming symbols mechanically" was not to the issue of the programming but to the issue of the operation even if we can speak of "transforming symbols mechanically" on both programming levels and on operational levels. It's just that the phrase refers to different things.

> JPD NEW: "Transforming symbols mechnically" is what digital computers do. (Or rather, mechanically transforming voltages which represent the symbols "0" and "1", which in combination and in turn represent other symbols, some of which after various manipulations, become control voltages which drive various outputs.
>

See above. We seem to be using the phrase in question differently. Speaking about "transforming symbols mechanically" as I did in the text at issue is NOT to reference the underlying programming but, rather, the process being performed. Indeed, the CR's program lies in the set of steps the little clerk at his desk routinely performs every time a new symbol comes in. Where are those steps? Well they may be in his head or they may be on a sheet of paper on the desk which our little man follows whenever an input is received (in which case he would need still more programming to tell him what to do to follow the procedure/algorithm for handling each new symbol). But his rote actions are programmed. They aren't the programming in this case. My use of "transforming symbols mechanically" in the case in which I used it referred to his rote actions, not to whatever processes underlie and make him take the incoming symbol, consult the rules set, and then act according to the rules set in order to turn the inputted symbol into an outputted one.

> SWM, responding: The thesis of real world AI researchers is that they can use the same sort of operations as exemplified in the CR (Turing equivalent) to perform these other functions in an integrated way, as part of a larger system than the CR, and that THIS would be conscious. If "Strong AI" doesn't represent this claim, then it has nothing to do with the question of whether AI can achieve consciousness.
>
> JPD NEW: Is performing "the same sort of operations" to be read as "transforming symbols mechanically" in order "to perform these other functions in an integrated way"? You say, "What they presume is that more things are going on," but is it more of the same?
>

Of the same nature (programmed "mechanical" processes -- if X then do Y, etc.) but they are performing different functions. Some of the processors are collecting and ordering information from various background fields of input, others accessing/retrieving past inputs that have been organized and filed away, others zeroing in on different kinds of associations, etc. Some are setting up some representational mappings or networks while others are doing other things like this. The question at issue for any AI effort of this type is what is required for intentionality to occur? What is intentionality? An AI model would likely have it as involving various levels of connections, associating different symbols with different representations already constructed and retained and continuously being updated/altered. My guess is that to have intentionality the system would have to have various representational pictures, constructed over time and overlapping, interleaved, etc., of the background world (both external, via inputs from sensors or some other conduit[s] and internal [about itself]).

Such a picture of intentionality does not require that the intentionality be exactly like ours in order to count as intentionality, but it does imply that it have the potential to be like ours (to have a full range of intentional awareness if it could be supplied with enough inputs and modalities).

> JPD NEW: Whether it's "more of the same" but with, e.g. parallelization or it's some unspecified "something" beyond "merely transforming symbols mechanically", this argument clearly does go beyond "solely in virtue of".
>

Not at all. Parallel processing is about the capacity to run enough processes doing enough things in an integrated way. But the underlying platform, the mechanics of operation are strictly Turing equivalent. Now this doesn't mean that brains work just this way. What it means is that the idea of brain produced consciousness (in whatever manner they produce it) is explicable as just so many processes within a particular kind of system.

But if you can show me that Searle means, by "solely in virtue of" nothing more than the rote response system which his CR is capable of, I will consider that I have got him wrong. That won't change my position that an artificial consciousness on a Dennettian model is at least not impossible, but it will convince me that Searle's whole CRA is clearly beside the point and that he ought to have seen that from the first because, if his CRA applies to nothing but rote responding devices like the CR, what's the big deal? It really has nothing of moment to say to the AI community or to anyone else for that matter!

> JPD NEW: And if it is some unspecified "something" beyond "merely transforming symbols mechanically", which is what the "Bicycle Reply" suggests, then Searle's response to the "Many Mansions Reply" is appropriate:
>
> JPD NEW, quoting Searle: "I really have no objection to this reply save to say that it in effect trivializes the project of strong AI by redefining it as whatever artificially produces and explains cognition. The interest of the original claim made on behalf of artificial intelligence is that it was a precise, well defined thesis: mental processes are computational processes over formally defined elements. I have been concerned to challenge that thesis. If the claim is redefined so that it is no longer that thesis, my objections no longer apply because there is no longer a testable hypothesis for them to apply to."
>

So on this view, if your interpretation is correct, Searle should simply say that Dennett's thesis may be possible after all, his CRA notwithstanding. Yet he doesn't say that at all to my knowledge. Perhaps he has revised his view in recent years?

As I have said, not only does Searle continue to deploy his CRA (old and new versions) against the claims of people like Dennett, the argument itself would be of little note if it applied to nothing more than instances of the CR.

It seems to me we have a fundamental issue here: Is Searle addressing real world AI efforts by his attack on "Strong AI" or is his argument merely trivial in that it is only about an example so limited as to be of little moment in actual research circles? One would be hard pressed to accept the latter on the evidence of two or three decades of debate over this. After all, if Searle wasn't (and isn't) attempting to say what could or could not be achieved via the efforts of AI researchers, why would anyone on either side of the debate have thought this worth arguing about? I refer you again to Minsky's new book, The Emotion Machine, which is about building an intentional intelligence that can do the things people can.

> SWM, responding: Obviously the AI project, understood in this way, means capacity matters, which could involve more processors as well as faster processes, more memory, etc., all intended to enable the accomplishment of more tasks by the processes in the system. But note that the processors and the processing would be the same as you find in a CR type apparatus. Thus the "solely in virtue of" criterion is met (unless you want to so narrowly define THAT concept as to again reduce this to being just about a device with no more functionality than the CR).
>
> JPD NEW: So, it is "more of the same". And yet the reference to "merely transforming symbols mechanically" then makes no sense, because that is what digital computers in the ordinary sense of the word actually do.
>

See my comments above as to that reference.

> JPD NEW: Searle would reject this, and does so with the "Chinese Gymnasium" argument (which owes to Ned Block's argument, actually older than the Chinese Room Argument). If you think that the Chinese Gymnasium as a whole is conscious then... well, okay.

The System Reply argues that the whole CR understands Chinese even if its constituent parts, including the little man in the room, do not. But that is a theoretical argument because the CR, as it is specked, lacks the functionalities associated with understanding. Rote responding is all that the system has been specked to do. The Chinese Gymnasium Reply introduces to the system the capacity to do more things and thus, more things are going on within it. While the System Reply is right in principle, it is defeated by the fact that the particular system in question, the CR, still lacks understanding. But the Connectionist approach, if it is specked to incorporate the necessary additional functionalities and to permit the requisite interaction among them, will thus be a system that understands Chinese. But everything depends on getting the added functionalities and how they work together right which is what the Bicycle analogy makes clear.

> That position is not Strong AI but it is still a position with which Searle would disagree.

I dispute that it is not what he has in mind by "Strong AI". It is precisely what real AI research of this type is about. If you say Searle excludes that from what he means by "strong AI " I think that either you are mistaken or (if you can show you are not) then his argument is even weaker than it initially looked because it is built on opposition to a position no one holds (i.e., no one in the AI community thinks intentional intelligence is achievable by just programming a very powerful rote responding machine). Look, if Searle's CRA isn't about what can be done with computers in general vis a vis building artificial intelligences, then it's of little interest to anyone in the field. Of course the evidence of the past three decades suggests it is not of little interest so either everyone has misread it or you are misreading it now.

> And the fact that he would offer the Chinese Gymnasium as a separate argument is an acknowledgement that the Chinese Room Argument might not be taken to address such a case. (Whether it actually does is another matter and this partly turns on the quite arbitrary decision of how to individuate different permutations of thought experiments. Since the argument goes by a different name, I defer to precedent.)
>

I see absolutely no evidence that we should read this as Searle concluding that the Chinese Gymnasium Reply is not a "Strong AI" reply. But this calls up an interesting issue. So far you seem to be maintaining that all the replies you have cited aren't really examples of what Searle means by "Strong AI". Aside from the fact that Searle doesn't say this in each of the cases (though he says it in some) I wonder if there is ANY reply you think is really an example of "Strong AI". Or are we now at the point where Searle is said by you to be defending his views by defining all the opposing views offered as falling outside the implications of his CRA? And then what is the point of making the CRA?

> JPD NEW: In any case, you most certainly are going beyond the "solely in virtue of" in the original question. And no, that is not "just about a device with no more functionality than the CR", which would be a question-begging way to draw the distinction.

No, it is question-begging to argue for such a distinction in the first place because it makes the CRA about nothing but itself, i.e., the CR. But the CRA only has conceptual value for us if it offers an argument that is generalizable to the class of all purely computational platforms.

> It is about having the capacity to run "the right program". And I've elaborated on the practical issues of this above.
>

And I've responded.

> JPD OLD: Do philosophers whose positions do not qualify as "Strong AI" as Searle defines it still criticize the Chinese Room Argument?
>
> JPD OLD: Yes. The examples above demonstrate this. And undoubtedly, there are other examples of positions that depart from "Strong AI" as Searle defines whose advocates would still take issue with the Chinese Room Argument.
>
> SWM, responding: This was never in dispute between us so I am at a loss to see why you spend so much time on the issue.
>
> JPD NEW: I emphasize it to forestall any argument appealing to the fact that people who hold various positions have criticized the Chinese Room Argument in an attempt to prove that Strong AI must therefore be a wider position than I've indicated. And I actually emphasized various permutations of the relationship between different positions, different arguments, Strong AI, and the Chinese Room Argument. I don't think I was being quite as repetitive as you suggest.
>

The only arguments for the broad interpretation I have made are 1) that Searle himself makes it and 2) if one narrowed the interpretation as you have indicated, it would make the argument pointless and all the evidence of the past thirty years suggests that the community of professional discussants in many fields do not think it is pointless.

> SWM, responding: Note that Searle's CRA aims to prove that the thesis that consciousness can be achieved via computational processes running on a computer is impossible, not that it is unlikely, and my dispute is with THAT claim. It is NOT an effort to prove that, contra the CRA, "strong AI" is true. (Go ahead and check my historical postings if you don't want to take my word for it here.)
>
> JPD NEW: I haven't said that you think Strong AI is true. I don't assume you do think such a thing.
>

Good. Take it as a clarification offered in advance of future possible misunderstandings then.

> JPD NEW: I do think that you've misstated what the Chinese Room Argument is meant to prove however. He was not making the claim that "the thesis that consciousness can be achieved via computational processes running on a computer is impossible." First of all, a thesis may assert something that is possible or impossible but what would it be for a thesis itself to be impossible? That it is nonsensical? He doesn't make that charge. So your way of putting this is a muddle. Such muddles are common in your posts, which was part of my reluctance concerning the "quote mining" approach. I can't just cut and paste a lot of what
> you say, I have to break it down.

In the interest of getting beyond this he said/he said stuff, I've taken the following from Wikipedia which, while not always a totally reliable source, looks like a good place for us to start in determining which of us has Searle right here:

http://en.wikipedia.org/wiki/Chinese_room

Strong AI
Searle identified a philosophical position he calls "strong AI":

The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.[9]

The definition hinges on the distinction between simulating a mind and actually having a mind. Searle writes that "according to Strong AI, the correct simulation really is a mind. According to Weak AI, the correct simulation is a model of the mind."[10]

The Formal Argument:

Searle has produced a more formal version of the argument of which the Chinese Room forms a part. He presented the first "excessively crude"[69] version in 1984. The version given below is from 1990.[70]

The only premise or conclusion in the argument which should be controversial is A3 and it is this point which the Chinese room thought experiment is intended to prove.[71]

He begins with three axioms:

(A1) "Programs are formal (syntactic)."

A program uses syntax to manipulate symbols and pays no attention to the semantics of the symbols. It knows where to put the symbols and how to move them around, but it doesn't know what they stand for or what they mean. For the program, the symbols are just physical objects like any others.

(A2) "Minds have mental contents (semantics)."
Unlike the symbols used by a program, our thoughts have meaning: they represent things and we know what it is they represent.

(A3) "Syntax by itself is neither constitutive of nor sufficient for semantics."

This is what the Chinese room argument is intended to prove: the Chinese room has syntax (because there is a man in there moving symbols around). The Chinese room has no semantics (because, according to Searle, there is no one or nothing in the room that understands what the symbols mean). Therefore, having syntax is not enough to generate semantics.

Searle posits that these lead directly to this conclusion:

(C1) Programs are neither constitutive of nor sufficient for minds.
This should follow without controversy from the first three: Programs don't have semantics. Programs have only syntax, and syntax is insufficient for semantics. Every mind has semantics. Therefore programs are not minds.

This much of the argument is intended to show that artificial intelligence will never produce a machine with a mind by writing programs that manipulate symbols. The remainder of the argument addresses a different issue. Is the human brain running a program? In other words, is the computational theory of mind correct?[72] He begins with an axiom that is intended to express the basic modern scientific consensus about brains and minds:

(A4) Brains cause minds.

Searle claims that we can derive "immediately" and "trivially"[73] that:

(C2) Any other system capable of causing minds would have to have causal powers (at least) equivalent to those of brains.
Brains must have something that causes a mind to exist. Science has yet to determine exactly what it is, but it must exist, because minds exist. Searle calls it "causal powers". "Causal powers" is whatever the brain uses to create a mind. If anything else can cause a mind to exist, it must have "equivalent causal powers". "Equivalent causal powers" is whatever else that could be used to make a mind.
And from this he derives the further conclusions:

(C3) Any artifact that produced mental phenomena, any artificial brain, would have to be able to duplicate the specific causal powers of brains, and it could not do that just by running a formal program.
This follows from C1 and C2: Since no program can produce a mind, and "equivalent causal powers" produce minds, it follows that programs do not have "equivalent causal powers."

(C4) The way that human brains actually produce mental phenomena cannot be solely by virtue of running a computer program.
Since programs do not have "equivalent causal powers", "equivalent causal powers" produce minds, and brains produce minds, it follows that brains do not use programs to produce minds.
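
Just to make A1 concrete before going on (a toy sketch of my own, not anything from Searle or the wikipedia article; the rule book and symbol strings are invented), here is a bit of Python that shuffles symbols purely by rule, with nothing anywhere in it that stands for what the symbols mean. In miniature, this is all the CR operator is doing:

    # Hypothetical illustration only: a rote rule book pairing input shapes with output shapes.
    RULE_BOOK = {
        "squiggle squoggle": "squoggle squiggle",
        "ni hao": "ni hao ma",
    }

    def rote_reply(input_symbols):
        # Matches shapes against the rule book; nothing here represents what the
        # strings are about, so there is syntax (A1) but no semantics (A2).
        return RULE_BOOK.get(input_symbols, "mo shou")

    print(rote_reply("ni hao"))  # a fluent-looking reply produced by rule-following alone

Whether piling on more of this, suitably organized, could ever amount to semantics is of course exactly what is in dispute below.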

>
> JPD NEW: So, if you mean that he's denying the possibility that "consciousness can be achieved via computational processes running on a computer" (and I can't think of anything else you might reasonably mean) then that's wrong, given his own usage of "computer" (which we agree is problematic). Given his usage, the activities in our brains can be described as "computational processes running on a computer".
>

No. See his formal argument taken from wikipedia above. In fact he argues that describing anything, including brains, as digital computers is trivial and has no implication for what he means by "Strong AI".

> JPD NEW: But it would also be wrong because it would have him denying a possibility rather than an equivalence. You're right that he's not saying, "That's unlikely." But he's also not saying, "That will never happen!" (This is a mistake you seem to make fairly often and it pops up in various places but the fact that you make it here should suffice, sparing me having to "quote mine".) He's denying that instantiating the right program constitutes being conscious. And he's denying that instantiating the right program is a sufficient condition for being conscious.
>

http://plato.stanford.edu/entries/chinese-room/

"The Chinese Room argument, devised by John Searle, is an argument against the POSSIBILITY [emphasis mine] of true artificial intelligence . . . . Searle's argument is a direct challenge to proponents of Artificial Intelligence, and the argument also has broad implications for functionalist and computational theories of meaning and of mind.. . ."

> JPD NEW: Where he does deny a possibility, it is in answering a question that includes the "solely in virtue of" clause. And he indicates that he intends this to be the same question as the "sufficient condition" question.
>

This isn't relevant to what I've been saying.

> JPD NEW: He does unfortunately give too little emphasis to distinctions between conceptual and empirical questions, something endemic to post-Quinean philosophy, but "sufficient condition" makes it clear enough that he's not talking about, e.g. what is scientifically possible. A sufficient condition is one that, if satisfied, assures the truth of the statement for which it is a sufficient condition. He's talking about whether the inference from, "it instantiates the right program" to "it's conscious" is valid. (If it were valid, then being able to pass something like the Turing test, as a standard for determining whether the right program is being instantiated, would be proof of consciousness. Regardless of the hardware.) He's not talking about a claim like "it instantiates the right program, so it might just be conscious".
>
>

It's an argument that claims it is impossible to achieve intentional intelligence ("consciousness" in other venues) using programs running on computers. See the information I have reproduced above. As such, it is an argument that AI researchers not only won't succeed but that they cannot succeed. It's about possibility.

> JPD NEW: Now, we are getting into the area of your response that made a peculiar impression on me, as I mentioned near the start of this reply.
>
> JPD OLD: Another example, from the original essay, would be what he calls the "Combination Reply". He acknowledges that the case described would be persuasive unless we looked "under the hood" (and again, I am not addressing the merit of this argument), but he says:
>
> JPD, quoting Searle: "I really don't see that this is any help to the claims of strong AI, and here's why: According to strong AI, instantiating a formal program with the right input and output is a sufficient condition of, indeed is constitutive of, intentionality."
>

> JPD OLD: Again, the fact that a philosopher presents a counter-argument to the Chinese Room Argument and the fact that Searle rejects that counter-argument do not demonstrate that the position they're debating qualifies as "Strong AI".
>

> SWM, responding: The text you give us above does not reveal that he thinks it does not support "Strong AI". It merely says it fails to undermine the CRA.
>

> JPD NEW: Of course it doesn't merely say that. He denies that the reply is any help to the claims of Strong AI and the reason he gives for that denial isn't the counter-argument about looking under the hood (that comes later), but a restatement of Strong AI. How could he deem a restatement of Strong AI an explanation for why the "Combination Reply" doesn't help Strong AI, unless he considered the "Combination Reply" not to support Strong AI?
>

Lots of people make their arguments by restating them. But you seem to want to say that he thinks Dennett's model isn't an instance of what he calls "strong AI" and that, therefore, it can be possible even if the CRA as he has given it is correct. Is that your claim?

> JPD NEW: He holds that it fails to undermine the Chinese Room Argument on the basis of his argument that we would no longer find the case persuasive once we looked "behind the curtain". But he holds that it fails to support Strong AI because of how the position of Strong AI is defined, viz. "instantiating a formal program with the right input and output is a sufficient condition of, indeed is constitutive of, intentionality." When you add the things that are added in the "Combination Reply", you are no longer treating "instantiating a formal program with the right input and output" as a sufficient condition. To put it in terms with which you are by now familiar, that reply ceases to fit the "solely in virtue of" clause.
>

As I have said over and over here, if it's just about the limited CR then it isn't very important. The only thing that makes it important enough to attend to is if it has implications for the larger AI effort. Now note that not all AI efforts are directed toward achieving intentional intelligence. Some are directed at simulational modeling (his "weak AI"), which is something Dehaene makes use of in his brain research, too. There is also AI as expert systems, and AI addressed to certain limited capacities, i.e., intelligence without intentionality. But the issue is conceptual, i.e., what a mind is in the final analysis. If a mind is just the features achieved by certain physical processes performing certain functions on a physical platform, then, if computers can provide the right platform to run processes that can perform those functions, a computationally based mind is possible. As noted, Minsky is pursuing a model along these lines, albeit by developing the constituents module by module. If Searle is right, Minsky cannot ever succeed and Dennett's model of mind must be wrong. Unless, as you want to say, Searle is only talking about the limitations of a rote response device like the CR (in essence a kind of expert system). But I don't think there is ANY evidence for this interpretation at all, and plenty of evidence against it.

> JPD NEW: And your failure to see that reflects as much on your understanding of Strong AI (which was the point at issue) as any remarks I might retrieve from the archives. Hence, the peculiarity of it all.
>

Take another look at the definition of "Strong AI" I've provided from wikipedia above. I'm sure we can find many more on the Internet if you want to try. Then we can examine who has a better understanding of what Searle means by this.

> SWM, responding: Note that the Connectionist Reply (as I have given it) is made up of the same internals as the CR and that is what this must finally be about for it to be about anything of significance at all. It's just that the system proposed by the Connectionist Reply has more going on in it and what is going on is doing so as part of an integrated system.
>
> JPD NEW: I note again that there seem to be two claims here. "(M)ore is going on" seems to be something other than "more of the same" since you distinguish that from the emphasis on the "integrated system". Now is the "more" that is "going on" also more than "merely transforming symbols mechanically"?
>

This all seems to be coming down to whether you are right about what Searle means by "solely in virtue of", etc., i.e., does he only have in mind exactly the same system he has specced, or does he think his argument applies to any system made of the same constituents as we find in the CR? I'm tired of going back and forth on it. Can you find anything to support your view that Searle defines "Strong AI" so narrowly as to exclude anything but an expert system type of device such as the CR?

Note that "transforming symbols mechanically" can refer to the underlying programming of any machine, organic or inorganic, that operates on its own (without an operator pressing the buttons or pulling the levers, etc.) or it can refer to the particular operation being performed. In this latter case we can have a computer or a human operator (both programmed in their own way) performing the rote response function according to the programming (the rule set). We have to be careful not to confuse the two references though they are represented by the same terms.

>> JPD OLD: Isn't "Strong AI" then a straw man, if it's defined so narrowly that most people who argue with Searle don't count as "Strong AI"?
>
> SWM, responding: A very important point. If all of Searle's responses were just to say "that's not what I mean by Strong AI" then we would have to conclude that his argument wouldn't be worth very much at all because he will be seen to have constructed a strawman claim which no one actually holds. But I see no reason to conclude that he has done that. Searle doesn't assert that the Chinese Gymnasium Reply isn't the sort of thing that he thinks the CRA denies, nor does he take that tack with Dennett's thesis and Dennett's is all about computational processes running on a computer (with the added fact being that the computer is conceived as a massively parallel processor, i.e., just what you would need to implement the Chinese Gymnasium).
>

> JPD NEW: First, let me say that I've found previous references to this very odd. I haven't commented until now because it seemed unimportant. But now it has become unavoidable.
>
> JPD NEW: There is no such thing as the "Chinese Gymnasium Reply" as a counter to the Chinese Room Argument. The Chinese Gymnasium is Searle's counter-argument to the "Connectionist Reply". Searle put it forward! So of course he wouldn't then say that it isn't something the Chinese Room Argument denies. That makes no sense whatsoever! And Searle thinks it utterly obvious that the gymnasium as a whole doesn't understand Chinese any better than any of the individuals in the gymnasium, because it doesn't even make sense to say that a building understands.
>

It doesn't matter who provided the cuter name. The point is that it is formally known as the Connectionist Reply as already noted.

"It's intuitively utterly obvious, Searle maintains, that no one and nothing in the revised 'Chinese gym' experiment understands a word of Chinese either individually or collectively. Both individually and collectively, nothing is being done in the Chinese gym except meaningless syntactic manipulations from which intentionality and consequently meaningful thought could not conceivably arise." http://www.psych.utoronto.ca/users/reingold/courses/ai/cache/chineser.htm

This clearly exemplifies the underlying presupposition that guides Searle's entire argument: he is looking for an intentional agent inside the system when the point, as I've already noted, is that it is the system as a whole that understands, though the system that's required is necessarily more robust than the limited "expert" system of the CR. The reason he thinks it's intuitively obvious that there is no understanding of Chinese is that he is looking for the manifestation of intentionality within the system in order to conclude that it is present in the system. This betrays a dualistic idea, i.e., that intentionality cannot be a construct of constituents that aren't themselves intentional, which is just what his approach presumes.

> JPD NEW: Furthermore, your arguments are based on something you dismissed earlier, accusing me of repetitiveness. I obviously didn't repeat it enough. Being "the sort of thing that he thinks the CRA denies" and being Strong AI are not equivalent.
>

See what follows.

> SWM, responding: I repeat: If Searle's argument is only relevant to the limited system exemplified in the CR, then it has no potency because it applies to nothing but such very specific systems and AI researchers do not think that achieving computationally based consciousness is just a matter of building rote responding devices like the CR.
>
> JPD NEW: Again, the relevance of the Chinese Room Argument and the scope of the position of Strong AI are separate questions. It was created to address Strong AI, but he has also elaborated and amplified the argument in response to other positions that do not fit the definition of "Strong AI". And sometimes he explicitly makes this distinction but other times he doesn't. I made this point already, citing relevant quotations. You ignored most of them, said they weren't at issue between us, and said I was being repetitive. And yet, here you ignore those points.
>

Again, his CRA is directed against all purely computational systems. It is possible that he really only means, as you claim, the very limited CR system, but all the evidence I have seen is against that. But no, I am not going to recite that evidence again. I've already presented it. If you have evidence to the contrary in terms of quotes from Searle with links for context, or commentaries you think add something, please produce them. Otherwise I find it utterly unconvincing to insist, as you do, that Searle did not mean to draw a broader conclusion re: computationalism than just for systems that duplicate the CR in terms of functionalities.

> JPD NEW: What seemed peculiar is now seeming absurd but fortunately, I'll soon be done with this.
>
> JPD OLD: First, suppose that it is. Searle would not be the first to offer a straw man and he would not be the last. That in itself is no reason to disregard the textual evidence that he did define the position he called "Strong AI" quite narrowly.
>
> SWM, responding: There is no textual evidence I have seen that suggests he was only arguing about a very narrowly defined device like the CR because, if there were, he could not draw the broader conclusions he does draw from the argument about computers generally.
>
> JPD NEW: First of all, the Chinese Room is not a device. Secondly, the Chinese Room is characterized by its Turing-equivalence, so I wouldn't call it "narrow".
>

I used "device" in a metaphorical sense. That should be clear enough. After all, given the right technology, it could be produced as that.

If all it does is mindless responses, it is, by definition, not mindful. If the CRA says that a mindless response machine doesn't have a mind, it is a silly claim, utterly trivial. But any serious reading of Searle will show that it is about a hell of a lot more than that, i.e., it is intended to address any computational system aimed at producing intentionality. But the CRA only works if intentionality is understood as being irreducible to anything like what the CR contains because, if it is so reducible, all you have to do is reconfigure and supplement the CR with enough processes performing enough additional functions to produce an intentional system. Given this, the CRA cannot have the broader implications Searle claims for it. But if, as you claim, Searle doesn't claim those broader implications, then we're back to the triviality of the point being made.

> JPD OLD: Second, we should consider the historical context. People have offered various responses that seek to draw distinctions in order to evade the Chinese Room Argument, and in so doing, their positions sometimes no longer qualify as Strong AI. Would that be a demonstration that Strong AI was a strawman? Or could it be evidence that in raising the issue, he has forced others to reconsider their positions and to reject the position he's set out to criticize, whether they acknowledge it or not?
>
> SWM, responding: Nor have I said anything different. If you are as familiar with my past remarks on these lists about this (as you initially suggested you were) you would know that I have expressed respect for Searle in general and even noted that he provided some useful insights into what we mean by consciousness through his CRA.
>
> JPD NEW: The point was not to accuse you of disrespecting Searle (like it would matter to me if you had). The point is to answer the argument that if "Strong AI" is defined as narrowly as I have insisted, it amounts to a strawman. The point is that even defining "Strong AI" so narrowly, the arguments still have value.
>

Yes, I should have put that reply further down. It was a carryover from the predecessor reply which got lost and I just stuck it in. But you're right, it was in the wrong place.

> JPD OLD: Third, the literature of the Turing test and on machine functionalism written prior to the publication of "Minds, Brains, and Programs" does show positions that could at least be mistaken for what he describes as "Strong AI". If his work has forced the authors of those works to clarify their positions, to make explicit that they are not advocating Strong AI but had merely been mistaken for such, then he has done a service.
>
> SWM, responding: As I said above, I am in agreement with this so, if you think this is the crux of our disagreements here you have misread me again.
>
> JPD NEW: I didn't say that it was anything like the crux.

Then we are not in agreement again. Well that's much better. I was beginning to worry!

> But it does address the argument that by reading "Strong AI" as narrowly as I have, I am reading Searle as presenting a strawman argument.
>

I think that is a bad reading of him. The argument is clearly intended to apply more broadly than you allow, but it is flawed for a variety of other reasons, none of which have to do with straw or men.

> JPD NEW: And you most certainly have made such arguments. In fact, you made an argument like this just a few lines up, viz. "If all of Searle's responses were just to say 'that's not what I mean by Strong AI' then we would have to conclude that his argument wouldn't be worth very much at all because he will be seen to have constructed a strawman claim which no one actually holds."
>
> JPDeMouy
>

Note that I said "if". As it happens I don't agree with that reading of yours. My point about the "if" was to note the implications of what I take to be your mistaken reading of him.

SWM

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/

3.2.

Re: SWM and Strong AI

Posted by: "J D" wittrsamr@xxxxxxxxxxxxx

Mon Jan 11, 2010 12:00 am (PST)



SWM,

You really are incorrigible.

> Suffice it to say I
> initially responded to all of that but, on noting half way
> through that you finally get down to some brass tacks I
> realized there was no reason to respond to all your initial
> caveats and hemming and hawing.

Your seeming complete inability or unwillingness to read things through before replying, your compulsion to rush through without thinking about the overall discussion - as evidenced by what you describe here - seem to be symptomatic of an overall intellectual laziness. Be that as it may, it only confirms my suspicion that exchanges with you are utterly pointless.

Not that further confirmation was needed.

Any conceivable obligation I might possibly have had to answer you has been completely discharged. And if you presume now to suggest otherwise, then you are as arrogant and manipulative as you are lazy and thoughtless.

JPDeMouy

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/
