I just did a long response to this (wasting much of the morning) and lost it before finishing when my finger hit the wrong button. On the supposition that it will never appear here, I'll do it again albeit with a bit less detail.
--- In Wittrs@yahoogroups.com, "J" <wittrsamr@...> wrote:
>
<snip>
> Suffice it to say, Searle and Dennett are on one side, Hacker and Bennett are on the other, and I am a lot closer to Hacker. And the disputes concern fundamental issues about the nature of philosophy as well as (what Hacker and Bennett take to be) conceptual confusions and misunderstandings common to many in this "field", including Dennett and Searle. I cannot imagine being persuaded to discuss these topics with you further (though obviously I've done equally
> foolish things already)
I don't know what your problem is. It seems to have become remarkably personal between us, but I will leave that alone except to note it in passing here.
> for a variety of reasons, though it should suffice to say that it would complicate matters to no real benefit. And saying this much should suffice to address suggestions that I am somehow indignant that you would dare criticize Searle or somehow just uncomfortable with Dennett, suggestions that are simply irrelevant anyway.
>
I don't recall saying that you, personally, are indignant at criticism of Searle (though I think some are indignant at any claim that suggests we might be nothing but organic machines and that you may well fit into that category -- however, I reserve judgment pending further statements of your own).
> Now, I am going to take a different approach here.
>
> In Searle's paper, "Minds, Brains, and Programs", in which the Chinese Room Argument makes its first appearance, we find the following passage, reminiscent of a press conference by former US Secretary of Defense, Donald Rumsfeld, in which Searle poses and answers a series of questions. My own remarks will be in parentheses.
>
>
> "'Could a machine think?' The answer is, obviously, yes. We are precisely such machines."
>
> (Here, I agree. For what that's worth. So, to read him as denying that a machine can think, be conscious, and so forth, is simply to misread him.)
>
And where do you think I have ever read him in THAT way? If you are as familiar with my past remarks on the subject as you have suggested, you would know that I have often noted that Searle speaks of brains as organic machines and also that it may be possible to build machines some day that can do what brains do.
What is the point of this "different approach" if it continues to fail to meet the one thing required, to back up what you have claimed?
> "'Yes, but could an artifact, a man-made machine, think?'
>
> "Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours, again the answer to the question seems to be obviously, yes. If you can exactly duplicate the causes, you could duplicate the effects. And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use. It is, as I said, an empirical question."
>
> (Note how much he grants here. My own answer would be somewhat different, but that needn't concern us here. The fact is that he does grant the possibility that an artifact, a man-made machine, can think, be conscious, and so forth. He doesn't even limit this possibility to an artificial brain that operated on the same chemical basis. So, to read him as denying the possibility that a man-made machine can think, be conscious, and so forth, is again, a misreading.)
>
Again, where do you think I have ever offered THAT reading of him? Once again, any such suggestion is a misreading of ANYTHING I've ever said on this subject and, if imputed to me as part of what you are arguing against, a classic strawman.
> "'OK, but could a digital computer think?'
>
> "If by 'digital computer' we mean anything at all that has a level of description where it can correctly be described as the instantiation of a computer program, then again the answer is, of course, yes, since we are the instantiations of any number of computer programs, and we can think."
>
> (I think this is a muddle, but again, that needn't concern us here. He doesn't deny that something that can be correctly described as the instantiation of a computer program can also be correctly described as thinking.)
>
I agree that there is confusion here. Elsewhere he has suggested, as I recall, that even wallpaper can be described as a digital computer. If anything can, then the description loses its potency. Of course, we are talking about certain very specific kinds of items when we use the term "digital computer" in ordinary language, and we don't mean wallpaper or even thermostats (unless they are small-scale computers, as some today are).
> "'But could something think, understand, and so on solely in virtue of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a sufficient condition of understanding?
'"
>
> (Note well: "solely in virtue" and "sufficient condition".)
>
Noted. Where do you think I am saying otherwise? (Below we will have a chance to address this in more depth.)
> "This I think is the right question to ask, though it is usually confused with one or more of the earlier questions, and the answer to it is no.
>
> "'Why not?'
>
> "Because the formal symbol manipulations by themselves don't have any intentionality; they are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything. In the linguistic jargon, they have only a syntax but no semantics. Such intentionality as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output."
>
> (I consider this also to be a muddle. But that's not the point here. The point is that this answer is addressed to the preceding two questions and not to the three questions prior to them.)
>
And my comments on the CRA have to do with precisely that last question. You're imputing to me, vis-à-vis Searle's argument, things I have never said or held in order to say that you are demonstrating my alleged misunderstanding. But if these aren't things I have said or held, how do they count as evidence that I have got Searle wrong? (Do you have some evidence, some statements I have actually made, which you think show me saying such things?)
> "The aim of the Chinese room example was to try to show this by showing that as soon as we put something into the system that really does have intentionality (a man), and we program him with the formal program, you can see that the formal program carries no additional intentionality. It adds nothing, for example, to a man's ability to understand Chinese."
>
> Now, continuing in a Rumsfeldian vein, I offer some questions and answers of my own:
>
> Is every position that Searle ever criticized therefore an example of the position he calls "Strong AI"?
>
> No. He even explicitly points this out in the original essay. Regarding the "Brain Simulator Reply", he wrote:
>
> "Before countering this reply I want to digress to note that it is an odd reply for any partisan of artificial intelligence (or functionalism, etc.) to make: I thought the whole idea of strong AI is that we don't need to know how the brain works to know how the mind works. The basic hypothesis, or so I had supposed, was that there is a level of mental operations consisting of computational processes over formal elements that constitute the essence of the mental and can be realized in all sorts of different brain processes, in the same way that any computer program can be realized in different computer hardwares: On the assumptions of strong AI, the mind is to the brain as the program is to the hardware, and thus we can understand the mind without doing neurophysiology. If we had to know how the brain worked to do AI, we wouldn't bother with AI."
>
> He then goes on to construct a scenario resembling the Chinese room in some respects, but whatever the merits of this argument, it is no longer the CRA and it is no longer addressed to Strong AI as he defines it.
>
Note that my response to the CRA is not premised (and never has been premised) on this particular reply and I will note, in passing, that I agree with the view that that reply does not answer his argument.
> Does he sometimes criticize positions that do not fit his definition of Strong AI without taking the time to explicitly point that out?
>
> Yes, he does. Again, in the original essay, regarding the "Robot Reply", he doesn't explicitly spell out that this reply is no longer what he has defined as "Strong AI". He does point out the difference though and if you've followed closely, you'll see that the position does involve a departure from the position he's called "Strong AI".
>
Now you proceed at great length to make this case over and over again below, to wit, that not every argument against Searle's CRA really speaks for or supports what Searle calls "Strong AI". I addressed these with more specificity in my earlier reply, but to save time I will now stipulate to this and just note that MY argument against the CRA is not based on such a non-AI-supporting argument but on a variant of the Chinese Gymnasium Reply (sometimes called the Connectionist Reply, though it is not always presented in quite the same way, so even this has some variations to it).
My argument boils down to the one exemplified by Peter Brawley's analogy on the Analytic list, that you can't build a bicycle and expect it to fly. As such we can call it the Bicycle Reply for convenience. It is grounded in the claim that Searle has under-specced the CR. That is, real AI researchers do not think or claim that a rote responding device like the CR is conscious. What they presume is that more is going on in consciousness than merely transforming symbols mechanically using look-up tables (or their equivalent), as happens in the CR. Thus their efforts are aimed at producing a computationally based system that has everything needed.
In a nutshell, the CR, as specced by Searle, doesn't have enough going on in it to qualify as intentionally intelligent (the proxy for consciousness in this case).
The thesis of real-world AI researchers is that they can use the same sort of operations as exemplified in the CR (Turing equivalent) to perform these other functions in an integrated way, as part of a larger system than the CR, and that THIS would be conscious. If "Strong AI" doesn't represent this claim, then it has nothing to do with the question of whether AI can achieve consciousness.
Obviously the AI project, understood in this way, means capacity matters, which could involve more processors as well as faster processing, more memory, etc., all intended to enable the accomplishment of more tasks by the processes in the system. But note that the processors and the processing would be the same as you find in a CR-type apparatus. Thus the "solely in virtue of" criterion is met (unless you want to define THAT concept so narrowly as to again reduce this to being just about a device with no more functionality than the CR).
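(To make vivid what I mean by a rote responding device, here is a toy sketch in Python of a look-up table responder of the kind the CR exemplifies. This is my own invention, not anything from Searle or the AI literature, and the rule book entries and names are made-up examples, purely for illustration.)

# A minimal sketch of a "rote responder": it maps input symbol strings to
# output symbol strings via a look-up table, with no integration, memory,
# or learning. The entries are invented examples, not real CR rules.
RULE_BOOK = {
    "ni hao": "ni hao",          # a greeting in, a greeting out
    "ni chi le ma": "chi le",    # "have you eaten?" -> "I have"
}

def chinese_room(input_symbols):
    # Nothing here "understands" the symbols; it only matches and emits them.
    # Unmatched input gets a canned default ("I don't understand").
    return RULE_BOOK.get(input_symbols, "ting bu dong")

print(chinese_room("ni hao"))        # prints: ni hao
print(chinese_room("ni chi le ma"))  # prints: chi le

The point of the sketch is only that nothing in such a device does more than match an input string to an output string; the Connectionist/Gymnasium proposal envisions many such processes integrated into a much larger system.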
Now I will snip away at least some of your repetitive stuff below (which I had replied to at some length in my earlier, uncompleted effort) for convenience.
<snip>
> Do philosophers whose positions do not qualify as "Strong AI" as Searle defines it still criticize the Chinese Room Argument?
>
> Yes. The examples above demonstrate this. And undoubtedly, there are other examples of positions that depart from "Strong AI" as Searle defines it whose advocates would still take issue with the Chinese Room Argument.
This was never in dispute between us so I am at a loss to see why you spend so much time on the issue.
> For example, I take issue with the Chinese Room Argument and I don't advocate a position even remotely resembling "Strong AI"! But leaving that aside, I am sure there are many people who think it's just a bad argument. That doesn't prove that their positions count as "Strong AI" nor does it mean that they hold positions to which the Chinese Room Argument is even relevant!
>
On this very list, Neil is an opponent of "Strong AI" but thinks Searle's CRA fails. I, myself, am not a supporter of "Strong AI" but only one who believes that it is possible, based on what I take to be a viable (because reasonable) conception of consciousness provided by people like Dennett.
Note that Searle's CRA aims to prove that the thesis that consciousness can be achieved via computational processes running on a computer is impossible, not that it is unlikely, and my dispute is with THAT claim. It is NOT an effort to prove that, contra the CRA, "strong AI" is true. (Go ahead and check my historical postings if you don't want to take my word for it here.)
> Another example, from the original essay, would be what he calls the "Combination Reply". He acknowledges that the case described would be persuasive unless we looked "under the hood" (and again, I am not addressing the merit of this argument), but he says:
>
> "I really don't see that this is any help to the claims of strong AI, and here's why: According to strong AI, instatitiating a formal program with the right input and output is a sufficient condition of, indeed is constitutive of, intentionality.
"
>
> Again, the fact that a philosopher presents a counter-argument to the Chinese Room Argument and the fact that Searle rejects that counter-argument do not demonstrate that the position they're debating qualifies as "Strong AI".
>
The text you give us above does not reveal that he thinks it does not support "Strong AI". It merely says it fails to undermine the CRA.
Note that the Connectionist Reply (as I have given it) is made up of the same internals as the CR and that is what this must finally be about for it to be about anything of significance at all. It's just that the system proposed by the Connectionist Reply has more going on in it and what is going on is doing so as part of an integrated system.
> Isn't "Strong AI" then a straw man, if it's defined so narrowly that most people who argue with Searle don't count as "Strong AI"?
>
A very important point. If all of Searle's responses were just to say "that's not what I mean by Strong AI", then we would have to conclude that his argument wouldn't be worth very much at all, because he would be seen to have constructed a strawman claim which no one actually holds. But I see no reason to conclude that he has done that. Searle doesn't assert that the Chinese Gymnasium Reply isn't the sort of thing that he thinks the CRA denies, nor does he take that tack with Dennett's thesis, and Dennett's is all about computational processes running on a computer (with the added fact being that the computer is conceived as a massively parallel processor, i.e., just what you would need to implement the Chinese Gymnasium).
I repeat: If Searle's argument is only relevant to the limited system exemplified in the CR, then it has no potency because it applies to nothing but such very specific systems and AI researchers do not think that achieving computationally based consciousness is just a matter of building rote responding devices like the CR.
> First, suppose that it is. Searle would not be the first to offer a straw man and he would not be the last. That in itself is no reason to disregard the textual evidence that he did define the position he called "Strong AI" quite narrowly.
>
There is no textual evidence I have seen that suggests he was only arguing about a very narrowly defined device like the CR because, if there were, he could not draw the broader conclusions he does draw from the argument about computers generally.
> Second, we should consider the historical context. People have offered various responses that seek, by drawing distinctions, to evade the Chinese Room Argument, and in so doing, their positions sometimes no longer qualify as Strong AI. Would that be a demonstration that Strong AI was a strawman? Or could it be evidence that in raising the issue, he has forced others to reconsider their positions and to reject the position he's set out to criticize, whether they acknowledge it or not?
>
Nor have I said anything different. If you are as familiar with my past remarks on these lists about this (as you initially suggested you were) you would know that I have expressed respect for Searle in general and even noted that he provided some useful insights into what we mean by consciousness through his CRA.
> Third, the literature of the Turing test and on machine functionalism written prior to the publication of "Minds, Brains, and Programs" does show positions that could at least be mistaken for what he describes as "Strong AI". If his work has forced the authors of those works to clarify their positions, to make explicit that they are not advocating Strong AI but had merely been mistaken for such, then he has done a service.
>
As I said above, I am in agreement with this, so if you think this is the crux of our disagreements here, you have misread me again.
> Now, Mr. Mirsky. I have patiently and carefully elaborated my reading of Searle's usage of "Strong AI" and its relationship to the Chinese Room Argument, I have considered various counter-arguments, and I have shown complete civility in doing so. I consider any
> obligation to you fully discharged.
Your obligation was and is to back up your charges that I do not understand Searle's CRA based on remarks of mine you had allegedly seen via Google lists from the past few years. To date you haven't done that, leading me to believe you overstated your case, for whatever reason. Be that as it may, it's clear you and I have little personal rapport, though you have indeed been civil in this last post. Perhaps it will last, perhaps it won't (as it frequently seems not to). At any rate, the only real obligation you ever had to me has NOT been discharged, but I won't expect you to address it anymore if you are no longer making that claim.
> If you do not, I can only wonder what would satisfy you, short of my dishonestly saying that I'm somehow persuaded that you're right
> and I'm mistaken.
What I have seen here is that you have a complete misunderstanding of my position. I don't know where you have gotten this understanding from because you have resisted my requests that you post my statements, or links to my statements, that show that I hold such positions as you claim I hold. In fact, much of what you have said in response to me above assumes the exact opposite of things I have actually said on this list and elsewhere over the years. Since you seem reasonably intelligent I must ascribe your misreadings to either something intentional on your part or, perhaps, to your having been misled by something others may have said. (It's also possible that I haven't been clear enough, but if so, all you have to do is present the statements that are allegedly unclear so we can examine them.)
> The alternative is for me to engage in endless exchanges with you, addressing each and every point you might raise. I don't think my obligation extends that far.
>
> JPDeMouy
>
>
As I have repeatedly said, your only obligation is to back up the things you say, especially when they are directed in a pejorative sense at someone (in this case, me). Arguing against things I have never maintained and saying that therefore you have now discharged THAT obligation simply doesn't qualify.
But the truth is, I don't enjoy corresponding with you any more than you seem to enjoy doing so with me. However, note that I have never attacked you personally, alleged something about your views and then declined to back it up, or pretended to back up my claims about what you said by presenting a strawman case for your positions and then proceeding to knock them down; but you have done all of that with regard to me.
Be that as it may, I am prepared to continue to look in here periodically on the off chance you still may decide to forgo the smoke-and-mirrors game and actually present the statements I have allegedly made which, you have told us, represent my flawed view of Searle's CRA.
SWM