[Wittrs] Re: Searle's CRA and its Implications

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Tue, 16 Mar 2010 17:00:00 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, Gordon Swobe <wittrsamr@...> wrote:

> --- On Tue, 3/16/10, SWM <wittrsamr@...> wrote:
>
> > the link and post it again. (By the way the issue is not
> > using a super computer, it's to use a parallel processing
> > system that runs MANY different processes performing many
> > different functions in an interactive and simultaneous way.
>
> I understand that as your view, and I think you make the same mistake as the 
> Churchlands in the article I posted.
>

Well, you asked me to tell you how it is underspecked, i.e., what it's missing. That 
was the point of my directing you to that prior post: to supply some of those details.

Of course you think I make the same "mistake" you impute to the Churchlands, 
since I am saying much the same thing. I think you make the same mistake Searle 
does: not recognizing that your presumption about the nature of consciousness is 
implicitly dualist, and so allowing that presumption to shape the conclusion you 
draw from the CR.

Anyway, I've taken the liberty of hunting down that material I put up here from 
Dennett's Consciousness Explained where he deals with Searle's CRA. It explains 
fairly well, on my view, just WHY it is appropriate to view the CR as 
underspecked and why the CRA, with its conclusion generalizing to all other 
configurations of the constituents in the CR, is mistaken:

http://groups.yahoo.com/group/Wittrs/message/4135

Starting on page 436 Dennett writes:

". . . and while philosophers and others have always found flaws in his thought 
experiment [the CR] when it is considered as a logical argument, it is 
undeniable that its 'conclusion' continues to seem 'obvious' to many people. 
Why? Because people don't actually imagine the case in the detail that it 
requires."

[Here follow two pages of example and explication, using a putative 
conversation concerning a joke between a computer and a person. Then picking up 
on page 438 he goes on. -- Added Note to Gordon: Here he addresses the claim 
you make further down, that "you believe he got it right". That is, it seems 
obvious to you. But is that enough? Dennett's point is that it isn't, that we 
cannot stop with what seems obvious to us.]


Dennett continues (addressing Gordon's question as to just what could be 
missing that would prompt me to say the CR is "underspecked"):


"The fact is that any program that could actually hold up its end in the 
conversation depicted would have to be an extraordinarily supple, 
sophisticated, and multilayered system, brimming with 'world knowledge' and 
meta-knowledge and meta-meta-knowledge about its own responses, the likely 
responses of its interlocutor, and much, much more. Searle does not deny that 
programs can have all this structure, of course. He simply discourages us from 
attending to it. But if we are to do a good job of imagining the case, we are 
not only entitled but obliged to imagine that the program Searle is 
hand-simulating has all this structure -- and more, if only we can imagine it. 
But then it is no longer obvious . . . that there is no genuine understanding . 
. . Maybe billions of actions of all those highly structured parts produce 
genuine understanding in the system after all. If your response to this 
hypothesis is that you haven't the faintest idea whether there would be genuine 
understanding in such a complex system, that is already enough to show that 
Searle's thought experiment depends, illicitly, on your imagining too simple a 
case, an irrelevant case, and drawing the 'obvious' conclusion from it.

"Here is how the misdirection occurs. We see clearly enough that if there were 
understanding in such a giant system, it would not be Searle's understanding 
(since he is just a cog in the machinery, oblivious to the context of what he 
is doing). We also see clearly that there is nothing remotely like genuine 
understanding in any hunk of programming small enough to imagine readily -- 
whatever it is, it's just a mindless routine for transforming symbol strings 
into other symbol strings according to some mechanical or syntactical recipe. 
Then comes the suppressed premise: Surely more of the same, no matter how much 
more, would never add up to genuine understanding. But why should anyone think 
this is true? Cartesian dualists would think so, because they think that even 
human brains are unable to accomplish understanding all by themselves . . ."


[Recall my point that Searle's CRA hinges on an implicit case of substance 
dualism.]


Page 439:

"The argument that begins 'this little bit of brain activity doesn't understand 
Chinese, and neither does this bigger bit of which it is a part . . .' is 
headed for the unwanted conclusion that even the activity of the whole brain is 
insufficient to account for understanding Chinese. . . It is hard to imagine 
how 'just more of the same' could add up to understanding, but we have very 
good reason to believe that it does, so in this case we should try harder, not 
give up."

"Searle, laboring in the Chinese Room, does not understand Chinese, but he is 
not alone in the room. There is also the system, the CR, and it is to that self 
that we should attribute any understanding . . ."

"This reply to Searle's example is what he calls the Systems Reply. It has been 
the standard reply of people in AI from the earliest outings . . . but it is 
seldom understood by people outside AI. Why not? Probably because they haven't 
learned how to imagine such a system. They just can't imagine how understanding
could be a property that emerges from lots of distributed quasi-understanding 
in a large system. . . ."


[Recall my point that this is about how consciousness can be conceived, how we 
can imagine it! Note that I have been stressing the point that the inability to 
imagine it in the way Dennett proposes, or the unwillingness to do so, hangs on 
an implicit presumption that consciousness, or, in this case, understanding, 
cannot be reduced to more basic constituents that are not themselves instances
of understanding. I have stressed that Searle's argument hinges on precisely 
this insistence, that because there is no understanding to be found in the 
Chinese Room, no understanding is possible. Dennett notes that Searle basically 
underspecs the CR, just as I have said, which is why the "Bicycle Reply" -- a 
tip of the hat again to Peter Brawley for this name -- is the right one, i.e., 
that just as you can't build a bicycle and expect it to fly, you can't build a 
rote responding device and expect it to be conscious.]

Dennett again:

". . . Searle begs the question. He invites us to imagine that the giant 
program consists of some simple table-look up architecture that directly 
matches Chinese character strings to others, as if such a program could stand 
in, fairly, for any program at all. We have no business imagining such a simple 
program and
assuming that it is the program Searle is simulating, since no such program 
could produce the sorts of results that would pass the Turing test, as 
advertised."

"Complexity does matter. . . ."


> If Searle got his third axiom right that semantics cannot be had from syntax 
> (and I yes I believe he got it right, and I think the CR thought experiment 
> makes it blindingly obvious) then the number of processors becomes irrelevant.
>


It's not the "number of processors", it's the number of things being 
accomplished in the system. It's just an empirical fact that we cannot do this, 
with the requisite simulaneity in real time, without multiple processors 
running in parallel.
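To gesture at what I mean by many different processes performing many different 
functions interactively and at the same time, here is a toy sketch of my own in 
Python (the threads here only simulate simultaneity on one processor, which is 
precisely why real-time performance would call for genuinely parallel hardware):

    import threading
    import time

    # Shared state that several concurrently running "functions" read and write.
    state = {"percepts": [], "associations": [], "responses": []}
    lock = threading.Lock()

    def perceive():
        for i in range(5):
            with lock:
                state["percepts"].append(f"percept-{i}")
            time.sleep(0.01)

    def associate():
        for _ in range(5):
            with lock:
                if state["percepts"]:
                    state["associations"].append(f"assoc({state['percepts'][-1]})")
            time.sleep(0.01)

    def respond():
        for _ in range(5):
            with lock:
                if state["associations"]:
                    state["responses"].append(f"reply-to-{state['associations'][-1]}")
            time.sleep(0.01)

    # Each worker does a different job, and they interact through shared state.
    workers = [threading.Thread(target=f) for f in (perceive, associate, respond)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()

    print(state["responses"])

No single worker "understands" anything; the point is only that many functions 
going on at once, interacting with one another, is a very different beast from 
one rote lookup loop.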

And yes, I know you "believe (Searle) got (the denial of syntax as constituting 
or being sufficient for) semantics right".

That is just what one would think IF one cannot conceive of consciousness 
being made up of constituents that aren't themselves of the same type 
(conscious).

Now, either he got it right or he didn't, and therefore either you are right in 
agreeing with him on this or I am right in disagreeing with him.

I frankly don't know how one demonstrates either position to someone who 
doesn't see it -- except that I once held your position, so there must be some 
way, i.e., something must have convinced me.

Looking back on it, I believe it was a gradual shift in my understanding of 
what consciousness is, a shift prompted by extensive consideration of the CR 
itself.

Initially I was convinced consciousness IS qualitatively different, all the way 
"down" to its most basic level, from the things we know through being 
conscious. But at some point, considering all the features I recognize as being 
part of what I mean by "consciousness", all the features I find in my own 
experience via introspection, I came to conclude that none of them were beyond 
replication via a computational platform.

From having perceptions to having beliefs to having images to having concepts 
to being able to think ABOUT things to "attaching meaning" to symbols to having 
a sense of self to understanding a language or a math problem or an allusion, 
etc., etc., I concluded that all of these things could, at least in principle, 
be products of a complex system working like a computer (i.e., running 
computational processes).

If this is so, I realized, then Searle's CRA had to be wrong and I set about 
trying to figure out how.

Initially, I focused on the actual CRA and looked at its premises, noting 
some definitional problems in some of the key terms, as well as what seemed to 
me an obvious equivocation in the key third premise. But these technical flaws 
were not the real problem, I eventually realized. THAT lay in the underlying 
presumption itself, the one I had initially shared and which, in considering 
all the aspects of what we mean by consciousness and how they could be 
replicated, I eventually jettisoned.

But that realization, that jettisoning, isn't something that can be argued for, 
I have realized in these many discussions, because the problem lies with the 
underlying assumption itself and that is beyond real argument.

It's got to be seen, grasped if you will. If not, then the standard intuition 
we have about this will retain its hold, i.e., the sense that consciousness is 
separate from the rest of the otherwise physical universe.

So yes, I understand that you agree with Searle and, therefore, disagree with 
me. But the mere fact that you hold that position (or that I hold mine) is not 
enough to make the case for either. So there's no need to simply reiterate that 
you agree with Searle as you do above. I know and acknowledge that. But your 
agreement with him is not an argument FOR his position.

But if you want to give reasons for that agreement, beyond merely asserting it, 
I am certainly prepared to consider them, just as I hope you're prepared to 
consider my reasons for disagreeing with Searle.


> If you believe a million cpu's doing syntactic operations on symbols will 
> generate conscious understanding when one cpu does not then it seems to me 
> that you must believe organic brains actually exist as multi-processor 
> computers.
>
> -gts
>

Well, actually that is rather like Dennett's claim, i.e., that brains are 
massively parallel processing systems. I'm inclined to agree but I don't have a 
very detailed opinion on it because I don't claim to be an expert on brains 
(and I'm not convinced brains work the way computers do -- see Jeff Hawkins' On 
Intelligence for an interesting take on that). I do find Dennett's thesis 
preliminarily convincing, however, precisely because his account of 
consciousness matches what I came to realize when I examined Searle's CRA and 
concluded it was based on an inadequate model.

The issue in the case of understanding, say, comes down to what understanding 
actually amounts to. I don't believe Searle gives much of an account of it 
except to say it's what we all "see" in ourselves when we understand anything. 
But WHAT is THAT?

It looks like every instance of understanding can be exhaustively described in 
terms of mental images, associations between images, and other linkages we 
make between various representations (including to the implicit narratives we 
retain in our memories), etc. Think of the man in that cartoon staring at 
the Chinese inscription and thinking about (visualizing) a horse! Understanding 
the Chinese "squiggles" amounts to thinking "horse", and thinking it amounts to 
images (even if no two people ever have exactly the same image, there can be 
enough similarities to enable shared ideas and, thus, understanding).
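As a crude picture of what I mean by "linkages between representations," here 
is a toy sketch of my own in Python (purely illustrative, not anything Searle 
offers): imagine the "squiggle" being understood just insofar as it activates a 
web of other representations.

    # Hypothetical sketch: a symbol is "understood" only via the web of images,
    # memories and concepts it is linked to; in isolation it activates nothing.
    associations = {
        "squiggle-for-horse": ["horse"],
        "horse": ["image:four-legged animal", "memory:pony ride", "concept:mammal"],
        "concept:mammal": ["concept:animal"],
    }

    def activate(symbol, depth=2):
        # Spread activation outward from the symbol, collecting everything
        # it is linked to, directly or indirectly, up to the given depth.
        found = set()
        frontier = [symbol]
        for _ in range(depth):
            frontier = [n for f in frontier
                        for n in associations.get(f, []) if n not in found]
            found.update(frontier)
        return found

    print(activate("squiggle-for-horse"))
    # -> the horse images, memories and concepts the squiggle is tied to

Whether anything like this exhausts understanding is of course the very 
question at issue; the sketch only shows that thinking "horse" can be cashed 
out as relations among representations rather than as some further, 
irreducible ingredient.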

SWM
