[Wittrs] Searle's CRA and its Implications

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Fri, 12 Mar 2010 14:03:05 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, Joseph Polanik <jPolanik@...> wrote:

> SWM wrote:

>  >Joseph Polanik wrote:
>  >For the record, the argument which I have made before and with which I
>  >think Dennett is in accord based on that text, goes rather like this:

>  >1) If you think consciousness cannot be broken down to non-conscious
>  >constituents, then you are a Cartesian Dualist.
>  >2) The only way to think that the CR implies that no similarly
>  >constituted system can be conscious is to think that consciousness
>  >cannot be broken down to non-conscious constituents.
>  >3) Searle thinks that the CR implies that no system with the same kind
>  >of constituents as the CR can be conscious.
>  >4) Therefore Searle is a Cartesian Dualist.
>  >http://groups.yahoo.com/group/Wittrs/message/4550

>  >you may want to bear in mind that my position is not premised on
>  >Dennett's claims since I came to it after reading Searle, but before
>  >reading Dennett.

> okay, so this is *your* argument based on reading Searle. so, when asked
> for your basis for saying this or that, you should be able to cite
> Searle rather than Dennett.

Joe, I have done so many, many times both here and on prior lists. However, in 
this thread we have been talking about Dennett's position vis-à-vis Searle's.

Searle's CRA is well known, of course, though he has presented it over the 
years in multiple and often varying iterations. In generic terms, however, it 
goes like this:

1) Minds (consciousness) have (has) semantics.

2) Computers consist exclusively of syntax.

3) Syntax does not constitute and is not sufficient for semantics.

4) Therefore computers cannot have minds (be conscious).
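For those who like to see the bare logical skeleton, the syllogism above can be sketched formally. This is my own rendering in Lean, with predicate labels (HasMind, HasSemantics, HasSyntaxOnly, Computer) that are my invention, not Searle's wording. Given the three premises, the conclusion does follow; the dispute, as I go on to argue, is over premise #3, not the logic:

```lean
-- A sketch of the CRA's syllogistic form. The predicate names are
-- my own labels; the point is only that the argument is valid IF
-- the premises are granted.
variable (System : Type)
variable (Computer HasMind HasSemantics HasSyntaxOnly : System → Prop)

example
    (p1 : ∀ s, HasMind s → HasSemantics s)          -- minds have semantics
    (p2 : ∀ s, Computer s → HasSyntaxOnly s)        -- computers are syntax only
    (p3 : ∀ s, HasSyntaxOnly s → ¬ HasSemantics s)  -- syntax isn't sufficient for semantics
    (s : System) (hc : Computer s) :
    ¬ HasMind s :=                                  -- so no computer has a mind
  fun hm => p3 s (p2 s hc) (p1 s hm)
```

So the whole weight of the argument rests on whether premise #3 is true, which is where the CR thought experiment comes in.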

The third premise is grounded in his Chinese Room (CR) thought experiment 
which, he says, demonstrates that no matter how you cut it, the CR (a proxy for 
any computer) cannot be said to really understand Chinese (because 
understanding anything, including Chinese, involves being aware of meaning, 
i.e., getting the semantics). Without understanding you don't have 
consciousness, and so forth.

You can google Searle's CRA anytime you like to find various versions of this 
argument, including supports for it and attacks on it (and we have often done 
just that and placed links to the results on this and earlier lists).

My response to the CRA is that, while #1 and #2 are unassailable if we agree to 
accept the meanings of the terms as Searle presents them (which we can for 
argument's sake), he is certainly mistaken about #3, because the CR does not 
demonstrate what he claims it does.

I go on to argue that the only reason one would think it does demonstrate that 
is if one presumes, from the outset, that there is something about what 
consciousness is that excludes what he calls syntax (a notion that is not 
always clear, as we've seen in other discussions!) from being a cause of it.

If one doesn't make that assumption (an assumption made easier by his later 
claim that syntax is all in the mind of an observer and has no natural place in 
the world, i.e., that it's not a "natural kind" and thus entirely without 
causal power), what you have are physical processes going on in the machine 
just as you have physical processes going on in brains. In both cases you have 
physical events; it is only that in brains we associate them with the 
occurrence of consciousness, while in computers we don't. But is that a strong enough reason, 
alone, to think that computer operations lack the causal powers that brain 
operations are presumed to have?

If the processes going on in brains are enough to produce consciousness, why 
should such processes in computers not be (at least, in principle)?

Thus, if Searle says brains cause consciousness (as he does), then there is no 
reason computers cannot do so, as well, UNLESS consciousness is some kind of 
special phenomenon, something different from all other outcomes of physical 
processes in the universe such that only brains (or something capable of the 
unknown thing brains do) can do it. And Searle does say this when he argues 
that brains have some unknown (but possibly discoverable) causal capacity that 
computers lack.

But on what basis does he think computers lack it? Well, it's back to the CR 
where, if you look inside it, all you see going on are obviously mindless rote 
physical processes. But if you look in brains you don't see consciousness 
either, just rote physical processes, too (which we ASSUME are in some way 
associated with minds). That is, we assume that brain processes are different 
and Searle makes his case based on THAT assumption!

But my point is that the only reason to think that consciousness is not to be 
found in anything like the CR is to presume that we have to "see" it somewhere 
inside the machine for it to actually be there (even if we admit that we don't 
see it inside brains either)! But the only reason to think THAT is to think 
that consciousness cannot be constituted by these kinds of things (rote 
physical processes), i.e., that it cannot be constituted by things that are not 
like it (because they are not, themselves, conscious).

And this presumption is the suppressed premise that Searle draws on to arrive 
at a general conclusion from the specific case of the CR, which is all that the 
CRA amounts to.

His conclusion is that if the constituents of the CR cannot produce 
consciousness in the CR, they cannot do so anywhere else either (in any other 
configuration, i.e., in any other "R"). He gets there by presuming that there 
is something about consciousness (i.e., it is not reducible to what isn't like 
itself in type) that prevents it from being constituted by the constituents of 
the CR, which are merely physical events performing certain syntactical operations.

And it is THIS assumption that implies the notion that consciousness is 
irreducible, i.e., that it is ontologically basic.

But then Searle runs afoul of this whole logical apparatus he has built up when 
he asserts that brains cause consciousness. At that point either he is in 
self-contradiction, because he has asserted that computers can't do it since 
they are only mindless physical processes (while brain processes are no less 
physical), or he is claiming that whatever consciousness brains cause is 
itself ontologically basic at the instant of its genesis and not reducible.

Either way he is stuck in a dualist mode of thinking because he is saying 
either that:

1) Brains cause something ontologically separate (dualism) from themselves; or

2) Computers can't cause consciousness because consciousness is ontologically 
separate (dualism) from the constituent elements of computers.

But since Searle insists that he is not a dualist, he is in contradiction once 
he claims that the CRA is valid, since the only way to reach the CRA's 
conclusion is to accept either #1 or #2 above.

> in (2) and (3), you're a little vague on which meaning of implies you
> are invoking. for each of those statements, are you using 'implies' to
> mean 'logically entails a conclusion' or to mean 'logically requires a
> presumption' or something else?

"logically requires"

Searle's argument is a logical one. Note the syllogistic form he gives it. My 
response above directly addresses that syllogistic claim. (By the way, there is 
no difference between "logically entails a conclusion" and "logically requires 
a presumption". The latter merely notes that there is a suppressed premise in 
the argument that enables the conclusion to be reached. A suppressed premise, 
of course, is something that is unstated but which is included in the series of 
steps needed to reach the conclusion. What is suppressed in the CRA is #2 above 
since Searle never really addresses the issue raised by #1 head-on, at least in 
what I've read of his arguments.)

> hopefully, you can clarify those points while I comment on (1).
> (1) is dubious.
> it seems to me that anyone who is not a reductive physicalist believes
> that consciousness cannot be broken down to non-conscious constituents;

THAT is merely to assert what seems intuitively clear to many of us. But being 
intuitive is no guarantee of being right. Anyway, it doesn't matter what anyone 
believes or what the majority believe. What matters is whether something is right or wrong.

So what is "Cartesian dualism"? Earlier you wanted to say it is to embrace the 
whole (or bulk) of Descartes' philosophy. But that isn't how the term is used 
in philosophical circles. Rather it is used to denote the case where someone 
shares the view of minds first articulated in western philosophy by Descartes, 
to wit, that it is a separate and self-subsisting something (he used the term 
"substance" as you often note) that stands apart, as observer, from the 
physical phenomena of the world. From this picture Descartes developed a 
complex story about how minds interact with the rest of the universe, given 
their separateness. But many different stories are possible. The important 
question is whether minds are really what he presumed, a separate something 
coexisting with the physical phenomena of the rest of the universe.

My first premise merely states what it means to be a Cartesian dualist in this 
philosophical sense, i.e., it is NOT to subscribe to Descartes' entire doctrine. 
It's to hold the same picture of mind and the world that he did.

> and, that would include emergent physicalists, property dualists,
> panpsychists among others; so, unless you can show that all of these
> positions are logically equivalent to Cartesian (interactive substance)
> dualism, premise (1) is false.
> Joe

THAT is completely wrong. Premise 1 is about what it means to be a Cartesian 
dualist and nothing more than that. Many distinct doctrines are possible but 
there are only three basic possibilities:

1) The universe consists entirely of whatever it is that underlies physicality 
and nothing more.

2) The universe consists entirely of whatever it is that underlies what we call 
mind and nothing more.

3) The universe consists of at least two basics, that which underlies mind and 
that which underlies the physical, which somehow co-exist.

The doctrines you mention, and others, are all variations on these themes (many 
of the variations in dualism, at least, being driven by arguments over how 
co-existence is possible, i.e., the mind-body problem).

However, as I have frequently tried to point out, none of the above three 
claims can be successfully argued for in any definitive way because, being of a 
metaphysical kind, they are all equally compatible with the way things are as 
we find them. That, indeed, is their point: to account for the world as it is.

Wittgenstein aimed to direct us away from such inquiries because, as he noted, 
they hinge on linguistic applications that are extracted from their real world 
contexts, e.g., look at how taking a term like "substance" out of its everyday 
contexts leads us to think about the universe in a certain way, a way that's no 
longer compatible with modern physics theory and can thus mislead or, at least, 
prompt us to step away from physics into a realm of discourse that can go nowhere.

While we can recognize that there are underlying metaphysical pictures 
(sometimes we have one or the other or both simultaneously, say), it pays to 
also recognize that that is all they are, so why argue about them?

And that, of course, is the point I have been making about Dennett's model, 
i.e., that it is not an argument FOR one of these pictures over the others but, 
rather, it's an argument that consciousness CAN be explained in terms of one of 
them (the one most consistent with modern scientific perspectives), in which 
case we don't need to invoke a different one (which may be less consistent or 
require unscientific add-ons) to explain the presence of minds in the universe.

None of this prevents anyone (including you) from being a dualist or an 
idealist, say. It merely enables those who aren't to proceed by relying on a 
picture that is neither of these two.


Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/
