[Wittrs] Re: Searle's CRA and its Implications

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Mon, 15 Mar 2010 00:07:49 -0000

I lost my Internet connection for a couple of days because of the big 
Nor'easter we just had. Will try to catch up.

--- In Wittrs@xxxxxxxxxxxxxxx, Gordon Swobe <wittrsamr@...> wrote:
<snip>

SWM:
> > The man in the room isn't there as a conscious entity but
> > as an automaton blindly following a list of rules. His
> > consciousness isn't relevant since he is acting in lieu of a
> > cpu. 
>
> His consciousness has *enormous* relevance, Stuart!
>
> The thought experiment shows that the human mind attaches meanings to symbols 
> by some means other than running formal programs. It shows us that the 
> computationalist theory of mind fails to explain the facts.
>
> -gts
>

We don't need the man in the room to know what we mean by consciousness or 
understanding. His role in the room is merely to show that following rote rules 
of symbol matching does not give him an understanding of Chinese. We don't 
strictly need him there for that, of course, but putting a man in the room lets 
Searle point to where the understanding is missing. In terms of the logic of the 
scenario, though, the man is dispensable: his awareness, his understanding, his 
consciousness is beside the point, because what matters is the understanding the 
CR as a whole is supposed to be evidencing.
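(To make concrete what "rote rules of symbol matching" amounts to, here is a 
minimal, purely illustrative sketch in Python. The rule book and the sample 
responses are invented for the example and are not anything Searle specifies; 
the only point is that executing the lookup requires no grasp of what the 
symbols mean.)

    # Hypothetical sketch only: the "rule book" as a bare lookup table.
    # Nothing in this table, or in the code that consults it, knows what
    # any of these symbols mean.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
        "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's nice today."
    }

    def chinese_room(input_symbols: str) -> str:
        # Blind symbol matching: find the incoming shape, hand back the
        # shape listed next to it. Whoever (or whatever) carries out this
        # step needs no understanding of Chinese to do it.
        return RULE_BOOK.get(input_symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(chinese_room("你好吗？"))

Whatever executes that lookup, a man or a cpu, contributes nothing beyond the 
matching itself; whether any system built along these lines could nonetheless 
understand is exactly the question that follows.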

Ultimately, the issue is whether the failure of the CR, man and all, to 
understand Chinese implies the general conclusion that nothing operating in this 
way could have understanding.

And that involves understanding what understanding is. Is it some irreducible 
phenomenon, present in the world in some fashion alongside other things, or is 
it just a function of those other things, or of the same things those other 
things are functions of?

Do brains produce understanding by physical processes and, if they do, what 
kinds of physical processes can we expect to be able to do it?

Are computational processes running on computers excluded from doing what brains 
do simply because the man in the room playing the role of a CPU lacks 
understanding of what he is doing?

That is Searle's point in the CRA, by the way, and I have suggested it is 
mistaken because it depends on an idea of understanding (and consciousness) 
that is probably itself a mistake, namely that consciousness is irreducible to 
anything that isn't already like it, i.e., isn't conscious.

As I have noted, this also puts Searle in a bind. If brains can do it, and they 
do it with physical processes (which he seems to accept), then nothing in the CR 
shows that computational processes are the wrong kind (even if they are, in 
fact, the wrong kind). But if brains don't do it with physical processes, or if, 
in using perfectly physical processes, they are bringing something new into the 
universe that isn't itself reducible, then Searle is a dualist despite his 
claims to the contrary.

As long as Searle insists on the CRA's conclusion that the CR shows that 
computational processes running on computers can't produce consciousness, while 
agreeing that brain processes running in brains can, he is in contradiction, a 
contradiction that is compounded by his disavowals of dualism.

SWM
