[Wittrs] Re: Searle's CRA and its Implications

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Mon, 15 Mar 2010 00:41:41 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, Gordon Swobe <wittrsamr@...> wrote:

> The CRA illustrates two concepts:
> 1) Software/hardware systems cannot by virtue of running formal programs 
> obtain understanding of the meanings of the symbols they manipulate.
> and
> 2) Given that the human brain/mind does understand the meanings of the 
> symbols it manipulates, it must do something other than or in addition to the 
> running of formal programs.
> -gts

It does not illustrate the first. All it illustrates is that such processes in 
isolation (as stand-alone processes) are not conscious and don't have 
understanding. But it says nothing about combinations of such processes, which 
are still the same sort of thing yet capable of doing much more (more 
extensive and complex information processing).

The problem is that the CR is too stripped down a system, too limited a picture 
of what it means to understand Chinese. No one would argue that rote responding 
(symbol matching), which is all the CR is doing, is what we mean by 
understanding Chinese.

The question is what it is that brains do that produces what we recognize as 
understanding, since there is NO reason to think they are just engaged in rote 
responding (symbol matching).
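The rote responding being described can be sketched as nothing more than a lookup table: input symbols mapped to output symbols with no interpretation on either side. This is a toy illustration only; the rulebook entries and function name below are invented, not anything from Searle's actual thought experiment.

```python
# A toy sketch of pure rote responding (symbol matching): a fixed table
# maps incoming symbols to scripted replies. Nothing here interprets the
# symbols; the table entries are arbitrary illustrations.
rulebook = {
    "你好": "你好吗?",   # the operator need not know what either string means
    "再见": "再见!",
}

def chinese_room(symbol):
    """Return the scripted response by matching alone, with a stock
    fallback ("please say that again") for unmatched input."""
    return rulebook.get(symbol, "请再说一遍")

print(chinese_room("你好"))  # a reply produced with zero understanding
```

The point of the sketch is that the mapping works regardless of what, if anything, the symbols mean, which is exactly why no one would call it understanding.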

As to the second statement, we can agree that the brain must do something other 
than rote responding (symbol matching), but there is nothing in the CR that 
demonstrates that a more robust system, doing many more things in a layered and 
interactive way, couldn't do what brains do and what the limited CR cannot.

As Peter Brawley noted on another list: "You can't build a bicycle and expect 
it to soar above the clouds like an airplane".


