[Wittrs] Re: Ontologically Basic Ambiguity: Mode of Existence

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Mon, 22 Mar 2010 18:53:27 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, Gordon Swobe <wittrsamr@...> wrote:
>
> > His job in the room is to pretend to be a mindless CPU and go through
> > certain rote steps like a CPU would.
>
> No, his job as the system, OR as the man in the system, is to try with all
> his might and with all his resources to understand what those darned Chinese 
> symbols mean.
>

No. Where do you think Searle says anything about the man's trying to understand 
Chinese being a feature of the system? Does a CPU "try" to understand 
Chinese? If it did, we would say it already has intentionality, one of the 
features we take to be part of what we mean by consciousness, in which case 
the question of whether the CPU understands would be moot!


> He can't understand them, not even when he contains the entire system. 
> Neither he nor anything in him understands the symbols, because there exists 
> nothing in the room that does not exist in him and nothing in the room can 
> understand the symbols.
>

His trying to understand Chinese is irrelevant. He is just playing the role of 
a rote mechanism, matching symbol to symbol! I'm sorry but this is a complete 
misreading of the Chinese Room thought experiment.

As to the relevance of there being anything in the room that understands, well 
that is precisely my point vis-à-vis the third premise of the CRA! The fact 
that there is nothing in the room (no constituent or constituent process of the 
system) that understands is NOT a demonstration that those constituents or 
constituent processes (depending on how we characterize this) cannot understand 
if combined in the right way.


> Even if strong AI=true, a strong AI system could not understand Chinese 
> symbols solely by virtue of implementing a program for manipulating them 
> according to syntactic rules.
>
> Think about that, Stuart. Even if.
>
> -gts
>

I have thought about it, Gordon. That's why I hold the position I now hold. 
This really hinges on what it means to understand. If understanding is just 
making connections in different ways, connections that build various pictures 
and connect them through a network of associated links, then this is nothing a 
computer could not accomplish, though it would have to be a computer with 
massive capacity for receiving and retaining information and for performing 
different functions with that information.

Think of the man in that cartoon looking at the Chinese character for horse and 
thinking about a horse. What is he doing? He's picturing certain features to 
himself. He's recalling an image, in a recall process that starts with a link 
between the abstract symbol and some particular retained mental image and then 
branches out in ways the cartoon itself cannot show.

For instance, along with the mental picture of the horse he presumably has some 
other things, e.g., bits of knowledge about what horses are (mammals, four-
legged, long faces, rideable, sweaty when you run them, herbivores, etc.) which 
are associated with that picture, as well as what he has learned in more indirect 
ways about horses (e.g., in terms of history: mankind tamed them, they came 
from a smaller animal called eohippus, they were initially brought to North 
America by the conquistadors, the American Indians learned to ride those that 
had gone wild and became great plains warriors, Alexander the Great rode a 
horse called Bucephalus, etc., etc.).

The links are, in principle, endless, and to the extent we share a bunch of 
them in varying degrees we have common understandings of what the symbol(s) for 
horse and the word "horse" mean.

Ask someone a general question about horses and you'll get different answers, 
but some will be the same, and to the extent that a decent number of the answers 
you get are ones you share with the person you questioned, there will be common 
understanding.

So what then is understanding? It's being able to take any input and place it 
into a network of these associations. There is no reason in principle, that I 
can see, that a computer could not do this given enough capacity and the right 
programming (to enable all these different pictures to be built, retained and 
accessed/used). And, if so, then what we call understanding is nothing more 
than a feature or property of such a complex system.
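Just to make that picture a bit more concrete, here is a toy sketch of the kind 
of associative network I have in mind (my own illustration, in Python; the names 
and the tiny data set are invented for the example and are nothing like the scale 
a real system would need). Each symbol is a node linked to other nodes, and 
"placing" an input just means collecting what it connects to:

# Toy associative network: "understanding" modeled as placing a symbol
# into a web of linked associations. Purely illustrative.

class AssociativeNetwork:
    def __init__(self):
        self.links = {}  # symbol -> set of directly associated symbols

    def associate(self, a, b):
        # Link two symbols in both directions.
        self.links.setdefault(a, set()).add(b)
        self.links.setdefault(b, set()).add(a)

    def place(self, symbol, depth=2):
        # "Understand" a symbol by gathering everything it connects to,
        # following links out to the given depth.
        found = set()
        frontier = {symbol}
        for _ in range(depth):
            frontier = {linked
                        for s in frontier
                        for linked in self.links.get(s, set())}
            frontier -= found | {symbol}
            found |= frontier
        return found

net = AssociativeNetwork()
net.associate("馬", "horse")            # abstract symbol -> retained image/word
net.associate("horse", "mammal")
net.associate("horse", "rideable")
net.associate("horse", "Bucephalus")
net.associate("Bucephalus", "Alexander the Great")

print(net.place("馬"))
# prints {'horse', 'mammal', 'rideable', 'Bucephalus'} (in some order);
# a deeper search (depth=3) would also pull in 'Alexander the Great'.

On this picture, the "common understanding" I mentioned above is just the overlap 
between what two such networks return for the same symbol, and the capacity and 
programming issues are about making the network and its links vastly bigger and 
richer than this toy, not about adding some further ingredient.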

The fact that we cannot look at any of the component processes of such a system 
and say aha, here is the understanding in this particular stand-alone process 
does NOT preclude our finding it in a combination of such processes.

Searle's CRA depends on the notion that the understanding MUST be a feature of 
a particular constituent process and completely disregards the possibility that 
it might be a system-wide function.

SWM
