[Wittrs] Semantics, Meaning, Understanding and Consciousness

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Sun, 28 Mar 2010 13:50:16 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, Gordon Swobe <wittrsamr@...> wrote:

> > The issue is whether a computer can be conscious. The man
> > is already. But his consciousness is irrelevant
>

> The man had consciousness when he entered the room, so you and Dennett 
> misunderstand the experiment when you suggest that it tests whether 
> implementing a formal program will cause consciousness. We would never know 
> from the experiment if programs cause consciousness.
>

Even Searle deploys the CRA in response to questions about consciousness 
despite his very slippery disavowal of same on that site you linked us to 
(which, on re-reading, I see was Hauser's own). Searle does use the term 
"semantics" as a proxy for mind as in "brains have semantics" or "minds have 
semantics". But insofar as he is using "semantics" in that way, it can only 
mean understanding as in "understanding Chinese" in the CR example!

As to the claim that "we would never know from the experiment if programs cause 
consciousness", if that is Searle's new position, then the whole CRA is 
rendered pointless. If "semantics" isn't about "understanding", and 
"understanding" isn't about what we recognize in ourselves as that phenomenon, 
and if that phenomenon isn't associated with consciousness (with what it means 
to be conscious), then what are we supposed to take the CR and the CRA to 
demonstrate? Why does the CRA end with a conclusion that programs running on 
computers can never cause minds?

Searle has become too cute by half in his efforts to shimmy out of the 
untenable box he has got himself into with his insistence on defending the CRA.


> The experiment plainly illustrates Searle's third premise that syntax does 
> not give semantics, and nothing more.
>
> -gts


It doesn't do that. Moreover, if "semantics" has no implications for minds, then 
what's the point of using the CRA against computationalism, i.e., to claim that 
computational processes running on computers can never cause minds?

Note that the third premise depends on a particular conception of consciousness, 
a particular way of thinking about it. If you share that conception, then the 
third premise certainly looks "obvious" to you. It once looked obvious to me, 
too.

The point of Dennett's critique, however, is to show why it's not obvious at 
all, why it only seems obvious. (My critique was aimed at something a little 
different, i.e., to show the logical flaws in the CRA and that it depended on a 
particular conception of mind.)

But to see any of this you have to be willing to step away for a moment from 
your idea of consciousness and consider at least one other possibility: that 
understanding may be a system property rather than a process property in this 
context, i.e., that there are no more atomic properties than there are atomic 
facts or atomic propositions.

If understanding as a function of a system, rather than as a property somehow 
attached to certain physical events, can account for the features we recognize 
in ourselves as consciousness, then nothing is missing, and the failure of the 
CR to be conscious can be seen to be an outcome of the kind of system the CR 
implements, not of the implementing processes themselves.

Searle's argument, on the other hand, depends on thinking the problem lies with 
the implementing processes, not the system.

As to the consciousness of the man in the room, THAT is irrelevant to the 
capacity of the CR to do what it does. The man in the room is implementing 
steps that have NOTHING to do with understanding the information being fed in 
or out of the CR so his consciousness, his understanding, is irrelevant to the 
case.

Yes, we can contrast his understanding of his instructions and so forth with his 
lack of understanding of the inputted and outputted symbols. But that is not 
the crux of the argument.

Searle's CRA is about whether such a set of processes as found in the CR 
(receiving symbol X, matching it to symbol Y on the look-up table, and 
outputting symbol Y) represents/has/reveals/consists of what we call 
"understanding". Manifestly it does not and from that Searle says, aha, 
therefore nothing consisting of such processes can manifest understanding! 
Hence his general conclusion from the specific case of the CR.
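
To put the rote character of that process in concrete terms, here is a minimal 
sketch in Python; the symbols and the look-up table are my own inventions, 
purely for illustration, not anything Searle himself specifies:

# A minimal sketch of the rote process described above: the "man"
# matches an incoming symbol to an outgoing symbol on a look-up table
# and passes it back out. Nothing else happens.

LOOKUP_TABLE = {
    "X": "Y",   # hypothetical rule: on receiving X, hand back Y
    "A": "B",   # another hypothetical rule
}

def chinese_room(input_symbol):
    # No associations, no context, no imagery: just a table match.
    return LOOKUP_TABLE.get(input_symbol, "")

print(chinese_room("X"))   # prints "Y", with nothing we'd call understanding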

But if understanding is a function of a larger system, a more complex set of 
processes that includes matching symbol X to a whole range of other symbols 
representing DIFFERENT kinds of things, all of which are internally associated 
(in the way we associate things that WE think about) with other representations 
and networks of representations, IF the system consists of many different 
processors handling various representations of the world and the self and the 
range of relations between these, and if the inputted symbol is not simply 
matched to a list of other symbols (as in the CR) but gets connected to the 
appropriate symbol via an associational network such as I have just described, 
then we have

1) a more robust system than the CR, though one made of the same kinds of 
processes, which

2) is able to pick out a responding symbol by the same kind of process that 
seems to happen in us (as the rough sketch below tries to illustrate).
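
Here, roughly, is what I have in mind, again as a toy Python sketch with 
invented contents (and using, as a preview, the words from the road-sign story 
I tell just below). It is only a sketch of the idea of an associational system, 
not a model of how brains or any actual program works:

# A toy sketch of the associational alternative: each symbol points to
# a set of associated representations, and the reading that best fits
# both the symbols and the current context wins. The network contents
# and the scoring rule are invented purely for illustration.

from collections import defaultdict

ASSOCIATIONS = {
    "burn":   {"bonfire", "turn on"},
    "lights": {"bonfire", "light bulbs", "headlights"},
    "wipers": {"windshield", "rain", "headlights"},
}

def interpret(symbols, context):
    # Score each candidate representation by how many input symbols
    # point to it and by whether the current context reinforces it.
    scores = defaultdict(int)
    for s in symbols:
        for rep in ASSOCIATIONS.get(s, set()):
            scores[rep] += 1
            if rep in context:
                scores[rep] += 2
    return max(scores, key=scores.get) if scores else None

# In an overcast, driving context the "headlights" reading wins out
# over the "bonfire" reading of the very same symbols.
print(interpret(["burn", "lights", "wipers"],
                context={"driving", "rain", "headlights", "windshield"}))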

I'm reminded of an incident that happened to me while driving north through the 
Carolinas. I come from the NY area so I am used to certain ways of speaking. At
a certain point I passed a sign that said "burn lights with wipers" and I drew 
a temporary blank. It didn't compute! Then I had an image in my mind of a big 
bonfire on which hands were throwing light bulbs and windshield wipers. But that
made no sense and that's when I realized what it meant: drivers should turn on 
their vehicles' headlights when running their windshield wipers in inclement
weather. The context of it being a road sign and me driving and the weather 
looking overcast combined to trigger the right set of associations in my mind 
and with them came understanding.

With my recognition of the meaning (the semantics) my mental imagery changed 
and I imagined myself leaning across the steering wheel and pulling the knob 
for my headlights with my windshield wipers going. The new image made sense.

Suddenly I understood the meaning of what, to me, was an unusual way of 
phrasing the message on the sign.

In NY the sign would have said something like "turn on headlights when wipers 
are in use" or some such. That message would have prompted the right mental 
imagery in me much more quickly and, thus, understanding. But the South 
Carolinian sign threw me for a loop and conjured the wrong images which, 
initially, I could make no sense of.

Were I a CR, of course, the symbols on the sign might have just prompted a 
mindless action if the conditions were right, and nothing if they weren't. But, 
being a person, I got a series of mental pictures in my head, each of which had 
its own meaning because it carried its own additional connections, i.e., I knew 
what a bonfire looked like and what it did, I knew what it meant to speak of 
"lights" and "wipers" and "burn", etc. But some of those meanings were plainly 
the wrong ones for the context, i.e., they had the wrong associations. My 
initial reaction, imagining a picture built on those wrong associations, didn't 
match any context I recognized; my second reaction did match the context, and 
it was at that point that I understood the sign's message.

So understanding may be seen as a series of associative connections we make 
vis-à-vis the inputs we receive, drawing on a stored repertoire of mental 
pictures and connections. Of course, this is far more extensive and complex 
than simply matching inputted symbols to appropriate response symbols according 
to certain rote rules of operation.

Searle misses this entirely in favor of what looks like little more than a 
magical view of consciousness qua understanding, i.e., something happens in 
those of us with brains that is inexplicably dependent on having a brain and on 
that brain doing whatever it does. But on Searle's view, though we don't know 
what it is brains do or how they do it, we know that computers can't do it!

But if all that it involves is maintaining and running such a complex 
associational system of representations and relations, then why shouldn't 
computers have the same capacity to do it (in principle) as brains?

SWM
