[Wittrs] Re: Who beat Kasparov? How about that Asimo?

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Sun, 21 Mar 2010 16:28:36 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, Gordon Swobe <wittrsamr@...> wrote:
>
> Yes, Searle introduced the idea of "as-if" intentionality but he did not 
> introduce the idea of intentionality itself. As-if intentionality seems an 
> easy enough concept to grasp.
>

A friend sent me a few links to some AI YouTube clips, as I indicated in a 
nearby e-mail. This one particularly struck a chord with me because it seems 
remarkably pertinent to this argument over intentionality: who has it, what 
should count as having it, and whether there is a real distinction between 
"intrinsic intentionality" and "as-if intentionality" a la Searle, or whether 
Dennett is right that intentionality is a function of how certain things behave 
and how we therefore choose to relate to them. Here is the link to an 
experimental robot being developed to learn how to "see" and recognize (as in 
distinguish between) the things it sees.

http://www.youtube.com/watch?v=P9ByGQGiVMg&feature=related

Obviously, aside from the warm and fuzzy smiley face on Asimo and its 
child-like behaviors, most of us will doubt that anything like what we 
recognize as intentionality in ourselves is going on in it. And I would be 
inclined to agree. The behaviors are remarkably compelling, even convincing on 
a gut level, but there is no reason to think they are anything other than 
programmed-in, artificially designed and implemented constructions.

But the behavior of the robot is driven by a mechanism that the engineers have 
designed to mimic the way human infants learn to differentiate images. If, in 
fact, this robot is operating along lines much like those we find in human 
infants and has the capacity to develop increasingly sophisticated memories of 
what it has "seen" and to relate them (that is, if the programming is not for 
the final behaviors but for some internal behaviors that can learn and 
eventually develop into the right kind of final behaviors), then where do we 
actually draw the line between such synthetic systems and ourselves? And can we?
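
To make that parenthetical distinction concrete, here is a minimal sketch in 
Python (my illustration, not anything we know about Asimo's actual software; 
every name and threshold below is hypothetical). The learner is never 
programmed with the final sorting behavior; it is programmed only with an 
internal rule for forming and refining category prototypes from whatever it 
"sees":

    import math
    import random

    def distance(a, b):
        """Euclidean distance between two image feature vectors."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    class PrototypeLearner:
        """Learns categories from raw inputs instead of having them
        programmed in. Nothing here names any particular category
        ('ball', 'face', ...); the sorting behavior emerges."""

        def __init__(self, novelty_threshold=1.0, learning_rate=0.1):
            self.prototypes = []        # one running-average vector per category
            self.threshold = novelty_threshold
            self.rate = learning_rate

        def see(self, image_vector):
            """Assign the input to the nearest remembered prototype, or
            found a new category if nothing remembered is close enough."""
            if self.prototypes:
                best = min(range(len(self.prototypes)),
                           key=lambda i: distance(self.prototypes[i],
                                                  image_vector))
                if distance(self.prototypes[best], image_vector) < self.threshold:
                    # Nudge the remembered prototype toward the new sighting.
                    self.prototypes[best] = [
                        p + self.rate * (x - p)
                        for p, x in zip(self.prototypes[best], image_vector)
                    ]
                    return best
            self.prototypes.append(list(image_vector))
            return len(self.prototypes) - 1

    # Usage: feed it noisy sightings of two kinds of thing it was
    # never told about; two categories form on their own.
    learner = PrototypeLearner()
    random.seed(0)
    for _ in range(50):
        center = random.choice([(0.0, 0.0), (5.0, 5.0)])
        sighting = [c + random.gauss(0, 0.2) for c in center]
        learner.see(sighting)
    print(len(learner.prototypes), "categories formed")  # expect 2

Hard-code two prototypes into that class instead and you get the same outward 
sorting with none of the development. That structural difference is what the 
engineers' infant-style approach is supposed to mark.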

Here's the point: Whatever it is that the human brain does, we know it develops 
certain capacities, and chief among them is the capacity to learn. By "learn" I 
don't just mean to memorize information. I mean to develop and retain new 
capacities for responding. In us this takes the form, at least in part, of a 
mental life. The anti-AI people want to say that robots might be developed to 
have (or even to develop) the right behaviors, but that they will still be 
missing the right internals, the mental life.

Is that necessarily true? It might be, of course, but people like Dennett argue 
that you cannot get the full panoply of behavior we find in ourselves without 
an equivalent mental life, and that it is therefore absurd to suppose an 
artificial entity could be built with everything we have, including the full 
range of our behaviors, while lacking the equivalent of our mental lives.

So here is a threshold that radically divides the two sides. Dennett says the 
idea of philosophical zombies (fully functioning synthetic humanoid machines 
that still lack a mind) is incoherent, while people in Searle's camp seem to 
think it perfectly coherent. Why this radical divide in how we see things?

Perhaps it comes down to this question of intentionality and what it is. Look 
at Asimo. The machine seems to have intentionality until we look deeper, and 
then we are inclined to say it lacks it. Surely it lacks the full range of 
anything we might want to recognize as a mental life, and you need a mental 
life to have intentionality. Or do you? (Lower animals certainly have a kind of 
intentionality, though they would seem not to have the kind of mental lives we 
have.)

People like Dennett want to say (and I am on record as agreeing) that 
intentionality isn't a thing that we have. It's just an array of behaviors, 
dispositions and certain subjective experiences (think of the guy in the 
cartoon visualizing a horse as he views the Chinese character for "horse"). And 
if it's that, then it's not a qualitatively distinct thing from what Asimo has. 
In us it's just more of the same kind of thing, doing many more different 
things than what is going on in Asimo's processing unit (its brain).

Thus you can say that intentionality occurs in us as it occurs in other 
creatures but on a kind of sliding scale -- along a continuum. And Asimo is an 
entity that has been designed to stand on one end of this continuum in the 
ongoing effort to develop increasingly sophisticated "minds" in machines (where 
"sophisticated" just means minds more like our own).

So does Asimo have "as-if intentionality"? Well, yes. His (its) "face" has been 
designed and constructed to resemble a smiling child's and thus to touch us. 
Its body is small and round, with little arms and legs, and the way it moves 
these and its face in response to stimulative prompts is endearing. But all of 
this has been built and programmed into Asimo to fudge the distinctions between 
us and this machine. The same kind of entity could have been built as a 
mechanical spider or salamander or as a black box (at least in principle, 
though it may be arguable that you need a primate configuration to achieve the 
right kind of relations with the world to enable the development of human-like 
intelligence). The point, though, is whether this Asimo really can learn to 
recognize and distinguish visual images within its visual range, and then 
whether it will be able to do with these the kinds of things we do: build up 
more and more complicated networks of these images and their relationships, so 
that it eventually comes to recognize its environment in a broader context and 
to recognize the objects before it in the context of that environment. Can it 
build an idea of a world, distinct from itself, using such algorithms as its 
makers have built in or may build in in the future?
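
Again purely as an illustration (hypothetical names, not anything Honda has 
published), the "recognizing objects in the context of an environment" part 
can be sketched as a co-occurrence memory over whatever categories the earlier 
learner formed, so that what has been seen together before biases how an 
ambiguous input is read:

    from collections import defaultdict

    class ContextMemory:
        """Tracks which learned categories tend to appear together, so
        recognition can lean on the environment, not just the image."""

        def __init__(self):
            self.cooccurs = defaultdict(lambda: defaultdict(int))

        def observe_scene(self, categories):
            """Record every pairing of categories seen in one scene."""
            for a in categories:
                for b in categories:
                    if a != b:
                        self.cooccurs[a][b] += 1

        def rank_candidates(self, candidates, scene_so_far):
            """Order ambiguous guesses by how well they fit what
            else is currently in view."""
            def fit(candidate):
                return sum(self.cooccurs[seen][candidate]
                           for seen in scene_so_far)
            return sorted(candidates, key=fit, reverse=True)

    # Usage: after many kitchen scenes, a blob that might be a cup or a
    # boot gets read as a cup when it appears next to a kettle.
    memory = ContextMemory()
    for _ in range(20):
        memory.observe_scene(["kettle", "cup", "sink"])
    for _ in range(5):
        memory.observe_scene(["boot", "umbrella"])
    print(memory.rank_candidates(["boot", "cup"], scene_so_far=["kettle"]))
    # -> ['cup', 'boot']

Whether piling up such mechanisms could ever amount to "an idea of a world" 
is, of course, just what the two camps dispute.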

Can an Asimo ever graduate from "as-if intentionality" to what we recognize in 
ourselves as the real thing? The argument of people like Dennett hinges on the 
notion that we are just way more complicated and more capacious Asimos and 
that, therefore, there is no qualitative difference between us and such 
machines even if there are quantitative ones. And what we understand as a 
qualitative difference is, finally, reducible to quantitative constituents.

This is why people who espouse a Searlean view and people who espouse a 
Dennettian one never seem able to effect a real meeting of the minds. Having 
been on both sides of the divide, I like to think I can see both sides. But I 
have clearly come down on one of them, so I am now taken as a partisan for that 
side and can offer those on the other side no path to reconciling these 
opposing views.

But I want to at least point out that in plunking for an understanding of 
intentionality as just an expression of the stances we take with regard to 
phenomena in our experience, Dennett is not denying that we have mental lives, 
that we are intentional in the sense that we do think ABOUT things and grasp 
meanings. Nor would he, I'm sure, argue that Asimo is already there. His point 
is only that there is good reason to think Asimo is on the continuum with us and
could get there with the right algorithms in place. (If so, the CRA must be 
wrong by the way.)

SWM
