[Wittrs] Re: Who beat Kasparov?

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Tue, 16 Mar 2010 23:07:40 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, Gordon Swobe <wittrsamr@...> wrote:

> --- On Tue, 3/16/10, SWM <wittrsamr@...> wrote:
>
> >> Did you know Dennett believes Deep Blue actually beat Kasparov at chess
> >> -- not Deep Blue's designers at IBM?
> >
> >
> > I do, too. The designers designed and built a machine and
> > that machine acted in the world and did what it did. Of
> > course if it won, it beat Kasparov. The designers
> > didn't play in the game or face the particular plays
> > Kasparov made.

>
> I believe the designers at IBM created Deep Blue as a tool for beating 
> Kasparov at chess. They, not Deep Blue, beat Kasparov.
>

Yes, but isn't this mainly an artifact of how we choose to look at it? I 
wouldn't cast this as an expression of "belief" as you have done, but I would 
certainly say the machine beat Kasparov, though not in the way a human opponent 
would have. And, in a sense, I would also say the designers beat him. But then, 
as Dennett notes, presumably none of the designers acting individually could 
have taken Kasparov on over a chess board. Or, at least, the fact that they 
could design a machine that could beat him at the game (however it was done) 
doesn't mean any of them is his match, or that they actually played against him 
like some homunculus in the machine!


> Deep Blue has what Searle would describe as as-if or derived intentionality. 
> The human designers have genuine intentionality. Dennett fails to acknowledge 
> that important distinction, perhaps because his philosophy leads him to take 
> the "intentional stance" toward Deep Blue.
>

Dennett talks about ascribing intentionality and suggests there is no important 
difference between doing that and saying that so-and-so has intentionality. 
Again, it looks to me like two different ways of talking.

The important issue is whether we can build a machine that has autonomy of the 
sort we have, including a sense of itself and of the world that isn't itself 
and, at higher levels, the kinds of things we ascribe to ourselves, like 
understanding, intelligence, intentionality, and so forth. That, ultimately, is 
not just a question about the ways we use words, though we may still use words 
differently in such cases.

> I classify computers as tools. Philosophically, they differ in no important 
> respect from any other kind of tool.
>

And sometimes we treat people or animals as tools. Being a tool is a role lots 
of things can play. But if a person or an animal can be a tool, why can't a 
computer be sentient like a person or an animal, if it has the right system in 
place and in operation?

> When you open a can of soup, who/what opens the can? Does the can-opener open 
> the can? Or do you open it using the can-opener as a tool?
>
> -gts
>
>

Why do you think that is a relevant analogy? And how does it affect the 
possibility of building a machine that has the kinds of awareness we have?

It seems to me that you are, in the end, making this an argument about how we 
treat things rather than about what can be done with artifacts or how our own 
brains happen to work. But that is surely a different question!

SWM
