[Wittrs] Re: Dualism Cooties: Ontologically Basic Ambiguity

  • From: "iro3isdx" <xznwrjnk-evca@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Sat, 20 Mar 2010 23:46:15 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, "gabuddabout" <wittrsamr@...> wrote:

> Right, but in the meantime, we can clarify what you're after. Derived
> intentionality such that we don't care whether what we call a
> cognitive system is really a cognitive system?

I'm not sure where you are getting that.

Although I disagree with Searle's CR argument, it does not follow that
I am a disciple of Dennett.

> Suppose Searle is just fine with weak AI and its possibility, as
> you suggested earlier may be a bit optimistic on his part. That's
> one thing. Creating robots is just awesome.

But we won't create robots like R. Daneel Olivaw
<http://en.wikipedia.org/wiki/R._Daneel_Olivaw> .

> But wouldn't you want to distinguish between weak AI and a bona
> fide theoretical issue as to how intrinsic intentionality works too?

I have been investigating that theoretical issue.

Weak AI = Strong AI, version 1:  Intentionality is merely derived
intentionality, so we can do without it.

Weak AI = Strong AI, version 2:  You will never get a sufficiently
human robot (such as, say, R. Daneel Olivaw) without providing it with
intentionality.  That is, the only solutions to the weak AI problem
will be those that are also solutions to the strong AI problem.

You seem to be assuming that I was proposing version 1, whereas I am
going with version 2.
