[Wittrs] Re: Dualism Cooties: Ontologically Basic Ambiguity

  • From: "gabuddabout" <gabuddabout@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Sat, 20 Mar 2010 18:38:43 -0000


--- In WittrsAMR@xxxxxxxxxxxxxxx, "iro3isdx" <wittrsamr@...> wrote:
>
>
> --- In Wittrs@xxxxxxxxxxxxxxx, "jrstern" <jrstern@> wrote:
>
>
> > I mean, we don't really know what is meant by "baseball", so some
> > guys get together and specify it, and then we all play.
>
> Similarly, we should just work on the principles of how to build  an
> artificial cognitive system.  All of the talk about physicalism,
> reductionism, dualism, etc, is just a distraction.
>
> Regards,
> Neil

Right, but in the meantime, we can clarify what you're after.  Derived 
intentionality, such that we don't care whether what we call a cognitive system 
really is one?

Suppose Searle is just fine with weak AI and its possibility (which, as you 
suggested earlier, may be a bit optimistic on his part).  That's one thing.  
Creating robots is just awesome.

But wouldn't you also want to distinguish between weak AI and the bona fide 
theoretical issue of how intrinsic intentionality works?

On Searle's proposal, we have two noncompeting research programs.

On others' proposal (including Hacker's), we have but weak AI, as if that is as 
good as it gets vis-a-vis philosophy of mind.

Enter Fodor: "Having Thoughts: A Brief Refutation of the Twentieth Century," 
for starters.

Wittgenstein didn't change the game of philosophy after all.  He (ironically) 
allowed us to bypass epistemology and do a type of philosophy that some of his 
more quietist admirers abhor, sometimes with arguments.  But are those 
arguments any good?

Cheers,
Budd

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/
