[Wittrs] Re: Dennett's Intentional Stance

  • From: "gabuddabout" <gabuddabout@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Wed, 20 Jan 2010 21:46:08 -0000


--- In WittrsAMR@xxxxxxxxxxxxxxx, "iro3isdx" <wittrsamr@...> wrote:
>
>
> --- In Wittrs@xxxxxxxxxxxxxxx, "jrstern" <jrstern@> wrote:
>
>
> > My point, should I have one, is that I rather favor the Searle view,
> > that intentionality is really something beyond an attribution.
>
> And I actually agree with that.
>
> The problem with Searle's argument was that, in effect, he said  that
> even if your AI system gets all of the behavior right, it  won't have
> intentionality so won't be "strong AI."


You are not quite up to speed here.  The point is that even if the AI system 
got all the behavior right, it wouldn't NECESSARILY have intentionality and, 
further, it would still actually BE the strong AI thesis which assumes that 
behavior is all that matters.  Searle refuted strong AI (and functionalism _en 
passant_) by noting that the thesis of strong AI is that the appropriate 
behavioral output just IS all we can squeeze out of intrinsic intentionality, 
and then showing a case where this is not true, i.e., the man instantiates the 
formal program and exhibits the appropriate behavior while all can see he 
doesn't have what the strong AI thesis claimed he would have.  Strike one.


>  In my opinion  he should have
> said "your AI system won't succeed in getting the  behavior right
> because it won't have intentionality."


That misses the whole point of the original target article, which focuses on 
the exact thesis of strong AI.  It furthermore misses the point that Searle 
allows for akrasia and the like, which amounts to there being no necessary 
connection between intrinsic intentionality and behavior.  Strike two.
>
>
> > In fact, it only now occurs to me, that Searle does offer his own
> > purely attributional story in his Wordstar parable. He makes it
> > out to be absurd, does he not?
>
> For sure, his "Wordstar system" in his wall does not get the  behavior
> right.
>
> Regards,
> Neil

Strike three.  The point about the program, ex hypothesi instantiated by the 
wall, is designed to show that a systems reply changes the subject to the point 
where we no longer have a thesis (strong AI was supposed to be a candidate) 
for distinguishing minds from nonminds.

Cheers,
Budd  (Hi Gordon!)

