[Wittrs] Re: What the Man in the Room Knows (and when does he know it?)

  • From: "iro3isdx" <xznwrjnk-evca@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Wed, 24 Mar 2010 20:34:21 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, "gabuddabout" <wittrsamr@...> wrote:


> But your reply is a bullshit reply if you don't go anywhere toward
> less bullshit.

Unlike Searle, I am not trying to prove anything about AI.  So it
suffices that I point out where Searle's argument fails.


> I always thought that the "stuff" being internalized in Searle's
> reply to the systems reply was the "stuff" of computation.

It really does not matter exactly what Searle claims to internalize.

There are two claims that Searle is making in that argument.

Claim 1: Searle has internalized the computation, and is merely
mechanically applying rules to the symbols, without any understanding
of the meaning of the symbols.

Searle wants you to see it as implausible that an understanding of
Chinese could arise out of such rule following by Searle.  Okay, I'll
grant him that it is implausible.

Claim 2: While following those mechanical rules, Searle is claiming
that he is responding to questions in Chinese, giving Chinese responses,
and passing a Turing test in Chinese while doing so.  Now that is not
merely implausible - it is quite impossible for Searle to pass such a
test if he does not understand Chinese.

Searle's argument is a sleight of hand.  He wants you to concentrate on
claim 1, and see how implausible it is that understanding could arise.
But he does not want you to notice claim 2 at all, for if you notice
that, you will see that it is ridiculous and that claim 2 thus
undermines the whole argument.

It seems that at least two people in this thread have fallen for
Searle's conjuring trick.

Regards,
Neil

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/
