[Wittrs] Re: formal arguments

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Thu, 15 Apr 2010 17:44:23 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, Gordon Swobe <wittrsamr@...> wrote:
>
> --- On Thu, 4/15/10, SWM <wittrsamr@...> wrote:
>
> > Look, the CRA is about whether so-called "Strong AI" is
> > possible. "Strong AI" is Searle's name for the thesis that
> > one can produce consciousness via programs on computers. So
> > the issue IS entirely about what a computer can or cannot
> > do, i.e., can it CAUSE consciousness as brains do.
> >
> > If it's not about THAT, the whole exercise is pointless.
>
> Right, but the CR thought experiment/3rd axiom is not "about that".


Wrong. If it's not about that, then there is nothing to show, because no one 
thinks that syntax IS semantics.


> It's about syntax and semantics and nothing else -- just one leg of the three 
> legged stool that leads to the conclusion that programs don't cause minds.
>

And it's the stool that's the issue.

> For whatever reason, you can't see clear to separate the three legs of the 
> stool. You confabulate and imagine arguments that don't exist.
>

The reason is simple. NO ONE THINKS SYNTAX IS SEMANTICS. The issue is what it 
takes to produce "semantics".

If non-identity doesn't imply non-causality, then the fact that syntax isn't 
semantics is irrelevant to the conclusion.
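The logical point here can be put schematically (the notation and the causal 
predicate Causes(x, y) are mine, added for illustration; the brain example is 
the one already in play in this thread, where brains are said to cause 
consciousness):

```latex
% Non-identity does not entail non-causality: from  x \neq y
% one cannot validly infer  \neg\,\mathrm{Causes}(x, y).
\[
  x \neq y \;\not\Rightarrow\; \neg\,\mathrm{Causes}(x, y)
\]
% Counterexample on the view discussed in this thread: brain processes
% are not identical with consciousness, yet brains cause consciousness.
\[
  \mathrm{Brain} \neq \mathrm{Consciousness}
  \quad\text{and yet}\quad
  \mathrm{Causes}(\mathrm{Brain}, \mathrm{Consciousness})
\]
```

So a third premise read only as a denial of identity cannot, by itself, 
support a conclusion that denies causality.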


> The actual CRA is very simple. Given A, B and C, D is true. And A means A and 
> nothing else, B means B and nothing else, and C means C and nothing else.
>


The CRA purports to lead to certain conclusions. That's why it's called the 
Chinese Room ARGUMENT -- CRA.


> > I am challenging the validity of that conclusion for the
> > reason that the third premise is misleading BECAUSE it
> > equivocates its meaning, shifting from a denial of identity
> (undoubtedly true) to a denial of causality (not undoubtedly
> true based on the CR and very likely false)
>
> The "denial of causality" argument does not exist in the 3rd axiom, which 
> tells us only that no agent of any kind can obtain semantic understanding 
> from syntax.


Then it doesn't support the conclusion of the CRA and the argument is shown to 
be mistaken.


> You imagine a "denial of causality" claim in the 3rd axiom because it exists 
> in the conclusion (where syntactic programs don't *cause* minds) and you 
> don't like the conclusion.
>
> gts
>

The conclusion is shown to be unsupported by the premises because a denial of 
identity does not imply a denial of causality. This has nothing to do with my 
preferences, especially since I initially thought the CRA was right and agreed 
with its conclusions. Only on more careful consideration and analysis did I 
realize that something was wrong with it, and then just what that something was.

It was two things:

1) The misleading equivocation in the third premise; and

2) The underlying presumption of dualism in the CR which leads to the 
conflation of the two readings in the equivocal third premise.

SWM

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/
