[Wittrs] Re: Focusing on the Refusal to Focus: Is the Third Axiom True?

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Sun, 16 May 2010 01:14:25 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, Joseph Polanik <jPolanik@...> wrote:
<snip>

> since we both agree that the conclusion of the CRA would be true if the
> third axiom is true, the only relevant issue is whether the third axiom
> is true.
>

I am considering Searle's argument for those conclusions. I have long since 
agreed that there might be other arguments (empirically or EVEN logically 
based) that would establish the truth of Searle's conclusions. My point is only 
that the CRA, Searle's argument, does not.


> I reject any restrictions on the arguments that may be offered to
> justify taking the third axiom as true.
>

There are two different issues here:

1) Is Searle's third premise true? I have said there is nothing in the CRA and 
nothing Searle says that establishes that, though he does elide the issue by 
equivocating in how he states the third premise. But if something else can be 
invoked to show its truth, it would support the CRA (even if Searle missed it).

2) Is the CRA's conclusion (that computers cannot be made to be conscious) true?

You can argue for the second without the first, though arguing for the first 
implies the second.

Insofar as you have been arguing for the truth of Searle's third premise in 
his CRA, I believe you have not established its truth, for all the reasons I've 
already given. So far, your argument hinges on the same error Searle makes: 
supposing that the failure of what's in the CR to produce consciousness there 
demonstrates the incapacity of what's in the CR to produce consciousness at all. 
But a failure to do something in one scenario does not imply a failure in a 
different one.

Thus the idea that consciousness (or understanding) is a system-level property 
(or feature or phenomenon or what have you) leaves open the logical possibility 
that the problem in the CR lies in its configuration (the kind of system it is) 
rather than in its constituents.


> to show that the third axiom is true, it suffices to show that the third
> axiom is true.
>
> no one is required to show that it is also self-evidently true or that
> it is also manifestly true or that it is also conceptually true or that
> it is also analytically true.
>

Again, you can argue for the conclusion of the CRA separately from the steps 
Searle used (in which case it is no longer the CRA), or you can argue for the 
truth of the premise in question in order to salvage the CRA AND its 
conclusion(s). Insofar as you are doing the latter, you haven't succeeded, on my 
view, since your argument so far has been to say that the evidence that "syntax" 
can't cause "semantics" is the fact that it doesn't in the CR. My point is that 
that is only evidence it can't do it in the CR, in which case the problem is not 
that it is "syntax" but that it is configured in an inadequate way.

> no one is required to base their argument for taking the third axiom as
> true on your assumption that it contains an equivocation Searle inserted
> into it.
>
> do you not agree?
>

You are not required to accept anything I have said. My point is that the CRA 
has two problems. The first and less important problem is that, as Searle 
presents it, it equivocates on the meaning of the third premise, thereby blurring 
the claim and enabling one to draw a conclusion from a non-identity claim. The 
second and more important one is that, in the CR itself, understanding (or 
"semantics") is presented as irreducible, hence the mistaken notion that its 
absence from the CR precludes syntax from producing it.

But there is NO basis for presuming that consciousness is irreducible and, in 
fact, Searle is of two minds on the question, allowing that it is reducible to 
something unknown happening in brains while assuming it isn't reducible to what 
computers do ("syntax"). But if it's reducible to what brains do, why not to 
what computers do? Searle contradicts himself here, and he contradicts himself 
again when he denies being a dualist, since the supposition that consciousness 
is irreducible is just what it means to be a dualist.

SWM

> Joe
>
>
> --
>
> Nothing Unreal is Self-Aware
>
> @^@~~~~~~~~~~~~~~~~~~~~~~~~~~@^@
>        http://what-am-i.net
> @^@~~~~~~~~~~~~~~~~~~~~~~~~~~@^@


=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/
