[Wittrs] Re: The CRA accord. to Stuart

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Mon, 26 Apr 2010 22:45:21 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, "gabuddabout" <wittrsamr@...> wrote:
>

<snip>
> If you go back to the target article in BBS, you may find that Searle does a 
> fine job.
>
> OTOH, if you want to focus on a simple CRA with three premises, the third of 
> which is really two independent clauses, you would do well to see the first 
> premise in the terms spelled out in the target article.
>

> It was your conversions which obscured the CRA after all--and in the name of 
> Wittgensteinian clarity!
>

The Searlean argument looks clear at first glance and quite compelling. The
problem is that it masks a deep confusion. That the confusion still isn't clear
to some, despite my explication, doesn't absolve the CRA of the problem.

> You were trying to claim that in the CRA there was a noncausality claim 
> lifted out of an identity claim.

Yes.

>  Well, the meaning of the first premise contains a noncausality claim.

The first premise: "Computer programs are syntactical (formal)."

Note the verb "are". It denotes an identity relation (or, as another
possibility, a predicate relation). It certainly doesn't denote a causal
relation. If it did, it would say "cannot cause" or some such (which would, of
course, render the argument circular, since its aim is to prove a "cannot
cause" conclusion).
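
Since we keep arguing past each other about the form of the argument, here,
for reference, is the simple three-premise CRA set out as a minimal LaTeX
sketch. The wording is my paraphrase from memory of Searle's standard
restatement, not a quotation from the target article:

\documentclass{article}
\begin{document}
% Three-premise CRA (my paraphrase from memory, not quoted from the BBS article)
\begin{enumerate}
  \item Programs are formal (syntactic).
        % P1 is a predication about what programs \emph{are},
        % not about what they can or cannot cause
  \item Minds have semantic contents.
  \item Syntax by itself is neither constitutive of nor sufficient
        for semantics.
        % the ``two independent clauses'' Budd refers to
\end{enumerate}
Therefore: programs by themselves are neither constitutive of nor
sufficient for minds.
\end{document}

Laid out this way, the non-sufficiency claim appears only in the third premise
and the conclusion. The first premise, as written, says only what programs
are, which is the identity/predication point above.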


>  And this is why you can create a thought experiment showing that no matter 
> how much computational complexity is happening in a S/H system, an 
> intelligent homunculus couldn't get any semantics out of it solely in virtue 
> of the computational description of the system.
>

Even Searle would not claim that intelligence or understanding is a function of 
a homunculus. Thus that claim is irrelevant to the actual CR thought experiment 
and to the CRA derived from it.

> So there is a dilemma:
>
> If you conflate the computational properties with the physics, you have a 
> nonS/H system which may cause consciousness/semantics.


Recognizing that AI researchers talking about programming AI are talking about 
using computers isn't to "conflate computational properties with physics," no 
matter how many times you insist otherwise. That is just your misunderstanding. 
No one in the field EVER supposed that programs can do anything in isolation 
from their physical platforms, or that AI (of the strong persuasion) was about 
any such thing!


>  This is consistent with Searle's biological naturalism.
>

His biological naturalism asserts that brains cause consciousness (naturally!) 
through some as yet unidentified thing they do, or that happens within them. 
That's the extent of his biological contribution. Well, okay, but few moderns 
disagree with this claim. The question is whether what brains do is enough like 
what programs running on computers do to allow replication of consciousness on 
computers AND an understanding of brains on THAT model.

> OTOH, if you adequately grasp how S/H systems work, then the question is 
> whether 2nd order properties, as such, may cause semantics.


I think this whole business of yours about "second order properties" is a false 
trail leading nowhere.

>  But if such properties are really abstract, then even if you had a 
> homunculus doing ALL the syntactic operations in the whole system, you still 
> wouldn't necessarily have a case of semantics.


No one argues that syntax equals semantics. The argument is about whether 
computational processes running on computers can do whatever it is that brains 
do. That programs are deemed "syntax" and understanding "semantics," and that 
we agree syntax and semantics aren't the same thing, is totally irrelevant to 
whether some configuration of syntax can produce semantics. This is the 
system-level vs. process-level argument again!


>  Hence the thought experiment.
>
> My main criticism of your efforts is that they are not sensitive to the plain 
> meanings Searle uses in the target article, particularly the meaning behind 
> the premise that programs are formal.


if "formal" means they can accomplish nothing in the world, as you have often 
claimed, then this is irrelevant because programs manifestly do accomplish 
things in the world WHEN RUNNING ON THE RIGHT KIND OF PHYSICAL PLATFORM. And 
that's all that the AI project is about!


> If you want to say RUNNING programs, it is of no help, since the alternatives 
> above remain, one of them being consistent with Searle and the other being 
> subject to the CR found in the target article, or so I am supposing.
>

There is a clear case here of ships passing each other in the night!

> If one wants to say, along with Stich, that the case of Helen Keller is one 
> where she is of a different psychological kind, then Fodor's response will 
> look like Searle's, to wit, that if that's what a computational account of 
> the mind results in, then such an account is most likely subject to a 
> reductio argument.
>

I don't know what you mean by this.

> If one tries Gordon's shot at a different room called a language room, one 
> wonders just what motivates Gordon besides not seeing what motivated Searle 
> in the first place.
>
> Cheers,
> Budd
>

Nor this.

SWM

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/
