[Wittrs] Re: Searle's Revised Argument -- We're not in Syntax anymore!

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Mon, 24 May 2010 16:18:37 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, Gordon Swobe <wittrsamr@...> wrote:

> --- On Sun, 5/23/10, SWM <wittrsamr@...> wrote:
>
> >> His APA argument shows it as incoherent, not even
> >> rising to the level of falsehood.
> >
> >
> > In which case, the CRA has collapsed, i.e., it made a
> > mistaken claim. If something doesn't "even rise to the level
> > of falsehood" then it is wrong to claim it is false, isn't
> > it?
>
> No. In his APA address, Searle argues that because computation does not name 
> an observer-independent physical process, it makes no sense to describe the 
> organic brain as a computer. Doing so leads to the homunculus fallacy. If the 
> brain exists as a computer then who observes it? Who/what acts as the 
> end-user?
>


A computer exists in the physical world, as do brains. We need people to build computers but not to build brains; still, both are equally physical.

The question is whether, in building a computer, we can, in some cases, also 
build a physical contraption that can do what brains do (in the way of being 
what we call "conscious"). This is a totally different question from whether we 
need people to have computers in the world in the first place.

The fact that we probably do (or at least that we need entities like people, unless we enlarge the meaning of "computer") is not evidence or support for a claim that computers can't be conscious. These are just two different questions.

The fact that, for a computer to be a computer, we need a person not only to build it but to see it as THAT (Searle's point about "syntax" being in the mind of the observer) is, again, no argument for a claim that computers can never do what brains do.

Now this has nothing to do with whether it makes sense to call the "organic 
brain" a computer, as you put it, especially since even Searle admits we don't 
really know how brains work. For all we know, after all, they might actually 
work just like a computer (and we might discover that uncomfortable fact once 
all the data is in and understood)!

But suppose, for a moment, that we discover brains really don't work like computers, just as you and Searle say; that they work, for instance, more as Hawkins proposes, or as Edelman does (though his thesis is less clear). If the question then is:

What does it take to make a mind (that is, all the features, or at least enough 
of the features, we recognize as being, in the aggregate, that which we call 
"mind")?

Then it still doesn't follow that a mind can only be made in one way, e.g., in 
the way brains do it.

If a mind is just a combination of various functionalities (tasks performed by a processor of some sort), if it is just a system-level feature or set of features, then there is no reason, in principle, that different kinds of processors could not perform the same tasks (accomplish the same things) and combine them in a way equivalent to how brains combine them, thus achieving a comparable result.

In THAT case, it would still be conceivable that machines can be conscious, 
even if they don't work precisely as brains do!


> And if we cannot rightly describe the brain as a computer then the strong AI 
> thesis is false.
>


No. That's a false assumption. See above.


> If you don't grok that argument then you must still contend with his CRA.
>


Maybe you're the one not "grokking"?


> Choose your poison!
>
> -gts
>

Well I suppose it depends on who groks what, eh?

Note, however, that you can't have both the CRA and the later argument, even if both aspire to demonstrate the same conclusion (that computers can't do what brains do). That's because the later argument destroys the earlier one (which has plenty of its own problems, as we've already seen). Instead, the view you take here seems to be "any argument in a storm": if the CRA doesn't work, well, I can always invoke the incoherence argument! So the point is to "prove" a pre-established conclusion rather than to arrive at an argument that tells us something we weren't already sure we should believe.

Note that in your response here you haven't dealt with the thrust of my post responding to your claim that Searle's later argument doesn't abrogate the CRA, which, of course, was the issue I was addressing when I responded to you!

You claimed in your initial foray into this thread that the CRA isn't abrogated 
by the new argument and I replied by laying out a case for why it is.

First I pointed out that the CRA hinges on the claim that computer programs are "syntax," that "syntax" can't make "semantics" (Searle's proxy for instances of understanding as found in our own minds), and that therefore computer programs can't make minds.

Then I noted that Searle's later argument denies that computer programs ARE 
"syntax", pointing out that, since the CRA depends on the claim that they are, 
the CRA inevitably collapses.

As with his failure to see the dualistic presumption he depends on in making the CRA in the first place, Searle subsequently failed to see that his replacement argument demolished the CRA, and so THAT earlier argument must now be recognized as being in error.

Searle never explicitly disavowed the CRA as far as I know, of course, except to say that it should be taken more as indicative of an obvious fact about computers than as the dispositive argument he apparently originally intended it to be. And yet his own adherents, from you to Budd and many others, have still not faced up to the fact that the CRA itself has been fatally undermined by his later argument.

To reiterate for some hoped-for clarity:

This is because you cannot hold the later argument and the earlier one at the 
same time since the later one, in denying that computer programs running on 
computers are "syntax", undermines the critical point in the earlier argument 
which hinges on the claim that computer programs running on computers ARE 
"syntax".

So yes, this is about "grokking" but the failure to "grok" is not where you 
think it lies.

SWM

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/
