[Wittrs] Re: SWM and Strong AI

  • From: "J" <ubersicht@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Mon, 04 Jan 2010 21:13:10 -0000

SWM,

The paper from which the excerpt was taken was, as I originally indicated, 
"Minds, Brains, and Programs". I had not specified that this paper was from 
1980, though I did refer to the paper as "seminal", which I thought would make 
clear that this was the original presentation of the argument.

Well, now I've made that explicit.

>
> Actually, my critique of Searle does not hinge on the claim
> that he is an implicit dualist...

I didn't say that it did.  My point was that the issue of "dualism" seems to be 
of no small interest to you and that I would therefore have thought it likely 
that a reference to dualism in that paper would have stood out to you.  
However, pause before you debate this point, because shortly I'll myself 
acknowledge why I was mistaken in that supposition...

> The way it's presented in the Reith
> Lectures, for instance, is not how it appears later on, nor
> does it match a very early paper of his that I saw.

Ah, my guess is that the paper you mention here is the one I've called 
"seminal" and the "primary source".  But if that paper is one you only vaguely 
remember seeing, then it's perfectly understandable that you might not recall 
the specific dualism reference in that paper.

My mistake was in supposing that you would have studied the original paper 
very closely.  Whatever the merits of treating that paper as vitally important 
in scholarly contexts, such close study is not necessarily required in a 
discussion like this.

Whether he made similar comparisons between "Strong AI" and "dualism" 
elsewhere, I do not recall.  And I do not have the relevant papers at hand to 
check.  So in the absence of that, I'd have to suppose that I was in error in 
expecting that you'd have been well aware of the dualism comparison.

> I think he is right about that
> and it strikes me that he was wrong in linking AI to dualism
> in the text you provided since supposing that the mental can
> be produced on platforms other than brains is nothing like
> the kind of dualism Searle asserts is the only kind of
> dualism that counts.

This strikes me as a reasonable point and it even suggests to me that there may 
be good reasons not to expect that he'd have made the argument comparing 
"Strong AI" with dualism in later papers.

By the way, the "supposing that the mental can be..." is not the position of 
Strong AI.  Strong AI is not the acknowledgment of a possibility.  Strong AI is 
the assertion of an equivalence.

But I won't hang you on a clause that was part of a larger point with which I 
don't generally disagree.

> So perhaps he no longer would
> make that claim about AI and dualism in light of the paper
> in which he asserted that the only real dualism reduces, on
> analysis, to substance dualism.
>

Perhaps so, yes.


> There are plenty of arguments against this, the most
> sensible, in my view being the connectionist reply which
> hinges on a sometimes insufficiently explicated notion that
> the reason intentionality is absent in the CR

You do grant that?  I'll ask that more directly later on.

> is not because
> of the nature of the constituents of the CR, as Searle
> asserts, but because it hasn't been specked in in the first
> place.

Could you elaborate on the contrast between "the constituents of the CR" and 
the claim that it hasn't "been specked (sic)"?  What does it mean to say that 
it hasn't "been specked," if not that it is lacking some relevant constituents?

(I believe the word you intend is "specced" or "spec'd", as it is used in 
engineering, "built according to spec" (specifications), rather than "spec 
houses" (where "spec" is "speculation") and certainly not "specked" as we would 
describe a drinking glass that needs rinsing.  But correct me if I misread.)

>
> Note that the CR has only one basic function going on: rote
> translation (conversion of inputs in one set of symbols to
> outputs in another).

Giving answers to questions is not the same as translation.
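
To make the contrast concrete, here is a toy Python sketch of the Room's 
formal operation, on the assumption that Searle's rule book can be idealized 
as a bare lookup table (the entries below are invented placeholders, not 
Searle's examples).  Notice that input and output belong to the same symbol 
system, Chinese, so the mapping is question-and-answer, not conversion from 
one set of symbols into another:

# The rule book as a bare lookup table.  Both keys and values are
# Chinese; nothing is converted between symbol systems.
RULE_BOOK = {
    "你叫什么名字?": "我没有名字。",  # "What's your name?" -> "I have no name."
    "你会思考吗?": "当然会。",        # "Can you think?" -> "Of course."
}

def room(chinese_input: str) -> str:
    """Apply the rule book blindly; an unknown input gets no reply."""
    return RULE_BOOK.get(chinese_input, "")

print(room("你会思考吗?"))  # prints: 当然会。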

> But nobody thinks that that is what
> consciousness is.

"Thinking" and "being conscious" are two different things.  The claim that 
thinking, properly understood, requires consciousness may be a claim that 
Searle has made (and I don't care to venture that far afield) but it is a 
separate claim.

> That is, the brain is not a rote
> translating machine like this and no one argues that it is,
> not even AI researchers!

No, but the claim that "thinking" should be ascribed to anything that gives the 
appropriate outputs given the appropriate inputs under appropriate testing 
conditions is a core tenet of much AI research, taking Turing's paper, 
"Computing Machinery and Intelligence", as a defining statement of the research 
program.
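
That behavioral criterion is easy to state in code.  Here is a crude sketch 
(Python; the predicate and its test format are my invention, not Turing's 
imitation game itself):

# A sketch of the behavioral criterion: ascribe "thinking" to any
# system whose outputs match the appropriate outputs under the test.
# Nothing about the system's insides is consulted.
from typing import Callable, List, Tuple

def passes_behavioral_test(system: Callable[[str], str],
                           test_cases: List[Tuple[str, str]]) -> bool:
    """True iff the system gives the appropriate output for every
    appropriate input; on this criterion, that settles the matter."""
    return all(system(question) == expected
               for question, expected in test_cases)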

>
> The connectionist reply to Searle's CRA proposes that a
> system built of many different R's, performing a much
> broader range of functions than just rote translation, and
> linked together, would qualify as conscious if they were
> performing the right functions (as with Dehaene's brain
> model).

If the algorithm in question can be implemented on any Turing-equivalent 
architecture, then it can be implemented by the Chinese Room.

If it cannot be, if the specific hardware implementation is a relevant 
consideration, then the proposal is no longer Strong AI.  Searle's argument 
then is no longer the Chinese Room Argument, per se.

"On the assumptions of strong AI, the mind is to the brain as the program is to 
the hardware, and thus we can understand the mind without doing 
neurophysiology." (from the same 1980 paper)


> I have pointed out that when Searle claims that the CR
> isn't conscious because none of its constituents are,

Are you saying that the Chinese Room is conscious now?  Did you not earlier 
grant that it lacked intentionality?  Are you saying that it is conscious but 
lacks intentionality?  Or did I misread you before?

I'm getting that uneasy feeling.

> he is
> relying on a picture of consciousness that presumes
> consciousness is irreducible and that this is in conflict
> with certain of his claims: 1) that he is not a dualist; and
> 2) that brains cause consciousness (because, if they do,
> they must do so in a physical way unless he wants to presume
> they do so as agents bringing something new and irreducible
> into the world which IS dualism).

I'm not going to address this, which is not to say I grant it.  If it becomes 
relevant to the question of your interpretation of Strong AI and the Chinese 
Room Argument, I may take it up then.

As it stands, I've withdrawn the suggestion that your failure to recollect the 
dualism remark was an indicator of wider problems in your reading.

> > When it comes down to it, the central problem that
> > seems to plague your various readings is this:  where
> > Searle asks,  "But could something think, understand,
> > and so on SOLELY IN VIRTUE of being a computer with the
> > right sort of program? Could instantiating a program, the
> > right program of course, by itself be a SUFFICIENT CONDITION
> > of understanding?" (emphases mine), you regularly ignore the
> > "solely in virtue..." and "sufficient condition..."
> > phrases.
> >
>
> Here you finally offer some kind of critique of my
> critique. Note that the issue comes down to what
> consciousness is. If a functionalist account is sufficient,
> then the right kind of computer (having sufficient capacity)

If "the right kind of computer" and "capacity" mean nothing more than "the 
capacity to implement the algorithm", that's right.

If "capacity" means something more than Turing-equivalence, then that's wrong.

It's wrong as an account of classical functionalism and it's wrong as an 
account of the position Searle calls "Strong AI".

"(B)eing a computer with the right sort of program" is what is relevant here.  
He does not mention "being the right sort of computer with the right sort of 
program".  Once you introduce the requirement that it be "the right sort of 
computer", the position ceases to be Strong AI!  Once you add the requirement, 
then it is no longer "solely in virtue of..."


> and the right kinds of programming (performing the right
> functions) could succeed ("solely in virtue" of being that).

But you've added a requirement.  And in so doing you are no longer describing 
the position of Strong AI.

> But the issue must be whether such a functionalist account
> is sufficient.

What you describe is no longer functionalism.

http://plato.stanford.edu/entries/functionalism/#MacStaFun
http://en.wikipedia.org/wiki/Functionalism_(philosophy_of_mind)

Connectionism is not functionalism.  Connectionism was a reaction to 
functionalism.


> I have argued that we can give a full account
> of what we mean by "consciousness" in such a functionalist
> way and, if we can, then the Searlean CRA's flaws become
> evident.

While the Chinese Room Argument likely has many flaws, your own arguments miss 
the point because what you're defending is no longer Strong AI.  The CRA 
simply doesn't apply to it.  Searle would likely object to it as well, but on 
grounds other than the Chinese Room Argument.

>
> To think a functionalist account can't be sustained, one
> has to presume that brains do something other than just
> perform some processes in the right sort of way. While it is
> not impossible they do, Searle has no account of what that
> might be while his formulation that brains cause
> consciousness leaves us with either of two options: 1) they
> operate in a way that is analogous with what computers do or
> 2) they act as a deus ex machina that brings something new
> into the world.
>
> Certainly his CR offers no evidence that a more robustly
> specked system could not achieve

Requiring a "more robustly specked (sic) system" means going beyond Strong AI 
and beyond classical functionalism.


> what the CR, itself,
> cannot.
>

Okay, are you now back to acknowledging that the Chinese Room does not think?

I'm getting that uneasy feeling again.


