[Wittrs] Re: Dennett's paradigm shift.

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Wed, 24 Feb 2010 02:21:59 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, "gabuddabout" <wittrsamr@...> wrote:
>
> Stuart,
>
> I'll comment on your claim about whether Searle is arguing against Dennett 
> (and why I offered that on one interpretation he is not).

> In the target article (BBS), Searle points out that the systems (or robot) 
> reply changes the subject from strong AI to nonS/H systems (or a combination 
> of S/H and nonS/H systems).
>
What Searle is doing is denying the relevance of the System Reply to his 
argument. Dennett responds in Consciousness Explained, among other places (I 
have already transcribed that response to this list in reply to a challenge 
from Joe), explaining why it is relevant: the CR as a model is simply 
underspecced. The reason Searle doesn't see this, as I have pointed out 
before, is that he is committed to a conception of consciousness as an 
ontological basic (an irreducible), whereas Dennett proposes that 
consciousness CAN be adequately conceived as reducible. If it can, if we can 
explain subjectivity via physical processes performing certain functions, 
then the System Reply doesn't miss Searle's point at all! And that is 
Dennett's case.

Of course the two are at loggerheads. No one is denying that. But the claim 
you and some others have made, that Dennett and Searle are really on the same 
side because both agree that some kind of synthetic consciousness is 
possible, just not via computers, is simply wrong. Dennett is specifically 
talking about a computer model being conscious and Searle is specifically 
denying THAT possibility.
> The point about Dennett is that he can't have it both ways.
>
> The systems reply (as well as the robot reply) is motivated by strong AI or 
> not.
>

This isn't about motivations but about the merits of the competing claims. 
The System Reply hinges on conceiving of consciousness in a certain way, and 
Searle simply doesn't conceive of it in that way. Therefore he either doesn't 
see, or refuses to see, the point of the System Reply. Recall that his 
argument against that reply is that it misses his point. But if he is simply 
unable to conceive of consciousness in the mechanistic way proposed by 
Dennett, then he is missing Dennett's point.

You may recall that I have long said here and elsewhere that in the end this is 
about competing conceptions of consciousness. Either consciousness is 
inconceivable as anything but an ontological basic or it isn't. If it is, then 
Searle is right. If it isn't, then Dennett's model is viable (and therefore 
Searle's blanket denial of that model is wrong).


> If not, then Searle is not in disagreement--and so would not be in 
> disagreement with Dennett if he is waffling on strong AI.
>

See above.

> If so, then Searle has caught those offering the systems or robot reply 
> either changing the subject (no disagreement if so) or being incoherent.
>


Just because Searle asserts they are changing the subject doesn't mean they 
are, any more than my asserting something of you (or your asserting it of me) 
means I am (or you are) right.


> If someone manages to say that the program is purely formal and so the 
> semantics are somewhere else (or a combination of program and nonprogram), 
> then one has effectively removed the original motivation for strong AI as 
> discussed quite clearly in the target article.
>

You yourself called Dennett's thesis "Dennett's strong AI," and Searle 
himself repeatedly argues against Dennett's position using his argument 
against so-called "strong AI." These two facts are prima facie evidence, at 
least, that this is about Searle's concept of computationalism (what Searle 
has named "strong AI"). Therefore Dennett's argument contravenes Searle's and 
vice versa.

Now if you want to take the position that this isn't about computer programs 
running on computers (software on the necessary physical platform that runs 
it), then you have a problem, because Searle is very clear that he IS talking 
about computers, even if he often speaks of programs as abstract. If he 
genuinely holds the view you are imputing to him, that this has nothing to do 
with the platform (the hardware), then you must be saying that he is only 
arguing against the possibility of programs being conscious. But what, then, 
is a program once you abstract away the operations implemented by the machine 
in which it is installed?

NO ONE IN THE AI WORLD IS ARGUING, OR EVER ARGUED, THAT PROGRAMS QUA 
ALGORITHMIC INSTRUCTIONS ENCODED ON SOME TAPE, ON A PIECE OF PAPER, OR IN A 
PROGRAMMER'S MIND CAN BE CONSCIOUS. There must always be implementation, and 
implementation ALWAYS implies a platform, a machine. So while computationalism 
implies multiple realizability (that different machines can realize the same 
kind of conscious system if they are running the same processes), it does NOT 
imply that no platform is needed, or that just any platform, whatever its 
capacity, can do the job.
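
To make the program/process distinction concrete, here is a toy sketch in 
Python (my own illustration, not anything Searle or Dennett offers; the rule 
table and names are invented for the example). As bare text a program is 
inert; only a platform executing it yields a process, and any platform 
executing the same rules realizes the same input/output behavior:

# A "program" as mere text: inert marks, not a process.
source = """
def respond(symbol):
    # purely formal rule: map an input symbol to an output symbol
    table = {"squiggle": "squoggle"}
    return table.get(symbol, "?")
"""

print(type(source))   # <class 'str'> -- by itself it "does" nothing

# Implementation: a platform (here, the Python interpreter) executes it.
namespace = {}
exec(source, namespace)
print(namespace["respond"]("squiggle"))   # prints: squoggle

# Multiple realizability: CPython, PyPy, or a person with pencil and paper
# executing the same rules realizes the same input/output behavior, though
# on very different physical platforms.

The string is the program; the running interpreter on physical hardware is 
the process.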

Dennett argues that the platform must be extremely powerful and have parallel 
processing capabilities to do the job. Searle argues that Dennett's system 
still can't do it because, in the end, it's just running syntax, mechanical 
operations according to certain prescribed rules. But Dennett counters that one 
can account for all the features we associate with consciousness by a 
description of sufficiently complex processes of this type. ("Complexity," 
Dennett argues, "matters".)

But remember there is a fundamental asymmetry in their arguments. While 
Searle argues for the impossibility of a Dennettian type of model, Dennett 
argues only for its possibility. Impossibility implies an end to the debate; 
possibility does not, since the model remains to be refined, implemented, and 
tested on machines capable of doing what Dennett proposes needs to be done.


> I still also disagree with your proposal that Searle is wrongheaded in his 
> later critique of Strong AI being incoherent.  His reason is crystal 
> clear--no one knows what it would mean to discover if something were 
> intrinsically computational.  Computation names an abstract sort of thing.
>


Computer processes are no more abstract than brain processes. Both classes of 
process are physical events occurring on a physical platform. If brain 
processes can produce subjectivity, there is no reason, at least in principle 
(based on their being processes!), why other processes cannot do so as well. 
This is the point of multiple realizability.


> If one bypasses this point by insisting that it is all about the combination 
> of computation along with the physical processes used to carry the formal 
> program, then one also has bypassed the original
> strong AI claim.


No, one has not, unless you think Searle's argument against computationalism 
is only against programs, not against computers running them! And if you do, 
you will be at odds with Searle himself, since he is quite explicit about 
arguing against the possibility of computers being conscious. Indeed, to 
argue that he is only making the case against pure programs would be empty, 
since no one thinks programs in isolation do anything but carry the 
information the machine running them will ultimately implement.


> And it still is problematic to understand just what formal processes can add 
> to brute ones.
>

Computer programs running on computers are no longer merely "formal processes". 
They are real events in the real world, as real, indeed, as brain processes 
running in brains.


> So Searle manages to distinguish his position as biological naturalism


That's what he calls it, but so what? He still offers no answer as to what is 
"natural" except to assert that we know brains cause consciousness. Okay, but 
that says nothing about whether anything else can. So long as he hazards no 
explanation of how brains do it, an explanation computers somehow could not 
match (the sort of thing people like Edelman and Hawkins attempt), he is just 
naming his position, not explicating it.


> and insists that one (Dennett's among others) of the motivating factors of 
> strong AI is still the idea that we can learn things about mind by studying 
> the laws of computation without needing any information whatsoever about real 
> brains.
>

Notice that real-world brain researchers like Stanislas Dehaene (excerpts 
from a recent talk he gave in Paris are available on this list in some 
earlier posts) pay attention to what Dennett says. Dennett, for his part, is 
engaged in a theoretical approach that, among other things, considers what it 
is brains must do if they are to produce consciousness. Dennett has, in fact, 
been involved in actual brain research (as his Consciousness Explained 
documents), so it is absurd to say that he is arguing for a model of 
consciousness that takes no account of what brains actually do. If Searle 
makes THAT assertion (and I don't recall him doing so -- but I don't have a 
photographic memory), then he is way off base. (Recall that one of Dennett's 
claims is that to succeed in building an artificially conscious entity, we 
have to do all the things brains manifestly can do. THAT's why he argues for 
massively parallel processing!)


> But I do agree that brain science is tough.  And I would disagree with your 
> idea that Searle has to be a dualist because brain science is both tough and 
> he is arguing against computational theories of mind.
>

My idea that he is an implicit dualist hinges on one thing only: to suppose 
that syntax qua computational processes running on computers cannot achieve 
consciousness, even when they are doing the right things in the right way, 
you have to presume that consciousness cannot be causally reduced to 
non-conscious constituent processes or events. Once we shake that picture and 
recognize that there is nothing in our own experience that isn't replicable 
by a physical process-based system, there is no reason, at least in 
principle, that consciousness cannot also be realized on platforms other than 
brains.


> For your argument to go through (Searle's dualism that he doesn't know is 
> implied by his CRA and biological naturalism), you would have to waffle on 
> strong AI.


This is simply false, but it reflects your rather odd view that Dennett both 
is and is not arguing for "strong AI"! See above for my response to that.


>  I believe you do along with all the systems and robot repliers.  But if you 
> waffle, you're really accepting something with which Searle is in agreement.
>
>
> Cheers,
> Budd
>

Then why do you think Searle doesn't just say, 'You know, Dennett's right 
about that. A massively parallel computational system like the one he 
describes could achieve consciousness, because my CRA is ONLY about a simple 
rote response system such as the one I specced in the CR!'

If, in fact, Searle's position is as you describe it, then all that's needed is 
for him to agree with Dennett.

But if he doesn't (or can't, given his already well-documented arguments), 
then how can you continue to say that Dennett and I are "really accepting 
something with which Searle is in agreement"?

I'll leave you to sort this one out.

SWM
