[Wittrs] Re: Dennett's paradigm shiftiness--Reply to Stuart

  • From: "gabuddabout" <gabuddabout@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Wed, 24 Feb 2010 21:40:01 -0000

> > Budd

> Stuart

New = Budd

Hope it's not too confusing!

--- In WittrsAMR@xxxxxxxxxxxxxxx, "SWM" <wittrsamr@...> wrote:
>
> --- In Wittrs@xxxxxxxxxxxxxxx, "gabuddabout" <wittrsamr@> wrote:
> >
> > Stuart,
> >
> > I'll comment on your claim about whether Searle is arguing against Dennett 
> > (and why I offered that on one interpretation he is not).
>
> > In the target article (BBS), Searle points out that the systems (or robot) 
> > reply changes the subject from strong AI to nonS/H systems (or a 
> > combination of S/H and nonS/H systems).
> >
>
>
>
> What Searle is doing is denying the relevance of the System Reply to his 
> argument. Dennett responds in Consciousness Explained, among other places 
> (and I have already transcribed that response onto this list in reply to a 
> challenge by Joe), as to why it is relevant by arguing that the CR as a model 
> is simply underspecked.


All parallel processing can be implemented on a serial computer.  There simply 
is nothing more by way of computation that can be done in parallel that can't 
be done serially.
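
The serial-equals-parallel point can be illustrated with a minimal sketch (my own illustration, not from the thread): a single serial loop that interleaves the steps of several "parallel" processes computes exactly what a parallel machine would, just as a UTM can simulate any number of concurrent tapes one step at a time.

```python
# Illustration only: a serial round-robin scheduler interleaving the
# steps of three "parallel" processes. The names and structure here
# are hypothetical, chosen just to make the point concrete.

def make_counter(limit):
    """A tiny 'process': counts up toward limit one step at a time."""
    def step(state):
        if state < limit:
            return state + 1, False   # advanced one step, not yet finished
        return state, True            # finished
    return step

# Three "parallel" processes, each with its own private state.
processes = [make_counter(n) for n in (3, 5, 2)]
states = [0, 0, 0]
done = [False, False, False]

# Round-robin: one serial step of each unfinished process per pass.
while not all(done):
    for i, step in enumerate(processes):
        if not done[i]:
            states[i], done[i] = step(states[i])

print(states)  # [3, 5, 2] -- the same final results a parallel machine yields
```

Nothing in the final states depends on the processes having run simultaneously; only the interleaving order differs, which is the sense in which parallelism adds speed but no new computational power.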

It is therefore a red herring to say that Searle's CR is "underspecked" in 
computational terms.  It is underspecked, indeed, in that it is the upshot of a 
computational theory of mind.  The Turing test is passed while the semantics 
ain't there--no matter how much parallel processing goes on WHEN SAID PARALLEL 
PROCESSING IS ALREADY KNOWN TO BE ALSO HAD BY SERIAL COMPUTATION.

This response might be given to everything else you write below.  Where 
something different might be given, I'll try to give it below.






>The reason Searle doesn't see this, something I have pointed out before, is 
>because Searle is committed to a conception of consciousness as an ontological 
>basic (an irreducible) whereas Dennett is proposing that consciousness CAN be 
>adequately conceived as being reducible.


He not only sees the possibility you are attempting to describe, he sees it as 
a nonstarter because all parallel processing can be done by a serial computer 
(a Universal Turing Machine, UTM for short).  This is a repeat.  Your 
conclusion simply can't follow.



> If it can, if we can explain subjectivity via physical processes performing 
> certain functions, then the System Reply doesn't miss Searle's point at all! 
> And that is Dennett's case.


The systems reply rebuttal, of course, has Searle flippantly describing "bits 
of paper" as a stand-in for the formal processes.  Dennett and Hofstadter (_The 
Mind's I_--is there one?!) parlay this into a claim that his CR is underspecked, 
but the song remains the same--parallel processing can be....  (I think this is 
where Peter kept giving you the option of spelling out parallel processing 
without resort to simply more computation--and what is left is brute causality, 
which leads me to think that the systems reply is a waffling mess because it 
conflates brute causality with a type of processing which is supposed to be 
more causally robust, as in complex, but turns out to be that which can already 
be done on a UTM/CR.)




>
> Of course the two are at loggerheads. No one is denying that. But the claim 
> you and some others have made, that Dennett and Searle are really on the same 
> side because both agree that some kind of synthetic consciousness is 
> possible, except not via computers, is simply wrong. Dennett is specifically 
> talking about a computer model being conscious and Searle is specifically 
> denying THAT possibility.



He is denying the coherence of strong AI as defined by Schank's and Abelson's 
1977 work (and Winograd's 1973 and Weizenbaum's 1965--"and indeed any Turing 
machine simulation of human mental phenomena" (target article)).  The extent to 
which the systems reply misses the point is the extent to which it may be 
compatible in spirit with both Searle's biological naturalism and his 
contention that he is not arguing against AI in general (just strong AI as 
defined by Schank and others, which is spelled out in the target article).
>
>
>
>
> > The point about Dennett is that he can't have it both ways.
> >
> > The systems reply (as well as the robot reply) is motivated by strong AI or 
> > not.
> >
>
> This isn't about motivations but about the merits of the competing claims. 
> The System Reply hinges on conceiving of consciousness in a certain way and 
> Searle simply doesn't conceive of it in that way.


Look, this is where you are dead wrong.  Searle is speaking about a specific 
thesis held by Schank and others and then shows that such strong AI systems may 
pass a TT while not having the semantics that the TT was to be a criterion for. 
 The best a criterion can do is spell out our original intuitions anyway.  Both 
sides' intuitions are that nonconscious processes cause semantics and, say, 
consciousness.  There is simply no way to go from Searle's seeing a flaw in 
functionalism/the computational theory of mind to a position that denies the 
very spirit of the systems reply.  So the systems reply is motivated by strong 
AI or not.  That remains true, along with the demerits found in the vacuity of 
the TT after strong AI is fleshed out as the thesis it actually is.  If one 
wants to waffle, then one is simply flirting with Searle's position under 
another name (or two).  Searle's biological naturalism allows for AI, and both 
are simply general statements that physical systems may cause and realize 
consciousness, whether the system be a biological one or an artificial one.  
Denying strong AI is not denying AI.  And denying strong AI is absolutely not a 
denial that a physical system (like a brain, or an artifactual system with at 
least the same causal capacities) is necessary for semantics/consciousness.

You are just locating a false dilemma.







>Therefore he either doesn't see, or refuses to see, the point of the System 
>Reply. Recall that his argument against that reply is it misses his point.


His actual response is that the man can internalize the whole system and still 
not understand Chinese.  In the next paragraph of his response to the systems 
reply he mentions that he is embarrassed even to give the above reply, due to 
its implausibility.  He mentions the systems reply involving the claim that 
while, according to the systems reply now, the man doesn't understand Chinese, 
the whole system nevertheless does.  Here is where Searle mentions the extra 
stuff besides the man's rule following being a case of "bits of paper" added to 
what the man is doing.  The point he is making is that no amount of computation 
(whether in serial or parallel, because all parallel processing can be done 
serially = UTM = CR) added to what the man understands is going to make one 
iota of difference.

I know, I know, it is the process of BOTH the program as well as its 
implementation (hardware) that is the REAL story and not just software in 
isolation, yada, yada.  But that is to court a form of AI which is not strong 
AI, or to court a waffling of brute physics with the computational level of 
description, which was to be what strong AI was all about.

But you are also right to say (if you ever did) that Searle claims the systems 
reply begs the question simply by assuming the man understands Chinese somehow. 
 Or wait, you said he said that it misses the point.  I think this is true when 
he goes on to explain that the systems reply may have the absurd consequence 
that we can no longer distinguish systems that have a mental component from 
those which do not.  But in that case it may not have missed the point of 
strong AI after all--the point amounts to the idea of hylozoism, since mind is 
defined computationally and everything under the sun can be given a 
computational description.  For fun, cf. Rudy Rucker's new sci-fi book 
_Hylozoic_, where in a funny passage Jayjay gets confused while teleporting 
rocks and almost teleports his head from his body!





>But if he is simply unable to conceive of consciousness in the mechanistic way 
>proposed by Dennett then he is missing Dennett's point.


The whole point of insisting that it is the brain that causes consciousness is 
quite mechanistic enough!  The only shot you have here is to conflate physics 
with computation and insist that since Searle is denying the plausibility, er, 
coherence of a computational theory of mind, then he has to have some 
nonprocess-based system in mind.  But note that your argument has the absurd 
consequence that Searle's notion of the brain causing consciousness amounts to 
his inability to conceive of consciousness being caused by noncomputational 
mechanisms.  This is where I see your argument as quite bad indeed, absurd 
even.  Recall that your other bad argument amounts to the same thing: Searle 
doesn't know how brains do it.  He argues against strong AI.  Ergo he must be a 
dualist of sorts.

That is awful, but explainable given your conflation of computation and 
physics.  It occurs so frequently below that it is probably enough to end it 
right here.  But not until I spank you just a bit more below--lighten up if you 
are thinking of taking offense!


>
> You may recall that I have long said here and elsewhere that in the end this 
> is about competing conceptions of consciousness.


And I have said that you wanted it to be, but I've shown that both Dennett and 
Searle agree that consciousness is caused by physical processes.  So maybe it 
IS about competing conceptions of consciousness for SOME.  But you can't accuse 
Searle of dualism when he is simply arguing that strong AI is 
incoherent--unless you conflate strong AI with physics.  But that would be to 
forget about the fact that strong AI is a species of functionalism, and 
functionalism is wedded to a level of computation that is SUPPOSED to be 
somewhere between the brute physical level and the intentional level, if you 
get the history right.  This is part of my contribution to the topic, by the 
way.




>Either consciousness is inconceivable as anything but an ontological basic or 
>it isn't.

And who really has taught the world how to distinguish an ontological basic 
from a nonbasic?  I'll remind you that this isn't only about what is 
conceivable--the thought experiment took something conceived via the TT (Turing 
test) and showed that the criterion wasn't good enough.  That it is conceivable 
that physical processes cause consciousness is a thesis shared by Searle and 
Dennett.  This nonsense about ontological basicness doesn't arise in the case 
of Dennett OR Searle, but may be parlayed into another discussion of other 
proposals for how minds are what they are.  You keep wanting to lump Searle 
with those who would talk of ontological basicness.  The very idea of 
ontological commitment is shown by Searle to have a merely trivial application, 
as commitment via a complete speech act (or set of speech acts).  Cf. Searle's 
_Speech Acts_.



>If it is, then Searle is right. If it isn't, then Dennett's model is viable 
>(and therefore Searle's blanket denial of that model is wrong).


I've found you saying that for quite a while.  But both Dennett and Searle 
share the thesis that physical processes cause consciousness somehow.  Searle 
may be wrong about strong AI's viability in your eyes, but you can't be unaware 
that Searle's reason for thinking strong AI incoherent is that he thinks it 
too abstract and "not machine enough."

Now suppose you are aware of Searle's reasons for arguing against the coherence 
of strong AI.  Then you can't lump Searle in with the "ontological basic" camp, 
wherever they are.  Now suppose you don't know; then what gives?  Can you be 
so myopic as to not see that Searle and Dennett are on the same page as far 
as physical processes causing consciousness?
>
>
> > If not, then Searle is not in disagreement--and so would not be in 
> > disagreement with Dennett if he is waffling on strong AI.
> >
>
> See above.

I've seen.  Now you see?


Anyway, my God you have a unique set of pipes, Stuart!

Have a good one!

Cheers,
Budd

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/
