[Wittrs] Re: Dennett's paradigm shiftiness--Reply to Stuart

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Thu, 25 Feb 2010 01:39:38 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, "gabuddabout" <wittrsamr@...> wrote:


<SWM>


> New = Budd
>
> Hope it's not too confusing!
>
<snip>

> All parallel processing can be implemented on a serial computer.  There 
> simply is nothing more by way of computation that can be done in parallel 
> that can't be done serially.
>


This misses the point again. The issue is that, if consciousness is a certain 
kind of process-based system, then you need to have all the parts in place, 
even if they all consist of different computational processes doing different 
things and it takes a parallel platform to do this. That one can do each of the 
processes in a serial way, too, isn't the issue because one can't do it all in 
the way that's required, i.e., by running a sufficiently complex system with 
lots of things interacting simultaneously, in parallel, using a serial 
platform. (PJ has argued that a really, really, really, really, etc., fast 
system could do what a parallel system could do even if we have no such system 
or the possibility of building one and I am agnostic on that. It may, indeed, 
be possible to achieve synthetic consciousness on a serial processor running at 
super-duper speed. But so what? The issue is what does it take to do it in the 
real world and, for that, parallel processors are a way more realistic option.)
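(As an aside, the serial-simulation point that both sides take for granted can be sketched in a few lines of Python. This is purely illustrative and not from either party's argument; the names here are mine. It shows a single serial loop interleaving several "concurrent" processes one step at a time, which is the sense in which anything a parallel system computes can also be computed serially:)

```python
# A minimal sketch of serial simulation of parallel processes:
# each "process" is a generator that pauses after every step, and a
# single serial round-robin loop drives them all to completion.

def make_counter(n):
    """A tiny 'process': sums 1..n, one step at a time."""
    total = 0
    for i in range(1, n + 1):
        total += i
        yield total  # pause here; the serial scheduler resumes us later

def run_serially(processes):
    """One serial loop simulating parallel execution via round-robin."""
    results = {}
    live = dict(enumerate(processes))
    while live:
        for pid in list(live):
            try:
                # advance each live process by exactly one step
                results[pid] = next(live[pid])
            except StopIteration:
                del live[pid]
    return results

# Three 'concurrent' counters, all driven by one serial loop:
print(run_serially([make_counter(3), make_counter(5), make_counter(4)]))
```

(The end results are identical to running the three counters in true parallel; what differs, as the paragraph above argues, is whether the interleaved serial run counts as "the same system" for the purposes at issue.)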

If the issue were that consciousness cannot be sufficiently accounted for by 
describing syntactical processes at work, then introducing complexity of this 
type wouldn't matter, of course. But as Dennett shows, we can account for the 
features of mind by this kind of complexity, at least in a descriptive way (if 
one is prepared to give up a preconceived notion of ontological basicness re: 
consciousness). Whether Dennett's model is adequate for accomplishing the 
synthesis of a conscious entity in the real world remains an empirical 
question. But the point is that there is nothing in principle preventing it, as 
long as we can fully describe consciousness this way. So everything hinges on 
whether Dennett's account of consciousness as a certain agglomeration of 
features is credible.

To dispute Dennett you have to say his account doesn't fully describe all the 
features that must be present. Searle attempts this with his CRA but his 
attempt hinges on a conception of consciousness which requires it be 
irreducible (i.e., already assumes Dennett's model is mistaken at the outset) 
-- and yet even Searle doesn't stand by this with regard to brains, thereby 
putting him in self-contradiction.


> It is therefore a red herring to say that Searle's CR is "underspecked" in 
> computational terms.  It is underspecked, indeed, in that it is the upshot of 
> a computational theory of mind.


That misses the point. If consciousness is an outcome of a certain kind of 
complex system, then not only is it feasible, at least theoretically, on the 
Dennettian model but Searle's CR manifestly fails because of the very thing 
Peter Brawley on the other list pointed out: you can't build a bicycle and 
expect it to fly. Searle's CR is not doing the things a brain does, i.e., it is 
not running all the complex processes that go into understanding, intending, 
etc. It's a rote responding device without all the processes doing all the 
things that are part of what it means to understand, intend, etc. It's a 
bicycle relative to the brain as jet plane.


> The Turing test is passed while the semantics ain't there--no matter how much 
> parallel processing goes on WHEN SAID PARALLEL PROCESSING IS ALREADY KNOWN 
> TO BE ALSO HAD BY SERIAL COMPUTATION.
>


This isn't about whether the Turing Test is passed. Searle's argument presumes 
it is passed by fiat (even though it is questionable that such a system could 
do any of the things Searle stipulates it does). This is about whether a system 
running the kind of processes the CR runs could, if it appears to be conscious, 
actually be taken to be. Aside from the fact that Dennett's point is that it 
could not actually succeed in passing the test, let's grant that it does 
anyway, for argument's sake. Let's grant that it really does look from the 
outside as if a real mind is there. Searle says look inside and what you see is 
only rote processing, no understanding at all. Thus no one would agree that the 
CR is conscious.

As I have already pointed out, I grant he is right on that. The CR qua system 
is not conscious, the standard System Reply notwithstanding. But that is 
because the bicycle of the CR is stipulated by both sides in this argument to 
be flying. But it has no wings and no jet engines and no aerodynamics. All the 
things that would enable it to fly are missing. We simply agree that it is 
flying!

Well, you can do that in an argument but so what? It can have no relevance to 
the real world! Even a stipulated flying bike still isn't flying up there in 
the real clouds. And that's because it is missing key constituent parts! Well, 
so is Searle's CR.

Understanding involves a lot more than rote match up of symbols in a mechanical 
way. And the CR lacks the capacity to do the missing stuff.

So if the CR is conscious it is so only by stipulation in which case it has no 
real world implications.


> This response might be given to everything else you write below.  Where 
> something different might be given, I'll try to give it below.
>
>

If all you have is what you have already said, then I've given the answer 
already.

>
>
>
>
> >The reason Searle doesn't see this, something I have pointed out before, is 
> >because Searle is committed to a conception of consciousness as an 
> >ontological basic (an irreducible) whereas Dennett is proposing that 
> >consciousness CAN be adequately conceived as being reducible.
>
>
> He not only sees the possibility you are attempting to describe, he sees it 
> as a nonstarter because all parallel processing can be done by a serial 
> computer (a Universal Turing Machine UTM for short).  This is a repeat.  Your 
> conclusion simply can't follow.
>
>

See above. (Note: this is not about the quality of the processes but about the 
type of system being run where "system" equals multiple processes doing 
multiple things running in parallel time.)

>
> > If it can, if we can explain subjectivity via physical processes performing 
> > certain functions, then the System Reply doesn't miss Searle's point at 
> > all! And that is Dennett's case.
>
>
> The systems reply rebuttal, of course, has Searle flippantly describing "bits 
> of paper" as a stand-in for the formal processes.  Dennett and Hofstadter 
> (_The Mind's I_--is there one?!) parlay this into a claim that his CR is 
> underspecked but the song remains the same--parallel processing can be....  
> (I think this is where Peter kept giving you the option of spelling out 
> parallel processing without resort to simply more computation--and what is 
> left is brute causality which leads me to think that the system reply is a 
> waffling mess because it conflates brute causality with a type of processing 
> which is supposed to be more causally robust as in complex but turns out to 
> be that which can already be done on a UTM/CR).
>


This is just a repetition of your mistake of presuming this is about the 
quality of the processes rather than the nature of the system.

>
>
>
> >
> > Of course the two are at loggerheads. No one is denying that. But the claim 
> > you and some others have made, that Dennett and Searle are really on the 
> > same side because both agree that some kind of synthetic consciousness is 
> > possible, except not via computers, is simply wrong. Dennett is 
> > specifically talking about a computer model being conscious and Searle is 
> > specifically denying THAT possibility.
>
>
>
> He is denying the coherence of strong AI as defined by Schank and Abelson's 
> 1977 (and Winograd's 1973 and Weizenbaum's 1965)--"and indeed any Turing 
> machine simulation of human mental phenomena" (target article).  To the 
> extent that the systems reply misses the point is the extent to which it may 
> be compatible in spirit with both Searle's biological naturalism and his 
> contention that he is not arguing against AI in general (just strong AI as 
> defined by Schank and others, which is spelled out in the target article).
> >
> >


Searle's response misses the point, not the other way around. You miss the 
point as well when you fail to understand that Dennett's thesis IS the "strong 
AI" which Searle opposes (and which you previously called, in a weaker moment, 
"Dennett's strong AI").


> >
> >
> > > The point about Dennett is that he can't have it both ways.
> > >
> > > The systems reply (as well as the robot reply) is motivated by strong AI 
> > > or not.
> > >
> >
> > This isn't about motivations but about the merits of the competing claims. 
> > The System Reply hinges on conceiving of consciousness in a certain way and 
> > Searle simply doesn't conceive of it in that way.
>
>
> Look, this is where you are dead wrong.  Searle is speaking about a specific 
> thesis held by Schank and others and then shows that such strong AI systems 
> may pass a TT while not having the semantics that
> the TT was to be a criterion for.


Searle is also attacking people like Dennett as exponents of what he calls 
"strong AI". You have said it yourself in a weaker moment.


>  The best a criterion can do is spell out our original intuitions anyway.


Who says?



>  Both sides' intuitions are that nonconscious processes cause semantics and, 
> say, consciousness.


But Searle's view falls into self-contradiction when he asserts that brains do 
it but computers can't because computational processes aren't instances of 
consciousness ("nothing in the Chinese Room understands Chinese and the Chinese 
Room doesn't either" -- Searle). While there may well be reasons to say 
computational processes can't do it (Edelman and Hawkins both attempt to make 
the case for that), Searle has no reasons aside from the nature of the 
computational processes themselves (they are merely "syntax", "formal", lacking 
in causality, etc.). But his idea of computational processes confuses the 
algorithmic aspect of programs with the processes they become when implemented 
on the right physical platform.


>  There is simply no way to go from Searle's seeing a flaw in 
> functionalism/computational theory of mind to a position that denies the very 
> spirit of the systems reply.


What???


>  So the systems reply is motivated by strong AI or not.


This isn't about "motivations" it's about substance. The systems reply hinges 
on a particular way of explaining consciousness while Searle's rejection hinges 
on another. That difference boils down to whether consciousness is reducible or 
not to constituents that aren't, themselves, conscious. If they are (and 
Searle's assertion that brains cause consciousness suggests he thinks they 
are), then there is no reason, in principle, to suppose computers cannot do the 
same kinds of things brains do. But if they aren't, then you have to either say 
brains can't do it (Searle won't say that, obviously), or else brains do it by 
conjuring something entirely new in the universe into existence. But that is 
dualism and Searle denies being a dualist. So he is in self-contradiction.


> That remains true along with the demerits found in the vacuity of the TT 
> after strong AI is fleshed out as the thesis it actually is.


This is just rhetoric, not an argument.


>  If one wants to waffle, then one is simply flirting with Searle's position 
> under another (two) name(s).


This is just a reiteration of the charge you have previously made which I 
refuted by showing that Searle IS at odds with Dennett's thesis and that both 
he and even you think Dennett is arguing for AI. Additionally, I've pointed out 
the mistake you make when you confuse the quality of the processes in question 
with the kind of system in question. You can't build a bicycle and expect it to 
fly, etc., etc.


>  Searle's biological naturalism allows for AI and both are simply general 
> statements that physical systems may cause and realize consciousness, whether 
> the system be a biological one or an
> artificial one.


No one is denying Searle makes such claims.


> Denying strong AI is not denying AI.


Strong AI = the thesis that whatever it is we call "consciousness" can be 
synthesized on a computational platform.

Weak AI = the thesis that whatever it is we call "consciousness" can be 
simulated/modeled on a computational platform.

Note that Dennett is talking about the first, not the second.


> And denying strong AI is absolutely not a denial that a physical system (like 
> a brain or an artifactual system that has at least the same causal 
> capacities) is necessary for semantics/consciousness.
>

I have already spelled out the contradictions inherent in Searle's CRA vis a 
vis brains and what they do. But note that the description you give immediately 
above is NOT what Searle means by "weak AI" though you once made the mistake of 
supposing it is!

Moreover, I have presented enough evidence here for you to see that Dennett is 
arguing for "strong AI" and that Searle, in opposing Dennett, thinks so, too. 
Enough already, don't you think?


> You are just locating a false dilemma.

>
> >Therefore he either doesn't see, or refuses to see, the point of the System 
> >Reply. Recall that his argument against that reply is it misses his point.
>
>

> His actual response is that the man can internalize the whole system and 
> still not understand Chinese.


And that is because HIS system, the CR, is underspecked.


> In the next paragraph of his response to the systems reply he mentions that 
> he is embarrassed even to give the above reply due to its implausibility.


Who cares? What has THAT to do with the actual merits or lack thereof of his 
response?


>  He mentions the system reply involving the claim that while, accord. to the 
> systems reply now, the man doesn't understand Chinese, the whole system 
> nevertheless does.


This was the mistake of the early System Reply responders. They left out the 
extra step of noting that the system in question must also be adequately 
specked and the CR simply wasn't.


>  Here is where Searle mentions the extra stuff besides the man's rule 
> following being a case of "bits of paper" added to what the man is doing.  
> The point he is making is that no amount of computation (whether in serial or 
> parallel because all parallel processing can be done serially = UTM =CR) 
> added to what the man understands is going to make one iota of difference.
>


And THIS hinges on his mistake in focusing on the quality of the processes 
rather than the nature of the system the processes constitute.


> I know, I know, it is the process of BOTH the program as well as its 
> implementation (hardware) that is the REAL story and not just software in 
> isolation, yada, yada.  But that is to court a form of AI which is not strong 
> AI or to court a waffling of brute physics with the computational level of 
> description which was to be what strong AI was all about.
>


No, you are making the same mistake you used to make, to suppose that by "weak 
AI", which Searle is on record as accepting, he means some form of as yet 
unspecified configuration of machine parts that could replicate what brains do 
without relying on computation primarily. THAT is NOT what he meant by "weak 
AI" so this is not a matter of Dennett or anyone confusing the two AI's but of 
Searle's mistakenly supposing that computational processes are merely abstract 
without causal efficacy in the world on a par with brain processes.


> But you are also right to say (if you ever did) that Searle claims the system 
> reply begs the question simply by assuming the man understands Chinese 
> somehow.  Or wait, you said he said that it misses the point.  I think this 
> is true when he goes on to explain that the systems reply may have the absurd 
> consequence that we can no longer distinguish systems that have a mental 
> component from those which do not.  But in that case it may not have missed 
> the point of
> strong AI after all


Make up your mind!


> --the point amounts to the idea of hylozoism since mind is defined 
> computationally and everything under the sun can be given a
> computational description.


That is a false trail indeed! This isn't about expanding the idea of 
computationalism but about whether computers doing what they do can be 
conscious.

<snip>

>
> >But if he is simply unable to conceive of consciousness in the mechanistic 
> >way proposed by Dennett then he is missing Dennett's point.
>
>
> The whole point of insisting that it is the brain that causes
> consciousness is quite mechanistic enough!


But by doing so, Searle falls into contradiction as already noted, i.e., he 
says brain processes can do what computational processes running on computers 
can't do because computational processes running on computers aren't 
intrinsically conscious! So is he trying to say brain processes are? If so, 
from whence does that consciousness come? Does it just blink into existence in 
certain brains?


> The only shot you have here is to conflate physics with computation and 
> insist that since Searle is denying the plausibility, er, coherence of a 
> computational theory of mind, then he has to have some nonprocess based 
> system in mind.


????


>  But note that your argument has the absurd consequence


It's your argument or, better, your strawman imputed to me!


> that Searle's notion of the brain causing consciousness amounts to his 
> inability to conceive of consciousness being caused by noncomputational 
> mechanisms.


No, that is manifested in his argument for the consequences of the CR (i.e., 
the CRA).


> This is where I see your argument as quite bad indeed, absurd even.  Recall 
> that your other bad argument amounts to the same thing.


Just asserting badness is nonsense. It's just editorializing.


>  Searle doesn't know how brains do it.  He argues against strong AI.  Ergo he 
> must be a dualist of sorts.
>


That's not my argument as you should know by now. If you go back and read above 
in this very post you will see that.


> That is awful but explainable given your conflation of computation and 
> physics.  It occurs so frequently below that it is probably enough to end it 
> right here.  But not until I spank you just a bit more below--lighten up if 
> you are thinking of taking offense!
>

I have. I find such silly editorial comments off-putting and a waste of both 
our time. Talk substance and leave the personal remarks aside and we'll both 
be better off.

>
> >
> > You may recall that I have long said here and elsewhere that in the end 
> > this is about competing conceptions of consciousness.
>
>
> And I have said that you wanted it to be but I've shown that both Dennett and 
> Searle agree that consciousness is caused by physical processes.


They do. But Dennett offers an explanation for how while Searle simply asserts 
it as his belief, while falling into contradiction between what he says about 
the CR and what he says about brains. Self-contradiction is a problem for a 
philosopher like Searle who is purporting to provide a logical picture of what 
can't work.


> So maybe it IS about competing conceptions of consciousness for SOME.  But 
> you can't accuse Searle of dualism when he is simply arguing that strong AI 
> is incoherent--unless you conflate strong AI with physics.


Searle's dualism is manifested by his assumption in the CRA. Without that 
assumption of ontological basicness for consciousness, one cannot draw the 
conclusion from the CRA Searle says we should draw.


>  But that would be to forget about the fact that strong AI is a species of 
> functionalism and functionalism is wedded to a level of computation that is 
> SUPPOSED to be somewhere between the brute physical level and intentional 
> level, if you get the history right.  This is part of my contribution to the 
> topic, by the way.
>
>

Computationalism is the thesis that minds are just certain process-based 
systems operating in a certain way at a certain level of complexity and that 
these systems are the kind computational processes can achieve.

>
>
> >Either consciousness is inconceivable as anything but an ontological basic 
> >or it isn't.
>
> And who really has taught the world how to distinguish an ontological basic 
> from a nonbasic?


The issue isn't this as an explicit thesis but rather whether it is implicit in 
some theses.


> I'll remind you that this isn't about what is conceivable only


You are mistaken. It most certainly is.


> --the thought experiment took something conceived via the TT (Turing test) 
> and showed that the criterion wasn't good enough.


This isn't about whether the Turing Test is a reliable test for intelligence 
but about whether a system like the CR that can pass it would be considered as 
having the understanding we associate with human type intelligence. Recall that 
Searle simply stipulates that the Turing Test is passed by his CR.


> That it is conceivable that physical processes cause consciousness is a 
> thesis shared by Searle and Dennett.  This nonsense about ontological 
> basicness doesn't arise in the case of Dennett OR Searle but may be parlayed 
> into another discussion of other proposals for
> how minds are what they are.


It's the fundamental conceptual difference between their competing views about 
the possibilities of computationally based consciousness.


> You keep wanting to lump Searle with those who would talk of ontological 
> basicness.


No one that I know of uses that terminology but me and I use it to get away 
from the archaic connotations of talk about substances. It's a more generic 
formulation, that's all.


>  The very idea of ontological commitment is shown by Searle to have a merely 
> trivial application as commitment via a complete (or set of) speech act(s).  
> Cf. Searle's _Speech Acts_.
>

Elaborate your point and how it is relevant here then.


>
>
> >If it is, then Searle is right. If it isn't, then Dennett's model is viable 
> >(and therefore Searle's blanket denial of that model is wrong).
>
>
> I've found you saying that for quite a while.


Well congratulations on your memory then.


>  But both Dennett and Searle share the thesis that physical processes cause 
> consciousness somehow.


See my response to this same point which you have already made above!


>  Searle may be wrong about strong AI's viability in your eyes, but you can't 
> be unaware that Searle's reason for thinking strong AI incoherent is that he 
> thinks it too abstract and "not machine enough."
>


I know his rhetoric. So what? Rhetoric isn't argument.


> Now suppose you are aware of Searle's reasons for arguing against the 
> coherence of strong AI.  Then you can't lump Searle in with the "ontological 
> basic" camp, wherever they are.


It's the dualist camp and I already have for the reasons already given, 
numerous times.


>  Now suppose you don't know, then what gives?  Can you be that myopic as to 
> not see that Searle and Dennett are on the same page as far as physical 
> processes causing consciousness?
>

This is the third or fourth time you've made this irrelevant point in this post!

>
<snip>

>
>
> Anyway, my God you have a unique set of pipes, Stuart!
>
> Have a good one!
>
> Cheers,
> Budd
>
> =========================================

You too, Budd. I can see we will never really understand one another. This is 
roughly the same argument we had back on the Wisdom Forum in 2004. Nothing, or 
very little, seems to have changed (though I do think my argument against 
Searle's viewpoint and for Dennett's has become better honed with repetition 
and even with dealing with some ongoing challenges). I wonder, though, if 
discussions like this ever lead to much?

SWM

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/
