[Wittrs] Re: Understanding Dualism

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Fri, 27 Aug 2010 03:32:46 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, "gabuddabout" <wittrsamr@...> wrote:

>
> --- In WittrsAMR@xxxxxxxxxxxxxxx, "SWM" <wittrsamr@> wrote:
> >
> > --- In Wittrs@xxxxxxxxxxxxxxx, "gabuddabout" <wittrsamr@> wrote:
> >
> > > --- In WittrsAMR@xxxxxxxxxxxxxxx, "SWM" <wittrsamr@> wrote:
> > > >
> > > > I think, Bruce, that you are stuck in the same kind of picture of mind 
> > > > as dualists whether you espouse dualism or not. It's easy, after all, 
> > > > to deny it (Searle does regularly) but if it walks like a duck and 
> > > > quacks like a duck . . .
> > >

> > > I believe Stuart's response here is just misinformed.  He's demonstrated 
> > > over the years that he can't read simple English when it comes to Searle 
> > > and Stuart doesn't really get that Bruce is coming from Hacker's 
> > > perspective such that consciousness has to do with persons who are more 
> > > than their brains, yada, yada.
> > >
> >
> >

> > Budd, it doesn't matter whose "perspective" Bruce may be coming from. I am 
> > criticizing the position, not questioning the authority on which it is 
> > held! As to my capacity to "read simple English", well these kinds of 
> > remarks are getting increasingly tiresome.
>

> But again you just failed to note the point made.  One can hold Bruce's 
> position which is like Hacker's without the position implying dualism.
> >

Budd, my suggestion that Bruce is a dualist hinges on his repeated references to 
the mind-body dichotomy (invoked to explain the so-called problem with 
supposing brains produce minds), on his invocation of "substance" in his 
explanations, and on his insistence that mind can't be said to be CAUSED by 
brains because, if mind isn't a physical thing, it cannot be a product of a 
physical thing.

I take all of these claims to be expressions of a dualist picture and, 
moreover, Bruce has said in one of the recent threads that on the account I 
have given of his position, which he acknowledges is not unfair, he would be 
considered a dualist.

Bruce denies being a dualist on the grounds that he doesn't care about the 
term, doesn't invoke it to defend his position that brains can't be said to 
"cause" minds, and so on. But merely denying something doesn't make it so. 
Searle himself accused Chalmers of epiphenomenalism despite Chalmers' denial of 
the charge (in The Mystery of Consciousness), so denial alone cannot be taken 
as dispositive (unless you want to say Searle couldn't make that criticism of 
Chalmers).


> >
> > > Bruce simply, along with Hacker (and Neil and probably the whole 
> > > populations of Germany and France), thinks that the thesis that the brain 
> > > causes consciousness is somehow problematic whereas Searle thinks such a 
> > > view simply retarded.
> > >
> >
> > Irrelevant to what I said to Bruce.
>
> But to an historical point.
>

Anything can be relevant to something but that doesn't mean it is relevant to 
the issue at hand.


> >
> > > On other occasions, Stuart tried to distinguish Searle's position as 
> > > consciousness arising/caused "full-blown" from the brain, as opposed to 
> > > some functional account of system processes which he conflated with BP 
> > > while yet wanting the account in functional terms PP.
> > >
> >
> > See my longstanding critique of the CRA and why it fails. The problem 
> > alluded to here is explicated there.
>
> The critique was based on not being able to understand that the point of the 
> CRA was about the inadequacy of functional explanation.


I think that 1) YOU don't really understand the full point of the CRA and 2) 
you certainly DON'T understand my critique of it.


>  You invented a way for strong AI to be about BP,


I wish I could claim inventor's rights but, as we have seen, Dennett got there 
first and so, apparently, did a number of other critics (some of whom we've 
read on-line). I like to think that I have formulated the argument against the 
CRA better than anyone else had yet done but that is certainly not something I 
can claim given my failure to have convinced others on this and earlier lists.


> something that Searle wouldn't argue with if true.  But strong AI is about PP 
> after all, no matter how many tries you've had for conflating it with BP.
> >


Searle is very confused with his CRA, however credible a professional 
philosopher he may be. But then, the issue of minds is a tough one and many 
philosophers have foundered on that rock before him.

If by "PP" you mean "parallel processing" (which is what you meant when you 
first coined this latest of your acronyms), then yes, I agree that "strong AI" 
IS about "PP" precisely as Dennett says. And Searle's CRA argues against the 
possibility of "strong AI", i.e., he argues against Dennett's position. 
Therefore (and do pay careful attention now) Searle and Dennett are not in 
agreement on this issue.

Since my position on this particular issue is virtually the same as Dennett's, 
it stands in the same position, vis a vis Searle, as Dennett's does. (Got that 
so far?)

Therefore what I have said cannot be consistent with what Searle claims on this 
issue DESPITE THE FACT THAT YOU HAVE PREVIOUSLY REPEATEDLY ARGUED (taking a 
leaf from PJ's book) THAT MY POSITION REALLY AGREES WITH SEARLE'S.

Can you now see just how absurd that position of yours is?

Now as to your PP-BP dichotomy, note that Dennett's position is about the 
hardware running the software. Parallel processing does not occur in the 
software BUT IN THE HARDWARE. So, if this is about parallel processing, then 
yes, it is about the hardware! And since Searle denies Dennett's thesis, which 
is about software running on a particular configuration of hardware ("PP"), 
Searle's position cannot allow that adding more robust hardware, in the form of 
parallel processing, puts the result beyond the CRA's reach.

In arguing against Dennett (as we have seen he does) and in arguing against the 
system reply (as we know he does), Searle IS arguing against parallel 
processing hardware running software.
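If an illustration helps, here is a toy sketch of my own devising (nothing from 
Dennett or Searle; the function and inputs are entirely hypothetical) showing 
that parallelism is a property of the thing doing the executing, not of the 
program text itself:

```python
from concurrent.futures import ThreadPoolExecutor

def double(n):
    """A purely formal rule: symbols in, symbols out."""
    return n * 2

inputs = [1, 2, 3, 4]

# Serial execution: one step at a time, one worker.
serial = [double(n) for n in inputs]

# Parallel execution: the SAME program text, farmed out to several workers.
# What changed is the substrate doing the work, not the instructions.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(double, inputs))

assert serial == parallel == [2, 4, 6, 8]
```

Same symbols in, same symbols out; only the hardware arrangement doing the 
work differs, which is the sense in which "PP" is a fact about hardware.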



> > >

<snip>

> >
> >
> > What I am getting at is answering the question of how brains do it.
>
> You're not getting at it at all.  Like me, you are being a cheerleader for a 
> Searlean position.
> >

Oh for chrissake, see above.

> >
> > >  But this isn't an argument/can't be an argument against the thesis that 
> > > the brain causes/realizes cons. in some synchronically causal way that we 
> > > have no choice but to look into by first looking for NCCs.  On Searle's 
> > > view you can enjoy your phenomenology without thinking that it rules out 
> > > his brand of brain research when it comes to a good scientific account of 
> > > mind--the philosophical account is indeed a sort of identity theory 
> > > without ontological reduction.
> > >
> >
> >
> > Again totally irrelevant to the discussion here.
>
> Right.  It is about you telling everybody exactly how the brain does it!
> >

Nope, I am not telling anyone exactly anything. I am proposing a way that 
consciousness can be explained in a manner that is analogous with computational 
operations on computers. Whether this can work in the real world is for 
empirical researchers to experiment with and determine. It cannot be 
definitively decided on lists like these or by anyone who is merely arguing 
about it without benefit of empirical data.

I think your problem is you don't understand the difference between discussions 
of possibilities and scientific claims.

I also think some people have such a strong aversion to this particular 
thesis, for whatever reason, that they simply cannot accommodate themselves to 
the possibility that it might be true. The result? They seek various logical 
and/or a priori arguments to shut a door they fear may already have been 
opened.


<snip>

> >
> > . . . I said I once found Searle's CRA convincing and then, after giving it 
> > more thought, concluded it wasn't and then set out to discover why. I 
> > kicked a few ideas about that around but finally settled on two problems 
> > with it:
> >
> > 1) It is structured equivocally because the critical third premise (or 
> > second, depending on the iteration being considered) depends on an 
> > equivocation for the conclusion to stand; and
>

> There is absolutely no equivocation--where is the equivocation in saying that 
> syntax is neither the same as nor sufficient for
> semantics?


"Sufficient for" can mean either "sufficient to cause" or "sufficient to say 
that the one (syntax) is the other (semantics)".

That is, the second part of the sentence allows a reading either of 
non-causality or non-identity.

However, only the claim of non-identity is self-evidently true (as Searle puts 
it).

The claim of non-causality is neither self-evidently true nor is it seen to be 
true by considering the CR itself (unless one presumes dualism, which puts the 
one making such a presumption in contradiction with Searle's other position 
that brains [which are physical] "cause" minds and puts Searle, if he is the 
one making it, in contradiction with his own denial of being a dualist).

Of course, for the conclusion of the CRA (a conclusion of non-causality) to be 
true, the premise in question has to be true on the non-causality 
interpretation. But, as we see above, it cannot be true unless dualism is 
assumed; yet Searle denies dualism and actually holds a position about brains 
which contradicts the very dualist thesis he applies in the CRA.

Now we can still grant that the non-identity interpretation of the premise is 
true, precisely as Searle claims, but that doesn't matter because NON-IDENTITY 
DOES NOT IMPLY NON-CAUSALITY.
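Since this point seems to need spelling out yet again, here is a toy model 
(entirely my own illustration, not anything in Searle's text) in which 
non-identity holds in every case and yet so does causality, which is all it 
takes to show that the one does not imply the absence of the other:

```python
# A toy world of cause-effect pairs (hypothetical examples of my own).
causes = {
    ("striking the match", "flame"),
    ("running the program", "observed behavior"),
}

def is_identical(x, y):
    return x == y

def causes_rel(x, y):
    return (x, y) in causes

for x, y in causes:
    # Non-identity holds in every case...
    assert not is_identical(x, y)
    # ...and yet causality also holds in every case.
    assert causes_rel(x, y)
# So "X is not Y" cannot, by itself, license "X does not cause Y".
```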

Oy. I am really tired of explaining this to you again and again and I suspect 
others here may be tired of reading about this yet again.

Instead of simply denying this in the future, why don't you bite the bullet and 
ACTUALLY TRY giving us an argument as to WHY THE EQUIVOCATION I HAVE IDENTIFIED 
DOESN'T REPRESENT A GENUINE EQUIVOCATION.

That actually might be nice for a change of pace.


>  Note that the first premise lays down that programs are formal.

Not disputed by me for the purpose of this argument, though, obviously, we have 
some disagreement about what it means to say of programs that they are 
"formal". You think it means they are abstract, whereas my point remains that 
they are sets of coded instructions, that the only way they count is when 
IMPLEMENTED by the platform onto which they have been programmed, and that it 
is the implementation this is about.
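A minimal sketch of my own (using the Python interpreter as a stand-in 
"platform"; the example is hypothetical, not anyone's actual position) of the 
difference between a program as bare formal text and a program as implemented:

```python
# A program, considered as a bare set of coded instructions, is just a
# string of symbols; nothing happens until some platform implements it.
program_text = "result = 6 * 7"   # purely formal: marks on a page

namespace = {}
# Only when a concrete platform (here, the Python interpreter) IMPLEMENTS
# the instructions does anything actually get done.
exec(program_text, namespace)

assert namespace["result"] == 42
```

Before the `exec` call, `program_text` is inert text; after it, something has 
been done in the world, which is the sense of "implementation" I keep pointing 
to.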

>  And note that semantics involves mental content (except for eliminativists 
> who are trying to squeeze semantics from syntax by
> conflating syntax with physics).


Note that saying "semantics involves mental content" is not to say very much 
unless one can elaborate what that means. What IS "mental content", and so 
forth? But, again, for the purpose of critiquing the CRA it's not necessary to 
get into this, since the whole problem with the argument can be traced to the 
equivocation in the third premise. So I grant you that "semantics involves 
mental contents". So what?


>  The "insufficient for" claim is about the insufficiency of functional 
> explanation--it is shown that a machine may pass a Turing test and still have 
> no semantics.
> >

Oh nonsense. No one doubts that such a test could be passed, in the sense that 
it could appear to be passed. What is in doubt is whether that means the 
machine really is conscious, EVEN IF IT DID SEEM TO PASS THE TEST.

Now that premise is quite precise. It reads (in one of the standard forms): 
"Syntax doesn't constitute and is not sufficient for semantics."

Note that it doesn't say "not sufficient for explaining".

But let's try it your way and stipulate that it does. In that case it would 
have NO implications for the conclusion of the argument, because the conclusion 
requires a true claim of non-causality, since it concludes: "therefore 
computers can't cause consciousness".

Interpreting the text as you would have us do, as a claim of insufficiency of 
explanation, would tell us nothing about what a computer could or could not do 
but only about how we could or could not explain whatever it is the computer 
did. But if it walks like a duck and quacks like one . . . if a computer acts 
conscious, then who cares how we explain it? Note, the CRA is about denying 
that a computer ever could walk and quack like that duck!


> > 2) The equivocation masks an implicit assumption in the interpretation 
> > Searle wants us to make of the CR itself, an assumption which is in 
> > contradiction to what Searle says elsewhere about brains AND which 
> > contradicts Searle's own assertions about dualism.
>
> There is no equivocation--

There is.

> Searle's implicit assumption is that brains cause consciousness via BP, and 
> that PP is STILL too formal.


"PP" is "BP" even if the operations it implements are contained in, and fed to 
it via, formalized coding of the information.

>  But if YOU want to equivocate between functional explanation and BP without 
> PP explanation, then you are on notice that I see what is happening.
> >

You are kidding yourself.

> > Now that's all I want to say about this as it's an old argument here and 
> > elsewhere by now and we have kicked it around many times. If you don't 
> > agree with my points that's fine. I will simply say you are wrong, most 
> > likely because you don't understand them. Enough said.
>
> I understand exactly how you come to your conclusion.  But it is crooked and 
> not the product of honest toil.
>

You don't understand diddly about any of this.


<snip>

> > In the above I am not talking about Searle but about the implications of 
> > something Bruce said! Everything in the universe is not about Searle, Budd. 
> > There are other issues and thinkers, after all!
>
> The topic indeed revolves around Searle's position that you butcher by 
> conflating PP with BP in order to miss Searle's point, while in essence 
> sharing his view.
> >

Not diddly. See above for a refresher course.

<snip>


> > > > Well sure, but that is not the key component because a brain can be 
> > > > conscious even if deprived of sensory inputs as some scientific 
> > > > experiments have shown.
> > >
> > > That's a good point.
> > > >
> >
> > You're scaring me, Budd! Are you all right?
>
> You would be a moron to think that we disagree on a lot since I pointed out 
> that the upshot of your position is consistent with Searle's.  But your 
> critique of the CRA is utterly moronic.
> >


Don't come to me for another refresher. It's all in the text I've just typed 
above!

<snip>


> > > Here's how [Bruce's position] is not dualistic, Stuart.  All Bruce needs 
> > > to do is say he's uninterested in how the brain causes consciousness when 
> > > it
> > > comes to his "being in the world."
> >
> > Then he is not addressing the issue I am addressing and there is little 
> > point of his going back and forth with me over the issue I have focused on 
> > -- and I certainly don't challenge his desire to think about minds apart 
> > from brains. More power to him! Just don't latch onto that as an answer to 
> > a question about how brains relate to minds.
>

> You and Fodor both are identity theorists of a kind, along with Searle.  The 
> "direct realists" are going to be made fun of below, if I remember to append 
> it.
> >

I have no comment on Fodor. I find him opaque but I haven't read enough of him 
on this matter to say anything definitive. Your position on my position though 
is a total mess. You are all at sea where my arguments are concerned.

> >

<snip>

> > > Is replication simulation or pound for pound emulation?
> >
> >
> > You know or should know my view: By "replication" I mean to completely 
> > copy, down to the smallest RELEVANT operating details whereas by "simulation" 
> > I mean what Searle means, i.e., to model digitally using computational 
> > technology. Of course, both words in ordinary language could do duty in 
> > both cases so here is a case where, for clarity, we want to stipulate 
> > (i.e., indicate which of a set of legitimate meanings of the term in 
> > question we are invoking). I have done this in the past. Now I have 
> > reiterated. Perhaps you will commit it to the memory banks now?
>

> Okay, just as long as cashing out "relevant" meets Searle's causal reality 
> constraint which he thinks PP as well as serial processing doesn't.  But 
> maybe one can conflate PP with BP and get away with a Searlean position which 
> is noted not to be his osition because he argues against strong 
> AI..............
> >

Searle doesn't agree that "Strong AI" on parallel processors is any more 
possible than it is on serial processors, because Searle thinks computational 
processors of any sort are not candidates for that job for certain logical 
reasons. The reasons articulated by him in the CRA have already been shown by 
me to be flawed above: an equivocal third premise, and a suppressed dualism 
which contradicts both his other claims about mind AND his own denial of 
dualism. Hence, his position is a mess. His later argument is also mistaken, 
and we have dealt with that elsewhere as well.

> >
> > >  Is PP conflated with BP such that a PP explanation is equivalent to some 
> > > BP explanation?
> >
> >
> > This is fiddle-faddle. It's not about which explanation is better for what 
> > a computer can do but about what a computer can do.
>
> OY!

My sentiments exactly. Credit to Peter Brawley for introducing the term in 
earlier debates on Analytic.

> >
> >
> > > Then you are with Searle even though without understanding why he thinks 
> > > functional explanation insufficient for a sound philosophy/science of 
> > > mind.
> > >
> >
> > See above.
>
> OY, OY!
> >

Time for your refresher again (though I suppose it still won't do you much 
good).

> > <snip>
> >
> >
> > Searle misses the point of the system reply.

>
> Um, they are conflating PP with BP at some point or not.  If not, they are 
> contradicting themselves.  If so, then it is Searle's original position and one 
> he's not arguing against when replying to the systems reply.
> >

If anyone following this thread thinks you are making sense in your logic, I'd 
like to hear from them. Otherwise I think I'm going to start passing on your 
posts again. I cannot believe the way you have mangled all this!

SWM

<snipped all the rest>
