[Wittrs] Re: Stuart's questions

  • From: "gabuddabout" <gabuddabout@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Mon, 02 Aug 2010 22:50:11 -0000

Bruce writes:

> >There are processes that exist in the
> absence of anything physical (some mental substance) or there are
> processes that can't be described in physical terms (a host of
> examples)? Obviously I choose the latter. Now you point out that the
> physical is an essential condition. True. But the trick is to relate the
> physical to the non-physical.


Stuart replies:

> It only looks like a trick to you if you persist in conceiving two distinct
> aspects of the universe. If one can see how this presumption isn't needed, there
> is no trick because there is no "hard problem" and no "mind-body" problem to
> solve.


That's not quite right, Stuart.  There is a hard problem, no one has solved 
it, and Searle's point is that functional explanation doesn't help.

Actually, if you think about it, a computational functionalist just might 
prefer a point of view where the hard problem can't arise.

But that, for Searle, is not good science.


Stuart continues:

> There is only how we explain the occurrence of subjectness in certain
> physical entities but not others. And to get there we have to see if subjectness
> can be accounted for by physical processes.


The above sounds as if it is an open question--that maybe we can discover that 
consciousness can't be so explained.  That is certainly not Searle's position, 
but it may in fact be endemic to functional explanations, which in principle 
give up on solving the hard problem.

No one is going to have a sound argument as to why the hard problem is either 
unsolvable or a red herring.

Hacker, though, would accuse those who think it is a legitimate scientific 
question of being captive to a mereological fallacy:

Contra Searle, for Hacker, it is the person that is conscious, not the brain.  
And persons are persons in relation to external environments.  Ergo, explaining 
consciousness by brain processes is to mistake a part (brain) for the whole 
(person plus environment).

I think this is just a dodge of philosophy in order to prevent a solid 
connection between philosophy and the hard sciences.

It doesn't matter.  You can have Searle's cake and all the externalism you 
want--it is a specific scientific issue as to how the brain causes 
consciousness; this doesn't even touch other matters of practical interest.

IOW, you can do ordinary language philosophy even if a Searlean.

Stuart also writes:

> Searle, too, pays lip service to the idea that brains do it. But he never 
> attempts to explicate how.

No one knows yet; that's why.  And neither does Dennett, even though he wrote a 
book as if he had explained it already.

> However, it is NOT enough to leave this ambiguous, as Searle does.

Are you really making the mistake right now of saying that because one can't 
explain how the brain does something like cause consciousness, one is thereby 
leaving something ambiguous?


> Dennett, for his part, DOES offer an explication of how, on the other hand.

No he doesn't.  Here's a common form of Dennett's argument, though:

1.  Some systems cause consciousness.

2.  This is a system.

Ergo 3., This system causes consciousness.

And then we note that Searle's argument is the same but with one qualification: 
we know at least that brains do it.


> He proposes that it is done in a way that is roughly analogous with how 
> computers work.

Roughly analogous.  Watch how we can generate ambiguity up to the nines now:

1.  PP (program processes) = BP (brain processes).  OR

2.  "BP plus" does not = PP, where the "plus" is to be thought of as 
information processing.

Ideally, BP explanation is not about information processing but blind physics.


> To get there, he reconceptualizes consciousness in a way that shows how it 
> COULD be the outcome of just such processes combined and operating within a 
> complex and dynamic system.

Well, we already have an example of a system.  It is the brain.  So we already 
knew what Dennett is trying to sell.  The ambiguity in his position is whether 
some of the dynamics of a system that can do it MUST/CAN be fleshed out from a 
computational point of view.  If Dennett is right about software, he might have 
it in mind that the real explanation is BP anyway.  It's just that Dennett's 
bottom line is zeros and ones, whereas Searle's bottom line is BP without 
functional description a la PP.  If the bottom line is zeros and ones for 
Dennett, and the interpretation of the zeros and ones comes down to outside 
interpreters, then the zeros and ones may not be intrinsically adding anything 
to a BP story.

So if one is careful, one can create the following salad bar of options (such 
options as I'll write will simply pale in the face of the sixteen or so 
possible positions in philosophy of mind as set out by C.D. Broad):

1.  Maybe it is incoherent a la Hacker to think that our subjective categories 
can be made into good science.  To avoid category mistakes, one inspired by 
Hacker might think to house their speculations in computational terms such that 
the study of consciousness can get along fine without studying the brain.  
Hacker might be a good behaviorist when it comes to science and, say, a 
libertarian when it comes to something like "the space of reasons."

2.  Surely Searle is right, but even so all we might get (for a long time) are 
correlations and not causation.  One can parlay this into the thesis that 
solving the hard problem empirically nets us a theory that won't be completely 
confirmable--"What if consciousness only happens within universes that are 
fully describable in no less than ten dimensions, such that the extra dimensions 
are to be taken (somehow) into account in a good theory of mind?" for example.

3. I'm tired.


Stuart continues:

> I don't assume anything. It's a model, a thesis, indeed, in scientific terms 
> an hypothesis. The question is how does the brain do it. Dennett has offered 
> one possible answer. As Dehaene said in that material we linked to, "it turns 
> out Dennett is right". He was, of course, only referring to one aspect of 
> Dennett's claims but that's how science works after all, one element at a 
> time.
>
> SWM

I'll assume that where Dennett is right is where Searle is right (the "some 
system does it" argument).  Further, I'll assume that what Dennett is okay with 
(PP explanation), Searle is not.  Searle's reason is that endemic to PP (or any 
computational model of the brain or OTHER AI system) is a level of explanation 
that is too abstract to count as blind BP.  To say that the system is 
physically doing information processing is systematically ambiguous.  Is the 
information processing BP, or is it BP plus an observer-relative ascription of 
information processing, because the zeros and ones need to be interpreted from 
outside the system?

The homunculus mistake is endemic to PP in the following way--we need a 
homunculus outside the system to interpret the symbols, and if we had a 
homunculus inside the system shuffling the symbols, the homunculus wouldn't 
necessarily understand how the symbols are grounded.

When Searle says that he rejects the systems reply, he makes it plain what he 
thinks the systems reply is presupposing:  It is presupposing that it is the BP 
system PLUS the information processing that causes the understanding.  And that 
account is rejected by Searle because the information processing is just an 
interpretation of the BP, which adds no extra causality to the BP of the 
computational system in question.

Searle just thinks it absurd to allow that the information processing is really 
BP.  The reason, again, is that it is not--an outside homunculus is necessary 
to ground the symbols, such that the symbol grounding is not intrinsic to the 
machine language.

We, on the other hand, don't have a symbol grounding problem endemic to 
computational functionalism.

But then again, sometimes philosophy goes binge drinking (or worse, Spartan 
Mormons) and solves a problem that at the same time is said not to be a 
problem, while neither touching on the problem nor acknowledging the existence 
of one.

The hard problem is about exactly how the brain does it.

If that is a red herring, as Stuart once suggested, then what is presupposed?

I'll tell you:  Denial.

Such denial can go to great lengths:

1.  Label the other side before they label you.

2.  Pretend that the computations are meant as BP, thus agreeing with the 
spirit of Searle's views while stating that Searle created a strawman.  Fodor 
is accused by Dennett of a fair amount of strawmanning, given that Fodor has 
had beefs with much of mainstream philosophy over the years.  And that's why he 
is someone to read alright!

3.  Shift the topic so that sometimes strong AI is about systems different from 
brains.  Then switch back to saying that Dennett's thesis is one about how the 
brain actually works, though strong AI need have nothing to do with how brains 
work, because the issue is about the multiple realizability of hardware able to 
sustain programs which cause behavior and yet are explained both in BP and BP 
plus (=PP) terms.

4.  Finally get caught misrepresenting Searle's clear and distinct ideas on the 
issue.

5.  Finally, learn that Dennett hasn't solved the problem, which Searle also 
hasn't, though they both agree that some system gets it done.

6.  Point 5 is compatible with the hard problem as well as with the view that 
the hard problem is hard for some precisely because of a wrong view of mind.

7.  The wrong view of mind for Dennett is that there are intrinsic mental 
contents in brains.  Maybe he vacillates, maybe he doesn't--I've heard that 
he's not really really an eliminativist, but I'm not so sure he isn't a zombie 
sometimes.

8.  The wrong view of mind _study_ for Searle is functionalism, since it 
dismisses the hard problem, is inherently eliminativist, and commits a 
homunculus fallacy that a BP explanation need not.

9.  The right science is to explore what we can at different system levels.  No 
one ought to expect to explain concepts by studying molecules, even though vast 
systems of molecules are involved in our having concepts.  Likewise, it doesn't 
really make sense to think that duplication of behavior via computation amounts 
to anything but a sort of Disneyland.

10.  Fodor's view of better science (if it is romantic he tells us to blame Tom 
Kuhn): Isolate smallish systems and set up scientific environments to test 
them.  The trouble Fodor has with Dennett:  Too much programming too soon.

11.  Assume PP to be how the brain does it.  The brain allows for semantic 
content.  So PP must also.  Assume that PP is just another way of explaining 
something in BP terms.  Then you have a thesis which is consistent with 
Searle's, though he would prefer to drop PP in favor of BP.  Assume that PP 
explanations are in fact different from BP explanations in virtue of 
computational roles.  Then one must come to grips with whether computations are 
ever intrinsic to systems or whether we just use some systems as aids in 
computing, file sharing, etc.

I'm tired of this discussion.  But have we learned at least what it is that 
Searle has a beef with?

And have we learned a bit about why there is in fact a beef with Searle from 
Bruce's point of view?

Cheers,
Budd



=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/
