[Wittrs] Re: Where's the Dualism?

  • From: "gabuddabout" <gabuddabout@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Mon, 17 May 2010 22:54:12 -0000

Stuart writes:

> But we are adding processes doing more and different things in an interactive 
> way (thanks to the added capacity of parallel processors, say).

If this is meant in terms of brute causality, then Searle is not arguing against it.  If 
the added capacity is computational capacity, then it doesn't matter how much 
computational capacity there is if the system is a system of software on 
hardware.  Deny the distinction between S/H and nonS/H systems and you still 
have no cogent argument against Searle, but also no understanding of weak AI.


>
> That all the things being done are accomplished by the same type of process 
> is irrelevant. There is no reason to think that brains, at their most basic 
> level, consist of specialized and unique activities called believing, 
> feeling, knowing, etc.


The above is a strawman.  If you have a philosophy of mind that is inherently 
uninterested in beliefs, the semantic content of beliefs, and intrinsic 
intentionality, then that amounts to a reductio of the program.  Fodor largely 
believes that such approaches are variously incoherent and also 
aprioristic--perhaps resting on some outmoded (eventually!) criteriological view 
inspired by Witters.

Also, Searle is not arguing that the intentional notions are found at the 
bottom level where consciousness etc. is causally reduced.  The point is to 
have a story that doesn't eliminate what we want to explain, i.e., things like 
ontological subjectivity which, as a matter of fact, must be assumed if traffic 
signals, say, are to be meaningful for conscious drivers.

For Fodor, it is mostly taken for granted that intentionality doesn't go as 
deep as fundamental physics.  In this respect he shares Searle's point that one 
can have causal reducibility without ontological reducibility.  Some special 
sciences operate at levels above fundamental physics without being inherently 
dualist.  Take the discovery of DNA, for example.


>Granted that at a certain level of internal observation of the brain we would 
>expect to "see" distinct (or somewhat distinct) sub-systems that accomplish 
>all of these things, but such sub-systems are built up of more basic 
>sub-systems, etc., and at some point we reach a stage where the underlying 
>processes are no longer differentiable. We would expect the same to be said of 
>computers, especially of those set up to replicate what brains do.
>
> SWM

Yes, but one would do a bit better to have a distinction between S/H and nonS/H 
all the same.  One doesn't want to find zeros and ones at the bottom level.  
Just pure power.  Incidentally, one can have a computational theory of thought 
without having the same trouble that Fodor sees as endemic to all 
eliminativisms and connectionist architectures.  Wanna talk about that?

But it is notoriously difficult to find mechanisms in the brain which fulfill 
Fodor's notion of how narrow content locks on to properties.

Suffice it to say that it is an empirical affair.  Fodor seems to 
wholeheartedly agree with Searle on fundamentals, such that if intentionality 
isn't in the cards of a research program, then it is probably a bad research 
program vis-a-vis the philosophy/psychology of mind.  Though Searle isn't 
arguing against weak AI.

Earlier, Stuart wrote:

> Unfortunately for the CRA, all the premises are not true though the one in
> question (Searle's third premise) can be taken to be true if one starts with the
> presupposition that the absence of consciousness in the configuration known as
> the CR is definitive evidence that the constituents of the CR cannot do the job
> required.

They (the constituents of the CR, spelled out as running computer programs which 
are syntactical through and through) can't even be considered as a possible 
coherent hypothesis, because they are defined in terms of computation.  
Once you conflate physics with computation, however, then everything is 
computational and you have a theory of mind which can't distinguish systems 
that have minds from those that don't.  The thermostat will have low-level 
beliefs, etc.  The point is that that is preposterous and a reductio.



> But THAT isn't established except by empirical discovery or reliable conceptual
> analysis.

I think Searle pretty reliable.  But if you want to talk about a different 
animal that Searle isn't arguing against, you could do better than to get 
Searle wrong beforehand.

Cheers,
Budd




> Thus one is placed in the position, if one wants to accept the CRA's
> conclusion as true, of presuming it is true at the outset because one already
> subscribes to an implicit presumption of dualism (that understanding cannot be
> reduced to anything more basic than itself).




