[Wittrs] Re: Understanding Dualism

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Tue, 24 Aug 2010 20:46:46 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, "iro3isdx" <xznwrjnk-evca@...> wrote:

> --- In Wittrs@xxxxxxxxxxxxxxx, "SWM" <SWMirsky@> wrote:
>
>
> > responding to http://groups.yahoo.com/group/Wittrs/message/6197
>
>
> > SWM:
> > I would disagree though if you think that AI folks, in supposing
> > the "mental" is strictly computational are positing some kind of
> > non-physical process at work.
>
> Whether or not computation is physical is of no importance here. The
> point is that it is separated from the physical input (what I called
> "process A").
>

Then on what grounds do you think that computationalists are uninterested in 
the processes which collect and deliver information about the world to the 
processor? Of course there are different kinds of information and information 
delivery. An apparatus might simply capture signals about an object which allow 
construction of a visual image of it, an image some other part of the 
processor is equipped to read (that is, interpret); or it may receive abstract 
information about the object in the form of coded data which it is similarly 
equipped to read (interpret).

Is your concern with THAT distinction?

Are you making the point that all AIers are focused on the latter, coded kind 
of data rather than the former signal type which results in some kind of 
internal encoding (to form representations and thence useful data for 
interpretation)?


> If process A produces what we consider to be intentional
> representations, then a good part of the requirements of  intentionality
> have to be there in process A, whether or not we  consider process A
> itself to be intentional.


I don't see how this poses a problem for the AI thesis. Perhaps we have 
different understandings of what AI entails?


>  If process A is  generating representations
> that are about the physical world, then  something it is doing has to be
> about things in the physical world.
>

I see no reason to assume that what you have named "process B" must stand apart 
from the operations you ascribe to "process A" if this isn't about the 
distinction between physical and non-physical.

If the mental or the computational, which you have placed in the "process B" 
basket, are conceived of in a physical way, then there is no real distinction 
that I can see between them and what is placed in the "process A" basket.



> The dualistic division leaves that part of the requirements of
> intentionality absent from process B.


Dualism is the supposition that there must be something other than purely 
physical processes underlying mental occurrences. If you think "Whether or not 
computation is physical is of no importance here" then where is the dualism you 
are opposing?

This is why I initially raised the question. I really am not sure that your use 
of "dualism" fits the classical philosophical use, and so I am trying to 
determine whether you are perhaps making a different point. If your opposition 
to dualism really isn't an opposition to what is usually intended by that term, 
then maybe this has led to some of our past misunderstandings?


> And as long as thinking is  said
> to occur within process B, that leaves the thinking as without  some of
> the requirements of intentionality.
>
> Regards,
> Neil
>
> =========================================

First as to "intentionality" -

Your statement above leaves "intentionality" unexplicated. Most of us will be 
able to say we know it when we "see" it, of course. Although it is a rather 
abstract philosophical term, I think it's pretty clear to most here that what 
is meant by "intentionality" is the aboutness of our thinking, i.e., that to 
think anything at all is to think about something. We recognize it in ourselves 
when we introspectively consider what we are doing when we think, and we 
recognize it in others when we realize that their actions demonstrate awareness 
of others, their environment, etc. But where is this "aboutness" in your 
"process A"?

I suspect you'll say that it's in the fact of perception itself, i.e., that 
perceiving is perceiving something, which qualifies it as being about 
something. I'm not sure, however, that that is a fair account. To think about 
anything need not require perception. Were we cut off from all perception, 
should such a thing ever happen, it is hardly unreasonable to suppose that we 
could continue thinking about things, at least for a time, until our 
representational structures broke down for lack of reinforcement. Similarly, 
there are many creatures manifesting degrees of perception of which it would 
be hard to say that they are actually thinking about what they perceive.

Of course you might want to say this is not merely about "thinking about" but 
about connecting in an aware way with one's environment, so that thinking in 
any normal use of that term isn't really essential. But then to what degree is 
the insect or, even more so, the amoeba anything more than an organic 
automaton? And if an organic automaton can be said to have aboutness at this 
level, then why not an inorganic automaton with the same levels of 
functionality thanks to its operating elements? In that case life is no longer 
the essential precondition you often seem to presume it is.


Second as to the matter of dualism -

If what you place in "process B" is conceivable as being physical, as you seem 
to be saying, then there is no reason for it to be differentiated from what you 
have placed in "process A", as far as I can see. In which case, where is the 
dualism?

If computations are physical processes then what is dualistic about supposing 
them to be the operations that, under certain conditions, have consciousness in 
the form of intentionality, and so forth?

I know that you have linked your view to the notion that only living things (or 
things which, like them, are homeostatic systems with the need and capability 
of establishing and maintaining an equilibrium state) can achieve 
consciousness. But what is not clear to me in this is how you take this claim 
about the importance of homeostasis forward to the actual production of the 
condition we call "being intentional", etc.

You have, in the past, suggested that the mechanism that bridges the need of 
the homeostatic system (not the operations of some synthetic inorganic device 
exclusively, of course, but any function that results in a given outcome, 
including the function of homeostatic systems themselves), the operation that 
causes the occurrence of consciousness, is what you have termed pragmatics (as 
in operations that are purposeful, e.g., operations to maintain the system's 
homeostatic state). But it is not at all clear how "pragmatics" serves as the 
missing link in an account of the occurrence of consciousness.

That homeostatic systems, like living organisms, are constructed to function in 
a way that sustains their equilibrium state does not, in any obvious way, tell 
us how some living organisms with brains (perhaps only some brains) then become 
conscious.

Certainly this may tell us how and why evolution leads to consciousness in the 
cases of some organisms (and perhaps not others) but it doesn't tell us what is 
being done by the conscious organisms (what is occurring in their key organs, 
like their brains) that is, in effect, making the instances of consciousness 
happen.

I know I am straying a bit beyond what you were posting about with Budd here, 
but I am just trying to understand your position in the context of other things 
you've said with regard to this question.

You have indicated that you have a better way of thinking about what makes for 
consciousness than is typically found among philosophers or even some 
scientists (AI researchers and perhaps neurobiologists of a certain ilk). I 
just want to understand better what you have in mind by that claim.

Thanks.

SWM

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/