[Wittrs] Is Homeostasis the Answer? (Re: Variations in the Idea of Consciousness)

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Wed, 03 Feb 2010 21:53:37 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, "iro3isdx" <xznwrjnk-evca@...> wrote:


> --- In Wittrs@xxxxxxxxxxxxxxx, "SWM" <SWMirsky@> wrote:
>
>
> > Is this picture really all that different from Dennett's proposal
> > that brains run processes in the way computers run algorithms?


> Here's a quick comparison.  I'll use "A:" to prefix the AI/Dennett
> view, and "N:" to prefix my view.
>


This is very difficult for me to parse (even having already read it once through -- 
those who carp about my tendency to respond on the fly, please take note!), but 
I will attempt to follow it and to ask questions or comment in the appropriate places.


> Information
>
> A: Information is a naturally occurring part of the world, and is picked
> up by sensory cells.
>
> N: Information is inherently abstract, so does not exist apart  from its
> construction and use by humans (or other cognitive agents). We interact
> with the world in order to construct  information, and we use sensory
> cells in that interaction.
>


What does the claim that information is a naturally occurring part of the world 
rather than being abstract amount to?

In one sense, no doubt, all that the "A" people would want to say is that 
abstractions don't occur in any causal way; they are ideas, general ones in 
fact, covering a number of cases. Hence they are only in the mind (as you seem 
to be saying, Neil). Nevertheless, if that WERE the case, if they had no 
concrete reality at all, we would have the situation Searle envisions with 
computer programs, i.e., nothing happening in the world, just ideas in some 
heads, as it were: the meaning of the computer codes in the minds of their 
programmers and the understanding in the minds of the computer users.

Yet things DO happen in the world, as we know, and computers programmed in 
particular ways certainly cause physical events. People with ideas in their 
heads take actions.

A way around this? Josh would likely say that all abstractions are really 
elaborations of very precise, concrete things, events, what have you. There is 
not the class of X but the name we give to X-1, X-2, X-3, etc., until all the 
relevant X's are exhausted in some relevant context (the class of all X's in Y).

Where are the classes themselves? Well, not anywhere, really. Speaking of 
classes is just a way of organizing our references to many X's. One could, 
conceivably, organize these in other ways, too: all the X's in Y, all the X's 
that Z, etc. Classifying and grouping just seem to be natural things we do. 
It's how we think about and talk about the multiplicity of our phenomenal world.

But what is an X, itself, then? Josh (again, I am guessing, but perhaps I am not 
far off) would say it's this particular X and this one . . . and this one, and 
so on. But as soon as we name whatever the referent of X is and say THAT is an 
X (what I mean by "X"), we are back to this idea of "information," aren't we? 
That is, a named particular is no less an instance of information than a 
generalization about many X's: calling a referent an X is informational, no 
less than grouping multiple X's in some fashion. To name something is already 
"informative"; it already represents information!

I think you are right, Neil, that information needs to be information FOR 
someone, and I also think it needs to be ABOUT something. So there is no 
information without minds. But THAT information, the kind minds hold/think 
about/conceive, must be grounded in something, or we are left with an abyss 
between whatever it is minds do and the world. And that can't be, because 1) 
minds lead to effects in the world and 2) the world manifestly has effects 
on minds.

If there were no physical reality, no phenomenal input, there could not be 
anything to be informed about. So I think Josh (and I'm picking on him because 
he's our most explicit nominalist here) would say that there must be a physical 
underpinning to every informational conception, every informational application.

How then are we to understand the physics of the phenomenally real world if we 
presume a divide between what the mind is and the world it knows? Searle says 
what is abstract can have no causal efficacy except through an agential medium 
(someone who can act with intention, who makes the abstraction concrete). But 
if a view like the one I am imputing to Josh is true, every single physical 
transaction, whether between two mindless entities, between one minded entity 
and one mindless one, or between two minded entities, must occur in a physical 
medium. How then does information as abstract mesh with the causally real? How 
does the general fit with the particular?

Can it make sense to suppose that information is always set apart from what it 
is information about, or is it really more sensible to collapse the distinction 
between information as abstraction and information as what is particular? After 
all, the work of both computers and brains occurs in terms of real-world events, 
albeit of an apparently quite different sort.

Searle grounds his later argument on this question of the causal incapacity of 
the abstract. But can we really presume such a radical disconnect between what 
is abstract and what is causal?

You say, Neil, that "We interact with the world in order to construct  
information, and we use sensory cells in that interaction." But how can we 
interact if there is this radical divide? How can "we use" anything, how can 
sensory cells do anything to anything else if there is not some kind of 
transactional event occurring between physical entities at some level?

I don't want to suggest that we don't construct ideas, impose form on raw data, 
because I think it's pretty clear we do. But perhaps it's a confusion to 
suppose that in so doing we are taking something from an abstract realm and 
superimposing it on raw, formless physical phenomena. Perhaps THAT is just a 
picture which is, finally, misleading?

If the universe has order (and everything we know tells us it does), why should 
we suppose that that order is only in our minds, our way of seeing things?

While we can never know what the universe would be without the presence of 
observers like us (and indeed, on a strictly individual phenomenal level, there 
would just be nothing at all), there is no reason to think the universe exists 
only in our own minds (though, metaphysically speaking, there may be no reason 
to think otherwise, either). If it doesn't, then there is order to it, beyond 
ourselves, and our capacity to succeed in it, to survive and even, at times, to 
prosper, must depend on our being in sync with such an order, an order that 
cannot be something imposed by each and every observer at each moment of that 
observer's existence.

If this is the right way of looking at it, then information IS a naturally 
occurring phenomenon (as the "A" people say). It's just different from what we 
normally think of as "information" when we consider things like knowledge, 
perceptions, etc. That is, what we think of as the information we have in our 
heads would just be a particular manifestation of the physical events that 
underlie the reality that produces brains, brain events, and the sense of 
subjectness that we recognize, when we consider ourselves, as having a mind, 
as being conscious.


> Core functionality
>
> A: Computation/logic, applied to the information picked up by
> sensory cells.
>
> N: Information gathering, which I shall loosely refer to as
> "measurement".
>
> Starting point
>
> A: Most AI people assume large amounts of innate knowledge or
> structure, perhaps in the form of a program and a data base (often
> called a "knowledge base").
>

This may be. I don't know what the underlying assumptions of most AI folks are. 
But I don't think it is essential to the AI project to think this way. Yes, 
there must be some structured mechanism or medium to have the interactions we 
think of as being conscious. But is that "innate knowledge" in any real sense? 
Is a "tabula rasa" still that if it lacks the form of a tabula, the emptiness 
of being rasa, etc.? And if it doesn't lack them, does that mean we must say 
that there is already innate knowledge that is part of being a tabula rasa, 
because it looks like a blackboard rather than a grapefruit?


> N: Self measurement of internal states.  The system can be said to
> have, as innate purposes, the maintaining of internal states within
> innately prescribed limits.  Among those innate purposes is a drive  to
> explore ways of interacting with the world, including ways of  forming
> information about the world.
>


I think we can actually join your A and N here. That is, the system you seem to 
have in mind already has form, as the tabula rasa does. It's an X and not a Y. 
But that doesn't imply that it isn't interactive with its world or capable of 
ongoing adjustment to the inputs it is receiving. Is the fact that it is the 
particular kind of system it is "innate knowledge"? Well, it might be, 
depending on how sophisticated the particular system is. Thus, human brains are 
better prepped to handle the world than some other kinds of animals' brains or 
their equivalents, though other animal brains may be better prepped for their 
particular environments. Innate knowledge? Maybe. But then how does that differ 
from the supposition you propose the "A" people hold?

That there is a drive to achieve and maintain internal integrity, to 
self-propagate, etc., as a means of system self-preservation, in no way obviates 
the idea that systems are suited for particular conditions and that sometimes 
some of that suiting involves a great deal of built-in capacity for flexibility. 
(A toy sketch of the kind of self-maintaining loop you describe follows below.)
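
To make vivid what I take you to be describing, here is a toy sketch, in 
Python, of a system that self-measures an internal state and acts to keep it 
within innately prescribed limits. Every name and number in it is my own 
invention for illustration; I make no claim that this is your model or 
anyone else's.

    # A toy homeostat: the system "self-measures" an internal state and
    # acts only to keep it within innately prescribed limits.
    # Entirely illustrative; the names and numbers are invented.

    LOWER, UPPER = 36.0, 38.0          # the innately prescribed limits

    def measure(state):
        """Self-measurement of an internal state."""
        return state["temperature"]

    def act(state, reading):
        """Corrective behavior serving the innate purpose."""
        if reading < LOWER:
            state["temperature"] += 0.5   # e.g., generate heat
        elif reading > UPPER:
            state["temperature"] -= 0.5   # e.g., dissipate heat

    state = {"temperature": 34.0}
    for step in range(8):
        act(state, measure(state))
        print(step, state["temperature"])

The "innate purpose" in the sketch is nothing over and above the fixed limits 
and the corrective rule. Whether having that structure already counts as 
"innate knowledge" is just the question I am raising.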


> Learning
>
> A: The usual AI view of learning is one of discovering patterns  within
> the input that is picked up.  There is also some consideration  of
> reinforcement learning.
>

> N: Learning is acquiring behaviors which tend to promote the  ability of
> the system to meet its purposes.  With each new behavior,  there is an
> accompanying new measurement system for self-measuring  performance in
> carrying out that behavior.  Of particular importance  are behaviors
> that provide ways of forming information about the  external world - we
> can refer to that as discovery/invention of new  ways of measuring.
> Note that this could be described as perceptual  learning.
>


The idea of behaviors does not undermine the idea that there is also a 
subjective life: thoughts, mental pictures, memories of such, etc. What, after 
all, drives a great number of our behaviors if not the mental events that make 
up our inner world of thought and feeling? Yet where is the mental life in 
your picture here? I think the picture you draw immediately above is not 
complete.

> N: With each new way of measuring, there is an associated new  concept
> (that which is measured).  With each new self-measurement  associated
> with new acquired behaviors, there is a new purpose of  carrying out
> that new behavior appropriately.
>

And purposes are articulable and also conceptual (we can explain our purpose 
in our own heads or just grasp something we're after in the form of a mental 
picture). Isn't the real question here how it is that we come to a point where 
we have a mental life in the way we do, how it is we get self-awareness, 
reasoning, etc.?


> Intentionality
>
> A: The usual AI view is that there is nothing more to intentionality
> than attribution.  That is, there is only derived intentionality.
> Dennett argues for that in his "The Intentional Stance."
>

I'm not sure that's quite fair. Even Dennett doesn't say we don't think about 
things. He just wants to say there is no such thing as a phenomenon of 
intentionality somewhere in the brain; rather, it's just a way of relating 
to things around us. We call it "intentionality" because we see it in the 
behavior of others, and so we think there is some special intentional 
feature happening in their brains. But, on Dennett's view, there isn't. 
"Intentionality" is just a term we apply to certain things behaving in 
certain ways.


> N: The initial self-measurement of internal states, and the consequent
> initial purposes, are perhaps best considered to be examples only of
> derived intentionality.  However, the new measuring systems created  by
> the system itself are best considered to be examples of original
> intentionality.  In particular, information about the world that  is
> formed on the basis of these acquired measuring systems should  be
> considered intentional information.
>

This, I'm afraid, loses me. You have spoken of "intentional information" before, 
but I don't see how this makes either of the constituent terms any clearer. If 
by "intentional" we mean aboutness (as in thinking about things), then we can 
say that we are intentional when we make a complex set of relational 
connections between things we become aware of.

You have also called information abstract, something imposed on what is not, 
itself, fundamentally informational, because it exists apart from any mental 
observation. As noted at the outset, I think that is only one use of the term 
"information" and that a more comprehensive understanding of it would relate it 
to the transactions between physical phenomena, independent of minds as well. 
Thus, every time one physical entity impinges on another, we could say, in a 
perfectly reasonable sense, that information is being exchanged, even if there 
is no thinking observer taking it in, considering it, filing it away!

Now what is "intentional information"? Is it just the information that makes 
sense to an observer, that the observer is able to impose his/her forms of 
comprehension upon? Why should that have some special place in the area of 
physical causation underlying the occurrence of minds in the world?

> Free will
>
> A: The behavior of the system is determined by the input and the
> mechanistic rules it is following.  The system is free to choose  only
> in the compatibilist sense that it is free to accede to doing what the
> mechanism dictates that it shall do.
>
> N: Free will is the ability to make pragmatic choices.  The options  are
> evaluated according to the system's purposes, and a choice is made in
> accordance with those purposes.  Note that there might be  several
> relevant purposes and some of them might be in conflict.
>
> Regards,
> Neil
>

Even making pragmatic choices is going to be driven by what serves the purpose. 
So, in the same sense in which one might say that choosing only according to 
the rules isn't "free," neither would be choosing according to the purposes, 
because in this case the rule is just "serve the purpose" (the toy sketch 
below makes the point).
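
Purely for illustration, the point can even be written down. In the Python toy 
below, every name and weight is my own invention; "pragmatic choice" turns out 
to be itself just a fixed rule, namely "pick whatever best serves the purposes":

    # "Pragmatic choice" as rule-following: score each option against the
    # system's purposes (which may conflict via their weights) and pick
    # the best.  The numbers are arbitrary, invented for illustration.

    purposes = {
        "stay_fed":    {"hunt": 0.9, "hide": 0.1, "rest": 0.0},
        "stay_safe":   {"hunt": 0.2, "hide": 0.9, "rest": 0.6},
        "save_energy": {"hunt": 0.1, "hide": 0.5, "rest": 0.9},
    }
    weights = {"stay_fed": 0.5, "stay_safe": 0.3, "save_energy": 0.2}

    def choose(options):
        """The rule: maximize weighted service to the purposes."""
        def value(option):
            return sum(weights[p] * scores[option]
                       for p, scores in purposes.items())
        return max(options, key=value)

    print(choose(["hunt", "hide", "rest"]))   # "hunt", given these numbers

Conflicting purposes just become competing terms in the sum; given the same 
weights and the same inputs, the "choice" comes out the same every time.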

I know my response has been rather extensive. I am not trying to shoot you 
down, Neil; I am just trying to express my concerns about some of the issues 
you've presented.

But let me ask the original question; maybe this will help. How do all the 
foregoing dynamics you've described serve to explain how a brain comes to be, 
or to produce, consciousness? What is going on in the brain that is the 
consciousness, and where does it come from? Would you say that abstractions 
underlie the abstractions of the thinking mind? Mustn't we, finally, presume a 
physical foundation for thought?

SWM

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/
