[Wittrs] Is Homeostasis the Answer? (Re: Variations in the Idea of Consciousness)

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Fri, 05 Feb 2010 14:07:07 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, "iro3isdx" <xznwrjnk-evca@...> wrote:

> --- In Wittrs@xxxxxxxxxxxxxxx, "SWM" <SWMirsky@> wrote:
>
>
<snip>
>
> A homeostatic process is self aware (in a primitive sense) and  is
> adaptive to change.  Those seem like plausible precursors to
> consciousness and intelligence.
>

Yes, self-aware in a very primitive sense, i.e., in the sense that it is 
equipped to maintain a degree of internal integrity or it could not be what it 
is. So your point then is that it is from this limited sense of 
"self-awareness" that our awareness of our selves as selves (our consciousness) 
arises? I think that's a fair point and probably a useful observation, because 
it is certainly something about living organisms that prompts some of them to 
become conscious over the course of their evolutionary history.

What I am interested to know, of course, is a little different, i.e., it's the 
mechanism, or whatever one wishes to call it, that actually effects awareness 
as we have it in ourselves, and I don't think an evolutionary account (however 
true and reasonable it is) does more than contribute to an understanding of 
that.


>
> > Every time I ask you to answer these questions you tell me I am
> > misunderstanding you or talking about something else.
>
> I had been attempting to give explicit things that a cognitive  agent
> does, but that AI people are not considering.  And that's  where
> communication breaks down.
>

> Let me try a different example.  I take out a ruler and measure my  desk
> to be 30 inches high.  That "30 inches high" is a representation  that I
> created out of whole cloth.


And which already requires a high level of conscious development in you, the 
measurer. The question before us is where that development comes from: what is 
there about you (or any of us) that makes us measurers in this way?


> By that, I mean that if I had  looked at
> all of the signals being received by sensory cells, "30  inches high" is
> not something I could extract from those signals.  It didn't come from
> signals, it came from my carrying out a procedure  (a measuring
> procedure).
>

Yes, but we can't say that the phenomenon of being a consciousness with the 
capacity to measure comes from having the capacity to measure, can we? The 
latter depends on the former.


> The reason that "30 inches high" is about something, is that I created
> specifically to be about something.  By constrast, the usual AI
> approach is to look at signals picked up by sensors.  But those  signals
> are not intentional.


By "intentional signals" then, you mean signals that we integrate into a 
framework of meaning, signals that take on intention because they fit into our 
existing structure of data association? And non-intentional signals are just 
raw events, information for no one because no association is going on?

But don't you see that the very question is being missed in all this  because 
what really needs to be explained is how such signals do take on meaning, 
intention, in the process of being received and stored by a conscious system. 
The issue is WHAT is this associative process that links signals and thereby 
gives them form and meaning?

The AI project seems to be grounded in the supposition that we can build 
computationally based systems that do the same kinds of associative linking of 
raw data as we do to build up pictures of the world which overlay one another 
and cross link and which include both external aspects (our environment) and 
internal (our bodies and memories) and which become the basis of our being 
aware of things, of being intentional.

Now AI may, indeed, be the wrong way to go about this. It may be that, as 
someone like Hawkins suggests, brains don't accomplish the task we are trying 
to accomplish computationally (using complex algorithms running on computers). 
But then whether a computational approach could work, whether it could still 
do the same thing brains do even if it runs on a different model, would be a 
different question. That is, there could conceivably be more than one way to 
achieve consciousness.

You have suggested that AI cannot work because it cannot be homeostatic. (Have 
I got that right?) But aside from the question of whether virtual homeostasis 
could do the trick (and we have seen that at least some AI people think it 
could), perhaps we also need to consider whether it is homeostasis per se 
that provides the requisite mechanism (in which case computers lacking this 
mechanism might be expected to fail) or whether it merely provides the impetus 
to develop the mechanism (in which case the absence of homeostasis would not 
necessarily mean an absence of the requisite mechanism -- and then there is no 
reason to think AI must fail).



> They are just apparently meaningless  signals
> picked up.  There might be something useful hidden in them,  if you have
> prior knowledge on how to extract.  But how you get  from signal to
> intentional representation is not obvious if  it is even possible.
>


One way would be to suppose we consciously make sense of them, but that, of 
course, begs the question. Another way is to suppose that we make sense of 
them at multiple levels, only one (or a few) of which we are aware of and have 
access to; that would be the conscious level. Thus we could say that 
consciousness is part of a layered system of brain operations that build on, 
and undergird, one another. But this IS consistent with the AI thesis (see 
Minsky's The Emotion Machine). It is certainly consistent with Dennett's model.


> So notice the difference.  I start with intentions and deal with
> intentional representations from the get go.  The AI model starts  with
> meaningless signals, and I am inclined to think that it will  never have
> more than meaningless signals.
>

But conscious intentions are not the same as non-conscious intentions. The 
lizard in the maze has a more limited capacity to make sense of its 
surroundings than the mouse. Yet both, as you would say, are homeostatic 
systems seeking to maintain their internal integrity. Our intentionality in a 
maze would be different again. As would the snail's and the earthworm's. I 
think we have to be careful to separate intentionality as we find it in 
ourselves from lower-level intentionality, which is, the deeper down you go, 
basically mindless (unaware).

Insofar as it is missing genuine awareness, the intentionality of lower animals 
starts to look a lot like what machines (in the sense of human artifacts built 
to accomplish functions) can do. The question, then, is whether we can get our 
kind of awareness (what we associate with being conscious) by building on a 
platform of machine intentionality the way we get our kind of awareness on a 
platform of lower level, unaware intentionality in the animal kingdom along the 
evolutionary hierarchy.



> Think, for a moment about bird songs or whale sounds.  It is quite
> likely that these are some part of a communication system.  And if  they
> are, then presumably the bird song is intentional for the birds.  But we
> apparently cannot determine what those bird songs are about.  At best we
> can look for correlations with behavior.  If it were  ever possible to
> start with signals that are meaningless to us,  and to somehow find
> meaning by analysis of those signals, then this  should be an ideal
> case.  If we can't do it, then there's not much  chance that we can
> program computers to do it.
>


Why do you think we can't do it? There has been lots of work in studying and 
interpreting animal signals and lots of progress. Insofar as some signaling in 
the animal kingdom may be fairly sophisticated (the whales' songs, dolphins' 
chittering clicks, chimp screeching and gesturing) why would we think that 
figuring meanings out here should be closed off to us?

And if we can crack the relevant codes (as we have done in some cases), why 
wouldn't a computer be able to? But then the issue is not whether a computer, 
as a tool for analyzing reams of data, could do it. It is whether a computer 
could be built that would have the same kinds of layered, relational pictures 
of the universe it exists in that we seem to have, and could then discern 
significance in otherwise meaningless signals through the use of such a 
network of representations.


>
> > If your idea of "mechanism" is as constrained as you have described
> > it above, "human artifacts" only, then it would not be surprising
> > that you would make this mistake. But biological functioning is as
> > mechanical (on my broader view of "mechanism") as anything else,
> > even if they aren't manmade!
>
> If everything is a mechanism, the "mechanism" loses its meaning.  I
> don't see anything mechanical about biological functioning.  But
> obviously we disagree on what the word means.
>
>


Hmmm, that's misleading, Neil. I never said everything is a mechanism. I 
recognize, for instance, that the universe contains lots of other things, 
including objects and games and activities and beliefs and so forth.

But I do think it makes perfect sense to speak of "mechanism" as being more 
than just machines built by creatures like ourselves. Indeed, I think that our 
language treats "mechanism" like that.

Here is one dictionary entry concerning the term:

http://www.merriam-webster.com/netdict/mechanism

Main Entry: mech·a·nism

Pronunciation: \ˈme-kə-ˌni-zəm\

Function: noun

Date: 1662

1 a : a piece of machinery b : a process, technique, or system for achieving a 
result

2 : mechanical operation or action : working

3 : a doctrine that holds natural processes (as of life) to be mechanically 
determined and capable of complete explanation by the laws of physics and 
chemistry

4 : the fundamental processes involved in or responsible for an action, 
reaction, or other natural phenomenon -- compare defense mechanism


Note that my use accords quite readily with entry #4 whereas you seem to want 
to restrict our usage to #1a. But why should we do that? Why shouldn't we avail 
ourselves of the full meaning of a term like "mechanism" in discussions like 
this?




> > But you seem to want to disregard mechanical explanations entirely on
> > the grounds that this implies something artifactual made by humans.
>
> That's a complete misreading.  I am not saying that we should discard
> mechanical explanations, and I am not saying that there's a problem
> with artifacts.  I was just responding to your thinking it strange  that
> there can be cognitive agents in a world of inanimate things.



When I made that statement I was speaking rhetorically, i.e., saying this is 
why we are moved to wonder about where consciousness comes from because it 
appears to be at odds with certain intuitions we have. It wasn't my claim that 
I think it strange! It was my statement that we tend to think it strange when 
we consider what minds seem, at first consideration, to be. Moreover, if I 
recall rightly, I only made that rhetorical point after you had decried my 
suggestion that we have to look for a mechanism underlying consciousness.


>  My point
> was that, if anything, it is the presence of mechanisms  that is
> strange.
>
> Regards,
> Neil
>
> =========================================


You mean human artifactual mechanisms or mechanisms as in "the fundamental 
processes involved in or responsible for an action, reaction, or other natural 
phenomenon"?

SWM

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/
