[Wittrs] Is Homeostasis the Answer? (Re: Variations in the Idea of Consciousness)

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Sun, 07 Feb 2010 01:36:41 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, "iro3isdx" <xznwrjnk-evca@...> wrote:
<snip>
>

> Suppose a robotic system has sensory detectors that pick up signals  (as
> seems to be the common AI assumption).
>
> The signal the computer sees, coming from that sensory detector,
> actually is about something.  Specifically, it is about that  sensory
> detector and its current state.  This is, of course,  internal
> information to the robotic system.  That's a kind of  derived
> intentionality with respect to internal things.
>

Yes.

> If that robotic system happens to be a traffic control computer,  and
> that sensory detector is connected to the traffic sensors, then  we can
> also say that the signal is about the traffic.  So there  is some
> "aboutness" having to do with the world external to the  robotic system.
> It is only derived intentionality, but it is good enough and the traffic
> controller works.
>

Yes.

> If, however, the robotic system is something like a walking humanoid
> robot, then the situation is completely different.  The sensory
> detector isn't connected to anything in the external world.  It may  be
> picking up signals from the external world, but they are useless  unless
> you already know what it is picking up the signals from.  And, as the
> robot moves around and changes its orientation, the  external world
> source of the signals keeps changing.
>

Okay, here I think we are getting to it!


> In order to get useful information about the external world, you  have
> to control the position and orientation of the robot while  accessing a
> sensory signal.  The entity best able to control  the position and
> orientation of the robot is the robot itself.  We (thinking of ourselves
> as robots) do that very well - it is  where consciousness and
> intentionality come into play.  A tree  has a more-or-less fixed
> connection to the external world, so it  doesn't take much to keep it
> appropriately oriented and positioned,  and that's probably why
> consciousness has not evolved in trees.
>

Okay.

> Getting back to that walking humanoid robot, you can begin to see  some
> of the difficulties.  A signal from a sensory detector is,  by itself,
> pretty useless because you don't know what it is coming  from in the
> external world.  So instead, the robot needs to follow  a procedure that
> coordinates its position and orientation with how  it is using the
> signals picked up by sensory detectors.  This is  further complicated by
> the fact that (a) you need to coordinate  the orientation before you can
> get useful information about the  external world, and (b) you need
> information about the external  world before you can coordinate the
> orientation.
>
> Regards,
> Neil
>
> =========================================

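To make that coordination problem concrete for myself, here is a toy sketch in
Python of the kind of loop you seem to be describing (my construction, not
yours; the names World, read_sensor, etc. are invented purely for
illustration). The robot orients itself using whatever picture it already has,
then uses the signal it gets in that orientation to improve the picture:

import random

class World:
    """A crude internal 'picture': which direction produced which signal."""
    def __init__(self):
        self.picture = {}                 # direction -> last signal seen there

    def update(self, direction, signal):
        self.picture[direction] = signal

    def least_known(self, directions):
        # Re-orient toward whatever the picture says least about.
        return min(directions, key=lambda d: d in self.picture)

def read_sensor(direction):
    # Stand-in for a sensory detector; the number only means something once
    # we know which way the detector was pointing when it fired.
    return random.random()

def explore(steps=10):
    world = World()
    directions = ["north", "east", "south", "west"]
    heading = "north"                     # an arbitrary starting orientation
    for _ in range(steps):
        signal = read_sensor(heading)     # (a) sense, given the current orientation
        world.update(heading, signal)     # (b) fold the signal into the picture
        heading = world.least_known(directions)  # re-orient using the picture
    return world.picture

print(explore())

Nothing like consciousness is going on there, of course, but it does suggest
that the chicken-and-egg between (a) and (b) can be broken simply by iterating.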

In ourselves, a good deal (perhaps the most important part) of what you are
describing happens below the conscious level. Not only conscious organisms but
also those we would presume to be unconscious (or certainly lacking the kind of
consciousness we have) are capable of these kinds of autonomic adjustments.

But, of course, conscious thought enters into the equation at some point.
Walking in the street I can orient myself in terms of movement, staying
upright, etc., and none of this is part of my conscious self. I just do it or,
rather, the organism that I am just does it. I don't have to pay attention to
it under ordinary circumstances. Presumably consciousness itself is somewhat
like that, too: lots of things happen below the level I can access in order to
give me a "me", that is, a level I can access and think about and refer to as
"me".

The issue in AI is to develop a sufficiently layered system that will do this.
Now your initial point above suggested that the machine system has no way of
relating the changes in the status of its sensors to the world outside itself.
But in that it is not so different from us either. How do we relate them? Well,
we build a picture (or, more correctly, a complex interlocked set of
overlapping pictures consisting, perhaps, of many different received and
retained inputs stored in a relational way with one another). Our brains then
relate the changes we are getting in sensory inputs through our sensory
equipment to the retained pictures we are carrying (Hawkins' patterns which, if
you recall, he argues are triggered by new inputs coming in; the responses
either match the new inputs or vary from them, with the variations leaving new
patterns in their wake for new triggered responses, etc.).
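
If it helps, here is a very rough sketch of how I read that idea of Hawkins'
(the representation, the matching threshold, and the numbers are my own
simplifications, not anything taken from his book): each new input is matched
against the retained patterns; a close match confirms a pattern, while a
mismatch is retained as a new pattern for later inputs to trigger.

def closest_pattern(memory, stimulus):
    # Return the retained pattern nearest to the stimulus, or None if none yet.
    if not memory:
        return None
    return min(memory, key=lambda p: abs(p - stimulus))

def perceive(memory, stimulus, tolerance=0.1):
    match = closest_pattern(memory, stimulus)
    if match is not None and abs(match - stimulus) <= tolerance:
        return match              # the new input matched a retained pattern
    memory.append(stimulus)       # it varied, so it leaves a new pattern in its wake
    return stimulus

memory = []
for s in [0.50, 0.52, 0.90, 0.51, 0.88]:
    perceive(memory, s)
print(memory)                     # two patterns retained: [0.5, 0.9]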

So it seems to me there is no barrier, in principle, to a machine being able to
picture and think about the world in the way we do, though there is, as of now,
at least a technical barrier: finding a way to do this as brains do.

On the matter of homeostasis, why should a machine not be built to operate in a
kind of ongoing equilibrium with its environment, i.e., to react to changes by
continued internal readjustments, etc.? If so, would you then think an AI
approach built along those lines would have a chance of succeeding? And what
would be the mechanism, stemming from such homeostasis, that would be critical
in producing consciousness on such a view?
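
Just to make the question concrete, here is the sort of minimal homeostat I
have in mind (the set point, gain, and names are arbitrary illustration values,
not a claim about how brains or any actual AI system work): the environment
keeps disturbing an internal variable, and the machine keeps readjusting so as
to hold it near a set point.

import random

def run_homeostat(set_point=37.0, gain=0.5, steps=20):
    state = set_point
    for _ in range(steps):
        state += random.uniform(-1.0, 1.0)   # the environment changes
        error = set_point - state
        state += gain * error                # internal readjustment toward equilibrium
        print("state = %.2f (error was %+.2f)" % (state, error))

run_homeostat()

Whether anything about that kind of continual readjustment could matter for
producing consciousness is exactly what I am asking.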

SWM
