[Wittrs] Is Homeostasis the Answer? (Re: Variations in the Idea of Consciousness)

  • From: "iro3isdx" <xznwrjnk-evca@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Fri, 05 Feb 2010 01:29:41 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, "SWM" <SWMirsky@...> wrote:


> You've said homeostasis is at the bottom of it and that no computer
> has homeostasis, and that simply producing a virtual homeostatic
> state in a computer won't suffice to do the same thing as really
> having it. But you haven't shown or described just what it is about
> homeostasis that gets us to intentionality

A homeostatic process is self-aware (in a primitive sense) and is adaptive to change. Those seem like plausible precursors to consciousness and intelligence.
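
To make that a little more concrete, here is a rough sketch, in Python, of the kind of feedback loop I mean (the thermostat-style setpoint, gain, and disturbance are merely illustrative, not a model of any real organism). The only point is that the loop monitors its own state and acts on the discrepancy; that error-correction is the primitive self-monitoring and adaptivity I have in mind.

    import random

    def homeostat(setpoint=37.0, gain=0.5, steps=10):
        # A toy homeostatic loop: the system monitors its own internal
        # state, compares it to a setpoint, and acts to cancel the error.
        state = setpoint
        for _ in range(steps):
            state += random.uniform(-1.0, 1.0)  # environmental disturbance
            error = setpoint - state            # primitive self-monitoring
            state += gain * error               # corrective (adaptive) action
            print("state=%.2f error=%+.2f" % (state, error))

    homeostat()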


> Every time I ask you to answer these questions you tell me I am
> misunderstanding you or talking about something else.

I had been attempting to give explicit examples of things that a cognitive agent does, but that AI people are not considering. And that's where communication breaks down.

Let me try a different example. I take out a ruler and measure my desk to be 30 inches high. That "30 inches high" is a representation that I created out of whole cloth. By that, I mean that if I had looked at all of the signals being received by my sensory cells, "30 inches high" is not something I could have extracted from those signals. It didn't come from the signals; it came from my carrying out a procedure (a measuring procedure).
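
If it helps, here is the contrast in toy Python terms (the names and values are mine, purely for illustration). The raw sensor data is just an array of numbers; nothing in it says "height". The measurement, by contrast, is a representation that exists only because a procedure was deliberately carried out to produce it, and it is about the desk by construction.

    # Raw sensory input: just numbers; nothing in them says "height".
    raw_signals = [0.42, 0.87, 0.13, 0.55]  # arbitrary illustrative values

    def measure_height(object_name, ruler_reading, unit="inches"):
        # The measuring procedure *creates* the representation; the
        # result is about the desk's height by construction, not by
        # extraction from the raw signals above.
        return {"object": object_name, "quantity": "height",
                "value": ruler_reading, "unit": unit}

    print(measure_height("desk", 30))
    # {'object': 'desk', 'quantity': 'height', 'value': 30, 'unit': 'inches'}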

The reason that "30 inches high" is about something is that I created it specifically to be about something. By contrast, the usual AI approach is to look at signals picked up by sensors. But those signals are not intentional. They are just apparently meaningless signals. There might be something useful hidden in them, if you have prior knowledge of how to extract it. But how you get from signal to intentional representation is not obvious, if it is even possible.

So notice the difference. I start with intentions and deal with intentional representations from the get-go. The AI model starts with meaningless signals, and I am inclined to think that it will never have more than meaningless signals.

Think, for a moment, about bird songs or whale sounds. It is quite likely that these are part of a communication system. And if they are, then presumably the bird song is intentional for the birds. But we apparently cannot determine what those bird songs are about. At best we can look for correlations with behavior. If it were ever possible to start with signals that are meaningless to us, and to somehow find meaning by analysis of those signals, then this should be an ideal case. If we can't do it, then there's not much chance that we can program computers to do it.


> If your idea of "mechanism" is as constrained as you have described
> it above, "human artifacts" only, then it would not be surprising
> that you would make this mistake. But biological functioning is as
> mechanical (on my broader view of "mechanism") as anything else,
> even if they aren't manmade!

If everything is a mechanism, then "mechanism" loses its meaning. I don't see anything mechanical about biological functioning. But obviously we disagree on what the word means.


> But you seem to want to disregard mechanical explanations entirely on
> the grounds that this implies something artifactual made by humans.

That's a complete misreading. I am not saying that we should discard mechanical explanations, and I am not saying that there's a problem with artifacts. I was just responding to your thinking it strange that there can be cognitive agents in a world of inanimate things. My point was that, if anything, it is the presence of mechanisms that is strange.

Regards,
Neil
