[Wittrs] Is Homeostasis the Answer? (Re: Variations in the Idea of Consciousness)

  • From: "iro3isdx" <xznwrjnk-evca@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Wed, 10 Feb 2010 03:13:59 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, "SWM" <SWMirsky@...> wrote:


> What makes an action "about something"?

Perhaps it's hard to give a general characterization, but it often
seems evident: a cat playing with yarn, a cat playing with a mouse,
or a dog with a bone.  It gets harder with insects, etc.


> On my view, "aboutness" is when we can relate some symbol or
> indicator to something else, when we can see meaning.

Tying it to symbols might be a bit too restrictive.


> Perhaps the issue between us hinges, to some extent at least, on
> your focusing on the nature of computational programs themselves
> (nothing conscious about them!) vs. my focusing on the nature of
> computational systems (i.e., many different programs running many
> different processes to accomplish many different functions in a
> kind of orchestral arrangement)?

That could be.  When you get to the level of asking "how do I program
that?" you begin to see some difficulties that were not so obvious
before.  Hubert Dreyfus says "I was particularly struck by the fact
that, among other troubles, researchers were running up against the
problem of representing significance and relevance"
<http://leidlmair.at/doc/WhyHeideggerianAIFailed.pdf>.  While those
are not the words I would have chosen, it's a pretty good assessment
of the kind of problem I ran into.


> Yes, the human eye is constantly moving about and the picture it
> captures consists of many distinct imprints or partial images which
> the brain somehow sees as a whole, a complete pattern. (Hawkins
> uses this model quite a bit in his book On Intelligence.)

I am quite skeptical of that view.  It's a top-down designer view of
how to do vision, rather than a bottom-up evolutionary view.  I think
it more likely that vision resembles a single cell scanning back and
forth and looking for sharp signal transitions to find a boundary,
except that this is being done a billion times in parallel by the
different retinal cells.
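
To make that a bit more concrete (a toy sketch of my own, not anything
taken from Hawkins or from actual neuroscience), think of each "cell"
as independently scanning a one-dimensional intensity signal and
flagging the places where the signal jumps sharply.  The function name
and the threshold here are made up purely for illustration:

def sharp_transitions(signal, threshold=0.5):
    """Report indices where the jump between adjacent samples exceeds the threshold."""
    return [i for i in range(1, len(signal))
            if abs(signal[i] - signal[i - 1]) > threshold]

# A step edge between samples 3 and 4 gets reported at index 4.
intensity = [0.1, 0.1, 0.1, 0.1, 0.9, 0.9, 0.9]
print(sharp_transitions(intensity))   # -> [4]

In the picture I have in mind, that little scan-and-compare routine is
not applied once to the whole scene; each retinal cell is doing its own
tiny version of it, massively in parallel.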


> What is a visual feature but a pattern within a larger pattern,
> a picture within a larger picture?

No, I disagree with that.  The features are marked by boundaries.  And
the thing about boundaries, if you are using a scanning method, is that
you can locate them with higher resolution than you can locate other
things.
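
Here is the sort of thing I mean, again only a toy sketch with made-up
names: if you scan across a sampled signal, you can estimate where it
crosses a mid-level by interpolating between samples, so the boundary
gets localized more finely than the sampling grid itself.

def subsample_edge(signal, level):
    """Return the fractional position where the signal first crosses 'level'."""
    for i in range(1, len(signal)):
        lo, hi = signal[i - 1], signal[i]
        if (lo - level) * (hi - level) <= 0 and lo != hi:
            return (i - 1) + (level - lo) / (hi - lo)   # linear interpolation
    return None

intensity = [0.1, 0.1, 0.3, 0.7, 0.9, 0.9]
print(subsample_edge(intensity, level=0.5))   # -> 2.5, between samples 2 and 3

That is the sense in which a scanning method can locate a boundary more
finely than it can locate anything else.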

In any case, this account of the visual part is guesswork.  However,
the standard AI approach is too dominated by top-down thinking.  From
an evolutionary perspective, you need to find a use for a single
retinal cell, and then an evolutionary benefit for proliferating that
into many retinal cells.


> As I noted, Hawkins suggests the brain develops and retains templates
> and that when a remembered image is called up we get more of an
> adumbration which we then use to plug in details, presumably by
> recognizing subsections and using this to call up detailed images
> within the larger one.

The templates part is okay, if intended as a recognizer.  J.J. Gibson
(the "direct perception" guy) would have used the term "transducer"
rather than "template."  However, I am doubtful about the "call up
detailed images" part.  I doubt that there are any stored images to
call up.  Sure, we can have imagery in our thought, but it doesn't seem
to be a called-up image; it is more likely a reconstruction.


> Isn't that true of us too? We only see in the world what we are
> built to see. If we had been built differently the world might seem
> entirely different to us, no?

Do you really think that we were built to see (in the sense of
"comprehend") jet aircraft, the HIV virus, or electron microscopes?

Regards,
Neil
