[Wittrs] Is Homeostasis the Answer? (Re: Variations in the Idea of Consciousness)

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Tue, 09 Feb 2010 01:36:09 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, "iro3isdx" <xznwrjnk-evca@...> wrote:

> --- In Wittrs@xxxxxxxxxxxxxxx, "SWM" <SWMirsky@> wrote:


> > In ourselves a good deal (perhaps the most important portions) of
> > what you are describing happens below the conscious level. Not only
> > conscious organisms but those we would presume to be unconscious
> > (or certainly lacking the kind of consciousness we have) are capable
> > of these kinds of autonomic adjustments.
>
> Some of what I described is required for consciousness.
>
> Insisting on human consciousness as a starting point is, I think, too
> strict a standard. I think we should look at intentionality at a more
> primitive level, say the ability to carry out actions that are about
> something. That makes a starting point one can build on.
>
>

What makes an action "about something"? If it is merely a reaction to
something, a mindless response to a stimulus, is it "about" the stimulus or
just a response to it? On my view, "aboutness" is present when we can relate
some symbol or indicator to something else, when we can see meaning. But yes,
one might reasonably say that this "aboutness" is grounded in the kind of
process you describe. That is, the two can be seen as lying on the same
continuum. But does that make them the same?


> > Now your initial point above suggested that the machine system
> > has no way of relating the changes in status of its sensors to the
> > world outside itself.
>
> I indicated the problem is difficult, but not necessarily impossible.
> For machines, though, we can usually only get them to relate to changes
> that the programmer/designer can anticipate. It's a lot harder if
> there is a need to react to unknown, unanticipated events.
>
>

Hard for us, too, though we are obviously a more sophisticated and complex 
system than ordinary computer programs. But then, if we could create a system 
of many programs running together in an integrated, mutually reactive and 
overlapping way, why should we think that the same flexibility of response 
could not be achieved?

Perhaps the issue between us hinges, to some extent at least, on your focusing 
on the nature of computational programs themselves (nothing conscious about 
them!) vs. my focusing on the nature of computational systems (i.e., many 
different programs running many different processes to accomplish many 
different functions in a kind of orchestral arrangement)?
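To make that contrast concrete, here is a toy sketch (Python; every name and
value below is my own invention, not any actual architecture): several small
"programs" run concurrently, share state, and react to one another, rather
than one program executing a fixed script.

    # Toy "computational system": several concurrently running processes
    # share state and react to one another. Purely illustrative.
    import queue
    import threading
    import time

    state = {"temperature": 20.0, "alert": False}
    lock = threading.Lock()

    def sensor(events):
        # One "program": feeds readings into the system.
        for reading in [20.0, 22.5, 30.1, 35.7, 24.0]:
            events.put(("temperature", reading))
            time.sleep(0.01)
        events.put(("stop", None))

    def monitor(events):
        # A second "program": reacts to the first one's output and
        # updates shared state that others could, in turn, react to.
        while True:
            kind, value = events.get()
            if kind == "stop":
                break
            with lock:
                state["temperature"] = value
                state["alert"] = value > 30.0

    events = queue.Queue()
    threads = [threading.Thread(target=sensor, args=(events,)),
               threading.Thread(target=monitor, args=(events,))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(state)  # e.g. {'temperature': 24.0, 'alert': False}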


> > But in that it is not so different to us either. How do we relate it?
>
> The barcode scanners in supermarkets make an interesting example. AI
> people usually think of vision as making a pixel map, and then
> analyzing that pixel map. But all of the problems of unknown motion
> with respect to the outside world will present a problem for that. The
> barcode scanner does not do that. Instead, the scanner emits a laser
> beam that moves around to try to find a bar code, and looks for the
> signal transitions in reflected light to detect the code. It has made
> motion (of the scanning beam) part of the method for finding the bar
> code. So additional motion, which is probably slower than the motion
> of the scanning beam, won't cause serious problems.
>

> If you think of the eye, it too is moving around (a motion called
> "saccades"), so it seems to be scanning for features in a similar way.
>

Yes, the human eye is constantly moving about, and the picture it captures 
consists of many distinct imprints or partial images, which the brain somehow 
sees as a whole, a complete pattern. (Hawkins uses this model quite a bit in 
his book On Intelligence.)
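Neil's scanner point can be put concretely. Here is a minimal sketch (Python;
the reflectance values are invented) of the transition-detection idea: the
moving beam converts the spatial bar/space pattern into a temporal signal, and
the decoder needs only the positions of the light/dark transitions, never a
stored pixel map.

    # Sketch of transition-based scanning: a moving beam turns a spatial
    # bar/space pattern into a 1-D reflectance signal over time, and
    # decoding needs only the transitions. Sample values are invented.
    reflectance = [0.9, 0.9, 0.1, 0.1, 0.1, 0.9, 0.1, 0.9, 0.9, 0.9]

    THRESHOLD = 0.5  # boundary between "light" and "dark"

    def transitions(signal, threshold=THRESHOLD):
        # Indices where the signal crosses from light to dark or back.
        dark = [s < threshold for s in signal]
        return [i for i in range(1, len(dark)) if dark[i] != dark[i - 1]]

    def run_lengths(signal, threshold=THRESHOLD):
        # Widths between transitions -- the raw material for decoding.
        edges = [0] + transitions(signal, threshold) + [len(signal)]
        return [b - a for a, b in zip(edges, edges[1:])]

    print(transitions(reflectance))  # [2, 5, 6, 7]
    print(run_lengths(reflectance))  # [2, 3, 1, 1, 3]

Since decoding works from relative widths, motion that is slow compared with
the beam shifts all the run lengths only slightly, which is presumably why the
extra motion Neil mentions causes no serious problem.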


>
> > Well, we build a picture (or more correctly, a complex interlocked
> > set of overlapping pictures, consisting, perhaps, of many different
> > received and retained inputs stored in a relational way with others).
>
> It is more likely that we scan for features, measure the time between
> one feature and the next as an indicator of distance between them, and
> then use those features to divide up the world. Then we probably
> interpolate between the features to further subdivide.
>


What is a visual feature but a pattern within a larger pattern, a picture 
within a larger picture? The individual pixels on a computer screen don't, by 
themselves, replicate the larger image we see (though Hawkins makes the rather 
interesting point that the patterning done by the neocortex in human brains 
seems to replicate pictures at more and more expansive levels).
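Neil's time-between-features suggestion can be sketched too (Python; the speed
and times are invented). If the scan proceeds at a roughly constant rate, the
interval between one feature and the next indicates the distance between them,
and interpolation subdivides the gaps:

    # Sketch of "time between features as distance": assuming a roughly
    # constant scan speed, inter-feature intervals give relative
    # distances, and interpolation subdivides them. Values are invented.
    feature_times = [0.00, 0.12, 0.15, 0.40]  # seconds at which features were met
    SCAN_SPEED = 10.0                          # assumed distance units per second

    # Relative distance between consecutive features:
    distances = [SCAN_SPEED * (t2 - t1)
                 for t1, t2 in zip(feature_times, feature_times[1:])]
    print(distances)  # approximately [1.2, 0.3, 2.5]

    def interpolate(t1, t2, n):
        # Estimate n evenly spaced positions between two feature times.
        step = (t2 - t1) / (n + 1)
        return [SCAN_SPEED * (t1 + k * step) for k in range(1, n + 1)]

    print(interpolate(0.15, 0.40, 3))  # approximately [2.125, 2.75, 3.375]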


>
> > Our brains then relate the changes we are getting in sensory
> > inputs through our sensory equipment to the retained pictures we
> > are carrying ...
>
> I seriously doubt that there are any retained pictures. To manage
> retained pictures would be computationally expensive, and I doubt that
> the brain has the compute power to do that.
>

Well, we do need a way to account for the retained images we have: the ability 
to call up a mental picture of something and describe it in a way that can be 
compared to the real thing we are trying to remember. As I noted, Hawkins 
suggests the brain develops and retains templates, and that when a remembered 
image is called up we get more of an adumbration, which we then use to plug in 
details, presumably by recognizing subsections and using these to call up 
detailed images within the larger one.

As I explained when I first raised this, I do see reason to accept an account 
like this, based on some of my own experiences. Recall the description I gave 
of passing out from a coughing fit while I had whooping cough: as I was coming 
to, I saw before my eyes a very convincing image of the computer screen I had 
been looking at, only to discover that the more I tried to focus on the image, 
the more out of focus it became and the less detail I could see. This was 
precisely the kind of generic template Hawkins claims our brains retain and 
use to rebuild our memories as needed.

It also bears remembering that Hawkins agrees with you that brains lack the 
kind of capacity for precise memories that computers have, which is why he 
proposes a template model, one based on pattern matching, retention and 
recapitulation.
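For what it's worth, the template idea can be caricatured in a few lines
(Python; this is my own toy construction, not Hawkins's actual model): store
only a coarse, downsampled version of a pattern, and recall then yields an
adumbration in which the detail is genuinely gone, much like my out-of-focus
screen image.

    # Toy caricature of template-style memory (my construction, not
    # Hawkins's model): only a coarse version of a pattern is stored,
    # so recall returns an adumbration that has genuinely lost detail.
    def store_template(image, block=2):
        # Downsample a 2-D pattern by averaging block x block regions.
        h, w = len(image), len(image[0])
        return [[sum(image[i + di][j + dj]
                     for di in range(block) for dj in range(block)) / block ** 2
                 for j in range(0, w, block)]
                for i in range(0, h, block)]

    def recall(template, block=2):
        # Re-expand the template: each detail pixel is just the stored average.
        return [[template[i // block][j // block]
                 for j in range(len(template[0]) * block)]
                for i in range(len(template) * block)]

    original = [[1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 1, 0, 0],
                [1, 1, 0, 0]]

    memory = store_template(original)  # [[0.5, 0.5], [1.0, 0.0]]
    print(recall(memory))              # coarse "adumbration" of the original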


> You have probably been caught in a snow storm, with lots of blowing
> snow. You get what's called a "white out" where it looks white in
> every direction. From a mathematical point of view, looking the same
> in every direction is almost the perfect pattern. Yet it's hard to see
> anything in a white out. What you need is not patterns, but features.
> It is the features that allow you to maintain an orientation.
>

Why wouldn't features be small patterns within larger ones? Why would a 
feature, in this sense of the term, be anything other than a pattern at a 
certain level?
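Perhaps the two descriptions can even be reconciled: call it a feature or a
small pattern, what a perceiver needs is local variation. A sketch (Python;
the data are invented) of why the white-out defeats a detector keyed to
variation:

    # Sketch of why a white-out defeats feature detection: a detector
    # keyed to local variation finds nothing in a uniform field, however
    # perfect the overall "pattern" is. Data values are invented.
    def local_features(signal, window=3, min_variation=0.05):
        # Flag positions where the signal varies within a small window.
        hits = []
        for i in range(len(signal) - window + 1):
            chunk = signal[i:i + window]
            if max(chunk) - min(chunk) >= min_variation:
                hits.append(i)
        return hits

    white_out = [1.0] * 10                  # uniform in every direction
    scene = [1.0, 1.0, 0.2, 0.9, 1.0, 0.3]  # contains contrasting features

    print(local_features(white_out))  # [] -- nothing to orient by
    print(local_features(scene))      # [0, 1, 2, 3] -- variation to latch onto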


>
> > On the matter of homeostasis, why should a machine not be built
> > to operate in a kind of ongoing equilibrium with its environment,
> > i.e., to react to changes by continued internal readjustments, etc.?
>
> You could do that. But it would only adjust for the kind of changes in
> the environment that you program it for. And that means you have to
> program in lots of innate knowledge. I doubt that you would get
> consciousness that way.
>
> Regards,
> Neil

Isn't that true of us too? We only see in the world what we are built to see. 
If we had been built differently, the world might seem entirely different to 
us, no?
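
Still, to be fair to Neil's point, here is what "only adjusting for what you
program it for" looks like in a toy example (Python; the variables and numbers
are entirely my own): the homeostat restores equilibrium only along the
variables it was written to watch, and any other kind of disturbance simply
never registers.

    # Toy homeostat (my own example): it nudges only the variables it
    # was programmed to watch back toward their setpoints. A disturbance
    # of any other kind never registers -- Neil's "innate knowledge" point.
    SETPOINTS = {"temperature": 37.0, "hydration": 1.0}  # programmed-in knowledge
    GAIN = 0.5

    def adjust(state):
        # One homeostatic step: move each known variable toward its setpoint.
        for var, target in SETPOINTS.items():
            if var in state:
                state[var] += GAIN * (target - state[var])
        return state

    state = {"temperature": 40.0, "hydration": 0.6, "radiation": 9.9}
    for _ in range(5):
        adjust(state)
    print(state)  # temperature and hydration converge; "radiation" never changes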

SWM

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/
