[Wittrs] Is Homeostasis the Answer? (Re: Variations in the Idea of Consciousness)

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Wed, 10 Feb 2010 12:48:32 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, "iro3isdx" <xznwrjnk-evca@...> wrote:

> --- In Wittrs@xxxxxxxxxxxxxxx, "SWM" <SWMirsky@> wrote:

> > What makes an action "about something"?
>
> Perhaps it's hard to give a general characterization, but it often
> seems evident.  A cat playing with yarn, or a cat playing with a mouse,
> or a dog with a bone.  It gets harder with insects, etc.
>

> > On my view, "aboutness" is when we can relate some symbol or
> > indicator to something else, when we can see meaning.

> To tie it to symbols might be a bit too restrictive.
>

Yes, I think this shows we are thinking about different things here, though we 
probably agree that they occur on the same continuum, i.e., that what you are 
thinking about (your referent for "intentionality") is at the root of what I am 
thinking about (my referent). And that, I think, has been part of our ongoing 
problem in understanding one another.

My view is that consciousness can be accounted for by describing increasingly 
complex and sophisticated functionalities found in various living entities, and 
that these functionalities can, at a basic level, be replicated on non-living, 
manufactured platforms qua entities. If they can be, of course, then there would 
be no barrier, in principle, to making them more and more complex and 
sophisticated until we have replicated the level of complexity and 
sophistication found in us.

I think you reject this view because you are supposing something more is 
happening in the organic entity, something that isn't found in the 
theoretically possible machine. But if intentionality is describable as I've 
suggested, then there would be no reason to think there is anything more going 
on in the living organism.

But here, I think, we will find ourselves grinding to a halt again, because I 
suspect you will still want to say something more IS going on. If I am right in 
THAT expectation, can you say what? If it is homeostasis, or something growing 
out of homeostasis, can you describe what that is and identify the mechanism 
that is, or produces, intentionality (and the other relevant features of 
consciousness)?


>
> > Perhaps the issue between us hinges, to some extent at least, on
> > your focusing on the nature of computational programs themselves
> > (nothing conscious about them!) vs. my focusing on the nature of
> > computational systems (i.e., many different programs running many
> > different processes to accomplish many different functions in a
> > kind of orchestral arrangement)?
>
> That could be.  When you get to the level of asking "how do I program
> that" you begin to see some difficulties that were not so obvious
> before.  Hubert Dreyfus says "I was particularly struck by the fact
> that, among other troubles, researchers were running up against the
> problem of representing significance and relevance
> <http://leidlmair.at/doc/WhyHeideggerianAIFailed.pdf> ".  While those
> are not the words I would have chosen, it's a pretty good assessment of
> the kind of problem I ran into.
>

Yes, the issue must be what it means to recognize significance, relevance, 
etc., i.e., what it means to see meaning (semantic content) in anything. As I 
have suggested, I think this can be adequately described as a process of 
connections made between particular inputs and complex, interlocking networks 
of retained pictures of past inputs. One might call these retained pictures 
"representations", though I would be leery of equating this with any given 
symbol representing something else, since that would be a different, if 
related, use of the term "representation".
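
To make that a bit more concrete, here is a deliberately crude sketch in 
Python of the sort of process I have in mind. Everything in it (the stored 
patterns, the features, the links) is invented for illustration; it is a toy, 
not anyone's published model:

    # A toy sketch of "seeing meaning": match a new input against retained
    # patterns and follow the connections that light up. All data invented.
    memory = {
        "yarn":  {"features": {"long", "thin", "soft", "moves"}, "links": ["play", "cat"]},
        "mouse": {"features": {"small", "furry", "moves"},       "links": ["prey", "cat"]},
        "bone":  {"features": {"hard", "white", "chewable"},     "links": ["food", "dog"]},
    }

    def recognize(input_features):
        """Score each retained pattern by overlap with the input; return the
        best match plus whatever it is connected to (the "meaning")."""
        best, best_score = None, 0
        for name, record in memory.items():
            score = len(input_features & record["features"])
            if score > best_score:
                best, best_score = name, score
        return (best, memory[best]["links"]) if best else (None, [])

    print(recognize({"small", "furry", "moves", "grey"}))
    # -> ('mouse', ['prey', 'cat'])

On this picture the "aboutness" is nothing over and above the connections: the 
input means "mouse" just insofar as it activates that retained pattern and its 
links.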

Are there other ways of characterizing this sort of recognition (seeing meaning 
in things)? We know that we do it, but the question is just what "it" is in 
this case. Some, like Searle I think, want to say it is simply not yet 
adequately accounted for (though perhaps it may be at some point) and that it 
has a unique "ontology", being first person rather than third. Against such a 
view, Dennett proposes that we can fully account for it causally via 
third-person descriptions, even if we have an intuitive first-person idea of 
what we mean. His point is that we don't have privileged access to the inner 
workings of our minds but only see the surface stuff, including the sense of 
being a self, of understanding things, of seeing meaning, etc. He rejects 
Searle's insistence on the primacy of the first-person account when it comes to 
the features we call "consciousness".

Thus far it seems to me that those who, like Searle, insist on the first-person 
picture over everything else simply have no real answer and make no attempt at 
one. They want the first-person picture to remain unaltered. They prefer the 
mystery. This is probably just a function of different preferences in the end, 
but it makes for some lively and seemingly never-ending debates.


>
> > Yes, the human eye is constantly moving about and the picture it
> > captures consists of many distinct imprints or partial images which
> > the brain somehow sees as a whole, a complete pattern. (Hawkins
> > uses this model quite a bit in his book On Intelligence.)
>
> I am quite skeptical of that view.  It's a top down designer view of
> how to do vision, rather than a bottom up evolutionary view.


I don't think he excludes the evolutionary aspect at all. What he is interested 
in, however, is not what the eye does per se (except as an instrument for 
information gathering), and not even what different eyes in different organisms 
can do. He is after an account of how intelligence works, and he lodges that in 
an account of the neocortex. It's already established that there are parts of 
the brain, including the neocortex, that are implicated in vision. 
(Ramachandran suggests there are at least two paths visual signals follow into 
the brain and that both are implicated in seeing, though only one, the path 
that hits the cortex, is actually part of the conscious instance of seeing.)

For Hawkins what is important is how what is seen becomes part of what we know, 
and he links that to memory, since his account of intelligence is that it is a 
memory function. The neocortex, he argues, is essentially a very sophisticated 
memory machine that captures, retains and recapitulates patterns at 
progressively more complex and more global levels. How that becomes part of our 
consciousness he leaves to others, so he is at least suggesting that we could 
have a perfectly intelligent device, designed along the same principles, that 
lacked consciousness or, at least, the full gamut of what we mean by 
"consciousness".

Here, I would suggest, Stanislas Dehaene's work takes over.


>  I think it
> more likely that it is similar to a single cell scanning back and
> forth, and looking for sharp signal transitions to find a boundary.
> However, it is being done a billion times in parallel by the different
> retinal cells.
>

Presumably it is the vast number (and consequent complexity) of all these cells 
in coordination that makes more sophisticated seeing possible, as well as 
awareness of what is seen. Hawkins' view is that each neuron performs a fairly 
simple, repetitive algorithm but that, when organized together in complex 
arrays, they work in unison to produce the complex pictures of the world that 
we actually get from the inputs we receive all the time.
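
That picture, a simple rule repeated massively in parallel, is easy to sketch. 
Here is a toy version in Python (the scene, the rule and the threshold are all 
invented for illustration):

    # Every "cell" runs the same simple, repetitive rule: compare my input
    # with my neighbour's. Boundaries fall out of many cells doing this at
    # once; the loop below is just a serial stand-in for that parallelism.
    scene = [0, 0, 0, 9, 9, 9, 9, 2, 2, 2]   # light intensities along one line
    THRESHOLD = 3                             # how sharp a jump counts as an edge

    def cell_rule(left, right):
        return abs(left - right) >= THRESHOLD

    edges = [i for i in range(len(scene) - 1)
             if cell_rule(scene[i], scene[i + 1])]
    print(edges)   # -> [2, 6]: the boundaries between the three regions

No single cell knows anything about boundaries as such; the boundary is just 
where many identical comparisons happen to fire.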


>
> > What is a visual feature but a pattern within a larger pattern,
> > a picture within a larger picture?
>
> No, I disagree with that.  The features are marked by boundaries.  And
> the thing about boundaries, if you are using a scanning method, is that
> you can locate boundaries with higher resolution than you can locate
> other things.
>

I would ask the same question: what are boundaries? At one level a line is a 
line, of course, but at another it will be seen to be many much smaller points 
aligned together, or falling within or along a particular trajectory. The 
pixels on a computer or TV screen are tiny dots that form a picture and all 
the boundaries we see within and around that picture. Why would there be 
boundaries in some natural state that we have access to? I would say it is 
highly likely that all boundaries are relative. And if that's so, then a 
feature is just another pattern within a larger pattern, itself containing 
still smaller patterns. Is there anything we can see in the universe that is 
not capable of being understood as a combination of much smaller things?
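
Reusing the little edge detector from above makes the relativity point 
concrete: whether a boundary is "there" at all depends on the scale and 
threshold of whatever is looking (again, the numbers are invented):

    # The same smooth ramp contains no boundary for a fine-grained detector
    # but a clear boundary for a coarser one that samples every third value.
    gradient = [0, 1, 2, 3, 4, 5, 6, 7]

    def find_edges(signal, threshold):
        return [i for i in range(len(signal) - 1)
                if abs(signal[i] - signal[i + 1]) >= threshold]

    print(find_edges(gradient, 2))        # -> []      no boundaries seen
    coarse = gradient[::3]                # [0, 3, 6]  a cruder sampling
    print(find_edges(coarse, 2))          # -> [0, 1]  boundaries appear

The scene hasn't changed; only the detector has. That is all I mean by saying 
boundaries are relative.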


> In any case, the visual part is guesswork.  However, the standard AI
> approach is too dominated by top down thinking.


You may be right on that. Certainly to date AI has been singularly unsuccessful 
in clearing this hurdle. At the least one may reasonably conclude that another 
paradigm is needed (which is the point of Hawkins' critique, by the way).


>  From an evolutionary
> perspective, you need to find a use for a single retinal cell, and then
> an evolutionary benefit for proliferating that into many retinal cells.
>

I don't see why that would make a lot of difference. That particular 
instrumentalities can effect certain functions doesn't mean they must be seen 
as the only things able to effect them.

>
> > As I noted, Hawkins suggests the brain develops and retains templates
> > and that when a remembered image is called up we get more of an
> > adumbration which we then use to plug in details, presumably by
> > recognizing subsections and using this to call up detailed images
> > within the larger one.
>
> The templates part is okay, if intended as a recognizer.  J.J. Gibson
> (the "direct perception guy") would have used the term "transducer"
> rather than "template."  However, I am doubtful about the "call up
> detailed images" part.  I doubt that there are any stored images to
> call up.  Sure, we can have imagery in our thought, but it doesn't seem
> to be a called up image and is more likely a reconstruction.
>

Yes, that is Edelman's view and, in part, it's Hawkins' as well. Edelman argues 
that memory in humans is dynamic and ever changing, contrary to the precise 
replication we get with computers (if they're working properly). I think 
Hawkins' template picture is the better one, though. Both allow for dynamism in 
memory, but Hawkins provides a mechanism while Edelman just notes the dynamic 
nature.
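
The contrast is easy to show in miniature. In this toy (entirely my own, and 
much cruder than either man's account), computer-style memory stores the thing 
itself, while template-style memory keeps only a coarse summary and fills the 
details back in on recall:

    # Verbatim storage vs. reconstruction from a coarse template.
    original = [3, 3, 3, 9, 9, 9, 9, 9, 1, 1, 1, 1]

    stored_copy = list(original)          # computer memory: exact replica

    # Template memory: keep one averaged value per region of four...
    template = [sum(original[i:i + 4]) // 4
                for i in range(0, len(original), 4)]
    # ...and on recall stretch it back out, reconstructing the detail.
    recalled = [v for v in template for _ in range(4)]

    print(stored_copy == original)   # True:  verbatim recall
    print(recalled)                  # [4, 4, 4, 4, 9, 9, 9, 9, 1, 1, 1, 1]
    print(recalled == original)      # False: right overall shape, wrong details

The reconstruction gets the gist right and the fine grain wrong, which is just 
what the anecdote below turns on.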

On the level of personal experience I suspect we can all describe instances of 
remembering things either correctly or incorrectly, including pictures. But 
mental pictures are not precise replications of every feature of the thing we 
are remembering. When we close our eyes the image we get is sometimes little 
more than a "pale" imitation of the real thing, yet in dreaming it often seems 
real enough. And I am reminded of that incident when, coming to, I saw a vivid 
image of the computer screen I had been looking at only moments before -- until 
I tried to focus in on the details to see what was actually written there, and 
then I realized that the writing I was looking at said nothing; it had the 
appearance of gibberish. Then I was suddenly awake and aware I had been coming 
to after having passed out. And yet the computer screen had looked so 
brilliantly clear to me, in color, etc. If that's not an instance of recalling 
a picture, albeit without critical detail, I don't know what is and, if it is, 
then we do call up images of things we have seen.



>
> > Isn't that true of us too? We only see in the world what we are
> > built to see. If we had been built differently the world might seem
> > entirely different to us, no?
>
> Do you really think that we were built to see (in the sense of
> "comprehend") jet aircraft, HIV virus, electron microscopes?
>
> Regards,
> Neil
>

I think we were "built" to see certain wavelengths but not others, hear certain 
sound vibrations but not others, smell certain kinds of olefactory stimuli but 
not other, etc. I think we have equipment sufficiently suited for our 
environment to allow us to survive and even prosper in it. Within the 
parameters of that equipment, we see and comprehend jet aircraft, HIV viruses, 
electron microscopic imagery, etc.

Again it seems we are here talking about different things! I am not denying 
that we develop new ways to see the world. But I am saying that we have limited 
equipment for seeing the world because of the limited type of creatures we are. 
Sometimes we can compensate for these limitations by technology, but when we 
can't, we may not even know it.

A machine entity would similarly have the limitations specced into it by its 
builder.

SWM
