[Wittrs] Is Homeostasis the Answer? (Re: Variations in the Idea of Consciousness)

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Sun, 07 Feb 2010 01:20:56 -0000

I thank you for taking the time to explicate your views further. I think we are 
making some progress, though, as you can see from what follows, I continue to 
have some concerns about your usages (e.g., "intentional", "signals", etc.), 
and I still don't see how you get from the homeostatic nature of living 
organisms to a mechanism that yields consciousness in some living organisms.

After reviewing your comments below, it seems to me you are hanging your 
argument largely on the idea that the organism organizes unformed, essentially 
meaningless, input according to its homeostatic needs and that this occurs at 
the most basic level and eventually manifests in conscious minds like ours. But 
what is still missing is the how, I think.

What remains particularly problematic for me is your idea that all order is 
internal to the organism in the midst of an inchoate world. This seems to me to 
take no account of the fact that every organism is an integral part of its 
environment and can be nothing else if it is to survive. Given this, it cannot 
be so radically different from the world around it as to have the only order 
within a field of otherwise complete chaos. Rather, it makes more sense to 
speak of degrees of order or, perhaps, different kinds of order in the organism 
and its environment, for if an organism cannot find order in, and get in sync 
with, its environment, how could it be expected to endure? And if there were no 
order in the environment to find, how could the organism manage it?

Anyway, I have commented below on your points, if you're interested in seeing 
my specific concerns. If not, the foregoing should serve as the executive summary!

The rest follows:

--- In Wittrs@xxxxxxxxxxxxxxx, "iro3isdx" <xznwrjnk-evca@...> wrote:

>
> --- In Wittrs@xxxxxxxxxxxxxxx, "SWM" <SWMirsky@> wrote:

>
> > You have indicated that you believe that it takes something that
> > is unique to entities like us to get to consciousness and that
> > computers simply lack this.
>
> More accurate would be that I don't know how to do it with computers.
>

Neither does anyone else at this point. It's all theorizing, all speculative. 
The question is: what looks like it should work once all the kinks are ironed 
out?

>
> > What I have sought from you is a more specific account of just
> > how each thing in the string of things you've sketched out leads
> > to the next and, further, that you explain what the mechanism is
> > that produces consciousness according to this scenario.
>
> What is needed is to fill in the gaps in the AI approach.  The
> difficulty is that you don't recognize that there are gaps, and
> communication breaks down when I try to point them out.
>
>

Yes, that seems to be what keeps blocking our progress here. You keep saying 
there are gaps in the AI approach and pointing to homeostasis as what's needed 
but then, when I try to get you to say how that leads to something that is 
beyond the capacity of an AI-based approach (using computational technology on 
a computational platform), you respond by saying that I am not understanding 
you. It's possible I'm not, of course. If so, bear with me please. Here are my 
specific questions, given what I think you are saying:

1) What are the "gaps" that aren't filled via the AI approach which are filled 
by an approach which involves assuming homeostasis (as I believe you have 
previously said)?

2) Can you say exactly what it is that homeostasis provides to fill the said 
gaps? That is, how does homeostasis, as a principle (or however else you mean 
it), do this?


> > If physical processes cannot produce consciousness in a computer,
> > why should we think they can do it in brains? But if they can do
> > it in brains (and they manifestly can), why doubt they can do it
> > in computers?
>
> It's not the internal processing; it's the inadequacy of the  external
> interaction.
>

What is meant by "external interaction," how does it serve to bring about 
consciousness, and why can't computers have it? What, exactly, is "inadequate"?


 >
> > When I speak of measuring, I think of comparing something to a
> > standard and determining degrees of similarity or dimension to
> > the standard.
>
> Okay.  That's part of it.  But there is also the need to invent  a
> standard in the first place, and to test out tentative standards  to see
> if they work.  There's an example where pragmatic judgement  comes in,
> and where I think AI will have a problem.
>
>

Okay, but when I think of "invent" I have in mind a thoughtful process which 
leads to a new idea. But this obviously implies a consciousness already in 
place and, per your own statement, you don't mean that. So you mean something 
different by "invent" I take it? Does the thermostat invent its standard? Does 
the homeostatic system of a one-celled organism do it? If so, what aspect of this 
is "inventing"? Is it possible that, again, we are using our terms in very 
different ways?
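
To make the thermostat question concrete, here is a minimal sketch, in Python, 
of a thermostat as a homeostatic loop (the set-point and tolerance values are 
my own illustrative choices, not anything either of us has specified). Note 
that the "standard" is handed to the device from outside; nothing in the loop 
invents it:

# A minimal sketch of a thermostat as a homeostatic loop. The set-point
# (the "standard") is fixed by the designer, not invented by the device.

def thermostat_step(current_temp, set_point=20.0, tolerance=0.5):
    """Compare a reading against a given standard and act to reduce error."""
    error = current_temp - set_point
    if error < -tolerance:
        return "heat on"    # too cold: counteract the deviation
    if error > tolerance:
        return "heat off"   # too warm: counteract the deviation
    return "no change"      # within tolerance: homeostasis holds

# The set_point never changes from inside the loop; the device only ever
# applies a standard it was handed.
for reading in (18.0, 19.8, 21.2):
    print(reading, "->", thermostat_step(reading))

If "measuring" only requires comparing input to a given standard and acting on 
the difference, this device measures; if it also requires inventing the 
standard, it doesn't. That seems to be exactly where our terms pull apart.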



> > So what are perceptions then? Previously you have asserted that they
> > are what we do with the raw data of the signals we get. That is,
> > you have said such signals carry no information for us, we impose
> > the information on them according to our needs.
>
> Wow!  That is garbled.  I have disagreed with the whole idea of  getting
> raw signals.


So you are saying we get no signals at all? You say we get no raw signals and 
that what we do get becomes perceptions because we impose order on it. But what 
do you wish to call whatever it is we are imposing order on then if not "raw 
signals"? You have previously ruled out "information" since you have said 
information can only be information for something (implying intentionality qua 
aboutness). And you have ruled out "perceptions" because you indicated that 
they are enstructured by the perceiving entity. So we must have a term to 
designate whatever it is that is enstructured, given order, had form imposed 
upon it, etc. What should we call it then? Can we call it anything at all? I am 
not wedded to "signals" or to "raw signals". I just want us to settle on a term 
we can both agree designates whatever it is you want to say we impose form on 
(assuming you are still taking the position that we impose form on whatever it 
is we receive). I note below that you seem also to want to scuttle the term 
"impose". In that case, besides asking what you are willing to call the inputs 
in question if not "raw signals" or "raw data" or "information", I also need to 
ask what you think it is we do if we don't "impose" order. Do we produce it? 
Engender it? Create it?


> That's what my example of measuring with a  ruler was
> supposed to illustrate.  Rather than just getting a raw  signal that
> happens to be around, we carry out a procedure that  yields intentional
> information for us.  And sure, that procedure  makes use of raw signals,
> but it doesn't just take them as they  come - it finds ways of using them
> to our benefit.
>

But what are we to call whatever it is we impose this order on (or whatever 
your preferred verb turns out to be)?

Again, I am confused by this likening of what we do as measurers, with rulers 
and other standards, to whatever it is primitive homeostatic systems are doing, 
which you seem to want to call measuring, too.

>
> > But now we have the same problem. How do we get to the point where
> > we can consciously impose anything?
>
> In your previous paragraph you mentioned imposing information.  I have
> never suggested that.  What I did say is that we impose order.  That
> does not necessarily require much in the way of consciousness.


Okay, let's scratch that wording then. What do you wish to call what we do when 
we encounter unformed input (information? something else?) and convert it to 
formed input? (Can I legitimately call it formed and unformed input or is that 
also to misstate what you have in mind?) If we are not "imposing order" and, 
according to your prior statements, we are not picking up order that is already 
out there, beyond ourselves (your main criticism of Hawkins, as I recall), then 
what is going on in the process you envision?


>  When you
> see a snowflake with geometric patterns, that's a case of  an ordering
> being imposed by inanimate matter.


So you agree there is order in the world for us to see and grasp then?


>  When a biological  cell replicates
> its DNA, that's an orderly procedure but we don't  assume that the cell
> is conscious.
>

Of course not. And when we see "geometric patterns" in snowflakes we don't 
assume the snowflake is doing geometry or that someone else was doing it with 
snowflakes either. Each phenomenon is what it is.

>
> > But above you just wrote: "I did not mention 'intentional signals',
> > yet you ask what I mean". So you want to say there ARE "intentional
> > signals" after all?
>
> Your posts to this discussion group would seem to be an intentional
> signal.
>
>

Meaning I have a purpose in what I post, a meaning to convey, i.e., that what I 
have written is not intended to be gibberish and is not just gibberish?

Can I conclude then that, by "intentional signal", you mean either:

1) Signals that reflect or manifest the intentions (purposes, objectives) of 
the signalers (that they are intended by the signaler); or

2) Signals that have meaning because they are about something and that this can 
be grasped by a reader.

In this latter case they are artifacts of the signaler's intentionality -- his, 
her or its thinking about something -- though we would not say they are 
intentional in the way a signaler is because the signals only carry the intents 
of the signaler and are not, themselves, thinking about anything.

In the former case I guess we could say they are expressive of the signaler's 
intent which the reader/receiver can pick up (interpret, grasp).

A signal can, of course, be both. But a signal can be unintended and still mean 
something, right? (As I would say the chirp of the bird outside your window 
which you reference below has meaning.)

The inputs we receive from the world can be taken as carrying meaning, right? If 
they don't carry meaning, either because of the intent of a signaler or in 
terms of the receiver's understanding, we are not to call them "signals" then, 
on your view?

But then every signal is intentional in one of these two senses, no? And yet 
you made a point of speaking of "intentional signals" which presumably 
differentiates them from an unintentional kind.

Can there be "unintentional signals" and, if there are, what do they consist 
of? Or would the unintentional kind merely be the unintended kind, since all 
signals, on your view, must mean something to someone, in which case they are 
artifacts of someone's intentionality: the sender's, the receiver's, or both?


> > Now you want to say that we give meaning to signals.
>
> It is more that we create signals that are meaningful from the  time of
> their creation, because we created them that way.
>


I take it that you are referring to our putting meaning on raw inputs to make 
them "signals". Without such imposition (I know you don't like the term but it 
seems that it is implied by your suggestion that we make signals by adding 
intentionality to inputs), they are just raw somethings (not data, because not 
information, per your prior disclaimer).

This is where you were disputing with Hawkins because he was arguing that 
intelligence is the function of picking up and learning patterns in the raw 
inputs from the environment around us while you were saying there are no 
patterns per se to pick up, that all such patterns are imposed by us on 
formless inputs. At that point you and I fell into dispute because I argued 
that our ability to survive and prosper as individuals and, thus, as a species 
in the world hinged on our being able to successfully pattern-match in the way 
Hawkins describes, while you denied that such pattern matching really occurs.


>
> > Why don't some signals, generated entirely unintentionally, have
> > meaning to us that we discover rather than impose on them?
>
> When that bird chirps outside my window early on a spring morning,  I
> find that meaningful.  But I think that's an entirely different  meaning
> for "meaningful".
>

Yes, I agree. Though there may be several layers of "meaningful" at issue. It 
may be "meaningful" in that it tells you the new day is dawning (think of the 
rooster crowing) and thus provides you with new information or it may be 
"meaningful" in that it reminds you of how beautiful the world can be, how 
pleasant, how good to be alive, etc. Is this new information per se or just, 
perhaps, a nice feeling that comes over you, one that colors other things you 
are thinking about and prompts you to see things differently than before, etc.?


> When I was young, there was a shed door attached to the house with a
> picture of a tricycle on the inside of the door.  It had handle bars,  a
> front wheel, two rear wheels, and a carrier ledge at the back.  Several
> years later, after I had learned to read, I finally realized  that it
> was actually the word "door".  The serif at the top of the  "d" was the
> handlebar.  The loop at the bottom was the front wheel,  the "o" were
> the two rear wheels, and the "r" was the carrier ledge  at the back.
> "Intentionality" has to do not just with meaning,  but with intended
> meaning.  And "door" was the intended meaning,  not that tricycle.
>
>

I think this reflects the dual use of "intentionality" that we sometimes have: 
the philosophic one (aboutness, being meaningful) and the ordinary (reflecting 
a purpose or objective, i.e., an intent).

Obviously there is a reason we use the same word in both cases, and the above 
suggests why. Meaning is often a matter of discerning the intentions of others 
(what they had in mind, what they want to convey). But if we discern intention 
in terms of our own understanding, as we must, then the mechanism for getting 
at the meanings is that we develop various patterns which we come to recognize 
and associate with other patterns.

Hawkins suggested (and I tend to agree) that we develop and retain patterns 
which we associate with different external stimuli and thus the meanings we 
learn are learned from the world, not simply applied by us to the world. Of 
course, the flow goes both ways, as it must, but the fact that it flows from 
the world to us is as important as, if not more important than, the fact that 
it flows from us to the world.

But why couldn't a computational system of sufficient complexity with channels 
for inputs and outputs achieve the same thing? Why couldn't a computer-based 
entity learn to understand inputs in different ways as you did when you went 
from seeing a picture to seeing a word that had a meaning?
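
To make my side of this concrete, here is a minimal sketch, in Python, of the 
kind of pattern-learning I have in mind (the feature vectors and labels are 
invented for illustration; this is a toy of my own, not Hawkins' actual model). 
The learner stores patterns taken from its inputs and recognizes new inputs by 
similarity to what it has stored, so the associations are learned from the 
world rather than applied to it ready-made:

# A toy sketch, not Hawkins' model: the learner stores prototypes taken
# from its inputs and recognizes new inputs by similarity to them.

import math

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class PrototypeLearner:
    def __init__(self):
        self.prototypes = {}  # label -> stored feature pattern

    def learn(self, features, label):
        """Associate an external stimulus (a feature vector) with a label."""
        if label not in self.prototypes:
            self.prototypes[label] = list(features)
        else:
            proto = self.prototypes[label]
            for i, x in enumerate(features):
                proto[i] = (proto[i] + x) / 2  # drift toward new examples

    def recognize(self, features):
        """Classify a new input by its nearest stored pattern."""
        return min(self.prototypes,
                   key=lambda lbl: distance(self.prototypes[lbl], features))

learner = PrototypeLearner()
learner.learn([1.0, 0.0], "picture")   # hypothetical encoded stimuli
learner.learn([0.0, 1.0], "word")
print(learner.recognize([0.1, 0.9]))   # -> word

Trivial as it is, nothing in this seems to require anything beyond a 
computational platform with input channels, which is just the question I keep 
pressing.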


> > I am trying to follow but am not getting this. Are you saying that
> > the conscious entity already starts with understanding (it's symbols
> > are grounded to begin with)?
>
> Yes.  I am saying that intentionality is not an add-on; it's a
> built-in.  We do things in ways that are intentional.  We don't  start
> with meaninglessness and somehow add some intentionality.
>

But at some point there is meaninglessness in the sense of a lack of 
intentionality. At some point the organism attains intentionality (both in 
evolutionary terms and in terms of each individual organism as it develops). 
The issue before us is how the organism attains it: what must the organism (or 
parts of the organism) do to accomplish this? What is it that brains do that 
you think computers can't do? If understanding the meaning of the markings on 
the shed door represented a transition from one set of associations in your 
head to another (characterized by your learning your letters), why shouldn't a 
computer be able to do the same, whether in the same way or a different way?

Your core thesis is that there is something about organisms that enables them 
to do this and that its absence in computers precludes computers from doing it. 
You have stated that it is the homeostatic dynamic of organisms (though you 
aren't saying you are convinced computers can't be homeostatic too, right? -- I 
ask this since every restatement I offer of your position always seems to 
prompt a denial that I have it right, so, forgive me, I keep trying!). Well, 
what is there about homeostasis that gets organisms to the point of being 
conscious (whether or not we can replicate homeostasis in computers)?


> Look at that measurement example.  If I want to measure, I place  the
> ruler against the desk (or other item), and line it up.  I am  following
> intentional rules.


Yes, you can do it and I can do it and my little grandchildren can do it, to a 
certain point. But my two youngest grandchildren can't do it at all, though one 
hopes both will in time. What is it that is going on in those that can that is 
missing in those that can't? And if we can isolate this mechanism (this 
what-it-is-that-is-going-on), can we replicate it on other, inorganic platforms 
like computational systems?

What is there about homeostasis, on your view, that, if it is absent, precludes 
some computational system from ever achieving what it is you and I and my 
grandchildren can do?


> If we program a computer to do that,  its
> instructions will be more like "operate the vertical motor for  300
> milliseconds, etc".  That is, it will be following mechanical  rules.
> Its rules will be about the mechanism, while our rules are  about the
> thing to be measured and the equipment we use.
>
> Regards,
> Neil
>

But as Dennett argues, that isn't what we do to achieve AI in a computer. It's 
not about programming the specific mechanics of the measuring. It's about 
programming the features that serve, in brains, to make measuring, and 
everything else conscious minds do, possible in computers. Recall his criticism 
of Searle's CR which I posted nearby. He argued that Searle's CR is underspecked 
(my term, not his, but I think it gets at the crux of the problem). Developing 
a successful AI system hinges on developing the kinds of things, including 
intentionality, awareness, understanding, representing, etc., that our minds 
do. And if that can be done, then there isn't any reason to think that such a 
system engaged in measuring, or anything else of the sort we do, isn't really 
measuring, etc.
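
To illustrate the levels point with a sketch of my own (all function names 
here are hypothetical, not anyone's actual system): the same machine can be 
programmed against "mechanical rules" or against rules that are about the 
equipment and the thing to be measured, with the latter implemented by the 
former:

# A sketch of the levels-of-description point. All names are hypothetical.

def run_motor(axis, milliseconds):
    """Mechanical rule: says nothing about desks or rulers."""
    print("operate", axis, "motor for", milliseconds, "ms")

def align_ruler_with(item):
    """A rule about the equipment and the item, implemented by motor steps."""
    print("aligning ruler with", item)
    run_motor("vertical", 300)
    run_motor("horizontal", 120)

def measure(item):
    """A rule about the thing to be measured; returns a stubbed reading."""
    align_ruler_with(item)
    return 75.0  # purely illustrative value

print(measure("desk"), "cm")

On Dennett's sort of view, as I read him, our "intentional rules" stand to the 
neural mechanics in something like the way measure() stands to run_motor(): 
the higher-level rule is about the desk even though nothing but mechanism 
implements it.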

It is by no means a foregone conclusion, of course, that this can be achieved 
in computers, but Dennett's thesis at least shows us a way to try, the kind of 
thing Minsky and those working with him are seeking to achieve. Hawkins, as we 
have seen, says they won't succeed because, to replicate features of 
consciousness like human-type intelligence (he limits his claims to 
intelligence, of course), you have to do not just what brains do but do it the 
way they do it. He argues that brains simply cannot be running, at speed, the 
vast numbers of algorithmic steps computers run, so brain-like results cannot 
come from that route; hence his argument that what's needed is to identify and 
replicate the relatively simple algorithm the cortex presumably relies on, 
implement it in a chip, and then link such chips in an architectural array that 
replicates how neurons are linked.
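
As I understand that proposal, the architecture matters as much as the node: 
one simple routine, replicated identically and wired into a hierarchy. Here is 
a toy sketch of that shape (the node's behavior, remembering and predicting 
transitions, is a stand-in of my own, not Hawkins' actual cortical algorithm):

# A toy sketch of the architectural idea: one simple, identical routine
# replicated in many linked nodes. The routine itself (remembering the
# last transition and predicting the next input) is my own stand-in.

class Node:
    def __init__(self):
        self.memory = {}   # pattern -> pattern that followed it
        self.last = None

    def step(self, pattern):
        """Learn input sequences and pass a summary up the hierarchy."""
        if self.last is not None:
            self.memory[self.last] = pattern    # remember the transition
        prediction = self.memory.get(pattern)   # predict what follows
        self.last = pattern
        return prediction if prediction is not None else pattern

# Wire identical nodes into a small hierarchy: two children feed one parent.
child_a, child_b, parent = Node(), Node(), Node()
for t in range(4):
    up_a = child_a.step(("a", t % 2))      # toy sensory streams
    up_b = child_b.step(("b", t % 2))
    parent.step((up_a, up_b))              # parent sees children's summaries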

I think Hawkins' criticism of the AI approach is an interesting one. What I am 
still trying to dope out is how your emphasis on homeostasis leads to the same 
kind of specific critique that his does.

SWM
