[Wittrs] Is Homeostasis the Answer? (Re: Variations in the Idea of Consciousness)

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Fri, 05 Feb 2010 20:57:56 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, "iro3isdx" <xznwrjnk-evca@...> wrote:

<snip>

> > And which already requires a high level of conscious development
> > in you, the measurer. The question before us is where does that
> > development come from, what is there about you (or any of us)
> > that makes us measurers in this way?
>
> A homeostatic process is already doing measurement, as in the
> self-measurement required for its primitive self-awareness.
>
> There's a bit of a communication problem here.  If I try to illustrate
> a point by commenting on something that happens at a low level (say
> homeostasis), you respond that you don't see how that leads to
> anything.  If, instead, I try to illustrate by giving a high level
> example (that measuring with a ruler), you complain that this already
> requires consciousness.  Either way, it is clear that I am failing to
> get the point across.
>

Yes, we do seem to have a problem here at some stage in the discussion.

Note that I am not denying the role of self-regulating systems (homeostatic 
processes) in this. In fact, I think it's pretty clear that such systems, once 
they develop the capacity to replicate themselves, tend to become increasingly 
complex. One aspect of that complexity, at some point in the developmental 
trajectory, is the appearance of neurological systems, and being aware, or 
having consciousness, is one of the things the more sophisticated neurological 
systems achieve.

Moreover, I am in no way disagreeing with your assertion (I think you are 
asserting this, anyway) that what we call consciousness is grounded in more 
basic, lower level operations which we would not consider conscious (or at 
least not conscious in the way we take ourselves to be). So far, we seem to be 
on the same page, no?

Where we apparently diverge, though, is in your disagreement with the claim 
that consciousness, whatever it is, is such as to allow (theoretically at 
least) for the possibility that computational systems can achieve it, too.

You have indicated that you believe it takes something unique to entities like 
us to get to consciousness and that computers simply lack this. Thus you 
expect that whatever is the cause or source of consciousness will be found in 
biological systems, i.e., that it is something they do but which computational 
systems cannot.

What you say is missing is homeostasis, which biological systems have, i.e., 
they engage in self-regulation through ingestion, rejection and replication, 
all driven by the principle of pragmatics, which leads to the achievement of 
perception (finding meaning in physical signals), which then gives us 
consciousness. Or at least that is what I take from what you have so far 
sketched out re: the relevant mechanism (using "mechanism" in the sense I 
indicated via the definition I previously gave).
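
Just so we are picturing the same thing at the bottom of that chain, here is a 
toy sketch of my own (mine, not yours; the set point, gain and so on are 
illustrative assumptions, not anything you've committed to) of the kind of 
"measurement" a bare homeostatic process performs: a feedback loop that 
compares a sensed value to a set point and corrects the difference, with 
nothing in it that perceives or means anything:

def homeostat(sense, act, set_point, gain=0.1, steps=100):
    # Repeatedly "measure" the gap between the sensed value and the set
    # point, then act to reduce it. The comparison is measurement without
    # a mind; the correction is regulation without intent.
    for _ in range(steps):
        error = set_point - sense()
        act(gain * error)

# Illustrative usage: nudging a stored "temperature" toward 37.0.
state = {"temp": 30.0}
homeostat(sense=lambda: state["temp"],
          act=lambda delta: state.update(temp=state["temp"] + delta),
          set_point=37.0)
print(round(state["temp"], 2))  # settles near 37.0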

What I have sought from you is a more specific account of just how each thing 
in the string of things you've sketched out leads to the next and, further, an 
explanation of the mechanism that produces consciousness according to this 
scenario.

Now you've replied that you think it is low level intentionality (as in a 
phenomenon of mindless measuring such as we might find in a primitive 
homeostatic system) that becomes, in fact, the more sophisticated operation of 
taking a ruler and measuring the height of a desk that we might, as conscious 
beings, do.

I have replied that this is to invoke intentionality to explain how 
intentionality happens and your reply to this is that you mean a low level 
intentionality becomes a higher level one.

To this I will say, again, I think that is right. But my question is not what 
the lineage of development is but HOW what you seem to want to call low level 
intentionality becomes this higher level kind. What I am seeking, again, is 
the mechanism.

Now insofar as we are in agreement that lower level operations underlie higher 
level ones in brains, my view is that there is no impediment, in principle, to 
the same thing occurring on other kinds of physical platforms as long as the 
same kinds of tasks can be accomplished.

I see the tasks to be accomplished, in both cases, as being a kind of 
information processing. But you have said that information requires a mind 
first; therefore, on your view, the physical processes of a computer cannot 
achieve this goal, and all they are doing is mere rote operation without any 
awareness. Now, with Dennett, I don't deny that's true for any given process 
being performed in a complex computational system. But, also with Dennett, I 
want to say that what we call intentionality (and the other features we 
associate with consciousness) can be understood as a property of many 
processes working together, i.e., as a kind of system property. In that case, 
the fact that individual computer operations are mindless is quite irrelevant.
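
To make the system-property point concrete, here is a toy sketch of my own 
(not Dennett's model, and the rules and labels are invented for illustration): 
each operation below is as mindless as any single machine instruction, yet the 
ensemble sorts its inputs in a way no individual part does:

rules = [
    (lambda s: s.isdigit(), "number"),   # a mindless character check
    (lambda s: s.isalpha(), "word"),     # another mindless check
    (lambda s: " " in s,    "phrase"),   # and another
]

def classify(s):
    # No single rule "recognizes" anything; the sorting behavior belongs
    # to the rules plus the selection step taken together.
    for test, label in rules:
        if test(s):
            return label
    return "unknown"

print(classify("42"), classify("cat"), classify("the cat"))
# -> number word phrase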

If physical processes cannot produce consciousness in a computer, why should we 
think they can do it in brains? But if they can do it in brains (and they 
manifestly can), why doubt they can do it in computers?

And so we seem to be back where we started from!

>
> > Yes, but we can't say that the phenomenon of being a consciousness
> > with the capacity to measure comes from having the capacity to
> > measure, can we? The latter depends on the former.
>
> No, the capacity to measure does not depend on consciousness, as
> illustrated by homeostatic processes.
>
>

And here we have the divergence in meaning. When I speak of measuring, I think 
of comparing something to a standard and determining its degree of similarity 
to, or its dimensions against, that standard. There is the standard meter 
maintained in a Paris vault, for instance, the measure of all meters, even of 
other meter gauges. But you surely have something else in mind by "measuring" 
when you refer to a homeostatic system that has no mind engaged in measuring.

But I will grant that we can use "measure" that way, too (though I think it's a 
rather unorthodox and specialized use). So then my question is HOW does this 
kind of mindless measuring become mindful measuring of the sort we do? What is 
the mechanism you have in mind?

I don't deny that homeostasis is a good way of describing an important aspect 
of living organisms. But what I want to know is what you think is unique about 
it such that it should lead to consciousness in entities like us but NOT in 
entities like computers.


> > By "intentional signals" then, you mean signals that we integrate
> > into a framework of meaning, signals that take on intention because
> > they fit into our existing structure of data association? And
> > non-intentional signals are just raw events, information for no
> > one because no association is going on?
>
> This is where major miscommunication sets in.  I did not mention
> "intentional signals", yet you ask what I mean.  I only mentioned
> signals that are not intentional.
>

I was looking for another way of saying what you called perceptions, of course. 
But we can drop the term; it's not critical. You draw a dichotomous distinction 
between signals that aren't intentional (a strange locution, by the way, since 
being intentional in the aboutness sense refers to a mind, not to what is taken 
in by the mind, but no matter, I will use your terminology) and what you are 
calling perceptions. So what are perceptions, then? Previously you have 
asserted that they are what we do with the raw data of the signals we get. That 
is, you have said such signals carry no information for us; we impose the 
information on them according to our needs.

But now we have the same problem. How do we get to the point where we can 
consciously impose anything? Well, you say it starts with something below the 
conscious level (with which I have agreed). But then you never say what is 
going on that makes consciousness suddenly happen. Surely you can't mean that 
the world is raw, meaningless data and we impose meaning on it and thereby 
create ourselves (and the ability to impose meaning on it)?

I suspect you are confusing levels here. That there are layers of operations 
in organic systems below the level of conscious access is unquestioned (at 
least between us -- as far as I can tell). But the issue is: what is the 
mechanism that converts these sub-conscious levels into something of which we 
are conscious? Yes, it happens. But WHAT happens to make it so?


>
> > But don't you see that the very question is being missed in all
> > this because what really needs to be explained is how such signals
> > do take on meaning, intention, in the process of being received
> > and stored by a conscious system.
>
> It isn't really missing.  I was trying to make the point that it never
> happens.  That is, existing non-intentional signals never take on
> meaning.  It is the other way around.  That is, meaning is the basis
> for generating intentional signals.
>

But above you just wrote: "I did not mention 'intentional signals', yet you ask 
what I mean." So you want to say there ARE "intentional signals" after all? But 
isn't this a strange use of "intentional"? A person can be intentional (in both 
senses), but how can a signal be "intentional" in the aboutness sense? On the 
other hand, we can say that a signal can reflect the intention of its generator 
or its perceiver, and I suppose that is what you really mean here. But in that 
case the signal doesn't think about anything, though it may be about something, 
i.e., it may have meaning. Now you want to say that we give meaning to signals. 
I think that's true. But is that the whole story? Don't some signals, generated 
entirely unintentionally, have meaning for us that we discover rather than 
impose on them?

Well, if the imposition is unconscious, as you have emphasized, I guess we can 
say that we are imposing without intending to impose the meaning. But that 
still leaves us where we began: what is it about what organisms do that is 
uniquely capable of generating the state of being conscious?


> Let me restate that a different way.  Harnad raised what he called "the
> symbol grounding problem."  It is his version of Searle's
> intentionality problem.  And it seems to at least approximate what you
> are questioning.  I am saying that there is no symbol grounding problem
> for cognitive systems.  Rather, there is a "symbolizing the ground"
> problem.  The symbols used by a cognitive agent are automatically
> grounded, because those symbols came from solving the "symbolizing the
> ground" problem.
>

I am trying to follow but am not getting this. Are you saying that the 
conscious entity already starts with understanding (its symbols are grounded 
to begin with)?


> A computer based AI system (at least as envisioned by most AI people)
> does not do "symbolizing the ground".  That's what is missing.
>

Well, perhaps here is something I am not fully understanding. As you know (or 
may know) from our past discussions, I am inclined to think that we give 
meaning to, see meaning in, or grasp the meaning of things by relating inputs 
to complex and interconnected networks of other inputs, i.e., that all meaning 
occurs only in a context of connections, of associations. Because of this, I 
see no reason why a computational system should, in principle, be excluded 
from the community of semantics users.
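
Here is a crude sketch of what I have in mind (mine alone -- a toy, not a 
claim about Harnad or about how brains do it; the particular associations are 
invented for illustration): a symbol's "meaning" in this toy is nothing over 
and above its place in a web of other symbols, and relating two symbols is 
just a matter of comparing their neighborhoods:

associations = {
    "fire":  {"heat", "light", "danger", "smoke"},
    "smoke": {"fire", "grey", "smell"},
    "ice":   {"cold", "water", "slippery"},
}

def relatedness(a, b):
    # Purely relational: count direct links and shared neighbors.
    na = associations.get(a, set())
    nb = associations.get(b, set())
    return len((na & nb) | ({b} & na) | ({a} & nb))

print(relatedness("fire", "smoke"))  # linked in both directions -> 2
print(relatedness("fire", "ice"))    # no associative overlap -> 0

Nothing in that toy settles our dispute, of course, but it shows why, on my 
view of meaning as association, there is no principled bar to a computational 
system doing the same sort of thing at scale.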

Perhaps you, referencing Harnad, are claiming that meaning is based on 
something else entirely?

Perhaps this is where we should focus our efforts to explain ourselves to one 
another?


>
> > The issue is WHAT is this associative process that links signals
> > and thereby gives them form and meaning?
>
> I am skeptical of associationism.

I don't know what "associationism" is. My use of the word "associative" should 
not automatically prompt anyone to assume I am advocating some thesis called 
"associationism". However, if you will explain what you mean by this thesis, I 
shall be glad to comment on whether it is my position or not.


>  It seems to me that the later
> Wittgenstein was also skeptical of it, and his argument on the
> impossibility of following a rule is related to that skepticism.
>

Again, I don't know what position you are alluding to, so I don't know whether 
I am saying something that Wittgenstein would have opposed or not (though that 
wouldn't be my reason for supporting or opposing it anyway; still, it would be 
interesting to know what thesis you have in mind here).

>
> > You have suggested that AI cannot work because it cannot be
> > homeostatic. (Have I got that right?)
>
> No, you don't have that right.  What I have said, is that when
> attempting to come up with an AI account of cognition, I ran into
> problems that I could not solve with computation but which could be
> solved with homeostasis.
>
> Regards,
> Neil
>
> =========================================

Okay, so you aren't claiming that AI can't do this, then? Is that right?

How then do you solve the problem with homeostasis? Can you explain the 
mechanism by which homeostasis produces a conscious mind (not the lineage in 
evolutionary development but the way it works in any given instance of a 
conscious entity)?

Thanks.

SWM

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/
