[Wittrs] Is Homeostasis the Answer? (Re: Variations in the Idea of Consciousness)

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Thu, 04 Feb 2010 01:30:26 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, "iro3isdx" <xznwrjnk-evca@...> wrote:


> --- In Wittrs@xxxxxxxxxxxxxxx, "SWM" <SWMirsky@> wrote:
>
>
> > This is very difficult for me to parse ...
>
> Having read your full reply, I'll say that this is probably the  worst
> miscommunication ever.  In retrospect, I should have realized  that
> would happen.
>
> There's no way I can straighten that out, so I won't even try.
>


Yes, I too should have realized this would be a doomed effort.


> I'll make a few meta-comments to give some perspective.
>
> There are problems that a cognitive agent needs to solve.  AI methods
> don't solve them.  They don't even attempt to solve them.  In fact  AI
> proponents are blissfully unaware that the problems even exist.
>
> There are other, quite different problems, that AI systems do  attempt
> to solve.  As best I can tell, those are not problems that  any actual
> cognitive agent needs to solve.
>

But the issue that I am addressing, and have always been addressing, even when 
asking you for an explication of your reasons for thinking AI is on the wrong 
track, is not what cognitive agents do but how they come to be in a world chock 
full of apparently inanimate things.


> In my last post, I was trying to present the basic principles with
> which a cognitive agent would address those problems that it needs  to
> solve.  And, as I should have expected, you have attempted to  construe
> it as being about the kind of problems that AI systems  actually
> address.
>
> It's pretty much a total miscommunication.
>
> Regards,
> Neil
>
> =========================================

Yes, but the issue at hand is how we get these kinds of sentient agents, that 
is, entities with a subjective point of view, entities that experience. It's 
not what THEY do but how they come to be that AI, and cognitive science 
generally, addresses.

You have said, here and elsewhere, that AI is on the wrong path and that you 
have a different, more promising approach, and I asked for more information on 
that. That is, I asked you to explain what you once told me on another list: 
that the key to understanding how minds come to be (not how they came to be, as 
in evolutionary history!) lies in understanding the homeostasis of living 
systems (which, presumably, computers don't have). But that still remains 
unexplicated as far as I can see, or at least I have failed to understand your 
reasons for thinking homeostasis yields pragmatics yields perception yields 
mind. There is some account of the mechanics of what brains do that is still 
missing.

At least Dennett has an account, whether or not one chooses to say it can't 
work because it is premised on the abstraction of computational programming. 
I am trying to understand the alternative you once told me you had in mind when 
you criticized Dennett.

But you're probably right. We've gone over this before and never gotten any 
further than we have now, so perhaps we really are just not understanding each 
other. I am willing to consider that I may just be missing your points here, as 
you say. Sorry we couldn't make progress on this, though.

SWM

=========================================