--- In Wittrs@xxxxxxxxxxxxxxx, "iro3isdx" <xznwrjnk-evca@...> wrote:

> --- In Wittrs@xxxxxxxxxxxxxxx, "SWM" <SWMirsky@> wrote:
>
> > But the issue that I am addressing and have always been addressing,
> > even when asking you for an explication of your reason for thinking
> > AI is on the wrong track, is not what cognitive agents do but how
> > they come to be in a world chock full of apparently inanimate things.
>
> But you only see that as a puzzle because of how you are looking at it.
> Even the most primitive biological organism has more intentionality than
> a computer will ever have.

That is an assumption, and as long as you begin with it, it stands to reason that the approach I am suggesting seems to make no sense to you. The issue is what it is about intentionality, or any of the other features of consciousness, that makes them what they are. We can examine what it means to be intentional (etc.) for our part, to think ABOUT things. We can say that it doesn't look like that feature is present, say, in certain other creatures, or is present in the way it is with us, etc. But then one can say, well, there is still something in another creature that is like that feature in me, but my computer doesn't have it. But if what it is is something computers COULD have, if they could be brought to do the same things the other creatures are doing, or we are doing, then why shouldn't they have it?

You've said homeostasis is at the bottom of it, that no computer has homeostasis, and that simply producing a virtual homeostatic state in a computer won't suffice to do the same thing as really having it.
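To be concrete about what I mean by "a virtual homeostatic state", here is the sort of toy feedback loop I have in mind. This is only an illustrative sketch; the variable names, actions, and numbers are all invented for the example:

```python
# Toy sketch of a "virtual homeostatic state": an internal variable
# ("energy") is held near a set point by a negative-feedback loop that
# selects whichever action best restores equilibrium.
# All names and numbers here are invented for illustration.

SET_POINT = 100.0
ACTIONS = {"feed": +10.0, "rest": +2.0, "forage": -5.0}  # predicted effects

def choose_action(energy):
    """'Pragmatic choosing': pick the action whose predicted outcome
    lies closest to the set point."""
    return min(ACTIONS, key=lambda a: abs(energy + ACTIONS[a] - SET_POINT))

def run(steps=50, energy=60.0):
    """Run the feedback loop; each step also pays a fixed metabolic cost."""
    for _ in range(steps):
        energy += ACTIONS[choose_action(energy)] - 1.0
    return energy
```

Run from an initial deficit and the loop settles near the set point and hovers there. Whether such a simulation, however elaborated, could ever amount to really having homeostasis is, of course, precisely the question between us.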
But you haven't shown or described just what it is about homeostasis that gets us to intentionality. You've said homeostasis drives pragmatic choosing, which drives perception, which drives the features we call "being conscious" -- or something to this effect -- albeit without showing why, if we could produce these features another way, we couldn't also get consciousness, AND without showing how the mechanics of this transmission of causes actually produce those features. Every time I ask you to answer these questions you tell me I am misunderstanding you or talking about something else. Now it may be that I am misunderstanding you. But if I am talking about something else, it's because you are talking about something different from what I am asking about. My question is and remains: how does consciousness come to be in the world and, since brains appear to be the requisite physical entity for consciousness to occur, what do they do that makes it happen, and could what they do be done by something else (e.g., computers)? To this your answer is finally "Even the most primitive biological organism has more intentionality than a computer will ever have". But isn't this basically an assertion (perhaps even an assertion of faith)? Absent an understanding of the mechanism that makes consciousness happen (which is what I keep asking about), we have no idea whether your statement is actually true.

> Sure, an amoeba or a plant is not conscious.
> But it is still very different from computers.

Some would dispute that. I am agnostic myself on the subject of whether it is different in a relevant way.

> You are trying to look at everything as mechanism.

Well, it's obvious that the brain is a physical thing, that the brain does something (or lots of things), and that some of what the brain does manifests as being conscious in the entity in which the brain is situated.
The options, in our usual categories of thought, are these: the brain merely correlates with the occurrence of consciousness, albeit closely (consciousness and brains somehow co-exist), or the one causes the other. There is no reason to think that consciousness conjures or imagines the brain into existence, but there is reason, based on what we generally believe we know of the world, to think the brain brings consciousness into existence. Now if you want to call that assuming a mechanism, I am certainly in agreement (and often use the word "mechanism" myself for this relation). However, it is possible, given how we use "mechanism" in ordinary language (i.e., to refer to mindless physical interactions), to get a wrong picture here, i.e., to think only of mechanical contrivances in some Rube Goldbergian picture of things, or just to think of a dumb automobile engine, for instance. Of course, by "mechanism" I mean any physical interactions, from chemistry to the atomic and sub-atomic behaviors of particles in physics. On THAT picture, everything that happens in the universe has a physical explanation. The only question is whether we should include the occurrence of consciousness in that "everything" or whether we must, by excluding it, expand our idea of what makes up the universe.

> And in a way, the
> greater puzzle is that you (and others) would do that. As best I can
> tell, the only actual mechanisms that exist are human artifacts. So this
> whole idea of trying to reduce everything to mechanism seems foolish.

Ah, then you can see, perhaps, why I find your use of "mechanism" problematic, given what I have just said above. I don't mean by the term what you apparently mean by it, namely nothing but "human artifacts"!

> > Yes, but the issue at hand is how do we get these kinds of sentient
> > agents, that is entities with a subjective point of view, entities
> > that experience.
> > It's not what THEY do but how they come to be that
> > AI, and cognitive science generally, addresses.

> Yet it seems to me that AI and most of cognitive science make little
> effort to address that question. What I more commonly see is people
> trying to explain away that question, and to convince themselves that it
> is all mechanism.

If your idea of "mechanism" is as constrained as you have described it above, "human artifacts" only, then it would not be surprising that you would make this mistake. But biological functioning is as mechanical (on my broader view of "mechanism") as anything else, even if it isn't manmade!

> But, if you look around the world, the only
> mechanisms are our own created artifacts, so it seems foolish to try to
> explain everything as mechanism.

This looks like a word-meaning problem to me. No one I know who claims that brains do something that produces consciousness thinks that they are talking about things like automobile engines and the like. Brains are biological machines, not manmade, with a different genesis, which could well include very different underlying principles of operation. Still, brains can work or fail to work, and when they work they are doing things in the world. Now it's my turn to invoke the word "magic" and say there is nothing magical about brains on such a view. They are biological entities and, insofar as they do some things and not other things, they are machines. They just aren't manmade artifactual machines. They are a good deal more complex, they are naturally occurring, and they operate in ways we have yet to understand.

> > That is, I asked you to explain what you once told me on another
> > list, that the key to understanding how minds come to be (not
> > came to be as in evolutionary history!) is in understanding
> > the homeostasis of living systems (which, presumably, computers
> > don't have).

> It seems that I cannot explain it.
> You do not recognize the existence
> of the kind of problem that homeostasis can solve, and I have been
> singularly unsuccessful in my attempts to introduce you to those
> problems.

That may be. While I disagree with your statement that I don't recognize the existence of the kind of problem you say homeostasis can solve (I have never given any evidence, here or anywhere, of denying that living organisms are closed systems engaged in various self-sustaining functions, including defense from external incursion, ingestion of fuel to maintain internal equilibrium, and self-propagation), I do think that we are addressing different questions. I am interested in how minds are brought into existence, each time one is, and in what that says about what minds are, while you seem to be more interested in the evolutionary progression that gets certain kinds of systems to the stage where they develop the consciousness we associate with having a mind. I don't deny the latter question or claim disinterest in it. I just want to note that it isn't the question I have been addressing.

Insofar as you once said that AI is wrong because it fails to take homeostasis into account, you need to show how taking homeostasis into account leads to a different explanation of what consciousness is and how it is brought about in the world than the one AI, and the theses that support it, provide. But insofar as you talk about something else instead, you aren't doing much to support your criticisms of AI. Now AI may well be the wrong way to go. But you told me you had a reason why it was wrong and that the reason lay in homeostasis. So you need to say how homeostasis brings consciousness about (not in terms of evolutionary history but in terms of each individual conscious entity during the period of that entity's own existence)!

> > At least Dennett has an account, whether one chooses to say it can't
> > work because it is premised on the abstraction of computational
> > programming or not.
> Whether or not it is based on particular abstractions is not what
> matters. The learning method proposed for AI is wholly inadequate to
> account for human learning.

> Regards,
> Neil

> =========================================

You could be right about AI being wrong in terms of the learning method. Hawkins makes the same claim (though you also reject Hawkins). But at least Hawkins offers an alternative (pattern matching, retention, and recapitulation via a relatively simple algorithm in a complex array of cells in the brain, in lieu of the agglomeration of massively complex algorithms that AI proponents propose). Thus far, all you have said is that the homeostasis of closed systems (of which living systems are the obvious, if not necessarily the exclusive, example) leads to pragmatic choosing, which leads to perception (as the imposition of pre-existing forms on raw sensation), and that this somehow leads to the features we associate with being conscious.

As you know, I have been intrigued by your claim since you first broached it, and all my efforts in our ongoing discussions have been directed at getting from you the mechanical specifics (how each of the things you've cited works to bring about the next thing in the chain and how this finally leads to the occurrence of minds). But you seem to want to disregard mechanical explanations entirely, on the grounds that "mechanical" implies something artifactual made by humans. THAT picture seems to me to be based on a misunderstanding of how I am using the term "mechanical". Now, if you insist that, because of such a picture, NO mechanical explanation can be given, what is finally left? Are you saying that consciousness just happens when brains reach a certain point of development? Is it your contention that consciousness is a new thing brought into the universe in some other-than-mechanical way?
Or that it is always here, somehow riding alongside the physical parts of the universe, though perhaps it isn't always recognized as being here? If you have no mechanism to give us to explain how minds occur in brains, how brains produce consciousness, then it seems to me you are finally resting your case on a dualist presumption. Now I know "dualism" is sometimes taken as a dirty word. I assure you I don't mean it like that. I just think that one should be clear about where one stands. If nothing in the physical universe suffices to make consciousness, then consciousness is a separate thing, following its own trajectory, in which case lots of things, including ghosts in machines, would seem to be possible. Is that where you want to go?

SWM

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/