--- In Wittrs@xxxxxxxxxxxxxxx, "College Dropout John O'Connor" <wittrsamr@...> wrote:

> So... what of the no-private-language argument?

I'm not sure I followed the original red chicken thingie, John, which is why I didn't comment. Since you seemed to be supporting my comments, though, I didn't seek clarification. It was enough to hear from someone on this list who wasn't mainly about defending the Searlean orthodoxy no matter what!

However, as I now have a little time (having some work done today, but that is currently in hand), I took another look at your original post on the matter. If I understand it rightly, you are pointing out that the reason I deny that Searle's CRA works to show that computer programs can't bring about consciousness in the form of human-type understanding is that whatever accompanies understanding (what happens in a mind when understanding happens) is an ancillary accompaniment of that instance of understanding, and therefore its occurrence or non-occurrence is not deniable merely because it isn't observed in some sense.

I don't know if I have you right and, if I don't, I apologize and would ask for further clarification. But if I do, I want to say that this isn't quite what I mean. In fact, I am saying that understanding just IS those goings-on in the mind that occur with an instance of understanding, BUT understanding need not ONLY be that. I would say that understanding, like any feature of consciousness, occurs along a continuum, so that we could speak of a dog's understanding, or a cat's, or that of much lower animals, and still be speaking meaningfully even if we aren't supposing that what is going on in their brains is the same as in ours. The lower "down" we go on the continuum, the more mechanical what we call understanding would seem to be.
On this view, a machine system could have understanding and, indeed, could have it on an increasingly "higher" scale that brings it closer to, or makes it equivalent to, our own, depending on the sophistication of the programming and the capacity of the machinery to run it. My view explains the mental associations we have, as part of any instance of understanding, as just a more sophisticated system of linking and referencing.

Clearly this isn't a view that everyone is willing to subscribe to. Those who side with Searle, or who are just outright dualistic in their approach, think something qualitatively different is at work when any instance of understanding occurs, something that reflects a radical division between what I have called a lower level of understanding and what we have. That is, they adhere to a view that what we call understanding isn't something that can be explained as some combination of otherwise perfectly physical operations that, when working in sync in a suitable way, engender the experience of understanding that humans have as part of their mental lives. They think that experience, itself, must be something different, set apart, beyond explanation as a function of physical processes/events.

I think that my view is consistent with Wittgenstein's notions but that it isn't behaviorist in the sense in which behaviorism represents a theory of mind (nor do I think Wittgenstein is behaviorist in that way). But there are lots of misunderstandings surrounding all of this, mostly reflecting the ongoing problems we see in deploying language to speak about these sorts of things.

Which brings us to the private language question. My view is that Wittgenstein's point about the public venue that language requires for its formation and operation is correct, on balance.
Speaking of mental phenomena, of our mental lives, is not an easy task because such referents are not part of the public domain, and language, formed in a public domain and dependent on publicly accessible criteria, fails to provide clearly specifiable referents for description in the private domains of our mental lives. Word usage requires agreement on the criteria of usage, but when each of us has sole access to what we are trying to denote or describe, we end up having a hell of a time communicating.

This, I think, is a large part of why it's so hard to get common understanding, let alone agreement, about questions like what we mean by "mind", "consciousness", "understanding", "intelligence", "intentionality", "belief", "thought", "awareness", etc. As with most words, these and their kin do have multiple meanings, some more public than others, and, as David Chalmers has noted, many of our mental words have both a public and a private meaning which it is often hard to keep straight.

Marvin Minsky, the noted AI researcher, thinks we should do away with a word like "consciousness" entirely because it has no specific referent but only a grab bag or, as he puts it, a suitcase full of different and sometimes seemingly unrelated referents. Being an ordinary language kind of guy and a follower, of sorts, of Wittgenstein, I disagree here because I think the term (and its kin) are perfectly usable and comprehensible in ordinary discourse (or we'd have thrown them over long ago) and that what's really required is that we pay attention to context and to how we use the term(s) when we do use them.

But I grant that one cannot force others to pay attention, and many of the ongoing and seemingly never-ending debates on lists like these, about Searle vs. Dennett or dualism vs. non-dualism, etc., seem to arise from the continued failure to get agreement on usages of this particularly troublesome sort.
Still, if philosophy is to be meaningful at all, one of the things we have to do, in the course of pursuing it, is to get and keep clear on the nuances of usage. As for brains and what computers can do, I believe we can (and should) safely leave these things to the scientists.

SWM

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/