--- In Wittrs@xxxxxxxxxxxxxxx, "iro3isdx" <xznwrjnk-evca@...> wrote:

> --- In Wittrs@xxxxxxxxxxxxxxx, "SWM" <SWMirsky@> wrote:
>
> > How can we look at the world through other than human eyes?
>
> We have a pretty good idea of what kind of raw data would be available
> to sensory cells, and then look at how one could use that kind of
> data.

But what would the Andromedans "see" in that data? Would it be what we see or some other kind(s) of representation?

> > What is pragmatic about it? Why not name the dog "fireplace" and
> > the fireplace "dog"?
>
> That misses the point. The name chosen isn't of particular importance.
> It is what we choose to name that is important, and that presumably
> reflects what we find it useful to name.

And why would it be more useful to call a dog a "dog" and not a "fireplace"? Doesn't it have to do with the associations which tie into the historical practices of language speakers in the tradition within which those words are used? Doesn't it have to do with etymological connections?

Looking back at your original anecdotal example, I note that your concern was not initially with the relative value of "fireplace" vs. "dog" as names for the entity we currently call a dog but, rather, with why it seems to us to make more sense to speak about the entities represented by the cat and the dog rather than the visual characteristics we associate with them. Then, when I replied in that vein, you shifted this to the issue of naming using words like "dog" vs. "fireplace", though you never mentioned anything like that initially (confining your reference to the fireplace to its being a source of light which reflected on the fur of one of the animals, thereby producing a particular color). It strikes me that the question of entity vs.
characteristics, on the one hand, and name choosing, on the other, are two different issues. But if I misread you (or, perhaps, you were ambiguous) the first time, and you really did mean to talk about naming practices as you now have it, then my point remains: the only practicality in calling a dog a "dog" and not a "fireplace" has to do with the connections already established with the words which, while having a somewhat arbitrary character, are not simply arbitrary to each user.

To participate in a language you have to do so with other users, which means there is a whole history of shared practices which all users must have access to in order to speak in a way that has communicative power. What is pragmatic here is to be able to make oneself understood, which means to speak in ways others will get. If I call a dog a "fireplace" in English, as English currently exists, what are the chances you or any other English speaker will understand me absent lots of special explanations, caveats, etc., to enable understanding? And the reasons you won't have to do with the things the words in question historically refer to, how they connect with other words we use in the language, and the mental associations they kick up for us.

> > That still says nothing about Hawkins' proposal that the brain works
> > (at least in part) by picking up on the regularities in the stimuli
> > it is receiving through its sensory systems.
>
> A photon strikes a retinal sensor. Another photon strikes a different
> retinal sensor. What kind of regularity do you expect to find in that?

Hawkins' thesis, which is what we have been discussing, notes that the regularity is discovered in terms of patterns (spatial) and sequences (temporal). If everything is random, of course, we would not pick up patterns or sequences. And, certainly, we will sometimes err, seeing or hearing patterns/sequences that may not actually be there (as Taleb is suggesting).
But if there was nothing to pick up, or we could never pick up anything that is really "out there", we could not operate effectively in such a world. The fact that we do operate effectively much of the time as individuals, and the fact that species occur and prove more or less capable of enduring over time, is evidence that our patterning both occurs and is fairly successful.

> > But again, how does this relate to your claim that we impose order
> > on the world and don't find it there?
>
> Any naming scheme imposes order.

That doesn't go to my point that the ordering we engage in REFLECTS order that exists independent of us. That there is a certain species-specific or individual-specific arbitrariness isn't the point. It's whether the world is without order and has it wholly imposed upon it by an ordering organism, or whether the order the organism "sees" is a function, to a large extent, of what exists independent of that organism.

> >> Getting back to intentionality (aboutness), we have such
> >> intentionality by virtue of us having these behavioral naming
> >> conventions.
> >
> > So you are equating intentionality with behaviors and not with
> > anything mental going on (not with mental features like ideas,
> > associations, mental images, etc.)?
>
> Strictly speaking, I am equating intentionality with behavioral
> capabilities, rather than with specific behaviors. Intentionality is
> prior to any possibility of having mental features. The idea that
> mental features could stand apart from behavioral capacities would seem
> to be at the core of dualism.

Only if you conceive of mental features as being ontologically distinct from the rest of the physical universe. If they are just a part of that universe, physically derived like everything else, then it isn't dualism. Look, dualism isn't just to suppose that there are mental and physical features.
It's to suppose that these features are distinct from one another on a basic level, that one set cannot arise from the other.

When you equate intentionality with behavioral capabilities you are missing something. Any machine that can be made to mimic human behavior would, on such a view, be intentional. But that can hardly be what we mean, since mimicry is not replication. Of course, you can say, "I mean mimic at every level," as in a complete simulation down to the most basic level of the entity's physical operations. But then you run smack into Dennett's point that if you are imagining that, then you are leaving nothing out, including the physical behaviors of the entity's neurological systems. Since these are implicated in our mental lives (the occurrence of subjective experience) on the view he is promulgating, and with which most scientists would agree, there is no reason to suppose such an entity isn't intentional! But then there is no reason to suppose that being intentional is anything but physically derived, which is at odds with some imagined dualist dichotomy.

So I will repeat: merely to recognize that there are mental and physical phenomena is not to invoke dualism (disputes about so-called "property dualism" aside, since on one view THAT isn't dualism in any serious sense while, on another, it is, in which case it's not just about properties). Dualism implies a claim that one thing is not part of another, not reducible to the other. Do we need dualism to explain the occurrence of minds and bodies? I am arguing we do not. However, I am also arguing that to recognize the existence of a mental life for each subject is NOT to assert dualism.

> > I can't help recalling what it was like to suddenly understand
> > that sign in South Carolina after drawing a blank. No behavioral
> > changes occurred but suddenly there were new thoughts going through
> > my head, thoughts that constituted the sudden recognition of meaning,
> > the understanding.
> And you don't think that you were now capable of behaving differently
> with respect to that sign?

The point was that my understanding occurred without any changes in my behavior. Of course I would have behaved differently had that been called for. But the understanding, the aboutness, happened without any change of behavior and, in fact, no change was called for, since I had already put my headlights on along with the windshield wipers. Everything that happened with regard to that moment of understanding happened in my mind.

> Behavioral capabilities are accrued gradually. And we might not notice
> that our abilities have changed. That "Aha!" moment can be when the
> realization sinks in.

And what does that realization consist of? That is exactly what happened in my case, though no changes in behavior were called for and none occurred. Still, I had a moment of realization, of saying, "Aha, now I've got it!"

> > Hawkins argues that our memories are like this, generic templates
> > lacking detail, stored invariant representations which persist
> > through multiple stimulations of incoming data (though which are
> > also changeable).
>
> Templates are very different things from representations.

Depends on what we mean by representations. Who says that every representation must be a complete picture? Why should there not be more vague and less vague representations? Why should there not be more and less detailed pictures of things?

> > Why should we not at some point be able to construct a computer
> > that can run enough systems to do the kind of reacting to the world
> > that we do and generate, thereby, recollections, etc., sufficient
> > to constitute understanding?
>
> If we built such a thing, would it still be a computer? Or does that
> stretch the meaning of "computer" too far?
>
> Regards,
> Neil

If it's built on the principles of computation and works like a computer, why wouldn't we call it that?
Just because its level of complexity of operation involves the occurrence of a new feature, one which a less complex system is incapable of producing? Of course it is possible we will introduce some linguistic distinctions. There's no reason we shouldn't, either. But that doesn't mean there's a reason we should!

SWM

=========================================
Manage Your AMR subscription: //www.freelists.org/list/wittrsamr
For all your Wittrs needs: http://ludwig.squarespace.com/wittrslinks/