--- In Wittrs@xxxxxxxxxxxxxxx, Sean Wilson <whoooo26505@...> wrote:
>
> Stuart writes: "Say a machine were built to speak to us in a thoughtful and
> autonomous way. (By "autonomous" I mean without being pre-programmed to give
> certain answers to certain questions under certain conditions.)"
>
> If such a thing were built, we might not call it a "machine."
>
> Here's what I think you're not getting: THERE IS NO ISSUE HERE.
>
> If a given language arrangement speaks of X as being called "thinking" for
> reasons such and such, and speaks of something else as being called "machine"
> for reasons such and such, then the only thing that is EVER said are the such
> and suches. And so when you invent a scenario that rearranges the conditions
> of assertability -- playing games with the such and suches -- you simply
> create a different language game, but pretend that the prior one is still in
> play. I've tried to show you this before, but it has never sunk in.
>

You're right. I am not getting that claim at all. Granted that meanings change with time and application, it's still perfectly reasonable, within the current notions of consciousness, thinking, and machines, to ask if a machine could think. It's an empirical question, not a conceptual one.

> If I say to you that the machine in your example does not "think," all I have
> said is that my sense of think is different from yours, provided we both
> agree about what is exactly happening, empirically
> (informationally).

No, Sean. On my view, we may or may not have different notions of the term "think," but that doesn't preclude our applying the same notion to machines and humans. The first thing is to ascertain that we have the same thing in mind. Then the question would be whether there is any sense in saying a machine could think.
If we have a picture of a machine as being just an inert hunk of metal and plastic with some moving parts and, maybe, a power supply that effects the movements, then it seems odd, indeed, to suppose it could be intelligible to ask if a machine could think. But there are machines and machines. Since computers do some things we do, it's not an illegitimate question to ask if they do some particular thing we do, i.e., can they be made to think? Of course, the other side of this question has to do with what "think" means and would need to be explored as well. But your citations from Wittgenstein don't suggest to me that Wittgenstein would have denied the possibility of thinking machines now, even if the picture looked wrong to him in the early-to-mid twentieth century.

> The only thing that EVER exists in these false concerns is the news: what the
> new information is. Once the new information is
> absorbed, the way we speak of it is IRRELEVANT.

THAT I can agree with. If a machine that thinks or is conscious is ever successfully built, arguing about whether what it's doing counts as really thinking or really conscious will be irrelevant to the extent that the machine in question has been integrated into our lives. And it could (and likely would) change the relevant usages in ways that also change how we think about ourselves.

> At least, it poses no philosophic problem. The only time it would pose a
> problem would be if the grammar knotted or if the person's language
> arrangement was just too idiosyncratic. Even here (idiosyncratic), all we
> have are communicative problems, not philosophic ones.
>

I'm not as sanguine as you are about its not posing a philosophic problem. Of course it does. Witness all the philosophers arguing about it. If there are philosophical problems, this is surely one of them. However, the classic late-Wittgensteinian solution to such problems, dissolving the problem by unpuzzling the puzzle it represents, certainly seems to apply here.
But I would not hang too much on that aspect alone. Guys like Dennett are doing real philosophy by exploring the extent to which what we call consciousness can be reconciled with the physical phenomena of the world. But Wittgenstein's influence on Dennett may be too easily overlooked.

> If we both agree about the same information, we have nothing to disagree
> about, other than the bewitchment of our language. And hence, what you don't
> realize is that, in your hypo, a person could take any of these positions and
> not be "wrong," provided all understand the same information:
>
> 1. The machine is not "thinking"
> 2. What is thinking is not a "machine"
> 3. The "machine" is "thinking"
> 4. What is not "thinking" is not a "machine"
>

I don't agree that all of the above are equal.

> (4 might be a little tough. But I assume a certain way of speaking might
> still do it.)
>
> SW

And I do think that there is an arena that belongs to philosophy and that Wittgenstein's approach is among the most helpful in approaching it.

SWM

_______________________________________________
Wittrs mailing list
Wittrs@xxxxxxxxxxxxxxxxxxx
http://undergroundwiki.org/mailman/listinfo/wittrs_undergroundwiki.org