--- In WittrsAMR@xxxxxxxxxxxxxxx, "SWM" <wittrsamr@...> wrote:
>
> Nope.
>
> SWM

Nice response, Sid! Now, joking aside, how do you propose to argue that dualism is implicit in the CRA? I'm arguing that you have to mischaracterize Searle in order to have a shot. And I'm saying, hoarsely by now, "No way, Wilber, er, Sid Caesar."

The answer is: by conflating what Searle distinguishes as S/H and nonS/H systems. One of your efforts amounted to the bald assertion that such a distinction is pointless. I suppose I can just as baldly assert that it has a point. We can go back and forth saying "No it doesn't" and "Yes it does" (have a point). Instead of that, how about reasons why or why not?

Once you collapse this distinction, it is up in the air whether you are holding strong AI or Searle's biological naturalism. That's why Peter tried to pin you down. You waffle so badly that your position may be Searle's in upshot, because you don't distinguish between S/H and nonS/H. But then later you appear to make the distinction by saying that maybe a sufficiently sophisticated S/H system may, ex hypothesi, cause semantics and consciousness. Well, sophisticated how? Complex physics or complex software? The point is that it doesn't matter how complex the software is, because it adds only a formal character to the system. So at this point you may want to emphasize the physicality of the complexity--but the physical complexity is one thing, and the physical-plus-computational complexity is really just the former. Software adds nothing, no matter how much of it you've got. So you redefine what software is about by saying it is physical because it runs on a physical platform. And on and on.

To the extent that you want to hold what Searle calls strong AI in the form of a research project, you collapse the above distinction in order to suppose that strong AI is a physicalism which Searle attempts to refute, er, later, to confute as incoherent, given that there is no amount of evidence one can accumulate to the effect that one is discovering computation intrinsically in the physics. The question whether the brain is a digital computer is found to be incoherent.

At this point, since you have collapsed the distinction, you can "derive" a contradiction in Searle: he is denying that some physical systems can possibly cause semantics/consciousness while arguing that only physical systems can. But that doesn't follow if you characterize Searle's claims in terms of the reasons he gives. Put another way, one can attempt (lamely, because omitting Searle's reasons) to argue that Searle is trying to show the impossibility of a physicalist hypothesis (earlier CRA) as well as (today) its incoherence. But that hypothesis gets redescribed as a bona fide physicalism, whereas Searle sees it as infected by a residual behaviorism which, ironically, can be read as a form of dualism, since neither computation nor information processing names a natural kind. Then the monkey-shine upshot is that of course Searle's view implies implicit dualism, because he is arguing against a physicalist hypothesis of how a system (a computational system that uses physics to run) may cause semantics and consciousness. I see that as clear as day. But your conclusion doesn't follow if you understand the exact reasons why Searle argues against strong AI. So your method is to leave out Searle's reasons in order to argue for your claim.
My argument against your handling of Searle involves exposing your insistence on leaving out Searle's reasons for his argument against strong AI. Anyone who mischaracterizes a position in order to argue against it is either ignorant of the position or is just playing word games, because one can manufacture ambiguity as one pleases. But your attempt to do that with the third premise amounted to a failure to read English, which may just as well have been the upshot of treating the premises of the CRA without the benefit of an adequate grasp of the target article which inspired the summary CRA.

If you want to argue that the CR is underspecced and designed only to do rote translation, I'm going to argue that you are missing the point of the CRA. The point is simple. In fact, it is so simple that the only way to argue against it is to put the systems reply in play. But once you do that, you are either collapsing the S/H versus nonS/H distinction or you are not. If not, then you have to argue that the formal qualities of programs add brute causality to the system. This is confused and amounts to mischaracterizing exactly how programs actually work. If so, then the systems reply is just a plea for the idea that technology may be able to get done what the brain gets done, whether by similar types of causes or by different types of causes which will meet what Searle calls his "causal reality constraint," a constraint not met by any possible S/H system, that is, any system whose software is separable from its hardware.

The upshot is that your critique of Searle may in fact suppose the very thing he is not arguing against. OTOH, it may suppose that what he thinks can't pass a causal reality constraint is a strawman never endorsed by anyone. But that would be to forget Hibbard, right? And Dennett too? Maybe not.

You see where I'm going with this? If Dennett is going to talk in terms of complexity, is it just brutish complexity, or is the complexity defined in terms of complex software such that what he has in mind is a case of S/H? Is he going to waffle and say that there is no distinction between S/H and nonS/H worth making when the software is sufficiently complex? If so, then the system Dennett has in mind is no longer an S/H system. And that is consistent with Searle's position. If not, then where does, say, Dennett think the CRA is mistaken? It turns out he has a problem with the second premise. But that's because he's so flippantly pragmatist as to be an eliminativist, given Wittgensteinian criteriology.

So, to end: just as you have a problem with Searle's definition mongering right in the first point of Searle's APA eight-point summary, so will I point out that the whole project of strong AI is premised on the definitional behaviorism of Wittgensteinian criteriology a la Dennett. It won't be lost on some when Searle points out a homunculus fallacy endemic to strong AI. I suppose part of the reason for that is definition mongering willy-nilly. But anybody can play that game. No one wins and everything stays the same.

An example of definition mongering: a rock has a low-grade form of consciousness because consciousness is to be defined in terms of computation. And since computers have a decidedly higher-grade form of computation going on compared to rocks, even such things as hand calculators are more conscious than rocks. But really. And maybe the above caricature misses the point about just how purely physical we are to take complex software to be. Perhaps.
But then one might argue that Searle should have understood programs better than he does when arguing that they are made to perform abstract syntactical symbol manipulation. If so, he would be wrong about computers. And that's all he would be wrong about. Unless one wants to do some definition mongering a la Wittgensteinian criteriology, which amounts to Dennett's research proposal, on the one hand, or Hacker's thesis that it is incoherent to think brains cause consciousness, on the other.

Now, there is not one thing I am confused about above. But what I can't prevent is ignorant chatter about Searle in a form where his reasons are omitted. The cool thing is that I have shown above how there is an ambiguity in Stuart's notion of programs which allows his thought to harbor Searle's biological naturalism (which leaves AI wide open) while he gets to critique Searle on other occasions where he omits Searle's reasons.

Cheers, you crazy diamond!

Budd

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/