--- In Wittrs@xxxxxxxxxxxxxxx, Joseph Polanik <jPolanik@...> wrote:

> SWM wrote:
>
> > he [Searle] is in self-contradiction, holding one idea of consciousness
> > with regard to brains, another re: computers.
>
> > He gets by, doing that, by avoiding explicating how he supposes brains
> > do it (leaving that to science). But the mere supposition that brains
> > do it in a systemic way, dependent on physical processes, suggests that
> > other physical platforms could do the same, and computers are physical
> > platforms, too. So the issue is: why can't computers do it?
>
> > His argument purports to make a case that they can't because of what
> > they are, i.e., that they are abstract in nature, just syntax as he likes
> > to say. But computers are no more abstract, no more syntactical, than
> > brains as far as we know. The obvious differences between brains and
> > computers are that computers are manmade, consist of inorganic
> > materials, and run on the principle of electrons flowing through
> > gateways embedded in silicon chips, while brains are naturally
> > occurring, organic (cellular, based on organic chemistry), and operate
> > on a chemical basis to generate their much slower electrical charges.
>
> nothing in what you've just said shows that Searle contradicts himself.
> he merely contradicts you.

Well, that is your view, I suppose. If you don't see it (or won't), then there's an end to this, isn't there? One can always keep these things going by denial, but I've made my case and given the evidence. You are free to deny it, but denial isn't enough; it doesn't change the points I've made.

> you are mistaken to say that Searle believes that computers can not be
> conscious.
>
> "Another misunderstanding is to suppose that I am denying that a given
> physical computer might have consciousness as an 'emergent property'.
> After all, if brains can have consciousness as an emergent property, why
> not other sorts of machinery?"
> ["Consciousness as a biological problem"]

Please give us the link so we can see the text in context. I have often seen Searle say something in one breath and then, in another, something that so qualifies it as to change the initial point. It's possible this quote you've given us means just what you say, of course, but the best way to demonstrate that is to provide the full context via a link, if possible, or a citation (including source and page numbers) if not, as I did when I provided the text I transcribed onto this list from Dennett.

My guess is that Searle is here proposing that machines can be built to do what brains do, but that computers qua computers aren't up to that. But what does it mean to be a computer qua computer? Here is the usual problem of nailing down Searle's actual meaning. And, of course, since we DON'T KNOW WHAT BRAINS DO to achieve consciousness, we don't know that computers qua computers are actually missing whatever capabilities brains have. That is, we don't know that what's needed isn't, after all, just computational processes running on a platform with sufficient capacity to sustain them! But let's see the full text from which you extracted this bit and go from there.

> Searle's actual claim (from the CRA) is that computers can't be
> conscious merely by virtue of running a program.
> In other words, programs aren't, themselves, conscious.

Well, okay, we already agree that syntax is not the same as semantics. But let's try this another way: what else, besides running programs, do computers do? They're not just electrical fans and motors and circuitry, after all. What makes a computer a computer is that it computes, i.e., it runs, via mathematical calculation, the various algorithms (programs) it is fed. But can't a computer be conscious if we somehow infuse it with a soul, perhaps by building in some kind of brain device (though we still don't know what that might consist of)?
One is tempted to say yes, of course, except that little is really gained by such a move, since the issue remains: what does consciousness amount to? That is, this comes right down again to deciding between competing accounts. Is consciousness (that which consists of the array of features we associate with the term "consciousness") a system-level property (i.e., the functional result of many constituent processes which are not, themselves, conscious, running in sync in a certain way)? Or is it a process-level property (i.e., some bottom-line, irreducible property of a given constituent element found in a system like the CR)?

> it's also a mistake to say that Searle discriminates against computers.

"Discriminates"? You mean because I say he treats consciousness differently when discussing the capacities of computers for consciousness than when he speaks of what brains do? How do you think it's a "mistake"? What in what I say is in error?

> he concludes (in the CRA) that brains can't be conscious just by running
> a program.
>
> Joe

Well, golly gee, THAT's what computers do, isn't it: run programs? That's what makes them computers rather than toasters or garbage disposal units. This is just the kind of thing Popper suggested was all too often used in argument to avoid refutation, i.e., just keep moving the target so that falsifiability becomes impossible. In this case we see a constant readjustment by Searle, over time, of what he really meant by "computers" and the claims he makes in his CRA.
Anyway, the conclusions in the CRA are flawed for all the reasons already cited:

1) They hinge on an equivocal third premise;

2) They demand a presupposition about consciousness that is not self-evidently true from the CR and which presumes the argument's conclusion, rendering the CRA, finally, circular;

3) They imply a presupposition about the nature of consciousness that is dualistic, thus contradicting Searle's own declared position re: dualism and the way he, himself, treats brains.

SWM

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/