--- In WittrsAMR@yahoogroups.com, "SWM" <wittrsamr@...> wrote:
>
> --- In Wittrs@yahoogroups.com, "gabuddabout" <wittrsamr@> wrote:
> > > <snip>
> > >
> > > > All parallel processing can be implemented on a serial computer. There simply is nothing more by way of computation that can be done in parallel that can't be done serially.
> > > >
> > >
> > >
> > > This misses the point again.
>
> >
> > No it doesn't.
>
> Yes it does.
>
>
> > It targets your point that parallel processing offers us something more COMPUTATIONALLY than serial computing.
>
>
> Depends what is meant by "computationally" and "more".
So "yes it does and no it doesn't" depending. Let's see below.
>It seems to me you are confusing the question of the quality of the process (its nature), i.e., that it is an algorithm, with the quality of the system (consisting of many processes doing many different things).
Maybe it does seem that way. Anything under the sun can be given an algorithmic interpretation (except those things we may not have found algorithms for, say, simulating something, which Penrose thinks no serial or parallel processing (PP for short!) is going to net us, for complicated Gödelian reasons as applied to computation theory). Now repeat ten times as fast as you can!
>A massively parallel computational platform is still just a computer. But it has capacities that even a pretty darned fast serial machine doesn't.
It doesn't matter computationally, so "no, it doesn't miss the point."
It does matter if by parallel processing you mean simply physical causality, in a more robust causal sense than can be obtained with mere second-order functional properties. And if you want to talk about parallel processing in terms of first-order processes, then we don't need the computer metaphor at all.
So we are left with a computational notion of PP or a physical notion of PP. And it is as I said earlier. To the extent you want to talk about physical processes with first-order properties, you (and Dennett) are in agreement with Searle. To the extent that you want to talk about second-order functional properties by PP, then PP is subject to the same _computational_ limitations as any S/H system, in which case Searle's CR applies, because some S/H systems may pass a TT without there being semantics (depending on semantic realism as opposed to an intentional stance that amounts to an elimination of intentionality).
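The computational point at issue here (that whatever PP computes, a serial machine computes too) can be made concrete with a toy sketch of my own (not anything from the thread): two "parallel" processes interleaved step by step by a single serial loop, yielding exactly the results concurrent execution would. Parallelism buys speed, not computational power.

```python
def make_counter(limit):
    """Generator standing in for one 'parallel' process:
    sums 1..limit one step at a time."""
    total = 0
    for i in range(1, limit + 1):
        total += i
        yield total  # hand control back to the scheduler after each step

def run_serially(processes):
    """Round-robin scheduler: one serial loop interleaving all processes,
    advancing each a single step per pass until all have finished."""
    results = {}
    active = dict(processes)
    while active:
        for name, proc in list(active.items()):
            try:
                results[name] = next(proc)  # advance this process one step
            except StopIteration:
                del active[name]  # process finished; keep its last result
    return results

print(run_serially({"a": make_counter(3), "b": make_counter(5)}))
# {'a': 6, 'b': 15}
```

The serial scheduler is slower in wall-clock terms than genuinely concurrent hardware would be, but the function it computes is identical, which is the sense in which PP offers nothing more *computationally*.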
>
> The point is what is consciousness, i.e., is it a feature (or features) of a certain kind of system or is it something that cannot be reduced to that?
This question, Stuart, is at the heart of conceptual dualism. To get beyond it, Searle proposes that ontological subjectivity is akin to the properties of higher-level system features, without the need to pose the either/or question. The either/or question is tantamount to either dualistic troubles or eliminative troubles.
Further, there is a way of saying Searle is an eliminativist because he eschews dualism and a way of saying he is committed to a form of dualism when speaking of the irreducible subjective ontology of conscious brains.
Somehow, you are not letting him say what he wants to say. I recall you agreed with his piston/butter analogy but still had problems both with his CRA and with his later reworking of it in the form of the claim that strong AI is incoherent.
Now if you want to parlay PP as a physicalist thesis, then I'll just let you speak that way and point out that it is Searle's position because all you mean by it is what Searle means when saying that brain processes cause consciousness. And he leaves it open whether another type of machine can do it also.
So you will continue to speak of PP as both computation as well as a more robust physical system compared with S/H. But computationally, PP is in the same boat as S/H. To the extent that you liken PP with physicalism (and a physicalist system having strong similarities to the brain), then you are not claiming anything distinct from what Searle is claiming.
>
> If it can be reduced to that in brains then at least in principle it can be in computers, too (even if there are empirical reasons for why it might not actually work -- a totally different issue than Searle's logical claim).
This is where you are simply mistaken. The whole thesis of Strong AI is that the best we can do epistemically is the TT (Turing test). The CR shows the TT to be insufficient as a test.
You have two choices. You can liken Schank's original claim to a study in physicalism or to a study in computation. Searle denies that the latter even makes sense because the notion of computation is too abstract: the hardware that supports the program contributes only noise and heat, and noise and heat don't figure at all in the process defined computationally. His denial, therefore, is not a denial of physicalism.
>
> The thesis that a computer would have to have the same capacity(ies) as a brain to replicate consciousness is not a claim that brains have something that computers lack but that brains operate like computers in certain relevant ways and that a computer that can be brought to that level of operation, no matter how many added processors were required, would be as conscious as a brain.
That was a good try. It seems you liken S/H as well as PP to simply physicalist theses when in fact they are species of functionalism, which harbors second-order properties. That amounts to a form of property dualism for Searle and is inherently dualistic in the following way: eliminative physicalism eschews a possible theory of mind/consciousness by denying that there is anything really to explain, while functionalism a la epiphenomenalism is a dualism on which there really is mind but it doesn't figure in causation (Chalmers wrote his book on consciousness without any intentional causation whatsoever, a miraculous feat if there ever was one).
Searle's biological naturalism sees both of the above as mistaken, and it is no wonder that some will accuse him of being either an eliminativist (the brain causes consciousness entirely with physical processes and there is nothing existing except physical processes) or a dualist (the brain causes ontological subjectivity, and such states (like pain, for example) involve first-person points of view which are irreducible to non-first-person points of view, even though the brain causes them; they are irreducible in the sense of being ontologically subjective, kind of like the hardness of a piston can be used in an engine in a way that the "hardness" of a cold piece of butter can't). The question is how the brain does it, and the answer cannot be in computational terms only, but by brute force.
Now if you liken computation to brute force, then you just share Searle's ultimate position even though you are not making his distinctions and so are in partial disagreement.
>
> But, of course, if your position is that computational processes per se are not capable of producing the features we call consciousness because, well, they aren't conscious ("nothing in the Chinese Room understands Chinese -- John Searle), then, of course, you will deny the possibility, as Searle does. But then that is dualistic, admitted or no.
The reason is: well, because they involve second-order properties, because computationalism is a species of functionalism. You simply don't have the history of philosophy correct. Functionalism was introduced to displace type physicalism. How? By the use of second-order properties defined computationally. Second-order properties simply allow for causal overdetermination, and Searle points out that one needn't be duped into thinking that functionalism is always a species of physicalism. So it is really you who come up with the only alternative being that Searle must be a dualist whether he realizes it or not. Whether or not you knew it beforehand, computationalism is about second-order properties, and THAT'S why they don't cause anything. Functionalism itself is a species of conceptual dualism. And eliminativism amounts to it, because for all the eliminativism in the world it is not as if Dennett is going to deny that he can feel a pinch or two. So touché!
>
>
> > That is decidedly false and Searle's CR is equivalent to a UTM and ALL possible parallel processing DEFINED IN COMPUTATIONAL TERMS is also equivalent to what can be done serially with a UTM.
> >
>
> Searle's CR is specced as a rote responding device of such remarkable facility that it will always seem to be understanding in its replies. Aside from Dennett's point that this isn't even conceivable, we can still grant it for argument's sake.
You are wrong and maybe even know it. Prolly not, though. The CR is a UTM no matter how one slices it. That's why his reply to Dennett was as short as it was. And then Hofstadter and Dennett pretty much fabricated a quote that got his position wrong. You are essentially repeating their mistake right now. The CR shows a case where there is no understanding even though the TT was passed. You just got this totally wrong in thinking that the CR will always seem to be understanding. You might as well say that any program run on a UTM will be good enough to pass a TT. In other words, just because the CR is equivalent to a UTM, it doesn't follow that any program under the sun will pass a TT. And Searle need only present one case where the TT is passed without the semantics happening.
>But in doing so we are left with a machine that is matching symbols to symbols with no understanding, by definition.
The CR brings that out because the rules are defined functionally as second-order properties, though this must seem a bit of jargon to you and you won't know what to do with it, let alone understand that this is the only way computers work.
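For what it's worth, the "symbols matched to symbols by rule" picture can be sketched in a few lines (the rule table and names below are my own invention, not Searle's): a lookup that manipulates uninterpreted tokens, which is all the room is required to do.

```python
# Toy Chinese Room: a rule book pairing input strings with output strings.
# The function matches tokens purely syntactically; nothing in it
# "understands" what the strings mean, yet its replies may look apt.
RULES = {
    "ni hao": "ni hao ma?",
    "xie xie": "bu ke qi",
}

def room_reply(symbols: str) -> str:
    """Return the rule-book output for an input; pure symbol matching."""
    # Default rule ("please say it again") for inputs not in the book.
    return RULES.get(symbols, "qing zai shuo yi bian")

print(room_reply("ni hao"))  # ni hao ma?
```

A real TT-passing program would need an astronomically larger rule book, but nothing would change in kind: it would still be second-order symbol manipulation, which is the point being pressed here.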
> But if to understand is to have one of the features of consciousness then the question is what does that feature consist of? When looked at closely, it is clear it isn't just rote symbol matching according to mechanical rules. What's required is the capacity to relate one thing to many things in an ongoing cascade of connections. And that is excluded from Searle's CR.
Not functionally. The CR is a UTM. Please understand that you have been conflating functional properties with first-order properties for well over six years now. You can stop doing that in the future by understanding Searle a bit better.
>It's a bicycle that we're told can fly. But it doesn't matter what we're told. A bicycle lacks all the accouterments, the instrumentalities, needed for flying and so, no matter how many times we're told it can fly, it still can't.
Causal capacities in the brutish way are not equivalent to functional properties in the computational way. Shuffle these together, going back and forth talking computationally and causally, and you will be able to show that when Searle denies that functional properties have causal roles he is also denying that first-order properties have causal roles. That is a big turd of a mistake that I don't make and neither does Searle. It is apparent one needs to make a mistake in order to show Searle mistaken, though. I've just said that you needn't make this mistake over and over as you do because you are entirely unwilling to see computational properties as second-order properties.
>
>
> > Also, I am not missing your point when you make a different one. You assimilate parallel processing in your lexicon to physical
> > processes.
>
>
> All computers are physical platforms and thus operate physically, i.e., have physical processes going on. Serial machines have it just as much as parallel processing machines.
You sound like a machine: programs are abstract and defined in terms of second-order properties which do no work. Saying that computers are physical misses the point that the electricity is routed through logic gates. There are no logic gates in the brain that will ever be discovered.
>
>
> > The claim is vacuous actually but you don't know it.
>
>
> You just don't get it.
I get it from both sides of Sunday. I explain why you are mistaken. I show that you don't know what functional properties are, and that you liken computers to physical things without distinguishing between the way S/H systems work and the way non-S/H systems work.
>
>
>
> > To the extent it is about computation it is vacuous. To the extent it is about physical processes, Searle doesn't disagree with your change of topic.
> >
>
>
> So are you saying Searle agrees with Dennett's thesis that consciousness can be replicated on massively parallel processing computers then? Is THAT your position?
>
>
> > No one is ever going to find that some process or other is intrinsically computational.
I'll bet that this partly answers your question beforehand.
>
>
> We're talking, though, about computational processes, not what is "intrinsically computational".
You just won't distinguish S/H from non-S/H. But that is hopeless if you want to understand Searle. However, critiquing Searle is best done by being mistaken!
Cheers,
Budd

P.S. Stuart writes: "Everything is what it is...."
==========================================
Need Something? Check here:
http://ludwig.squarespace.com/wittrslinks/