--- In Wittrs@yahoogroups.com, "gabuddabout" <wittrsamr@...> wrote:
> --- In WittrsAMR@yahoogroups.com, "SWM" <wittrsamr@> wrote:
> >
> > --- In Wittrs@yahoogroups.com, "gabuddabout" <wittrsamr@> wrote:
> > <snip>
> >
> > > >
> > > > Because Searle argues that at least one kind (supposing brains do it like computers) isn't possible, based on a logical claim.
> > >
> > >
> > > You should revise this view of yours because it is not true. Start with the target article and find out at last that the issue is about computational explanation vis a vis brute physics explanation.
> > >
> >
> > Nonsense, Budd. Nobody cares about the terms we choose to explain the phenomenon of a computer's being conscious. In fact, Searle argues that it cannot be because we cannot get understanding from the processes found in the CR which are like the processes found in a computer.
>
>
> The above is nonsense, Stuart. You are forever conflating PP with BP, which is like arguing Searle's position.
BP, PP, S/H, non-S/H . . . isn't it all just otiose?
> And you are forever having it both ways by saying that PP is more powerful than serial processing given computational complexity--but Searle points out that all PP can be serially computed by a UTM, which the CR is.
>
What "Searle points out", as you put it, is irrelevant to the issue because this hinges on whether or not we are speaking of a system-level feature or something below it, i.e., a feature of the system's constituent elements. Of course, Searle doesn't recognize this either, as far as we have seen in his arguments, so you are at least in his company on that. Too bad it is the wrong side of the debate. But I suppose this notion of system-level vs. constituent-level is one you are never going to understand (since you haven't thus far).
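As an aside, the serializability claim itself (that whatever a parallel system computes, a serial machine can also compute by interleaving steps) is uncontroversial and easy to illustrate. Here is a minimal sketch in Python; the worker tasks and names are hypothetical stand-ins, not anyone's model of a brain:

```python
# A single serial loop can reproduce the work of several "parallel"
# processes by interleaving their steps round-robin -- the sense in
# which a UTM can serially compute any parallel computation.

def worker(name, steps):
    """A generator standing in for one 'parallel' process."""
    for i in range(steps):
        yield f"{name}:step{i}"

def serial_schedule(tasks):
    """Run the tasks one step at a time on a single serial 'machine'."""
    trace = []
    pending = list(tasks)
    while pending:
        still_running = []
        for task in pending:
            try:
                trace.append(next(task))  # execute one step of this task
                still_running.append(task)
            except StopIteration:
                pass  # task finished; drop it
        pending = still_running
    return trace

trace = serial_schedule([worker("A", 2), worker("B", 3)])
print(trace)  # ['A:step0', 'B:step0', 'A:step1', 'B:step1', 'B:step2']
```

Every step of both "parallel" workers appears in the serial trace, which is all the equivalence claim requires; of course, whether serializability settles anything about consciousness is exactly what is in dispute here.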
Anyway, IF SEARLE WERE IN AGREEMENT WITH THE POSITION PUT FORTH BY DENNETT THEN WHY DOES SEARLE CONTINUE TO DENY DENNETT'S POSITION? You'd think he'd have figured out by now that Dennett's position doesn't contradict his even as he continues to contradict Dennett's!
So what makes you think you know better than Searle what his position is?
> When you describe the CR as underspecked, you are maintaining that it is not complex enough in BP terms, which is Searle's position, while maintaining that such is a computer.
>
No, Budd. Since Searle denies Dennett's thesis that a sufficiently complex computational system (made up of a massively parallel system running the right programs) could do it, Searle is saying quite clearly that it isn't a question of complexity (robustness) but of the nature of the processes themselves (i.e., they are computational) that is the problem.
This is why I have said you don't really understand Searle's position. You are completely missing his point which is that, if a CR cannot understand (as we understand), then no other R (no matter how robustly configured) could do so! Of course, that is precisely the Dennettian claim, i.e., that that's what it takes (more robust configuration).
Note that if the argument Searle derives from the CR (the CRA) does not apply to anything but a system specked at the CR level then it is a pointless claim because NO ONE THINKS THAT PROGRAMMING A MACHINE TO RESPOND BY ROTE MECHANISMS IS TO PROGRAM UNDERSTANDING IN THAT MACHINE! We can all agree on that. But, of course, the AI project is about much more complex systems, doing many more things than rote responding, than that! So if you are right, the CRA is a pointlessly trivial argument with no implications beyond the CR. If you have only built a bicycle, you cannot expect it to soar above the clouds!
Really, how hard is it to grasp this? But if you can't, let me again call to your attention the still more obvious fact that Searle denies Dennett's thesis and you think Searle's a pretty smart guy so why hasn't he figured out yet that there is nothing in Dennett's thesis for him to deny as your interpretation of the CRA clearly implies?
> Searle's point is about how, and Neil put the point perfectly, computational explanations in terms of programs (a functional type of explanation) are not good enough.
>
It certainly might be if understanding (and the other features of consciousness) are system-level features. In that case, the problem lies NOT in the constituent processes but in the system that has been specked into the CR. Add more processes doing more things in the right way (interactively, etc.) and you get a more robust system. That a slimmed-down, barebones system can't match what a brain can do says nothing about what a more complex system could do. Of course, for that you need capacity equivalent to brains. Dennett's thesis is that this means you need a massively parallel platform, because that's what he claims brains are when you get down to it.
Dennett may or may not be right but Searle's CRA has no implications for his claim and especially not if we take your interpretation which I think even Searle would balk at!
> And it is not because Searle is wedded to extra stuff whereas Dennett isn't. What Dennett is doing when making that claim is just dodging a type of better psychology than can be had in functional terms.
>
Can you argue for that or do you just want to get by with another unsupported assertion?
> That's why I think Bertrand Russell was right to insist on the absurdity of a view such as Dennett's when it comes to psychology.
>
>
I wasn't aware Russell had ever considered Dennett's thesis. Have you some evidence of THAT claim? After all, they are hardly contemporaries in the field even if Russell lived a very long life.
>
> >
> > Of course, it's not there. You can't build a bicycle and expect it to fly, as Peter Brawley pointed out. The CR is like a bicycle while brains are like supersonic transports. The question, then, is whether you can build an SST from the same basic constituent elements as found in the CR.
>
>
> When you say "constituent elements," you are either talking about BP or not.
BP is busy with cleaning up the Gulf of Mexico so why don't we leave them out of it? They have enough on their plate.
> You get Searle wrong when saying he's denying a form of BP
I never saw Searle reference "BP". However, I'll grant he does speak of brute physics or some such at times. But then I have already pointed out that Searle is in self-contradiction vis a vis his treatment of brains and computers and that that is a big part of his confusion! So we can find him affirming things in one place while denying them (or arguing in a way that is only consistent with their denial) in others! That's what it means to be in self-contradiction!
> when denying the coherence of functional explanation of cognitive states.
>
>
His incoherence argument (with which he tried to replace the CRA while never explicitly giving the CRA up!) is worse than the CRA since it completely misses the point about computers and computationalism.
>
>
> >
> > If understanding and the other features of consciousness are system level features, as I've previously explained, rather than constituent element level features (associated with the constituent processes of the CR rather than with some systemic combination of them), then it's not surprising you don't find understanding in the CR. The system isn't adequate because it isn't sufficiently complex, i.e., robust.
>
>
> Searle's view is about system level features.
Searle is confused about that because he appears to take that view (albeit without fully explicating it) vis a vis brains but the CRA depends on a failure to grasp that view. Once you grasp it, the power of the CRA to compel the conclusions he claims for it collapses. (Since I have explained this so many, many times, I will not do so again. Just go back and read my old posts on this, which are legion.)
> You are making a distinction out of pure air when trying to explain Searle's view in terms of "constituent element features."
This only shows how you continue to miss the point. Well there's that saying about horses and water and drinking, isn't there?
> I think you are really bad at understanding Searle's point or are just making up things for fun.
>
>
Well I guess that's all you have left to say in support of an obviously insupportable claim that you cannot divest yourself of.
>
>
> >
> > But, of course, you will never see this and I have quite given up on expecting you to since you don't even fully grasp Searle whom you have set yourself to defend!
>
> I think I've explained exactly what Searle is denying with the CR.
You have totally missed the point of his claims as evidenced most clearly by your remarkably ridiculous notion that Dennett's thesis doesn't contradict Searle's even while both Searle and Dennett think it does. This either shows you are smarter than the both of them or that you don't understand the real issues in this debate. Frankly, I think the preponderance of the evidence favors the latter conclusion.
> It is the denial of the functionalist sort of explanation to arrive at necessary and sufficient conditions of semantics/consciousness. End of story.
You can only end a story you get.
> What PP proponents are doing is just conflating PP with BP. But if you want your functionalism, you have to distinguish BP from PP without conflating the two types of explanation.
>
The only conflator here is you, Budd. In your preferred terms, there is no PP in this debate except insofar as it is an application of BP, in which case it is only the BP that is at issue, not some rarefied non-thing called PP.
> >
> > > Then listen again to Peter's point about PP proponents who distinguish PP from serial processing in a way that amounts to BP, which Searle is not arguing against.
> > >
> >
> > PP is BP (using your ridiculous lexicon).
>
>
> You are seeming more and more like an idiot;
Oy.
> but our disagreement is about whether Searle is making a good point about functionalist sorts of explanation.
It's not about picking our favorite explanations. It's about what can actually be done with certain kinds of machines.
> For Searle, PP is not BP because it carries a functional type of
> explanation since it is still about computation.
I agree Searle does share this particular confusion with you. But just because he does is no argument that he is actually right! A confusion is a confusion, no matter who is confused.
> The point is that computation is not a natural kind and what is going on, electrically speaking, is just BP, such that the PP explanation is going to really be another way of having a BP explanation--or not.
>
And all that matters is your good old BP. Or is that "otiose"? If it is, this harping on so-called "PP" is much ado about nothing since no one is arguing for some abstraction as a source or cause or producer of instances of the features we recognize by the term "consciousness".
> You have to choose. One choice is to try to have it both ways--critique Searle and share his position;
Oy.
> or own up to the upshot of functionalist explanations which are eliminativist--which is ridiculous as Russell points out in _Human Knowledge: Its Scope and Limits_.
>
>
Give the argument, don't just name-drop! Russell isn't here. You are. Or at least you seem to be.
>
>
> > There is no separate PP which, finally, is just a particular configuration of what you call BP. Thus the CR is one configuration of this BP and the more complex system envisioned by Dennett is another. This is finally about configurations not the quality of the parts. Get it? (Probably not but what the hell!)
>
>
> _You_ still don't get it.
No, you . . .
> Searle's critique is not about the quality of the parts.
That is precisely what it is about and merely denying it isn't enough. Look at the CRA itself. (But then that never helped before, did it?)
> It is about functionalist type explanations not really netting us any hope of understanding necessary and sufficient conditions for bona fide consciousness and semantics.
>
It is NOT about different kinds of explanations but different possibilities we can achieve with particular physical things.
>
>
> >
> > > At last, you'll understand that your critique of Searle was a long-winded tirade amounting to his position
> >
> >
> > If he were really arguing against your notion of a certain kind of explanation (PP rather than BP), then his entire thesis is a strawman and the CR and its conclusions utterly irrelevant to the question of whether computers can be engineered and implemented to be conscious. That it is, finally, BS (since you are so enamoured of the magic of acronyms and initials).
>
> This just shows exactly how ignorant you are of the literature.
Or how thick you are with regard to the issue!
> The systems reply is just contradicting an original claim made in the literature. I'm happy to hear that some haven't actually held the thesis of strong AI as defined by Searle.
> >
There have certainly been many ideas and theses in the AI field but I have never encountered anything in "the literature" or in the claims of AI researchers elsewhere, that supports a view that computationalism is an argument for the causal efficacy of an abstraction. That is simply Searle's misunderstanding. And yours, apparently.
> >
> > > and STILL not touching his clear point that computational explanations, if different from BP explanations, are not really good explanations for things like minds and semantics.
> > >
> >
> >
> > It's not about competing ways of explaining conscious machines but about whether machines CAN be conscious!
>
>
> You really show a lack of reading on the topic, Stuart. It's not
> as if a couple of google searches are all that is required.
Reading doesn't help if you don't understand as you manifestly do not.
> Searle is not arguing against machines being conscious, whether artificial or organic.
Budd, try to read what I write in context, okay? My reference to machines comes down to a certain kind of machine. Obviously I do not argue that any machine can be conscious. I argue that there is nothing in principle that precludes a machine being conscious. As to what kind of machine might qualify, note, again(!), that I am referencing computational machines, i.e., computers. So my reference above to "machines" is a reference to generic machines. The argument I am making, however, is about a particular kind of machine, one that can do what brains can do.
As we have seen and discussed ad infinitum here, the Dennettian thesis is that brains operate like computers, that, in fact, they are a kind of organic computer. If this is a correct interpretation of what a brain is, then there is no reason, in principle, that an equivalent computer cannot do what a brain can do. Searle's CRA, which is based on the failure of a computational system specked in a very limited way, purports to show that no computational system can succeed.
But Dennett argues that this is misleading because it conceives of what brains do as being separate and apart from the constituents in the CR. If the features brains produce are not to be found in the CR, then, the argument goes, they cannot occur in ANY configuration of those same constituents. BUT IF THE FEATURES BRAINS PRODUCE ARE SYSTEM-LEVEL, RATHER THAN STAND ALONE IRREDUCIBLES, THEN THE ONLY PROBLEM THE CR EXPOSES IS THAT THE CR IS AN INADEQUATE SYSTEM. OF COURSE THIS SAYS NOTHING ABOUT THE POTENTIAL ADEQUACY OF MORE ROBUST SYSTEMS.
So the point is to test out a thesis like Dennett's empirically, rather than rely on the logical denial found in Searle's CRA which hinges on the suppressed premise that the features of mind are not reducible to some underlying complex of features that aren't, themselves, features of mind.
> You should already know this but what is going on is that you are making up a strawman in terms of what Searle is saying--he is not saying what you think he's saying.
Oh nonsense. Try to read the argument clearly (mine and his, actually).
> Evidence for this is just how badly you go about handling compound sentences with an awareness that the issue is fundamentally about different types of explanation.
It's about whether certain types of machines can do certain kinds of things, NOT ABOUT HOW WE CHOOSE TO EXPLAIN WHAT THEY DO!
> If you collapse the distinction, it doesn't amount to Searle arguing against a strawman; but it does amount to a position Searle isn't arguing against.
>
Searle argues against Dennett's thesis and my thesis is roughly equivalent to Dennett's. Therefore I am arguing against Searle's thesis, just as Dennett is.
>
>
> > If we follow your thinking, Searle's argument is shown to be finally pointless, just a dispute over whether Star Trek's Commander Data, who walks like us and talks like us and behaves like us in every conceivable way, can be called conscious or not. But that, finally, is beside the point if the android has already achieved the level of a Commander Data.
>
> You're being an idiot
!!!
> because functionalist explanations allow for us to take Commander Data as conscious just because of the excellent programming--in virtue of computational properties. But whether he is conscious is a matter of BP.
Well there you go then! If "his" computational brain can do roughly what our organic brains can do, you will agree he is conscious. So what's your problem? On the other hand, Searle's CRA denies that possibility. So are you secretly in Dennett's camp after all?
> And yes, you want it both ways as if there is no distinction between types of explanation while still wanting to call these
> systems computers.
If it walks like a duck and quacks like a duck . . . but you know what, call them something else if you like. Who cares what you call them? If they operate on computational principles, then changing what you call them may make you feel better but it won't change the duck!
> If something is a computer, it is such because of computational properties--nothing, though, is intrinsically a computer in virtue of BP.
Who gives a damn about what is "intrinsically a computer"? What has THAT to do with any of this?
> Ergo, Searle's real argument is about why anyone would have thought that programming could tell us a thing about the mental. And then he's told that no one ever had the idea. That would be wrong.
> >
Again: computationalism (what Searle calls "strong AI" against which he is arguing) is the thesis that the brain, in producing consciousness, operates like a computer. While there are many theories about how a computer could be made conscious, the only issue in this debate is whether any computer can be, purely in virtue of its computational capacities. BUT NO ONE EVER CLAIMED THAT SOMETHING THAT IS ABSTRACT, AS IN PROGRAMS (SOFTWARE), WAS THE ISSUE. COMPUTATIONALISM IS ABOUT COMPUTERS AS PLATFORMS FOR THIS EFFECT.
> >
> > > Searle does, however, break with the so-called venerable Wittgensteinian tradition of categorial distinction between biology-talk and mind-talk. His program also happens to be inspired by Wittgenstein in the following way: Say your piece as clearly as you can and have philosophy connect to natural science.
> > >
> >
> > Searle is very confused though, unfortunately. And he has confused you though I suspect, from the adamancy of your arguing, that you are a more than willing participant in that condition.
>
>
> I think you have a wrong picture of what Searle is arguing.
No, you do.
> I think you are in no way capable of showing Searle to be confused.
I have already done it numerous times, even if you cannot follow or simply won't accept the implications of my arguments.
> I've shown what it is I think you're confused about--not distinguishing PP from BP explanations.
A faux distinction, and I've explained that to you many times, too.
> And you are wrong to think this is not the issue of the CRA. You've been wrong about the motive for conducting the thought
> experiment,
Motives are irrelevant. The argument stands or falls on its own terms.
> the upshot of the experiment, and wrong in not noticing that the idea you have in mind about PP is really just the same as BP in Searle's mouth.
> >
So someone should really tell Searle that he and Dennett are actually in agreement, right? Are you going to do it?
> > > Now, as Walter has it, philosophy just is a bunch of categorial analysis. Well, I suppose one can start with concepts alright!
> > >
> > > Cheers,
> > > Budd
> > >
> >
> > You do, do you?
> >
> > SWM
>
> Yes, because I don't know what it would be like not to have a clue.
Jeremy Brett's Sherlock Holmes interpretation on the BBC has recently been replaying in my area and I've been rather enjoying it. As his Holmes would have said on reading what you write above: Hah!
> Whatever are you thinking half the time?
Maybe some day you'll catch on.
> And note that starting with concepts is okay because they often are tied to the real world and not just part of some closed-to-the-world machine language of syntactical function.
>
> Cheers,
> Budd
>
You do like to end on irrelevancies, don't you?
SWM
=========================================
Need Something? Check here:
http://ludwig.squarespace.com/wittrslinks/