[C] [Wittrs] Digest Number 308

  • From: WittrsAMR@xxxxxxxxxxxxxxx
  • To: WittrsAMR@xxxxxxxxxxxxxxx
  • Date: 29 Jul 2010 08:51:52 -0000

Title: WittrsAMR

Messages In This Digest (7 Messages)

Messages

1a.

Re: "Propositions are Pictures" . . . or Not!

Posted by: "SWM" wittrsamr@xxxxxxxxxxxxx

Wed Jul 28, 2010 5:56 am (PDT)



--- In Wittrs@yahoogroups.com, Rajasekhar Goteti <wittrsamr@...> wrote:

> Dear sir
> Human thinking is a step-wise process, whether it be a philosopher's or a common man's. One might say that I have gone wrong, but one can never demolish what has been created. Picture theory is fundamental for any language to exist.

"Picture theory" or the fact of picturing? Or the fact that we have mental pictures?

> Logical reasoning may be the second phase. Analytical grammar may be the third phase.

Phases? I think you are confusing the facts of our existence with different approaches to explaining them.

> Propositional attitude may be the final phase of a student of philosophy.
> All creation is possible only with language (virtual), wherein exist (1) name, (2) its form, (3) space created for the movement of symbols, and (4) time, the interval between a name and its picture appearing in the head.

This sort of apodictic pronouncement isn't especially helpful. Such claims are offered, I see, on a kind of take-it-or-leave-it basis with little room for discussion, clarification, consideration or argument. That is because they are insufficiently explicated and partake of the natural ambiguities of language. On my view you would do better here to fully lay out what you mean rather than to depend on aphorisms and assertions. (I know that Wittgenstein did that, of course, but it's important to remember that he did it in a systematic way where everything was carefully laid out to convey, in the Tractatus, an overarching system. Later, in the Investigations, he moved away from that but still tended to write aphoristically -- but in THAT case his writings were intended to zero in on the ways we think about things. In neither case was he guilty, as I fear you are, of relying on ambiguity for its own sake.)

> Language is functional by its very inception and can convey nothing but functionality for human convenience, nothing more.

What does that mean?

> All names can show functional value but not real objects.
>

Again what are you aiming to say? That names relate to real things like tags on packages? Or that names perform certain tasks in language, including, of course, tagging?

Russell thought names were substitutes for broader descriptive statements (and, later, when he moved to logical atomism, that real names were words like "this" or "that" applied in real time to individual sensory inputs the speaker was having). I'm assuming you don't think of names in the Russellian sense? If so, what sense?

> If you accept the process of understanding, one may not venture out to say here W has gone wrong and here gone right, etc.
> thank you, sekhar
>

I don't understand this point of yours, sekhar.

SWM

> Is this not one of the "grave errors" Wittgenstein seems to have later come to recognize (since he ultimately abandons the picture theory of language)? In an earlier passage from that paper Wittgenstein was supposed to have given, but ultimately rejected early on upon his return to Cambridge, we saw the claim that linguistic picturing could be projective (the way light from an image projects onto a camera's lens and its film, or onto a mirror or the human eye, or the way a movie projector casts its image on a silver screen) or it could be a matter of rules, as in the practice of speakers of a given language assigning certain significances to the different terms (symbols) of that language in the given propositions (statements).
>
>
> SWM

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/

2.1.

Re: Algorithms, Abstractions and Minds

Posted by: "gabuddabout" wittrsamr@xxxxxxxxxxxxx

Wed Jul 28, 2010 4:25 pm (PDT)





--- In WittrsAMR@yahoogroups.com, "SWM" <wittrsamr@...> wrote:
>
> --- In Wittrs@yahoogroups.com, "gabuddabout" <wittrsamr@> wrote:
> <snip>
>
> > >
> > > Because Searle argues that at least one kind -- supposing brains do it like computers -- isn't possible, based on a logical claim.
> >
> >
> > You should revise this view of yours because it is not true. Start with the target article and find out at last that the issue is about computational explanation vis a vis brute physics explanation.
> >
>
> Nonsense, Budd. Nobody cares about the terms we choose to explain the phenomenon of a computer's being conscious. In fact, Searle argues that it cannot be because we cannot get understanding from the processes found in the CR which are like the processes found in a computer.

The above is nonsense, Stuart. You are forever conflating PP with BP which is like arguing Searle's position. And you are forever having it both ways by saying that PP is more powerful than serial processing given computational complexity--but Searle points out that all PP can be serially computed by a UTM, which the CR is.
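The claim that any parallel computation can be serially computed is standard computability fare, and it can be made concrete with a toy sketch (illustrative names and toy threshold units, not anyone's actual model): a "parallel" layer of units is updated by a single sequential loop, with double-buffering so the result is identical to a simultaneous update.

```python
def parallel_step(state, weights):
    """Serially compute what all units 'would do at once'.

    Each unit takes a weighted sum of the whole current state and
    fires (1) if the sum is positive. Writing results to a fresh
    list (double-buffering) makes the one-at-a-time loop behave
    exactly like a simultaneous parallel update.
    """
    new_state = []
    for unit_weights in weights:  # one unit at a time -- purely serial
        total = sum(w * s for w, s in zip(unit_weights, state))
        new_state.append(1 if total > 0 else 0)
    return new_state

# A 3-unit "parallel" layer, simulated on one serial thread of control:
state = [1, 0, 1]
weights = [[0, 1, -1],
           [1, 1, 0],
           [-1, 0, 1]]
print(parallel_step(state, weights))  # -> [0, 1, 0]
```

The serial loop and a genuinely simultaneous update are input-output equivalent, which is all the UTM point requires; whether that equivalence settles anything about consciousness is, of course, exactly what is in dispute here.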

When you describe the CR as underspecced, you are maintaining that it is not complex enough in BP terms, which is Searle's position, while maintaining that it is nonetheless a computer.

Searle's point is about how (and Neil put the point perfectly) computational explanations in terms of programs (a functional type of explanation) are not good enough.

And it is not because Searle is wedded to extra stuff whereas Dennett isn't. What Dennett is doing when making that claim is just dodging a type of better psychology than can be had in functional terms.

That's why I think Bertrand Russell was right to insist on the absurdity of a view such as Dennett's when it comes to psychology.

>
> Of course, it's not there. You can't build a bicycle and expect it to fly as Peter Brawley pointed out. The CR is to bicycles while brains are to supersonic transports. The question, then, is whether you can build an SST from the same basic constituent elements as found in the CR.

When you say "constituent elements," you are either talking about BP or not. You get Searle wrong when saying he's denying a form of BP when denying the coherence of functional explanation of cognitive states.

>
> If understanding and the other features of consciousness are system level features, as I've previously explained, rather than constituent element level features (associated with the constituent processes of the CR rather than with some systemic combination of them), then it's not surprising you don't find understanding in the CR. The system isn't adequate because it isn't sufficiently complex, i.e., robust.

Searle's view is about system level features. You are making a distinction out of pure air when trying to explain Searle's view in terms of "constituent element features." I think you are really bad at understanding Searle's point or are just making things up for fun.

>
> But, of course, you will never see this and I have quite given up on expecting you to since you don't even fully grasp Searle whom you have set yourself to defend!

I think I've explained exactly what Searle is denying with the CR. It is the denial of the functionalist sort of explanation to arrive at necessary and sufficient conditions of semantics/consciousness. End of story. What PP proponents are doing is just conflating PP with BP. But if you want your functionalism, you have to distinguish BP from PP without conflating the two types of explanation.

>
> > Then listen again to Peter's point about PP proponents who distinguish PP from serial processing in a way that amounts to BP, which Searle is not arguing against.
> >
>
> PP is BP (using your ridiculous lexicon).

You are seeming more and more like an idiot; but our disagreement is about whether Searle is making a good point about functionalist sorts of explanation. For Searle, PP is not BP, because it carries a functional type of explanation since it is still about computation. The point is that computation is not a natural kind, and what is going on, electrically speaking, is just BP, such that the PP explanation is going to really be another way of having a BP explanation--or not.

You have to choose. One choice is to try to have it both ways--critique Searle and share his position; or own up to the upshot of functionalist explanations which are eliminativist--which is ridiculous as Russell points out in _Knowledge: Its Scope and Limits_.

> There is no separate PP which, finally, is just a particular configuration of what you call BP. Thus the CR is one configuration of this BP and the more complex system envisioned by Dennett is another. This is finally about configurations not the quality of the parts. Get it? (Probably not but what the hell!)

_You_ still don't get it. Searle's critique is not about the quality of the parts. It is about functionalist type explanations not really netting us any hope of understanding necessary and sufficient conditions for bona fide consciousness and semantics.

>
> > At last, you'll understand that your critique of Searle was a long-winded tirade amounting to his position
>
>
> If he was really arguing against your notion of a certain kind of explanation (PP rather than BP) then his entire thesis is a strawman and the CR and its conclusions utterly irrelevant to the question of whether computers can be engineered and implemented to be conscious. That it is finally BS (since you are so enamoured of the magic of acronyms and initials).

This just shows exactly how ignorant you are of the literature. The systems reply is just contradicting an original claim made in the literature. I'm happy to hear that some haven't actually held the thesis of strong AI as defined by Searle.
>
>
> > and STILL not touching his clear point that computational explanations, if different from BP explanations, are not really good explanations for things like minds and semantics.
> >
>
>
> It's not about competing ways of explaining conscious machines but about whether machines CAN be conscious!

You really show a lack of reading on the topic, Stuart. It's not as if a couple of Google searches are all that is required. Searle is not arguing against machines being conscious, whether artificial or organic. You should already know this, but what is going on is that you are making up a strawman in terms of what Searle is saying--he is not saying what you think he's saying. Evidence for this is just how badly you go about handling compound sentences without an awareness that the issue is fundamentally about different types of explanation. If you collapse the distinction, it doesn't amount to Searle arguing against a strawman; but it does amount to a position Searle isn't arguing against.

> If we follow your thinking, Searle's argument is shown to be finally pointless, just a dispute over whether Star Trek's Commander Data, who walks like us and talks like us and behaves like us in every conceivable way, can be called conscious or not. But that, finally, is beside the point if the android has already achieved the level of a Commander Data.

You're being an idiot, because functionalist explanations allow us to take Commander Data as conscious just because of the excellent programming--in virtue of computational properties. But whether he is conscious is a matter of BP. And yes, you want it both ways, as if there is no distinction between types of explanation, while still wanting to call these systems computers. If something is a computer, it is such because of computational properties--nothing, though, is intrinsically a computer in virtue of BP. Ergo, Searle's real argument is about why anyone would have thought that programming could tell us a thing about the mental. And then he's told that no one ever had the idea. That would be wrong.
>
>
> > Searle does, however, break with the so-called venerable Wittgensteinian tradition of categorial distinction between biology-talk and mind-talk. His program also happens to be inspired by Wittgenstein in the following way: Say your piece as clearly as you can and have philosophy connect to natural science.
> >
>
> Searle is very confused though, unfortunately. And he has confused you though I suspect, from the adamancy of your arguing, that you are a more than willing participant in that condition.

I think you have a wrong picture of what Searle is arguing. I think you are in no way capable of showing Searle to be confused. I've shown what it is I think you're confused about--not distinguishing PP from BP explanations. And you are wrong to think this is not the issue of the CRA. You've been wrong about the motive for conducting the thought experiment, the upshot of the experiment, and wrong in not noticing that the idea you have in mind about PP is really just the same as BP in Searle's mouth.
>
> > Now, as Walter has it, philosophy just is a bunch of categorial analysis. Well, I suppose one can start with concepts alright!
> >
> > Cheers,
> > Budd
> >
>
> You do, do you?
>
> SWM

Yes, because I don't know what it would be like not to have a clue. Whatever are you thinking half the time? And note that starting with concepts is okay because they often are tied to the real world and not just part of some closed-to-the-world machine language of syntactical function.

Cheers,
Budd



2.2.

Re: Algorithms, Abstractions and Minds

Posted by: "SWM" wittrsamr@xxxxxxxxxxxxx

Wed Jul 28, 2010 11:24 pm (PDT)



--- In Wittrs@yahoogroups.com, "gabuddabout" <wittrsamr@...> wrote:

> --- In WittrsAMR@yahoogroups.com, "SWM" <wittrsamr@> wrote:
> >
> > --- In Wittrs@yahoogroups.com, "gabuddabout" <wittrsamr@> wrote:
> > <snip>
> >
> > > >
> > > > Because Searle argues that at least one kind -- supposing brains do it like computers -- isn't possible, based on a logical claim.
> > >
> > >
> > > You should revise this view of yours because it is not true. Start with the target article and find out at last that the issue is about computational explanation vis a vis brute physics explanation.
> > >
> >
> > Nonsense, Budd. Nobody cares about the terms we choose to explain the phenomenon of a computer's being conscious. In fact, Searle argues that it cannot be because we cannot get understanding from the processes found in the CR which are like the processes found in a computer.
>
>
> The above is nonsense, Stuart. You are forever conflating PP with BP which is like arguing Searle's position.

BP, PP, S/H, non-S/H . . . isn't it all just otiose?

> And you are forever having it both ways by saying that PP is more powerful than serial processing given computational complexity--but Searle points out that all PP can be serially computed by a UTM, which the CR is.
>

What "Searle points out", as you put it, is irrelevant to the issue because this hinges on whether or not we are speaking of a system level feature or something below it, i.e., a feature of the system's constituent elements. Of course, Searle doesn't recognize this either, as far as we have seen in his arguments, so you are at least in his company on that. Too bad it is the wrong side of the debate. But I suppose this notion of system-level vs. constituent-level is one you are never going to understand (since you haven't thus far).

Anyway, IF SEARLE WERE IN AGREEMENT WITH THE POSITION PUT FORTH BY DENNETT THEN WHY DOES SEARLE CONTINUE TO DENY DENNETT'S POSITION? You'd think he'd have figured out by now that Dennett's position doesn't contradict his even as he continues to contradict Dennett's!

So what makes you think you know better than Searle what his position is?

> When you describe the CR as underspecced, you are maintaining that it is not complex enough in BP terms, which is Searle's position, while maintaining that it is nonetheless a computer.
>

No, Budd. Since Searle denies Dennett's thesis that a sufficiently complex computational system (made up of a massively parallel system running the right programs) could do it, Searle is saying quite clearly that it isn't a question of complexity (robustness) but of the nature of the processes themselves (i.e., they are computational) that is the problem.

This is why I have said you don't really understand Searle's position. You are completely missing his point which is that, if a CR cannot understand (as we understand), then no other R (no matter how robustly configured) could do so! Of course, that is precisely the Dennettian claim, i.e., that that's what it takes (more robust configuration).

Note that if the argument Searle derives from the CR (the CRA) does not apply to anything but a system specced at the CR level, then it is a pointless claim, because NO ONE THINKS THAT PROGRAMMING A MACHINE TO RESPOND BY ROTE MECHANISMS IS TO PROGRAM UNDERSTANDING INTO THAT MACHINE! We can all agree on that. But, of course, the AI project is about much more complex systems, doing many more things than rote responding! So if you are right, the CRA is a pointlessly trivial argument with no implications beyond the CR. If you have only built a bicycle, you cannot expect it to soar above the clouds!

Really, how hard is it to grasp this? But if you can't, let me again call to your attention the still more obvious fact that Searle denies Dennett's thesis and you think Searle's a pretty smart guy so why hasn't he figured out yet that there is nothing in Dennett's thesis for him to deny as your interpretation of the CRA clearly implies?

> Searle's point is about how, and Neil put the point perfectly, computational explanations in terms of programs (a functional type of explanation) is not good enough.
>

It certainly might be if understanding (and the other features of consciousness) are system level features. In that case, the problem lies NOT in the constituent processes but in the system that has been specced into the CR. Add more processes doing more things in the right way (interactively, etc.) and you get a more robust system. That a slimmed down, barebones system can't match what a brain can do says nothing about what a more complex system could do. Of course, for that you need capacity equivalent to brains. Dennett's thesis is that this means you need a massively parallel platform, because that's what he claims brains are when you get down to it.

Dennett may or may not be right but Searle's CRA has no implications for his claim and especially not if we take your interpretation which I think even Searle would balk at!

> And it is not because Searle is wedded to extra stuff whereas Dennett isn't. What Dennett is doing when making that claim is just dodging a type of better psychology than can be had in functional terms.
>

Can you argue for that or do you just want to get by with another unsupported assertion?

> That's why I think Bertrand Russell was right to insist on the absurdity of a view such as Dennett's when it comes to psychology.
>
>

I wasn't aware Russell had ever considered Dennett's thesis. Have you some evidence of THAT claim? After all, they are hardly contemporaries in the field even if Russell lived a very long life.

>
> >
> > Of course, it's not there. You can't build a bicycle and expect it to fly as Peter Brawley pointed out. The CR is to bicycles while brains are to supersonic transports. The question, then, is whether you can build an SST from the same basic constituent elements as found in the CR.
>
>
> When you say "constituent elements," you are either talking about BP or not.

BP is busy with cleaning up the Gulf of Mexico so why don't we leave them out of it? They have enough on their plate.

> You get Searle wrong when saying he's denying a form of BP

I never saw Searle reference "BP". However, I'll grant he does speak of brute physics or some such at times. But then I have already pointed out that Searle is in self-contradiction vis a vis his treatment of brains and computers and that that is a big part of his confusion! So we can find him affirming things in one place while denying them (or arguing in a way that is only consistent with their denial) in others! That's what it means to be in self-contradiction!

> when denying the coherence of functional explanation of cognitive states.
>
>

His incoherence argument (with which he tried to replace the CRA while never explicitly giving the CRA up!) is worse than the CRA since it completely misses the point about computers and computationalism.

>
>
> >
> > If understanding and the other features of consciousness are system level features, as I've previously explained, rather than constituent element level features (associated with the constituent processes of the CR rather than with some systemic combination of them), then it's not surprising you don't find understanding in the CR. The system isn't adequate because it isn't sufficiently complex, i.e., robust.
>
>
> Searle's view is about system level features.

Searle is confused about that because he appears to take that view (albeit without fully explicating it) vis a vis brains but the CRA depends on a failure to grasp that view. Once you grasp it, the power of the CRA to compel the conclusions he claims for it collapses. (Since I have explained this so many, many times, I will not do so again. Just go back and read my old posts on this, which are legion.)

> You are making a distinction out pure air when trying to explain
> Searle's view in terms of "constituent element features."

This only shows how you continue to miss the point. Well there's that saying about horses and water and drinking, isn't there?

> I think you are really bad at understanding Searle's point or are just making up things for fun.
>
>

Well I guess that's all you have left to say in support of an obviously insupportable claim that you cannot divest yourself of.

>
>
> >
> > But, of course, you will never see this and I have quite given up on expecting you to since you don't even fully grasp Searle whom you have set yourself to defend!
>
> I think I've explained exactly what Searle is denying with the CR.

You have totally missed the point of his claims as evidenced most clearly by your remarkably ridiculous notion that Dennett's thesis doesn't contradict Searle's even while both Searle and Dennett think it does. This either shows you are smarter than the both of them or that you don't understand the real issues in this debate. Frankly, I think the preponderance of the evidence favors the latter conclusion.

> It is the denial of the functionalist sort of explanation to arrive at necessary and sufficient conditions of
> semantics/consciousness. End of story.

You can only end a story you get.

> What PP proponents are doing is just conflating PP with BP. But if you want your functionalism, you have to distinguish BP from PP without conflating the two types of explanation.
>

The only conflator here is you, Budd. In your preferred terms, there is no PP in this debate except insofar as it is an application of BP, in which case it is only the BP that is at issue, not some rarefied non-thing called PP.

> >
> > > Then listen again to Peter's point about PP proponents who distinguish PP from serial processing in a way that amounts to BP, which Searle is not arguing against.
> > >
> >
> > PP is BP (using your ridiculous lexicon).
>
>
> You are seeming more and more like an idiot;

Oy.

> but our disagreement is about whether Searle is making a good point about functionalist sorts of explanation.

It's not about picking our favorite explanations. It's about what can actually be done with certain kinds of machines.

> For Searle, PP is not BP because it carries a functional type of
> explanation since it is still about computation.

I agree Searle does share this particular confusion with you. But just because he does is no argument that he is actually right! A confusion is a confusion, no matter who is confused.

> The point is that computation is not a natural kind, and what is going on, electrically speaking, is just BP, such that the PP explanation is going to really be another way of having a BP explanation--or not.
>

And all that matters is your good old BP. Or is that "otiose"? If it is, this harping on so-called "PP" is much ado about nothing since no one is arguing for some abstraction as a source or cause or producer of instances of the features we recognize by the term "consciousness".

> You have to choose. One choice is to try to have it both ways--critique Searle and share his position;

Oy.

> or own up to the upshot of functionalist explanations which are eliminativist--which is ridiculous as Russell points out in _Knowledge: Its Scope and Limits_.
>
>

Give the argument, don't just name-drop! Russell isn't here. You are. Or at least you seem to be.

>
>
> > There is no separate PP which, finally, is just a particular configuration of what you call BP. Thus the CR is one configuration of this BP and the more complex system envisioned by Dennett is another. This is finally about configurations not the quality of the parts. Get it? (Probably not but what the hell!)
>
>
> _You_ still don't get it.

No, you . . .

> Searle's critique is not about the quality of the parts.

That is precisely what it is about and merely denying it isn't enough. Look at the CRA itself. (But then that never helped before, did it?)

> It is about functionalist type explanations not really netting us any hope of understanding necessary and sufficient conditions for bona fide consciousness and semantics.
>

It is NOT about different kinds of explanations but different possibilities we can achieve with particular physical things.

>
>
> >
> > > At last, you'll understand that your critique of Searle was a long-winded tirade amounting to his position
> >
> >
> > If he was really arguing against your notion of a certain kind of explanation (PP rather than BP) then his entire thesis is a strawman and the CR and its conclusions utterly irrelevant to the question of whether computers can be engineered and implemented to be conscious. That it is, finally, BS (since you are so enamoured of the magic of acronyms and initials).
>
> This just shows exactly how ignorant you are of the literature.

Or how thick you are with regard to the issue!

> The systems reply is just contradicting an original claim made in the literature. I'm happy to hear that some haven't actually held the thesis of strong AI as defined by Searle.
> >

There have certainly been many ideas and theses in the AI field but I have never encountered anything in "the literature" or in the claims of AI researchers elsewhere, that supports a view that computationalism is an argument for the causal efficacy of an abstraction. That is simply Searle's misunderstanding. And yours, apparently.

> >
> > > and STILL not touching his clear point that computational explanations, if different from BP explanations, are not really good explanations for things like minds and semantics.
> > >
> >
> >
> > It's not about competing ways of explaining conscious machines but about whether machines CAN be conscious!
>
>
> You really show a lack of reading on the topic, Stuart. It's not
> as if a couple of google searches are all that is required.

Reading doesn't help if you don't understand as you manifestly do not.

> Searle is not arguing against machines being conscious, whether artificial or organic.

Budd, try to read what I write in context, okay? My reference to machines comes down to a certain kind of machine. Obviously I do not argue that any machine can be conscious. I argue that there is nothing in principle that precludes a machine being conscious. As to what kind of machine might qualify, note, again(!), that I am referencing computational machines, i.e., computers. So my reference above to "machines" is a reference to generic machines. The argument I am making, however, is about a particular kind of machine, one that can do what brains can do.

As we have seen and discussed ad infinitum here, the Dennettian thesis is that brains operate like computers, that, in fact, they are a kind of organic computer. If this is a correct interpretation of what a brain is, then there is no reason, in principle, that an equivalent computer cannot do what a brain can do. Searle's CRA, which is based on the failure of a computational system specced in a very limited way, purports to show that no computational system can succeed.

But Dennett argues that this is misleading because it conceives of what brains do as being separate and apart from the constituents in the CR. If the features brains produce are not to be found in the CR, then, the argument goes, they cannot occur in ANY configuration of those same constituents. BUT IF THE FEATURES BRAINS PRODUCE ARE SYSTEM-LEVEL, RATHER THAN STAND ALONE IRREDUCIBLES, THEN THE ONLY PROBLEM THE CR EXPOSES IS THAT THE CR IS AN INADEQUATE SYSTEM. OF COURSE THIS SAYS NOTHING ABOUT THE POTENTIAL ADEQUACY OF MORE ROBUST SYSTEMS.

So the point is to test out a thesis like Dennett's empirically, rather than rely on the logical denial found in Searle's CRA which hinges on the suppressed premise that the features of mind are not reducible to some underlying complex of features that aren't, themselves, features of mind.

> You should already know this but what is going on is that you are making up a strawman in terms of what Searle is saying--he is not saying
> what you think he's saying.

Oh nonsense. Try to read the argument clearly (mine and his, actually).

> Evidence for this is just how badly you go about handling compound sentences without an awareness that the issue is fundamentally about
> different types of explanation.

It's about whether certain types of machines can do certain kinds of things, NOT ABOUT HOW WE CHOOSE TO EXPLAIN WHAT THEY DO!

> If you collapse the distinction, it doesn't amount to Searle arguing against a strawman; but it does amount to a position Searle isn't arguing against.
>

Searle argues against Dennett's thesis and my thesis is roughly equivalent to Dennett's. Therefore I am arguing against Searle's thesis, just as Dennett is.

>
>
> > If we follow your thinking, Searle's argument is shown to be finally pointless, just a dispute over whether Star Trek's Commander Data, who walks like us and talks like us and behaves like us in every conceivable way, can be called conscious or not. But that, finally, is beside the point if the android has already achieved the level of a Commander Data.
>
> You're being an idiot

!!!

> because functionalist explanations allow us to take Commander Data as conscious just because of the excellent programming--in virtue of computational properties. But whether he is conscious is a matter of BP.

Well there you go then! If "his" computational brain can do roughly what our organic brains can do, you will agree he is conscious. So what's your problem? On the other hand, Searle's CRA denies that possibility. So are you secretly in Dennett's camp after all?

> And yes, you want it both ways as if there is no distinction between types of explanation while still wanting to call these
> systems computers.

If it walks like a duck and quacks like a duck . . . but you know what, call them something else if you like. Who cares what you call them? If they operate on computational principles, then changing what you call them may make you feel better but it won't change the duck!

> If something is a computer, it is such because of computational properties--nothing, though, is intrinsically a computer in virtue of
> BP.

Who gives a damn about what is "intrinsically a computer"? What has THAT to do with any of this?

> Ergo, Searle's real argument is about why anyone would have thought that programming could tell us a thing about the mental. And then he's told that no one ever had the idea. That would be wrong.
> >

Again: computationalism (what Searle calls "strong AI" against which he is arguing) is the thesis that the brain, in producing consciousness, operates like a computer. While there are many theories about how a computer could be made conscious, the only issue in this debate is whether any computer can be, purely in virtue of its computational capacities. BUT NO ONE EVER CLAIMED THAT SOMETHING THAT IS ABSTRACT, AS IN PROGRAMS (SOFTWARE), WAS THE ISSUE. COMPUTATIONALISM IS ABOUT COMPUTERS AS PLATFORMS FOR THIS EFFECT.

> >
> > > Searle does, however, break with the so-called venerable Wittgensteinian tradition of categorial distinction between biology-talk and mind-talk. His program also happens to be inspired by Wittgenstein in the following way: Say your piece as clearly as you can and have philosophy connect to natural science.
> > >
> >
> > Searle is very confused though, unfortunately. And he has confused you though I suspect, from the adamancy of your arguing, that you are a more than willing participant in that condition.
>
>
> I think you have a wrong picture of what Searle is arguing.

No, you do.

> I think you are in no way capable of showing Searle to be confused.

I have already done it numerous times, even if you cannot follow or simply won't accept the implications of my arguments.

> I've shown what it is I think you're confused about--not distinguishing PP from BP explanations.

A faux distinction, and I've explained that to you many times, too.

> And you are wrong to think this is not the issue of the CRA. You've been wrong about the motive for conducting the thought
> experiment,

Motives are irrelevant. The argument stands or falls on its own terms.

> the upshot of the experiment, and wrong in not noticing that the idea you have in mind about PP is really just the same as BP in Searle's mouth.
> >

So someone should really tell Searle that he and Dennett are actually in agreement, right? Are you going to do it?

> > > Now, as Walter has it, philosophy just is a bunch of categorial analysis. Well, I suppose one can start with concepts alright!
> > >
> > > Cheers,
> > > Budd
> > >
> >
> > You do, do you?
> >
> > SWM
>
> Yes, because I don't know what it would be like not to have a clue.

Jeremy Brett's Sherlock Holmes interpretation on the BBC has recently been replaying in my area and I've been rather enjoying it. As his Holmes would have said on reading what you write above: Hah!

> Whatever are you thinking half the time?

Maybe some day you'll catch on.

> And note that starting with concepts is okay because they often are tied to the real world and not just part of some closed-to-the-world machine language of syntactical function.
>
> Cheers,
> Budd
>
> Cheers,
> Budd

You do like to end on irrelevancies, don't you?

SWM

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/

3a.

Re: What was the Tractatus Intended by its Author to Do?

Posted by: "gabuddabout" wittrsamr@xxxxxxxxxxxxx

Wed Jul 28, 2010 4:47 pm (PDT)





--- In WittrsAMR@yahoogroups.com, "SWM" <wittrsamr@...> wrote:
>
> --- In Wittrs@yahoogroups.com, "gabuddabout" <wittrsamr@> wrote:
> >
> > Stuart writes:
> >
> > "Yes, I fear that Russell is much overrated as a philosopher though he was a major heavyweight in his time."
> >
> >
> > I don't think you tell the truth.
>
>
> No, huh? Well then I must be lying about what I really think about Russell then, eh? And what would my motivation be for doing that?

The same as when you get Searle so wrong that you would have to be way dumber than you probably in fact are.
>
>
> > In fact, I think you rather enjoy saying the above about Russell from time to time. Indeed, there is nothing to fear from Russell except the hardship philosophy has with notions like probability, causation, personhood, anomic properties, and so on.
> >
>
> Russell's logical atomism was a disaster and even he finally abandoned it.

It takes a major dude to admit that. But atomism is not yet dead for concepts--or at least Fodor wants it pronounced _demonstrably_ dead if it is dead.

>
> > Read Russell and find that you come away with more than you do compared to Witters. Is philosophy idle tea-table amusement or is it supposed to connect to science?
> >
>
> Well to each his own. I suppose it's not surprising that you would find Russell more to your tastes than Wittgenstein. Russell in those essays on logical atomism ties himself up in a tangle of knots from which he is finally unable to extricate himself.

BS. Even according to you. Did you forget that a couple seconds ago you said Russell abandoned the position? Now how does one abandon a position "from which he is finally unable to extricate himself"? You talk looser than a whore on buy-one-get-two-free day.

>
> > Anyway, I did read Hacker's contribution on Witters in _A Companion to Analytic Philosophy_ and found how illuminating Witters could be--a lot of the good stuff has been incorporated in Searle's work from _Speech Acts_ on.
> >
>
> You'd do better to go to the videotape and just read Wittgenstein. He's way more illuminating in person than when mediated by the comments of others.

I should take your word for that but I'm already corrupted by Searle's clear and distinct prose, some of which is inspired by Witt's deepness that can be whistled.
>
>
> > Searle is the best Wittgensteinian-inspired writer I have ever encountered who went beyond him in attempting to square philosophy with natural science.
> >
>
> Well I suppose I could take a leaf from your book and say that I think you're just "lying". But then why would I do such a silly thing? By the way, unless you are conversant with Wittgenstein via his own work, how would you know how good or bad Searle is in his alleged Wittgenstein inspiration?

Well, one doesn't have to know a lot about Witt. because Witt. was all about getting down to business. Searle gets down to business and doesn't idle in his thinking. Witt. wanted to think of philosophy as a distinct exploration, eventually, of how words are used. This has to include science. And that is why it should connect to science. Philosophy is just a systematic way of getting at the best ideas around. Searle does that. Witt. was all about that--and that is why he wanted no followers. Searle doesn't mind followers. But you have to understand his position in order to follow. You just get him wrong and can't follow. But following isn't a bad thing if it amounts to ceasing to ask questions that are nonsensical.
>
>
> > Other Wittgenstein-inspired writers like Hacker would make the thesis that the brain causes consciousness a species of the incoherent.
> >
>
> I am not up on Hacker but what I have seen so far of that claim strikes me as a misunderstanding of Wittgenstein on this sort of thing.

Perhaps Hacker is all wet on Witters here. Cf. _Philosophy and Neurobiology_, Searle's response especially.

>
> > And still other Wittgenstein-inspired writers will want to think for themselves without having a guide or clue from the very people like Russell who have thought about a lot of it first.
> >
>
> Russell faded during Wittgenstein's lifetime and the work he produced shows why.

Well, that was enlightening. But you're all wet here. The evidence is that Fodor is continuing along Russell's path insofar as Russell considered behaviorism absurd.
>
>
> > Cf. Russell's _Knowledge: Its Scope and Limits_. Not bad for an overrated philosopher in the eyes of some.
> >
> > Cheers,
> > Budd
> >
>
> Russell was certainly prolific but his later work consisted mostly of histories, polemics and popularizations. His earlier work, like the Principia Mathematica, is now dated. As Neil, a practicing mathematician, has noted, mathematicians do not take that work seriously and it has had little impact on the philosophy that came after.
>
> SWM

Also enlightening, but also consider Russell's excellent little chapter on philosophy of mind in _Knowledge: Its Scope and Limits_. Psychology is queen when it comes to concepts--computational functionalist explanations have trouble with semantic content. There is semantic content (contra Dennett and other eliminativists). Ergo, there is a potentially viable science called psychology. And it ain't got with PP gizmos.

Surely Russell's views faded given the popularity of functionalism. But it is turning around, no thanks to many so-called Wittgensteinians and, of course, thanks to easy introductions to philosophy of mind like Searle's recently written brief introduction, along with Fodor's work, hard as it is to pull off but evidently not quite as dead as some Wittgensteinians may think for whatever slippery reasons.

Cheers,
Budd


3b.

Re: What was the Tractatus Intended by its Author to Do?

Posted by: "SWM" wittrsamr@xxxxxxxxxxxxx

Wed Jul 28, 2010 11:37 pm (PDT)



--- In Wittrs@yahoogroups.com, "gabuddabout" <wittrsamr@...> wrote:
<snip>

> >
> > > Read Russell and find that you come away with more than you do compared to Witters. Is philosophy idle tea-table amusement or is it supposed to connect to science?
> > >
> >
> > Well to each his own. I suppose it's not surprising that you would find Russell more to your tastes than Wittgenstein. Russell in those essays on logical atomism ties himself up in a tangle of knots from which he is finally unable to extricate himself.
>
>
> BS. Even according to you. Did you forget that a couple seconds ago you said Russell abandoned the position? Now how does one abandon a position "from which he is finally unable to extricate himself"? You talk looser than a whore on buy-one-get-two-free day.
>

That's why he gave it up. He realized there was no way forward.

>
>
> >
> > > Anyway, I did read Hacker's contribution on Witters in _A Companion to Analytic Philosophy_ and found how illuminating Witters could be--a lot of the good stuff has been incorporated in Searle's work from _Speech Acts_ on.
> > >
> >
> > You'd do better to go to the videotape and just read Wittgenstein. He's way more illuminating in person than when mediated by the comments of others.
>
>
> I should take your word for that but I'm already corrupted by Searle's clear and distinct prose, some of which is inspired by Witt's deepness that can be whistled.
> >

Searle writes clearly in a superficial way but his arguments are confused.

> >
> > > Searle is the best Wittgensteinian-inspired writer I have ever encountered who went beyond him in attempting to square philosophy with natural science.
> > >
> >
> > Well I suppose I could take a leaf from your book and say that I think you're just "lying". But then why would I do such a silly thing? By the way, unless you are conversant with Wittgenstein via his own work, how would you know how good or bad Searle is in his alleged Wittgenstein inspiration?
>
> Well, one doesn't have to know a lot about Witt. because Witt. was all about getting down to business.

Incredible!

> Searle gets down to business and doesn't idle in his thinking.

Even those who are confused can still find themselves in gear. It's just that they don't know which way they are going or how to get there.

> Witt. wanted to think of philosophy as a distinct exploration, eventually, of how words are used.

A simplification.

> This has to include science. And that is why it should connect to science.

Who supposes otherwise? Except that it doesn't compete with science or rely on the methods of science, of course. It's a different game, a different practice.

> Philosophy is just a systematic way of getting at the best ideas around. Searle does that.

He's confused.

> Witt. was all about that--and that is why he wanted no followers. Searle doesn't mind followers.

Ah, so that's it!

> But you have to understand his position in order to follow. You just get him wrong and can't follow. But following isn't a bad thing if it amounts to ceasing questions that are nonsensical.
> >

Some followers are nothing but true believers -- people attached to an idea, a man or a dogma. Following in philosophy is a bad idea if you are serious about being anything more than a scholar or spokesperson for someone else.

> >
> > > Other Wittgenstein-inspired writers like Hacker would make the thesis that the brain causes consciousness a species of the incoherent.
> > >
> >
> > I am not up on Hacker but what I have seen so far of that claim strikes me as a misunderstanding of Wittgenstein on this sort of thing.
>
> Perhaps Hacker is all wet on Witters here. Cf. _Philosophy and Neurobiology_, Searle's response especially.
>

Why do you have such an aversion to laying out and actually addressing the arguments of the people whose names you invoke?

> >
> > > And still other Wittgenstein-inspired writers will want to think for themselves without having a guide or clue from the very people like Russell who have thought about a lot of it first.
> > >
> >
> > Russell faded during Wittgenstein's lifetime and the work he produced shows why.
>
> Well, that was enlightening. But you're all wet here. The evidence is that Fodor is continuing along Russell's path insofar as Russell considered behaviorism absurd.
> >

So you are telling us that Fodor is a follower of Russell? Or in the Russellian tradition? What evidence have you for that?

> >
> > > Cf. Russell's _Knowledge: Its Scope and Limits_. Not bad for an overrated philosopher in the eyes of some.
> > >
> > > Cheers,
> > > Budd
> > >

> >
> > Russell was certainly prolific but his later work consisted mostly of histories, polemics and popularizations. His earlier work, like the Principia Mathematica, is now dated. As Neil, a practicing mathematician, has noted, mathematicians do not take that work seriously and it has had little impact on the philosophy that came after.
> >
> > SWM
>
> Also enlightening, but also consider Russell's excellent little chapter on philosophy of mind in _Knowledge: Its Scope and Limits_. Psychology is queen when it comes to concepts--computational functionalist explanations have trouble with semantic content. There is semantic content (contra Dennett and other eliminativists). Ergo, there is a potentially viable science called psychology. And it ain't got with PP gizmos.
>

Your "contra Dennett" assertion above shows you don't understand Dennett any more than you get Searle. Dennett never denies that we have understanding or understand things. He denies that there is some special mental property essential to the occurrence of understanding called, in the plural, "qualia". In like vein I'm sure he would say that there is no special property called "semantics". We might want to say that the term refers to the occurrence of understanding when we are distinguishing the form of a symbol from its meaning (the getting of which is what constitutes understanding).

> Surely Russell's views faded given the popularity of functionalism. But it is turning around, no thanks to many so-called Wittgensteinians and, of course, thanks to easy introductions to philosophy of mind like Searle's brief introduction recently written along with Fodor's work, hard as it is to pull off but evidently not quite as dead as some Wittgensteinians may think for whatever slippery reasons.
>
> Cheers,
> Budd
>

Oy.

SWM


4.1.

Re: Algorithms, Abstractions and Minds--Plussed and Nonplussed

Posted by: "gabuddabout" wittrsamr@xxxxxxxxxxxxx

Wed Jul 28, 2010 5:14 pm (PDT)





--- In WittrsAMR@yahoogroups.com, "SWM" <wittrsamr@...> wrote:
>
> --- In Wittrs@yahoogroups.com, "gabuddabout" <wittrsamr@> wrote:
>
> > Budd:
> > > > ****It is for Searle who doesn't, in fact, rule out other systems that
> > > pass a causal reality constraint
> > > > which functionalism can't pass in principle IF the explanation amounts
> > > to a computational one-
>
> > Bruce:
> > > I've read the above over and over and can't make good sense out of it.
> > > The limitations of functionalism?
> > >
> >
> > Stuart:
> > > Me neither!
>
> > All computational explanations fall into (at least) two categories, excluding descriptions of simplest to most complex computation:
> >
> > 1. Person x does some computation and proves it.
> >
> > 2. System x does some computation (call it information processing that we describe the system as doing even though it isn't _intrinsically_ doing it compared to us sometimes intrinsically processing information like right now) and shuffles bit strings to and fro. The description of the shuffling is the functional description of programs and the outputs of the shuffling can be redescribed as BP (brute physics).
> >
>
> The issue is how does the conscious performance of computation come about not whether the two kinds are the same or not!
>
> Is conscious computation a function of the purely physical processes that we recognize as information processing which, in computers, consists of computational operations and may or may not do so in brains as well?
>
> And if brains do it differently, would THAT mean that computers couldn't do it, too?
>
> PP just is BP at the level of computer and brain operations whether they perform their activities in precisely the same way or not.
>
>
> > The point is that functionalism is mired with observer-relative ascriptions which we can get rid of when explaining something in terms of BP (brute physics)--including mind. And that is what some PP proponents are proposing.
> >
>
> It's a faux distinction. Brains and computers are both physical and what they do involves physical operations. That's the whole story.

Not true. The whole story involves claims by strong AIers who have problems with nonfunctional semantic content--their systems are useless for explaining mind.

>
> > The problem many have, though, is that while we're looking for correlations between consciousness and what the brain is doing, we may never all agree on which correlations are part of the real causal story.
>
>
> That's a different question and not really a very interesting one in this context.

I'll tell you what is interesting here, though. Functionalism was invented so that one doesn't have to consider this an interesting question!
>
>
> > Perhaps there exists somewhere some a priori argument concluding that the hard problem is unsolvable in principle. I wouldn't want to read it but it may just come from a Witterian.
> >
>
>
> I think this obsession with a uniquely "hard problem" is a red herring.

Well, gotcha, because that is another reason why functionalism was invented. How consciousness is caused is a maximally hard problem for all except definitional zombies.
>
>
> > Functionalism is a way of escape from this and tends to imply that if you can duplicate behavior
>
>
> No, operations -- which is, of course, a form of behavior, too, but the use of the term "behavior" in this context leads into confusion because it appears to be referring to the behaviors of the organism when the issue is really the behaviors of the organs.

Would it were so--including BP behavior without computational levels of description about programming, including adaptive behavior even.
>
>
> > (no matter the material, so computers as candidate) you have the functions of the original (brain as candidate except for certain Witterians).
> >
>
> If the same operations can be performed on a synthetic platform like a computer, and lead to the right entity level behaviors (organism vs. organ distinction again), then what's the problem?

The same operations as computational operations or BP operations? It is hard for me to believe that circuit boards are going to involve explanations anything like BP when PP is what is being proposed.

>
> > But as a way of escape, it is escaping good science.
>
> That is an assertion of dogma, nothing more.
>
> You cannot determine what "good science" is by invoking a logical argument that's full of holes (from the equivocal third premise to its suppressed premise about what things like understanding must be).

Well, there is no equivocation except the one you made up due to not being capable of reading English when it comes to Searle. Either dishonest or exceedingly benighted to the point that you don't even know the point of the CRA without getting it wrong.

>
>
> > The good science is to explain how we can, for example, act on anomic properties. Functionalism is bankrupt when it comes to real explanation in terms of intrinsic BP. But that is why it was
> > invented!
>
> Well I guess we have it then, eh? You have informed us that "functionalism is bankrupt". That's an example of "good science" I expect then, isn't it?

Yes, if it follows from reasons that are true.

> Good thing that real science doesn't get done by such dogmatic assertions though!

Yes, because otherwise you might have functionalism parading around as good science when in fact it has simply made a mess of philosophy of mind for so long the burial is overdue.
>
>
> >
> > Confused? You won't be after a little Soap!
> >
>
> ?

I was referring to soap opera-like comedy with Billy Crystal.

> > Or one can conflate functionalist explanation with BP.
>
> Here we go again! As with the old endless mantras about S/H and non-S/H, and what "isn't machine enough", now we are condemned to hear, forever and anon, about BP and PP!

Well, if it is the whole point of the CRA, one might as well understand it! I mean, it is as if you, like Gordon, think it better to get at Searle's meaning by creating a different thought experiment.
>
>
> > Searle wouldn't; but if you did, you would have on your hands an idea (in terms of BP) that Searle isn't arguing against.
> >
>
> And that is ridiculous since Searle IS arguing against Dennett and I have basically presented Dennett's model here.

Blah, blah. If you conflate PP and BP, it is not ridiculous to remind you that you have on your hands a system that Searle isn't arguing against. But Dennett DOES think sophisticated software might get the job done, so enter CRA--it is ridiculous. But if you want to say that the disagreement must come down to Dennett observing BP while Searle must not be, hence his potentially/definitionally dualist view, then you might say that Searle is denying that the physical constituents of computers can get a job done. But that is getting him wrong. It's about distinguishing that which you are ever conflating.

> If Dennett's thesis really is in accord with Searle's, how come Searle has yet to remark on it?

Dude, it is _your understanding_ of Dennett's position that is in accord with Searle, not necessarily Dennett's position. Dennett thinks that complex software is a candidate.

> Do you really think that you can prevail just by endlessly reciting the same litany? (Yes, I guess you do for you have done this for all the years we have been discussing these questions on some four or five lists now!)

You've been consistently wrong for all these years--and it was no accident that your first rendition involved eliminating one of the clauses of a compound sentence.
>
> > Of course, you can spin tales but this is more about heady explanation.
> >
>
> It's about whether computers can be engineered to understand as we do.
>
> > Cf. Ned Block's "Troubles with Functionalism."
> >
> > Cheers,
> > Budd
> >
>
> I watched Block on a Youtube presentation and I thought very little of what he had to say. But perhaps you would like to do more here than simply allude to him? If you think he has a case to make, then why not sum it up for us? That is, why not actually try to make it?
>
> SWM

I think very little of your attempt to understand Searle's position. This is because I know you don't understand the entire point of the CRA.

Cheers,
Budd


4.2.

Re: Algorithms, Abstractions and Minds--Plussed and Nonplussed

Posted by: "SWM" wittrsamr@xxxxxxxxxxxxx

Wed Jul 28, 2010 11:55 pm (PDT)



--- In Wittrs@yahoogroups.com, "gabuddabout" <wittrsamr@...> wrote:

<snip>

> > > The point is that functionalism is mired with observer-relative ascriptions which we can get rid of when explaining something in terms of BP (brute physics)--including mind. And that is what some PP proponents are proposing.
> > >
> >
> > It's a faux distinction. Brains and computers are both physical and what they do involves physical operations. That's the whole story.
>
>
> Not true.

True.

> The whole story involves claims by strong AIers who have problems with nonfunctional semantic content--their systems are useless for explaining mind.
>

Which "strong AIers" do you have in mind? Who has these alleged "problems with nonfunctional semantic content" (and what does that even mean)?

>
>
> >
> > > The problem many have, though, is that while we're looking for correlations between consciousness and what the brain is doing, we may never all agree on which correlations are part of the real causal story.
> >
> >
> > That's a different question and not really a very interesting one in this context.
>
> I'll tell you what is interesting here, though. Functionalism was invented so that one doesn't have to consider this an interesting question!
> >

False. Look at Dehaene's work.

> >
> > > Perhaps there exists somewhere some a priori argument concluding that the hard problem is unsolvable in principle. I wouldn't want to read it but it may just come from a Witterian.
> > >
> >
> >
> > I think this obsession with a uniquely "hard problem" is a red herring.
>

> Well, gotcha, because that is another reason why functionalism was invented. How consciousness is caused is a maximally hard problem for all except definitional zombies.
> >

If the alleged "hard problem" is, indeed, a faux problem, then saying so is not to "invent" something to get rid of it! It's to recognize a fact.

> >
> > > Functionalism is a way of escape from this and tends to imply that if you can duplicate behavior
> >
> >
> > No, operations -- which is, of course, a form of behavior, too, but the use of the term "behavior" in this context leads into confusion because it appears to be referring to the behaviors of the organism when the issue is really the behaviors of the organs.
>

> Would it were so--including BP behavior without computational levels of description about programming, including adaptive behavior even.
> >

A computer computes when set to its task whether anyone is watching it or not. A brain does what it does whether it is being observed or not. That's the nature of physical phenomena.

> >
> > > (no matter the material, so computers as candidate) you have the functions of the original (brain as candidate except for certain Witterians).
> > >
> >
> > If the same operations can be performed on a synthetic platform like a computer, and lead to the right entity level behaviors (organism vs. organ distinction again), then what's the problem?
>
>
> The same operations as computational operations or BP operations?

There is no distinction here. But, of course, my reference is to the physical operations of both brains and computers.

> It is hard for me to believe that circuit boards are going to involve explanations anything like BP when PP is what is being proposed.
>
>

A faux distinction. It has no relevance.

>
>
> >
> > > But as a way of escape, it is escaping good science.
> >
> > That is an assertion of dogma, nothing more.
> >
> > You cannot determine what "good science" is by invoking a logical argument that's full of holes (from the equivocal third premise to its suppressed premise about what things like understanding must be).
>
>
> Well, there is no equivocation except the one you made up due to
> not being capable of reading English when it comes to Searle.

The equivocation is quite clear unless you are a true believer, committed to the CR dogma.

> Either dishonest or exceedingly benighted to the point that you don't even know the point of the CRA without getting it wrong.
>
>

You already know my response to that remark!

>
> >
> >
> > > The good science is to explain how we can, for example, act on anomic properties. Functionalism is bankrupt when it comes to real explanation in terms of intrinsic BP. But that is why it was
> > > invented!
> >
> > Well I guess we have it then, eh? You have informed us that "functionalism is bankrupt". That's an example of "good science" I expect then, isn't it?
>
>

> Yes, if it follows from reasons that are true.
>

None of which you have adequately or successfully argued for.

>
> > Good thing that real science doesn't get done by such dogmatic assertions though!
>
> Yes, because otherwise you might have functionalism parading around as good science when in fact it has simply made a mess of philosophy of mind for so long the burial is overdue.
> >

Try reading up on the work of a guy like Stanislas Dehaene (thanks to Charlie Moeller for directing us to him).

<snip>

> > > Or one can conflate functionalist explanation with BP.
> >
> > Here we go again! As with the old endless mantras about S/H and non-S/H, and what "isn't machine enough", now we are condemned to hear, forever and anon, about BP and PP!
>
> Well, if it is the whole point of the CRA, one might as well
> understand it!

But you manifestly don't.

> I mean, it is as if you, like Gordon, think it better to get at Searle's meaning by creating a different thought experiment.
> >

> >
> > > Searle wouldn't; but if you did, you would have on your hands an idea (in terms of BP) that Searle isn't arguing against.
> > >
> >
> > And that is ridiculous since Searle IS arguing against Dennett and I have basically presented Dennett's model here.
>
>
> Blah, blah.

Powerful stuff. How shall I ever respond?

> If you conflate PP and BP, it is not ridiculous to remind you that you have on your hands a system that Searle isn't arguing against.

Tell Searle then. I'm sure he'll be gratified to learn that Dennett's thesis doesn't contradict the conclusions of the CRA!

> But Dennett DOES think sophisticated software might get the job done, so enter CRA--it is ridiculous. But if you want to say that the disagreement must come down to Dennett observing BP while Searle must not be, hence his potentially/definitionally dualist view,

The dualism lies in the underlying thrust of his CRA, not in his definitions.

> then you might say that Searle is denying that the physical constituents of computers can get a job done. But that is getting him wrong. It's about distinguishing that which you are ever conflating.
>

How would you know?

>
>
>
> > If Dennett's thesis really is in accord with Searle's how come Searle has yet to remark on it?
>
>
> Dude, it is _your understanding_ of Dennett's position that is in accord with Searle, not necessarily Dennett's position. Dennett thinks that complex software is a candidate.
>

So do I . . . Dude. Searle doesn't . . . Dude. And Searle's argument (the CRA) denies that it could be. So Searle is opposing the Dennettian view. If you doubt that, go read Dennett on Searle or Searle on Dennett. Maybe eventually this will finally sink in with you (but I am certainly not holding my breath).

>
>
>
> > Do you really think that you can prevail just by endlessly reciting the same litany? (Yes, I guess you do for you have done this for all the years we have been discussing these questions on some four or five lists now!)
>
> You've been consistently wrong for all these years

One of us has, but simply asserting it endlessly won't establish who fits that bill, will it? Certainly it's not the stuff of philosophical argument and debate though it is the stuff of kids' arguments.

> --and it was no accident that your first rendition involved eliminating one of the clauses of a compound sentence.
> >

?

> > > Of course, you can spin tales but this is more about heady explanation.
> > >
> >
> > It's about whether computers can be engineered to understand as we do.
> >
> > > Cf. Ned Block's "Troubles with Functionalism."
> > >
> > > Cheers,
> > > Budd
> > >
> >

> > I watched Block on a Youtube presentation and I thought very little of what he had to say. But perhaps you would like to do more here than simply allude to him? If you think he has a case to make, then why not sum it up for us? That is, why not actually try to make it?
> >
> > SWM
>
> I think very little of your attempt to understand Searle's position.

Same here with regard to your efforts. One of us is probably right about this.

> This is because I know you don't understand the entire point of the CRA.
>
> Cheers,
> Budd
>

As Holmes would say: Hah!

SWM

